OOPM/RT : a multimodeling methodology for real-time simulation


Material Information

Title:
OOPM/RT : a multimodeling methodology for real-time simulation
Physical Description:
vi, 136 leaves : ill. ; 29 cm.
Language:
English
Creator:
Lee, Kangsun
Publication Date:

Subjects

Subjects / Keywords:
Real-time data processing   ( lcsh )
Real-time control   ( lcsh )
Object-oriented methods (Computer science)   ( lcsh )
Simulation methods   ( lcsh )
Computer and Information Science and Engineering thesis, Ph.D   ( lcsh )
Dissertations, Academic -- Computer and Information Science and Engineering -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph.D.)--University of Florida, 1998.
Bibliography:
Includes bibliographical references (leaves 131-135).
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
by Kangsun Lee.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002427506
oclc - 41875620
notis - AMD2609
System ID:
AA00003591:00001

Full Text










OOPM/RT : A MULTIMODELING METHODOLOGY FOR REAL-TIME
SIMULATION














By

KANGSUN LEE


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


1998
























To God

And to my parents, Kichang Lee and Seungbok Paik

for their sacrificing love that the world can't know of














TABLE OF CONTENTS




ABSTRACT .... v

CHAPTERS .... 1

1 INTRODUCTION .... 1
  1.1 Problem Statement .... 2
  1.2 Overview of OOPM/RT .... 3
  1.3 Guide for the Reader .... 7

2 BACKGROUND .... 9
  2.1 Terminology .... 9
  2.2 Real-Time Scheduling .... 11
  2.3 Real-Time Artificial Intelligence .... 13
    2.3.1 Iterative Refinement .... 14
    2.3.2 Multiple Methods .... 16
  2.4 Summary .... 19

3 MODEL GENERATION METHODOLOGY .... 22
  3.1 Structural Abstraction .... 23
  3.2 Behavioral Abstraction .... 27
    3.2.1 Behavioral Abstraction Process .... 28
    3.2.2 Behavioral Abstraction Techniques .... 31
  3.3 Abstraction Mechanism of OOPM/RT .... 40
  3.4 Summary .... 45

4 MODEL SELECTION METHODOLOGY .... 47
  4.1 Construction of the Abstraction Tree .... 48
  4.2 Selection of the Optimal Abstraction Level .... 53
  4.3 IP (Integer Programming)-Based Selection .... 55
    4.3.1 Formulation .... 56
    4.3.2 Analysis .... 58
    4.3.3 Experiments .... 63
  4.4 Search-Based Selection .... 64
    4.4.1 Analysis .... 65
    4.4.2 Experiments .... 67
  4.5 Composition of the Optimal Abstraction Model .... 67
  4.6 Summary .... 70

5 FULTON EXAMPLE: A STEAMSHIP MODELING
  5.1 Model Generation .... 72
    5.1.1 Structural Abstraction of FULTON .... 73
    5.1.2 Behavioral Abstraction of FULTON .... 75
  5.2 Assessment of Execution Time and Precision .... 77
  5.3 Construction of the Abstraction Tree .... 78
  5.4 Selection of the Optimal Abstraction Model .... 82
  5.5 Summary .... 85

6 APPLESNAIL EXAMPLE: A POPULATION MODEL OF APPLE SNAILS .... 87
  6.1 Model Generation .... 88
    6.1.1 Structural Abstraction of APPLESNAIL .... 89
    6.1.2 Behavioral Abstraction of APPLESNAIL .... 92
  6.2 Assessment of Execution Time and Precision .... 96
  6.3 Construction of the Abstraction Tree .... 97
  6.4 Selection of the Optimal Abstraction Model .... 99
  6.5 Summary .... 104

7 CONCLUSIONS .... 106
  7.1 Summary .... 106
  7.2 Contribution to Knowledge .... 108
  7.3 Future Research .... 110
    7.3.1 Behavioral Abstraction Process .... 111
    7.3.2 Quality Assessment .... 112
  7.4 Conclusions .... 113

APPENDIX .... 115

A EXAMPLE ABSTRACTION OF BOILING WATER .... 115
  A.1 Structural Abstraction .... 115
  A.2 Behavioral Abstraction .... 117
    A.2.1 Linear Regression .... 118
    A.2.2 Backpropagation Network .... 119
    A.2.3 System Identification .... 120
    A.2.4 ADALINE Neural Network .... 121
    A.2.5 Gamma Network .... 121

B EXAMPLE ABSTRACTION OF HEMATOPOIESIS .... 124
  B.1 Structural Abstraction .... 124
  B.2 Behavioral Abstraction .... 128
    B.2.1 System Identification .... 128
    B.2.2 ADALINE Neural Network .... 129
    B.2.3 WAVELET Network .... 129

REFERENCES .... 131

BIOGRAPHICAL SKETCH .... 136














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy


OOPM/RT : A MULTIMODELING METHODOLOGY
FOR REAL-TIME SIMULATION

By
KANGSUN LEE

December, 1998


Chairman: Dr. Paul A. Fishwick
Major Department: Computer and Information Science and Engineering


When we build a model of a real-time system, we need ways of representing knowledge about the system as well as the time requirements for simulating the model. Considering these different needs, our question is: how do we determine the optimal model that simulates the system by a given deadline, while still producing good-quality results at the right level of detail? We have designed the OOPM/RT (Object-Oriented Physical Multimodeling for Real-Time Simulation) methodology as an answer to this question. The OOPM/RT framework has three phases: 1) generation of multimodels in OOPM using both structural and behavioral abstraction techniques, 2) generation of the AT (Abstraction Tree), which organizes the multimodels by their abstraction relationships to facilitate the optimal model selection process, and 3) selection of the optimal model, which is guaranteed to deliver simulation results within the given amount of time. A more detailed model (low-abstraction model) is selected when we have enough time to simulate, while a less detailed model (high-abstraction model) is selected when the deadline is imminent. The basic idea of selection is to trade structural information for a faster runtime while minimizing the loss of behavioral information. We propose two possible approaches to the selection: an Integer Programming (IP)-based approach and a search-based approach. By systematically handling simulation deadlines while minimizing the modeler's interventions, OOPM/RT provides an efficient modeling environment for real-time systems.













CHAPTER 1
INTRODUCTION

Real-time systems are systems that have hard real-time requirements for interacting with a human operator or other agents on similar time scales. An efficient simulation of real-time systems requires a model that is accurate enough to accomplish the simulation objective and is computationally efficient [1, 2]. We define model accuracy in terms of the ability of a model to capture the system at the right level of detail and to achieve the simulation objective within an allowable error bound. Computational efficiency involves the satisfaction of timeliness requirements for simulating the system, in addition to the efficiency of model computation. In existing applications, it is the user's responsibility to construct a model appropriate for the simulation objective. This is a difficult, error-prone and time-consuming activity requiring skilled and experienced engineers [3, 4, 5].

Most CASE tools [3] try to support the modeling process by providing an extensive library of functions that allow a modeler to specify numerous aspects of an application's architecture. These tools deal with static models suitable for producing design documents, with limited facilities for simulating the models, analyzing the results of such simulations, running what-if questions, or translating the paper models into prototype code. Moreover, these tools do not provide support for specifying the real-time constraints of an application's functions [5, 6].

Our objective is to present a modeling methodology in which real-time systems can be modeled efficiently to meet both the given simulation objective and the model's time requirements.








1.1 Problem Statement

Real-time systems differ from traditional data processing systems in that they are constrained by certain non-functional requirements (e.g., dependability and timing). Although real-time systems can be modeled using standard structured design methods, these methods lack explicit support for expressing real-time constraints [4, 6, 5]. Standard structured design methods incorporate a life cycle model in which the following activities are recognized [5]:

1. Requirements Definition: an authoritative specification of the system's required functional and non-functional behavior is produced.

2. Architectural Design: a top-level description of the proposed system is developed.

3. Detailed Design: the complete system design is specified.

4. Coding: the system is implemented.

5. Testing: the efficacy of the system is tested.

For hard real-time systems, this methodology has a significant disadvantage: timing problems are recognized only during testing, or worse, after deployment. Researchers have pointed out that timing requirements should be addressed in the design phase [5, 6]. Two activities of the architectural design are defined [5]:

- the logical architecture design activity
- the physical architecture design activity

The logical architecture embodies commitments that can be made independently of the constraints imposed by the execution environment and is primarily aimed at satisfying the functional requirements. The physical architecture takes these functional requirements and other constraints into account, and embraces the non-functional requirements. The physical architecture forms the basis for asserting that the application's non-functional requirements will be met once the detailed design and implementation have taken place. The physical design activity addresses timing requirements (e.g., responsiveness, orderliness, temporal predictability and temporal controllability), dependability requirements (e.g., reliability, safety and security), and the schedulability analysis necessary to ensure that the system, once built, will function correctly in both the value and time domains. Appropriate scheduling paradigms are often integrated to handle these non-functional requirements [5]. The following issues arise:

- How to capture the logical aspects of real-time systems
- How to assess the duration and quality associated with each model
- How to resolve the timing requirements
- How to support both logical and physical activities under one modeling and simulation framework, so that the resulting model is guaranteed to function correctly in both the value and time domains

Several assumptions are made to answer these questions:

- Sacrificing solution quality can be tolerated, so that the system can be modeled by multiple solution methods that trade off solution quality against execution time
- The duration and quality associated with a method are fairly predictable
1.2 Overview of OOPM/RT

We propose OOPM/RT (Object-Oriented Physical Multimodeling for Real-Time Simulation) to help modelers meet arbitrary time and quality constraints imposed upon a simulation. OOPM/RT adopts a philosophy of rigorous engineering design, an approach which requires the system model to guarantee the system's timeliness at design time [6]. OOPM/RT uses OOPM for the logical architecture design activity. OOPM [7] is an approach to modeling and simulation which promises not only to tightly couple a model's human author into the evolving modeling and simulation process through an intuitive HCI (Human Computer Interface), but also to help a model author perform any or all of the following objectives [8]:

- to think clearly about, better understand, or elucidate a model
- to participate in a collaborative modeling effort
- to repeatedly and painlessly refine a model as required with heterogeneous model types, in order to achieve adequate fidelity at minimal development cost
- to painlessly build large models out of existing, working smaller models
- to start from a conceptual model which is intuitively clear to domain experts, and to unambiguously and automatically convert it to a simulation program
- to create or change a simulation program without being a programmer
- to execute simulation models and present simulation results in a meaningful way so as to facilitate the prior objectives

By using OOPM as the source of multiple methods, we can model a system efficiently with different model types. In time-critical systems, we may prefer models that produce less accurate results within an allowable time over models that produce more accurate results after a given deadline. The key to our method is to use abstraction as a way of handling the real-time constraints given to the system. We generate a set of models of the system at different levels of detail through a model abstraction methodology, and then choose the model that has the optimal abstraction degree for simulating the system by a given deadline. A decision supporting tool is added to OOPM to take these constraints into account and determine the optimal abstraction degree. Based on the determined abstraction degree, the optimal model is composed. The decision process is thereby removed from the modeling process, so modelers are relieved from considering time constraints that are not supposed to be part of modeling. The OOPM/RT structure is shown in Figure 1.1.

Our method has three phases:

1. Generating a set of models at different abstraction levels

2. Arranging the set of models under the abstraction relationship and assessing the quality/cost of each model

3. Executing model selection algorithms to find the optimal model for a given deadline

In the first phase, a set of methods is generated at different degrees of abstraction. Many studies have been performed on model abstraction techniques, especially in the modeling and simulation area [9, 10, 11, 12]. Based on this study of model abstraction techniques, we propose a systematic abstraction methodology to build multiple methods of the system at different degrees of abstraction. The second phase is to assess the expected quality/runtime of each model and to organize the set of models in a way that facilitates the selection algorithm for real-time simulation. In the third phase, we select the optimal model from the alternatives by deciding the optimal abstraction level for simulating the system within a given time. A more detailed model (low abstraction level) is selected when we have ample time, while a less detailed model (high abstraction level) is used when there is an imminent time constraint. Our method does not focus on proposing a new way to measure or predict model execution time for the second phase. Instead, we use an available method and do not cover model quality and time assessment in detail.
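The third-phase selection rule can be sketched concretely. Below is a minimal illustrative sketch in Python, assuming each abstraction level carries a predicted runtime and quality; the `Model` record and the flat list of levels are hypothetical simplifications, since the actual selection in later chapters operates over an abstraction tree using integer programming or search:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    runtime: float  # predicted execution time (seconds)
    quality: float  # predicted behavioral quality, higher is better

def select_model(levels, deadline):
    """Pick the highest-quality model whose predicted runtime meets the
    deadline; fall back to the fastest model if nothing fits."""
    feasible = [m for m in levels if m.runtime <= deadline]
    if feasible:
        return max(feasible, key=lambda m: m.quality)
    return min(levels, key=lambda m: m.runtime)

# Hypothetical abstraction levels with assessed runtime/quality.
levels = [
    Model("base (most detailed)", runtime=9.0, quality=1.00),
    Model("mid abstraction",      runtime=4.0, quality=0.80),
    Model("high abstraction",     runtime=1.0, quality=0.55),
]

print(select_model(levels, deadline=5.0).name)  # ample time: mid abstraction
print(select_model(levels, deadline=0.5).name)  # imminent deadline: fastest model
```

The sketch captures only the selection principle (more detail when time allows, less detail when the deadline is tight), not the composition of the selected model from the abstraction tree.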















[Figure 1.1: architecture diagram showing the Model Author, the HCI (Modeler and Scenario), the Library, the Back End (Structural and Behavioral Model Translators, Engine), and the Decision Supporting Tool (Duration/Quality Estimator, Abstraction Builder, Optimal Abstraction Selecter, Optimal Model Composer)]

Figure 1.1. OOPM/RT: OOPM is used for the logical architecture design activity, and the Decision Supporting Tool is integrated to handle the physical architecture design activity. The three components of OOPM (HCI, Library, and Back End) are shown outlined with dashed boxes; the parts within each component are shown outlined with solid boxes. The HCI (Human Computer Interface) comprises the Modeler and the Scenario; the Back End comprises the Translator and the Engine; the Library comprises the MMR (OOPM Model Repository) and the MOS (OOPM Object Store). Principal interactions are shown with arrows. A model author interacts with both parts of the HCI. The Decision Supporting Tool interacts with OOPM while remaining hidden from the model author.









One contribution of our research is that, with the ability to select an optimal model for a given deadline, we provide the simulation community with a way to handle real-time constraints. Another contribution is that, by generating a set of multiple methods through abstraction techniques and selecting the optimal abstraction degree to compose a model for real-time simulation, we not only meet the real-time constraints but also gain a perspective that allows modelers to view the system appropriately for a given time-constrained situation. We expect that the proposed method can provide better sources of multiple methods for the real-time scheduling community.

1.3 Guide for the Reader

In Chapter 2, we present a review of prior work related to our problem statement. The chapter begins with the terminology that will be used throughout this dissertation, and then reviews recent work in Real-Time Scheduling and Real-Time Artificial Intelligence (AI). Based on comparisons of the background research, we discuss the important issues to be considered for our problem statement and show how we resolve these issues in OOPM/RT.

In Chapter 3, we present our model generation methodology for performing the logical design activity for real-time systems. Two types of abstraction technique are used in the proposed model generation methodology: structural and behavioral abstraction. The abstraction process and techniques for each type are discussed. The performance results of the abstraction techniques are documented in the Appendix.

In Chapter 4, we discuss how to perform the physical design activity for real-time simulation. The basic idea is to determine the optimal abstraction model for a given deadline, cutting out less important structural information to save simulation time. We organize the model's methods in a way that facilitates the selection of the optimal abstraction level. Two selection approaches are presented: integer-programming-based selection and search-based selection. Analysis and experimental results are presented along with the detailed algorithm for each approach.

In Chapter 5 and Chapter 6, we present two examples to illustrate the OOPM/RT methodology: FULTON (a steamship model) and APPLESNAIL (a population model of apple snails). The entire process, from model generation to the selection of the optimal abstraction model, is presented.

In Chapter 7, we review the work presented in this dissertation, and then discuss our perspective on the significance and contributions of this work. Finally, we point to future research directions.














CHAPTER 2
BACKGROUND

Several areas of research relate to our problem statement. (1) Real-time scheduling determines a schedule that defines when to execute which tasks to meet a given deadline and the objective of the system. Systems are modeled by a set of tasks; if more tasks exist than the system can process in a given time, the decision about which tasks to ignore is typically based on task priority [13, 14]. (2) Real-time Artificial Intelligence studies how to control deliberation in real environments. Agents in real environments must be able to cease deliberating when action is demanded, and they must be able to use the time available for deliberation to execute the most profitable computations. In this chapter, we review several studies related to our problem statement and discuss common issues in modeling real-time systems. In Section 2.1, we first define some useful terms that will be used throughout this dissertation. Studies in real-time scheduling and real-time artificial intelligence are discussed in Section 2.2 and Section 2.3, respectively. Section 2.4 ends this chapter with brief explanations of how our approach addresses each of the issues raised by the background research.

2.1 Terminology

A system is a part of some potential reality in which we are concerned with space-time effects and causal relationships among the parts of the system. By identifying a system, we are necessarily putting bounds around nature and the machines that we build [15]. To model is to abstract from reality a description of a dynamic system. Modeling serves as a language for describing systems at some level of abstraction or, additionally, at multiple levels of abstraction.

A task is a granule of computation treated by the scheduler as a unit of work to be scheduled or allocated processor time. A simulation model is composed of static and dynamic methods [7], each of which is a granule of computation treated as a unit of execution. Therefore, a method corresponds to a task; if a model has only one method, the model itself corresponds to a task.

Modeling of real-time systems must take timeliness requirements into account. A strict definition of real-time is one in which the system guarantees that tasks will receive up to their worst-case runtime and resource requirements, which presumably means that tasks will produce the best possible (highest quality) results. A less strict definition of real-time is usually based on high-level goal achievement rather than worst-case requirements; one common definition is that the system will statistically (e.g., on average) achieve the required quality value by the required time, but no guarantee is made about any particular task [1, 2]. Throughout this dissertation, we follow the less strict definition of real-time. If a real-time system is safety-related or otherwise employs critical equipment, then every task must be completed before its deadline or a disaster might occur. Such systems are called hard real-time systems, and are judged to be correct only if every task is guaranteed to meet its deadline. Other systems (such as multimedia systems) have deadlines, but are judged to be in working order as long as they do not miss too many deadlines. These systems are called soft real-time systems; in a soft real-time system, a task is executed to completion even if it has missed its deadline. In this dissertation, we are interested only in hard real-time systems. Real-time simulation is an execution of a simulation model under hard real-time constraints.

Model quality, or accuracy, especially in real-time systems, is the ability to capture the system at the right level of detail and to achieve the simulation objective within a given amount of time. Failure to produce a result within the required time interval may cause severe damage to the system or its environment. Therefore, timeliness should be part of the criteria used to assess model accuracy. Model cost is defined as the amount of time which the model needs to complete its execution.

We define an optimal model as a model that executes by the given deadline and achieves the best tradeoff of model accuracy for time. The optimal model has the optimal abstraction level for simulating the system in a given amount of time. In this sense, the optimal model selection process is the procedure that finds the optimal abstraction level at which to simulate the base model (the most detailed model of the system) in a given amount of time. The optimal model selection process also corresponds to the process of finding an optimal scheduling strategy for given task sets, as tasks correspond to models. A detailed discussion of this correspondence is found in Chapter 4.

2.2 Real-Time Scheduling

Real-time scheduling determines a schedule that defines when to execute which task to meet a deadline and the objective of the simulation. Scheduling is a way of modeling the nonfunctional requirements imposed on real-time systems.

For periodic task sets, the task arrival times, execution times, and deadlines are very predictable. Assume that each task executes one of n jobs, and that a request to execute job i occurs once every T_i seconds. The previous execution of the job must be completed before the new job execution is requested; that is, the start time of the new task is the deadline of the old task. Since only one instance of a job is executing at a time, we will label the task executing job J_i as tau_i. A description of the task set reduces to a description of the periods and execution times of the n jobs:

    V = { J_i = (C_i, T_i) | 1 <= i <= n }

A schedule is a set A of execution intervals, described as:

    A = { (s_i, f_i, t_i) | i = 1, ..., l }








where s_i is the start time of the interval, f_i is the finish time of the interval, and t_i is the task executed during the interval. Typical approaches to determining a schedule A assume that task priorities and resource needs are completely known in advance and are unrelated to those of other tasks, so that a control component can schedule tasks based on their individual characteristics. If more tasks exist than the system can process, the decision about which tasks to ignore is simple and local, usually based only on task priority, as in the cases of rate-monotonic priority assignment and deadline-monotonic priority assignment [13, 14].
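For periodic task sets of this form, a classic sufficient schedulability test under rate-monotonic priorities is the Liu-Layland utilization bound: n tasks with execution times C_i and periods T_i are schedulable if the total utilization sum(C_i / T_i) is at most n(2^(1/n) - 1). A small sketch, with invented example task pairs:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) Liu-Layland test for rate-monotonic
    scheduling. `tasks` is a list of (C_i, T_i) pairs with deadlines
    assumed equal to periods."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Utilization 0.25 + 0.25 = 0.5 <= 2*(sqrt(2)-1) ~ 0.828: schedulable
print(rm_schedulable([(1, 4), (2, 8)]))   # True
# Utilization 0.5 + 0.5 = 1.0 exceeds the bound: the test cannot certify it
print(rm_schedulable([(2, 4), (4, 8)]))   # False
```

Because the test is only sufficient, a task set that fails the bound may still be schedulable; an exact answer requires response-time analysis.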

Research in real-time scheduling provides extensive sources of scheduling methodologies for handling timing constraints. Schedulability analysis methods [14, 13] provide a theoretical way to determine whether a given set of tasks can meet its deadlines before the tasks are actually exposed to real-time simulation, and many applications use schedulability analysis as part of their real-time architecture [16]. There are two problems with real-time scheduling approaches to modeling and simulating real-time systems:


1. Difficulty in assigning priorities to tasks: The resulting schedule of tasks does not reflect the real objective of the simulation when the selection is made based only on task priority. The typical priority scheme is to assign a higher priority to the task that has an imminent deadline. However, if the high-priority task does not contribute significantly to the overall objective of the system, the resulting schedule cannot properly handle the functional and nonfunctional requirements given to real-time systems. Finding a good priority scheme that meets not only time constraints but also simulation objectives is a difficult problem. Research on scheduling imprecise computations studies how to solve this problem. There are two task categories in imprecise computation: mandatory tasks and optional tasks. The system schedules and executes tasks to complete all mandatory tasks before their deadlines, but may leave less important tasks unfinished. The basic strategy for minimizing the bad effects of timing faults is to leave less important tasks unfinished if necessary. By trading off result quality against computation time, imprecise computation can consider the overall simulation objective as well as timing constraints [17]. Tasks should be properly classified as either mandatory or optional according to their degree of contribution to the simulation objective, as well as their timing urgency.
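The mandatory/optional strategy can be sketched as a simple greedy routine: guarantee the mandatory tasks, then spend any remaining time on the optional tasks that contribute most per unit of runtime. The sketch below is illustrative only (the task triples and the greedy ordering are our own simplification, not the scheduling algorithm of [17]):

```python
def imprecise_schedule(mandatory, optional, deadline):
    """Schedule all mandatory tasks, then greedily add optional tasks
    (best contribution per unit of runtime first) while the deadline
    allows. Tasks are (name, runtime, contribution) triples; the
    mandatory set alone is assumed to fit within the deadline."""
    used = sum(runtime for _, runtime, _ in mandatory)
    chosen = [name for name, _, _ in mandatory]
    for name, runtime, contrib in sorted(
            optional, key=lambda t: t[2] / t[1], reverse=True):
        if used + runtime <= deadline:
            used += runtime
            chosen.append(name)
    return chosen

# Hypothetical tasks: one mandatory state update, two optional refinements.
mandatory = [("update_state", 2.0, 1.0)]
optional  = [("refine_mesh", 3.0, 0.6), ("log_detail", 1.0, 0.5)]
print(imprecise_schedule(mandatory, optional, deadline=4.0))
# ['update_state', 'log_detail']
```

With a deadline of 4.0, only the cheaper optional task fits after the mandatory work; a looser deadline would admit both.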

2. Lack of task design methods: The main interest of real-time scheduling is to provide an efficient scheduling algorithm that handles the timing constraints given to each task, assuming that the tasks are properly modeled. However, the task modeling process itself should be studied together with scheduling algorithms to complete the logical and physical design activities discussed in Section 1.1. Tasks should be modeled to represent all functional requirements of the given real-time system, so that the scheduling process can focus on handling the nonfunctional requirements.
2.3 Real-Time Artificial Intelligence

As AI systems move into more complex domains, all problems become real-time because the agent will never have enough time to solve the decision problem exactly. D'Ambrosio described real-time AI as a type of problem-solving method: "given a time bound, dynamically construct and execute a problem solving procedure which will (probably) produce a reasonable answer within (approximately) the time available" [18]. Most AI systems use some form of approximation to reduce nondeterminism and make system performance more predictable for use in real-time situations.

At least two broadly different kinds of approximation algorithms have been used in real-time AI research [1, 2] to handle the functional and nonfunctional requirements inherent to real-time systems. They are:

1. Iterative refinement: an imprecise answer is generated quickly and refined through a number of iterations.

2. Multiple methods: a number of different algorithms are available for a task, each of which is capable of generating solutions. These algorithms emphasize different characteristics of the problem which might be applicable in different situations, and they make different tradeoffs of solution quality versus time.

In this section, we examine each of these two approximation approaches in some detail.

2.3.1 Iterative Refinement

An anytime algorithm is an algorithm whose output quality improves gradually over time, so that it has a reasonable decision ready whenever it is interrupted. It is expected that the quality of the answer will increase as the anytime algorithm is given increasing runtime, up to some maximum quality. Associated with each anytime algorithm is a performance profile, a function that maps the time given to an anytime algorithm to the quality of the result produced by that algorithm. Table 2.1 shows a simple anytime algorithm for solving the Traveling Salesman Problem (TSP). This algorithm quickly constructs an initial tour and registers that result (making it available should the algorithm be halted); it then repeatedly chooses two random edges and evaluates whether switching them results in a better tour, and if so, updates the registered solution to the new tour. Anytime algorithms have the advantage of always having an answer at hand, so they can respond quickly to changing environmental situations. They also provide maximum flexibility to control mechanisms by allowing any incremental amount of extra work to yield an incrementally improved result. These features make anytime algorithms particularly useful in many real-time applications; examples are found in real-time decision making and diagnosis systems [19, 20].









Table 2.1. An example of an anytime algorithm

Anytime-TSP(V, iter)
 1: Tour <- INITIAL-TOUR(V)
 2: cost <- COST(Tour)
 3: REGISTER-RESULT(Tour)
 4: for i <- 1 to iter
 5:   e1 <- RANDOM-EDGE(Tour)
 6:   e2 <- RANDOM-EDGE(Tour)
 7:   delta <- COST(Tour) - COST(SWITCH(Tour, e1, e2))
 8:   if delta > 0 then
 9:     Tour <- SWITCH(Tour, e1, e2)
10:     cost <- cost - delta
11:     REGISTER-RESULT(Tour)
12: SIGNAL(TERMINATION)
13: HALT
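A runnable Python rendering of the same idea, assuming Euclidean city coordinates and using segment reversal (a 2-opt style move) as the edge switch; `register_result` is a stand-in for whatever mechanism publishes the latest tour to the interrupting caller:

```python
import math
import random

def tour_cost(cities, tour):
    """Total length of the closed tour through the given city indices."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anytime_tsp(cities, iters, register_result, seed=0):
    """Anytime TSP: start from an arbitrary tour, then repeatedly try
    reversing a random segment, registering every improvement so an
    answer is always available on interruption."""
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    cost = tour_cost(cities, tour)
    register_result(tour, cost)            # initial answer, ready immediately
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        new_cost = tour_cost(cities, candidate)
        if new_cost < cost:                # keep only improvements
            tour, cost = candidate, new_cost
            register_result(tour, cost)
    return tour, cost

# Four cities at the corners of the unit square; optimal tour length is 4.0.
cities = [(0, 0), (0, 1), (1, 0), (1, 1)]
results = []
best_tour, best_cost = anytime_tsp(cities, 200, lambda t, c: results.append(c))
print(round(best_cost, 3))
```

The registered costs form a non-increasing sequence, which is exactly the monotone quality improvement a performance profile describes.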



Real-time search techniques are another example of the iterative refinement ap-

proach to handling physical requirements given to real-time systems. Examples of

this type are found in modern chess programs [21, 22]. Virtually all performance

chess programs in existence today use full-width, fixed-depth, alpha-beta minimax

search with node ordering, quiescence, and iterative-deepening for real-time problem

solving. They make very high quality move decisions under real-time constraints

by properly controlling the depth of search (or move) and having a good heuristic

function that guides the search (or move). RTA* [22] is an example of a real-time

search algorithm that effectively solves normal state-space search problems based on

A* algorithm. The basic idea is to interleave moving down what appears to be the

best path so far with refining the idea of what the best path is. The refining of the

best path is done using a simple search algorithm that searches to a fixed depth. The

fixed depth is chosen depending on the amount of time allowed for each move, which

is determined using a heuristic estimate of the number of moves required to get to a

goal state and the total amount of time allowed. The result is that the total search

time and the search time per move are tightly controlled. The quality of the result

depends significantly on the accuracy of the heuristics used to estimate the distance

from a goal state and the ability of limited depth search to recommend moves in the

direction of a goal state.
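The interleaving of fixed-depth lookahead with committing to moves can be sketched as follows. This is a simplified, RTA*-flavored agent (it omits the stored heuristic updates of the published algorithm); the integer state space, `moves`, and heuristic `h` used here are illustrative assumptions:

```python
def lookahead(state, goal, depth, moves, h):
    # Depth-limited search: estimated cost-to-goal from `state`.
    if state == goal:
        return 0
    if depth == 0:
        return h(state, goal)          # frontier: fall back on the heuristic
    return 1 + min(lookahead(s, goal, depth - 1, moves, h) for s in moves(state))

def rta_style_solve(start, goal, depth, moves, h, max_steps=1000):
    """Interleave fixed-depth lookahead with committing to the best move."""
    path = [start]
    state = start
    for _ in range(max_steps):
        if state == goal:
            return path
        # commit to the neighbor whose depth-limited estimate is lowest
        state = min(moves(state),
                    key=lambda s: lookahead(s, goal, depth - 1, moves, h))
        path.append(state)
    return path
```

With `moves = lambda x: [x - 1, x + 1]` and `h = lambda s, g: abs(s - g)`, the agent walks directly from 0 to the goal; a deeper `depth` costs more time per move but recommends moves based on a longer lookahead, mirroring the time/quality control described above.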

The key to these approaches is to represent the knowledge about the modeled

system in a problem solving method and make the single problem solving method that

achieves a better result as more time is given. Information about the objectives (goals)

of real-time simulation is encoded into search heuristics, and the optimal schedule for

a given time constraint is searched based on the heuristics. Therefore, unlike the

real-time scheduling approach, the resulting schedule from the iterative refinement

approaches provides a good framework for representing not only timing constraints,

but also the overall simulation objectives of the system. Our claims for iterative

refinement approaches are :


1. Iterative-refinement approaches rely on the existence of algorithms that pro-

duce incrementally improving solutions as they are given increasing amounts of

runtime. Clearly, such algorithms work well in some problem cases, but also

there are problems that will be difficult to solve in an incremental fashion [1, 2].

2. Iterative-refinement based approaches are made on the assumption that one

anytime algorithm (iterative refinement algorithm) is available that is expected

to work effectively in all environmental situations. However, when we model

complex systems, it is very difficult to develop a comprehensive multipurpose

anytime algorithm to cover all possible environmental situations. This task is

much more complicated when some situations conflict with others [23, 24, 1, 2].

2.3.2 Multiple Methods

An alternative to iterative-refinement approach is to have multiple methods to

model the system that make tradeoffs in cost versus quality, and which may have

different performance characteristics in different environment situations. The multiple

method approach has at least two potential advantages over an anytime algorithm

approach:


1. The multiple methods based approach does not rely on the existence of iterative

refinement algorithms.

2. Multiple methods do not just make quality/duration tradeoffs; they can be

entirely different approaches to solving the problem. These approaches can have

very different characteristics depending on particular environmental situations.

That is, the quality/duration tradeoffs made by multiple methods can be very

different in different environmental situations.


Garvey and Lesser proposed the Design-to-Time [2, 25] method. Design-to-time

assumes that we have multiple methods for the given tasks. After generating multiple

methods for the given tasks, Design-to-Time finds a solution to a problem that uses

all available resources to maximize solution quality within the available time. For

the sources of multiple methods, Design-to-time uses approximate processing. Ap-

proximate processing is an approach to real-time problem solving in situations where

sacrificing answer quality is acceptable and some combination of data, knowledge, and

control approximations are available. Several approximation techniques are studied in

complex signal interpretation tasks. Three approximation methods are proposed: 1)

approximate algorithms, 2) data approximations, and 3) knowledge approximations.

The usefulness of these approximations is found in the Distributed Vehicle Monitor-

ing Testbed (DVMT). An example of approximate knowledge from the DVMT is an

approximation that can replace the multiple steps involved in interpreting low level

sensor data. If vehicle level knowledge involves group level and signal level data,

the knowledge approximation is performed by skipping the intermediate level, which

is the group level. Data approximation can appear in the DVMT in the form of time

skipping and clustering. In time skipping, some cycles are skipped, instead of using

data from every sensor interpretation cycle. Algorithm approximation is related to

the use of algorithms where the form of solutions to at least some intermediate prob-

lems is shared. This allows some intermediate problem solving steps to be skipped

or rearranged to save time. After generating multiple methods based on approximation

techniques, this work uses a model of computational tasks known as TÆMS.

TÆMS models the problems as consisting of independent task groups that contain

possibly dependent tasks. The task/subtask relationship among tasks and within a

task group forms a directed acyclic graph and is used to calculate the quality of a

task (i.e., the quality of a task is a function of the qualities of its subtasks). Leaf nodes

of this graph are executable methods which represent actual computations that can

be performed by the system. Besides task/subtask relationships, tasks may also have

other interdependencies with other tasks in their task group (e.g., the execution of

one method enabling the execution of another, or the use of a rough approximation

by one method negatively affecting (hindering) the performance of a method that

uses its result). These interdependencies can have a quantitative effect on the quality

and duration of affected methods. The methodology is known as design-to-time be-

cause it advocates the use of all available time to generate the best solutions possible.

Scheduling algorithms are performed based upon the TÆMS structure. Briefly, the

algorithm recursively finds all methods for executing each task in the task structure,

pruning those methods that are superseded by other methods that generate greater

or equal quality in equal or less time. In the worst case, this algorithm is O(m!)

where m is the number of executable methods. If all methods are strictly ordered by

precedence constraints, then the complexity is reduced to 2m.
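The pruning rule described above, discarding any method superseded by another of greater-or-equal quality in equal-or-less time, can be sketched as follows (the `(name, quality, duration)` tuples are an illustrative representation, not the TÆMS data structures):

```python
def prune_dominated(methods):
    """Keep only methods not superseded in the (quality, duration) tradeoff.

    `methods` is a list of (name, quality, duration).  Method A supersedes B
    when A has greater-or-equal quality in less-or-equal time, and is strictly
    better on at least one of the two criteria.
    """
    kept = []
    for name, q, d in methods:
        if any(q2 >= q and d2 <= d and (q2 > q or d2 < d)
               for _, q2, d2 in methods):
            continue  # superseded by some other method: prune it
        kept.append((name, q, d))
    return kept
```

The strictness clause keeps a method from pruning itself (or an exact duplicate), so every distinct quality/duration tradeoff survives.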

The multiple methods approach has a clear framework to model/simulate real-time

systems:

1. Approximation deals with functional requirements given to the real-time system.

2. Scheduling algorithm handles nonfunctional requirements to simulate the mul-

tiple methods.

3. A model of computational tasks, such as TÆMS, bridges logical design activity

and physical design activity by organizing multiple methods in a way to facilitate

the scheduling process.

Though Design-to-time models functional aspects of real-time systems using dif-

ferent types of approximations, the multiple methods for a system only differs in

the degree of behavioral detail; therefore, the resulting multiple methods may not

provide useful information about viewing/analyzing the system in both its structural

and behavioral aspects. A different sampling rate cannot abstract away unnecessary

structural details to view the system under a given amount of time. Also, skipping

the intermediate levels cannot cut out the actual base methods, since an intermediate

level is just a collection of the executable methods. Sharing intermediate information

between methods does not cut out the shared structural information between multiple

methods. All these observations suggest that the selected methods may not provide

an optimal abstract model through which a system should be viewed under a time-critical

situation, in terms of both its structural and behavioral aspects.

2.4 Summary

Background research for the problem statement pointed to several issues, which

are basically summarized with two fundamental questions:

1. How to generate tasks that capture functional requirements given to real-time

systems

2. How to simulate the tasks to meet a deadline while satisfying the simulation

objective

The real-time scheduling group provides an extensive set of scheduling methodologies

based on a priority scheme. However, the generation of task sets for a given system

has not been well-addressed in the group. Also, the selection of tasks to determine

the schedule does not consider the overall objective of the system. The anytime algorithm

group finds optimal scheduling through an iterative refinement technique. Because

of their ability to produce results at any time, anytime algorithms have been used

to construct large scale real world applications. However, finding the multipurpose

iterative algorithm for all environmental situations is a difficult job, especially when

some situations may conflict with others. The multiple methods approach found in

Design-to-Time can be used where sacrificing solution quality can be tolerated, so that

the domains can be modeled by multiple solution methods (which make trade-offs in

solution quality and execution time). Separating the generation process of multiple

methods from the determination of the optimal scheduling process provides a flexible

structure to separately perform logical design activity and physical design activity.

However, the generated multiple methods only differ in their degree of behavioral

detail, and may not provide a perspective to view the structure of the system for

a given amount of time. Based on the issues learned from background research, we

believe the following requirements are useful in order to provide an efficient modeling

framework for real-time systems :

1. An efficient modeling methodology to generate multiple methods of the real-

time system

2. Separation of the scheduling process from the modeling process

3. Decision of the optimal scheduling based on the simulation objective

Our approach for the generation of multiple methods is to use an efficient model

abstraction technique. The model abstraction technique should consider how to cap-

ture the modeled system, both in structural and behavioral terms, in order to meet

the simulation objective. We use OOPM for this purpose. Multimodeling is the model

abstraction methodology that OOPM follows. In the multimodeling methodology, we

start from a simple model type, and refine it as the model author needs more fidelity.

Refinement of a model can be done with the same model type or different model

types. Multimodeling provides a way to combine heterogeneous model types under

one structure. This allows modelers to use different model types to describe the sys-

tem according to the characteristics of the component being modeled in a way that

reflects the objective of the modeling and simulation. The selection of the optimal

scheduling is done based on the multimodeled components. The selection process is

separated from the model generation process; in this way, modelers are kept out of

the selection process. The scheduling process decides an optimal abstraction level of

the constructed model. Less important structural information is abstracted away to

save simulation time. The best tradeoffs of model accuracy versus time are achieved

through the selection process. Therefore, the selected model can be a useful device

that reflects the right perspective to view the system for a given amount of time;

additionally, it achieves the best tradeoffs in model execution time versus accuracy.

In Chapter 3, we discuss a methodology for generating tasks as our way of logical

design activity. Chapter 4 corresponds to physical design activity. We present the

selection method for the optimal model and discuss how we solve the issues pointed

out through this chapter.


CHAPTER 3
MODEL GENERATION METHODOLOGY

This chapter discusses how we generate a set of models to capture functional

requirements inherent to real-time systems. Real world dynamic systems involve a

large number of variables and interconnections. Unnecessary details about the system

are often omitted in order to represent the system on digital computers. Abstraction

is a technique of suppressing details and dealing instead with the generalized, idealized

model of a system. Computational efficiency and representational economy are main

reasons for using abstract models in simulation [26, 27, 28], as well as in programming

languages [29, 30, 31]. Our model generation methodology uses abstraction techniques

to efficiently represent complex systems.

Our model abstraction methodology has two types of abstraction: 1) structural ab-

straction, and 2) behavioral abstraction. Structural abstraction is a process of organiz-

ing the system hierarchically, while behavioral abstraction focuses only on behavioral

equivalence without structural preservation. We explore both of these abstraction

types together when constructing systems. Structural abstraction is an iterative pro-

cedure in which a model is designed with simple model types first, and refined with

more complex model types later until the desired fidelity is achieved. Behavioral

abstraction is applied on the abstraction hierarchy that has been constructed from

the structural abstraction process. When we want to isolate an abstraction level,

we apply the behavioral abstraction method at that level, and generate a black-box

that approximates the behaviors of the rest of the hierarchy. By combining structural

and behavioral abstraction, each level of abstraction becomes independent from the

lower abstraction levels. Therefore, a level can be executed apart from the rest of

Table 3.1. Model generation methodology : we generate a set of models based on
abstraction methodology. Two types of abstraction are employed: structural and
behavioral abstraction. Behavioral abstraction is performed based on the abstraction
hierarchy constructed from the structural abstraction

Model Generation Methodology
Perform structural abstraction using the multimodeling methodology;
B = ∅;
While ( there is a need to save simulation execution time ) do
begin
    For the entire multimodeling structure
        Check if a model component, M, is within user interest;
        If M is irrelevant to user interest
            B = B ∪ M;
    end
    Apply behavioral abstraction to each M ∈ B
end



the hierarchy, unlike traditional hierarchical modeling methodologies [32, 15]. The

overall procedure is shown in Table 3.1.
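The loop of Table 3.1 can be sketched in Python; `in_user_interest`, `behavioral_abstraction`, and `needs_speedup` are hypothetical hooks standing in for the interest test, the black-box construction of Section 3.2, and the time-budget check:

```python
def generate_models(components, in_user_interest, behavioral_abstraction,
                    needs_speedup):
    """Sketch of Table 3.1: collect model components outside the user's
    interest and replace each with a behavioral (black-box) abstraction."""
    B = set()
    while needs_speedup():                   # need to save simulation time?
        for m in list(components):
            if not in_user_interest(m):      # M irrelevant to user interest
                B.add(m)                     # B = B ∪ M
        for m in B:
            behavioral_abstraction(m)        # replace M with a black box
    return B
```

The caller supplies `needs_speedup` as a predicate that eventually returns False (for example, once the estimated execution time fits the deadline), which terminates the loop.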

In Section 3.1, we first discuss the structural abstraction process. The process of

behavioral abstraction follows in Section 3.2.

3.1 Structural Abstraction

Models that are composed of other models in a network or graph are called mul-

timodels. Most real-world models are multimodels, since normal models are good at

portraying only a subset of the overall device or system behavior [27, 15]. Structural

abstraction is a process of generating multimodels using refinement and homomor-

phism. Refinement is the process of refining a model to more detailed models of the

same type (homogeneous refinement) or different types (heterogeneous refinement);

homomorphism is a mapping that preserves the behavior of the lower-level system

under the set mappings [15]. In homogeneous-structural abstraction, dynamical sys-

tems are abstracted with only one model type. Each model component is modeled

with one model type and refined with the same model type. Selection of specific


Figure 3.1. Taxonomy of structural abstraction methods

model type is important in homogeneous structural abstraction, and depends on the

information that one expects to receive from analysis. For example, one would not

choose to model low-level physical behavior with a Petri net, since a Petri net is an

appropriate model type for a particular sort of condition within a system, where there

is a contention for resources by discretely-defined moving entities. In heterogeneous-

structural abstraction, a system is abstracted with different model types by allowing

either homogeneous or heterogeneous model types together under one structure. Fig-

ure 3.1 summarizes our taxonomy on the structural abstraction.

We use multimodeling methodology for the structural abstraction phase. Multi-

modeling is a methodology [33, 34, 35, 36, 37] that provides a way of structuring a

heterogeneous and homogeneous set of model types together so that each type per-

forms its part, and the behavior is preserved as levels are mapped [38, 28, 39]. By

using multimodeling as a method for the structural abstraction, we cover all categories

of the taxonomy in Figure 3.1.

Modelers start the multimodeling process by applying simple types first and then

refining them as more fidelity is needed. Therefore, two activities are performed in

structural abstraction process: 1) selection of the model type, and 2) refinement.


Selection of the model type

The decision about the proper model type is made by the following heuristics :


1. If the system has discrete states or events, specify them using a declara-

tive model.

2. If there are phases of a process, use a declarative model to model phase

transitions. Phase transitions can be temporal or spatial if the spatial

regions are irregular.

3. If the problem is given in terms of distinct physical objects which are

connected in a directed order, use a functional model.

4. If the problem involves a material flow throughout the system, use a func-

tional model.

5. If the problem can be viewed with balance (e.g., laws of nature) and in-

variance, use a constraint model.

6. If the problem involves a set of premises and consequences, use a rule-based

model.

Examples of each model type are summarized in Table 3.2. Once the model type

is determined, the model is refined by homogeneous or heterogeneous models based

on the model type.

Refinement :

Refinements are performed to

1. state, event, and transition in declarative modeling

2. blocks in FBM and level, auxiliary, rate, source, constant, flow arc,

cause-and-effect arc, and sink in SD [15].

3. coefficients and non-state variables in constraint modeling

4. premises and consequences in rule-based modeling

The refinement step occurs iteratively until the desired fidelity is achieved. Multimodeling

methodology provides a mechanism by which each model component is connected

Figure 3.2. Structural abstraction : Multimodeling tree structure for model refine-
ment. Selective refinements achieve required fidelity. Extensibility facilitates model
development. The polygons in the Figure depict the heterogeneous nature of multi-
modeling: each type of polygon represents one type of dynamic model. Bi represents
FBM, Fi represents FSM, Si represents SD, Ei represents EQN, and Ri represents
RBM

through the refinement relationship. A model constructed from the structural ab-

straction process becomes the base model of the system. The base model simulates

the system at the highest detail. Figure 3.2 shows an example of the abstraction

hierarchy that is constructed from structural abstraction process. Blocks in FBM

model B1 are refined into different model types, S2 (SD), E2 (EQN), and F2 (FSA).

States and premises in F2 are further refined into different model types.

By modeling a system with different model types together, we provide a way to

capture the system with different perspectives. The selection of the model type is

made based on the characteristic of each component being modeled; different kinds

of information, for instance, state transition information along with functional direc-

tionality information, are organized hierarchically to form a base model. Selective

refinement provides a way to represent simulation objective. If a system component

has great importance for the simulation objective, refinement steps are iteratively

applied to it until the desired fidelity is achieved.

Table 3.2. Structural abstraction techniques

Model type Techniques
Declarative Modeling Finite State Machine (FSM)
Finite Event Machine (FEM)
Functional Modeling Functional Block Model (FBM)
System Dynamics (SD)
Constraint Modeling Algebraic equations
Ordinary/Partial Differential Equations
Rule-Based Modeling Knowledge-Based Simulation (KBS)


While the multimodel approach is sound for well-structured models defined in

terms of state space functions and set-theoretic components, the system components

at each level are dependent on the next-lowest level, due to the hierarchical

structure. For example, in Figure 3.2, suppose a modeler wants to look at the model

at level F2. The information under F2 should be omitted to simulate the model.

However, the execution needs to know R3, F3, and S3, since that is where part

of the execution information of F2 is located. This implies that we are unable to run

each level independently. It is possible to obtain output for any abstraction level but,

nevertheless, the system model must be executed at the lowest levels of the hierarchy,

since that is where we find the actual functional semantics associated with the model.

A new definition and methodology is needed to better handle abstraction of systems

and components. This is where the behavioral abstraction approaches are employed.

By incorporating behavioral abstraction approaches into multimodeling methodology,

the multimodeling methodology allows each level to be understood and executed in-

dependently of the others. In this way, discarding all the abstractions below any given

level will still result in a complete behavioral description of a system [9, 10, 11, 40].

3.2 Behavioral Abstraction

Behavioral abstraction is where a system is abstracted by its behavior. We replace

a system component with a generic black-box that approximates the behavior of the

system without structural preservation. Behavior is defined as a set of input-output

data pairs.

We have two approaches for specifying system behavior:

Static approach : we take a system and capture only the steady state output

value instead of a complete output trajectory. The input value is defined to be

the integral of time value over the simulation trajectory.

Dynamic approach : we need to associate time-dependent input and output

trajectories.

Though static and dynamic approaches describe different allowable behaviors of

the same phenomenon, abstraction techniques for the dynamic approach can also be

applied to the static approach. Therefore, we will focus on dynamic behavioral abstraction to

illustrate the behavioral abstraction process and techniques.

3.2.1 Behavioral Abstraction Process

The process of our behavioral abstraction is equivalent to the meta modeling process

shown in Figure 3.3. Just as real-world systems are represented on a computer

through the modeling process, a model of the simulation model is constructed

through the meta modeling process. In general, metamodels are used either to perform

sensitivity analysis, to optimize a system, to identify important factors, or to gain

understanding about the relation between the inputs and outputs of a system [41, 42].

Our objective is to use metamodels to speed up simulations by making each level of

the abstraction hierarchy able to be executed independently.

The behavioral abstraction process takes three steps :


Step 1. Obtain Data

We denote the output of a simulation model at time t by y(t) and the input

by u(t). The data, defining system behavior, are assumed to be collected in


Figure 3.3. Behavioral abstraction process : the behavioral abstraction process corresponds
to the meta modeling process, in the sense that a behavioral abstraction model is a
model of the simulation model

discrete time. At time t − 1 we have the data set

Z^{t-1} = {y(1), u(1), ..., y(t−1), u(t−1)}    (3.1)


Step 2. Fit the metamodel

A meta model of a dynamical system can be seen as a mapping from past data
Z^{t-1} to the next output y(t):

ŷ(t) = g(Z^{t-1})    (3.2)

The "hat" on y emphasizes that the assigned value is a prediction rather than

a measured, "correct" value for y(t). The problem of dynamic behavioral abstraction

is to find a mapping g that gives a good prediction in equation 3.2 using

the information in a data record Z^{t-1}, as in the case of meta modeling. Depending

on the methodology used, different types of behavioral abstraction

models can be generated: 1) Analytical meta models, 2) Rule-Based meta

models, and 3) Neural Network Based meta models. Table 3.3 summarizes

respective advantages of each metamodeling technique. Based on Table 3.3, a

proper type of behavioral abstraction technique is selected for a system compo-

nent being examined.

Step 3. Assess the validity of the model

A behavioral abstraction model, ĝ, is valid when the behavior of ĝ approximates

that of the corresponding structural model, S, in some sense, to within

a desired degree of precision. In other words, given

1. a structural model S

2. a class of input { u(t) } = U

3. a behavioral model ĝ

4. a criterion function F_c(ĝ, S), which is a measure of the goodness of fit

between y_ĝ(t) and y_S(t), where y is an output vector

5. an error criterion ε

Then, ĝ is valid if

F_c(ĝ, S) ≤ ε    (3.3)

A possible choice for F_c is the SSE (Sum of Squared Errors) between the outputs y_ĝ

and y_S, i.e.,

F_c(ĝ, S) = Σ_t (y_ĝ(t) − y_S(t))² ≤ ε    (3.4)

or a suitable norm, i.e.,

F_c(ĝ, S) = || y_ĝ(t) − y_S(t) || ≤ ε    (3.5)
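The SSE form of the criterion function amounts to a pointwise comparison of the two output trajectories. A minimal sketch, with `y_hat` the metamodel outputs and `y_s` the structural-model outputs sampled at the same times:

```python
def sse(y_hat, y_s):
    # Sum of squared errors between metamodel and structural-model outputs.
    return sum((a - b) ** 2 for a, b in zip(y_hat, y_s))

def is_valid(y_hat, y_s, epsilon):
    """Accept the behavioral model when F_c = SSE is within the error
    criterion epsilon (equations 3.3 and 3.4)."""
    return sse(y_hat, y_s) <= epsilon
```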


In the next section, we illustrate available techniques to generate ĝ.

Table 3.3. Advantages of behavioral abstraction models

Model Advantages
Analytic Models Only very little memory is used, not time-consuming
A clear understanding of and insight into the effects
of factors and response
Independent structure
Rule-Based Models Easily understandable
Represent modular knowledge
Modularity of rules facilitates future updating
Self-explanation about the effects of factors and response
Neural Network Address nonlinearity
Based Models Reduce time and memory
Updating weights can integrate certain future changes of
the system


3.2.2 Behavioral Abstraction Techniques


We have studied three techniques for the behavioral abstraction process: 1) System

Identification, 2) Neural Network, and 3) Wavelets. In this section, we discuss

each technique in detail.

System Identification

System identification [43, 44] is the theory and art of building a mathematical (analytical)

model of a dynamic system from observed data.

For a dynamic system with the input at time t denoted by φ(t) and the output

at time t denoted by y(t), the data will be a finite collection of observations:

Z^N = {[y(t), φ(t)]; t = 1, ..., N}    (3.6)


System identification infers some patterns from the "training set" in equation 3.6

and guesses about y(N+1) based on φ(N+1). The mathematical modeling approach

is to construct a function ĝ_N(t, φ(t)) based on the training set Z^N, and to use this

function for pairing y(t) to new φ(t):

ŷ(t) = ĝ_N(t, φ(t))    (3.7)

The function ĝ_N is thus the mathematical model of how φ(t) and y(t) relate. The
observed pattern in equation 3.6 has been condensed into the mapping in equation 3.7.
Two model types are available for ĝ_N: 1) parametric models, and 2) nonparametric
models. Parametric models are described (parameterized) in terms of a finite number
of parameters. These parameters are denoted by θ. The family of candidate model
functions is called a model structure, and we write the functions as

g(t, θ, φ(t))    (3.8)

The value y(t) is thus matched against the "candidate" g(t, θ, φ(t)):

y(t) ≈ g(t, θ, φ(t))    (3.9)

The search for a good model function is then carried out in terms of the parameters
θ, and the chosen value θ̂_N gives us

ĝ_N(t, φ(t)) = g(t, θ̂_N, φ(t))    (3.10)

Commonly used parametric models are ARX (AutoRegressive with eXogenous input),
ARMAX (AutoRegressive Moving Average with eXogenous input), OE (Output Error)
and BJ (Box-Jenkins) [43, 45, 46]. For example, a first order ARMA model of {y(t)}

y(t) + a y(t−1) = e(t) + c e(t−1)    (3.11)


is obtained for

θ = (a, c)

φ(t) = (y(0), y(1), ..., y(t−1))    (3.12)

g(t, θ, φ(t)) = Σ_{k=1}^{t−1} (c − a)(−c)^{t−k−1} y(k)

Other parametric model examples are found in the Appendix with performance results

in actual applications.
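As a concrete instance of the parametric route, a first-order ARX model y(t) = −a·y(t−1) + b·u(t−1) + e(t) can be fit by least squares over the training set Z^N. This sketch solves the 2×2 normal equations directly in plain Python; it is an illustration of the approach, not code from the dissertation or the cited toolboxes:

```python
def fit_arx1(y, u):
    """Least-squares fit of y(t) = -a*y(t-1) + b*u(t-1); returns (a, b).

    Regressors phi(t) = (-y(t-1), u(t-1)); targets y(t).  The 2x2 normal
    equations are solved by Cramer's rule.
    """
    phi = [(-y[t - 1], u[t - 1]) for t in range(1, len(y))]
    tgt = y[1:]
    s11 = sum(p[0] * p[0] for p in phi)
    s12 = sum(p[0] * p[1] for p in phi)
    s22 = sum(p[1] * p[1] for p in phi)
    r1 = sum(p[0] * t for p, t in zip(phi, tgt))
    r2 = sum(p[1] * t for p, t in zip(phi, tgt))
    det = s11 * s22 - s12 * s12          # assumed nonzero (exciting input)
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b
```

On noiseless data generated by a known (a, b), the procedure recovers the parameters exactly, which is the simplest sanity check of a parameter estimation routine.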

If a system is assumed to belong to a very broad class of systems that cannot be

parameterized by a finite number of parameters, we model it with a non-parametric

model structure. A non-parametric model can be thought of as the limit of increasing

model structures [44], which are parameterized by more and more parameters. Mathematically,

this means that we work with the model structure

∪_{d=1}^{∞} g_d(t, θ_d, φ(t))    (3.13)

where the vector θ_d contains d parameters. Typically the functions g_d have the character
of function expansions of one or another type. If φ(t) is scalar, we can picture g_d
as any function approximation scheme over the real axis, like polynomial expansion
to degree d, piecewise constant functions over R divided into d intervals, and so on.

After a model structure is determined, the parameters are tuned, so that behavior

predicted by the model coincides with measurements from the real system. Param-

eter estimation procedure provides a search through parameter space effectively to

achieve a close-to optimal mapping between the actual values of the system and the

approximate abstract system. Comprehensive discussions on the models and parame-

ter estimation procedures are found in Refs. [44, 47, 43]. Appendix shows performance

results of system identification methods in actual applications.


Neural Networks

Neural networks have been established as a general approximation tool for fit-

ting models from input/output data [48, 49, 50, 51]. Backpropagation, recurrent

and temporal neural networks have been shown to be applicable to identify general

systems [52, 9, 10, 11, 40, 53].

Neural network topology is shown in Figure 3.4. A network is composed of non-

linear computational elements called cells, which are organized in layers. There are

three kinds of layers: the input layer, the hidden layer(s), and the output layer. The

neural network's architecture is often specified using notation of the form L1-L2-...-Lf,

where L1 denotes the number of cells in the input layer, and L2, ..., Lf-1 denote the

number of cells in each of the hidden layers. Lf denotes the number of cells in the

output layer. Each cell of a layer is connected to each cell of any previous and follow-

ing layers by links, which represent the transmission of information between cells. A

set of weights, associated with the links, characterizes the state of the network. We

denote the weight of the link between cell j and cell i by wij. The weighted sum of the

output S1,S2,...,S, of the n cells of the previous layer, Ej wij Sj, is calculated first.

The output of the layer, Si' is then determined by a nonlinear function, such as the

sigmoid function, frequently used in practice. Figure 3.4 shows the output calculation

process. The parameter θ is called a threshold. The neural network's outputs are

the outputs of the cells of the output layer. A neural network learns by updating its

weights according to a learning rule used to train it. For example, Backpropagation

networks are based on the generalized delta rule. This algorithm provides a method

of updating the weights so as to minimize the errors, that is, the differences between

the desired output and the output values computed by the neural network. Weights

are updated during the learning stage according to examples that are presented to

the network. For example, the neural network computes from the p inputs (which

are presented as input values to the input layer) the n predicted outputs (i.e. the








Figure 3.4. Neural network topology: A neural network is composed of cells, which
are nonlinear computational elements organized in layers. The cell level structure is
shown in (b)

values taken by the cells of the output layer). The differences between the predicted

outputs (computed by the neural network) and the desired outputs (given by the ex-

ample) are computed and back-propagated (the weights are modified according to error

gradient computations) [42, 54].
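The output calculation just described can be sketched in a few lines; the function names and the particular sigmoid are our own illustration, not part of OOPM/RT:

```python
import math

def cell_output(inputs, weights, theta):
    # Weighted sum of the previous layer's outputs, sum_j w_ij * S_j,
    # shifted by the threshold theta and passed through a sigmoid.
    s = sum(w_ij * s_j for w_ij, s_j in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-(s - theta)))

def layer_output(inputs, weight_rows, thetas):
    # One row of weights and one threshold per cell in the layer.
    return [cell_output(inputs, w, th) for w, th in zip(weight_rows, thetas)]
```

A full forward pass simply chains layer_output calls from the input layer to the output layer.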

If the neural network learns from the examples generated by a structural abstrac-

tion model, the neural network becomes a behavioral abstraction model. The powerful

learning capabilities of neural networks render them useful for predicting the outputs of

a simulation model given its inputs. The ability of certain neural networks with a

sufficient number of hidden cells to approximate nonlinear functions (i.e., the simula-

tion outputs in our case) allows them to compete very well with statistical techniques

in terms of accuracy [42]. To be used as a behavioral abstraction model, a neural

network must be designed and trained in a proper manner. Its architecture has to be

designed accordingly, so as to establish a mapping between 1) the input layer and the

vector of the p input variables, and 2) the output layer and the vector of the n output

variables. Therefore, the neural network architecture must be of the following form:



















Figure 3.5. The wavelet network. Dashed arrows indicate output connections to other
wavelons

p-L2-...-L(f-1)-n, where L2, ..., L(f-1) denote the number of cells in the hidden lay-

ers. In order to be capable of predicting output values from input values, in the same

way as the simulation model, the neural network must be trained appropriately [54].

The performance results of Backpropagation neural network, Adaptive Linear Ele-

ment neural network (ADALINE), and Gamma temporal network are shown in the

Appendix.

Wavelets

Wavelet decomposition achieves the same quality of approximation with a network

of reduced size by replacing the neurons by "wavelons", i.e. computing units obtained

by cascading an affine transform and multidimensional wavelets [55]. For a wavelet

function ψ : R^d → R, the wavelet network is written as follows

f_n(x) = Σ_{i=1}^{n} u_i ψ(a_i * (x - t_i))

where u_i ∈ R, a_i ∈ R^d, t_i ∈ R^d, and "*" means the component-wise product of two

vectors. The structure of the wavelet network is depicted in Figure 3.5 [55, 56].
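A sketch of evaluating f_n(x); the Gaussian bell stands in for ψ here, and the function names are our own:

```python
import math

def gaussian_bell(z):
    # Radial Gaussian bell used as the mother function psi : R^d -> R.
    return math.exp(-0.5 * sum(v * v for v in z))

def wavelet_network(x, u, a, t, psi=gaussian_bell):
    # f_n(x) = sum_i u_i * psi(a_i * (x - t_i)), "*" taken component-wise.
    total = 0.0
    for u_i, a_i, t_i in zip(u, a, t):
        z = [a_ij * (x_j - t_ij) for a_ij, x_j, t_ij in zip(a_i, x, t_i)]
        total += u_i * psi(z)
    return total
```

Each wavelon contributes most near its translation t_i, at a scale set by its dilation a_i.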

The parameterized function family in the system identification method can be thought of

as a function expansion

g(φ, θ) = Σ_k θ(k) gk(φ)     (3.14)









where gk are the basis functions and the coefficients θ(k) are the coordinates of g

in the chosen basis. Wavelet networks and wavelet transforms are used for the basis

functions, gk. Via dilation parameters, wavelets can work with different scales simulta-

neously to pick up both local and non-local variations. With appropriate translations

and dilations of a single suitably chosen function ψ (the mother wavelet), we can

make the equation 3.14 orthonormal [44]. The choice of the Gaussian bell function

as the basic function without any orthogonalization is found in wavelet networks [55].

Parameter θ is estimated by minimizing a criterion of fit, for instance, the sum of squared

error or norm. We use Zhang's wavelet network to construct estimators with non-

orthogonal wavelets. Performance results of the wavelet network for the behavioral

abstraction technique are shown in the Appendix.

Discussions

The performance of three abstraction techniques is shown in the Appendix. Through-

out this chapter, we have shown behavioral abstraction techniques available in differ-

ent groups. As we stated, the behavioral abstraction process is the meta modeling

process; therefore, behavioral abstraction only takes place after the structural abstrac-

tion and the input/output data sets are collected enough to learn the relationship.

By combining structural abstraction and behavioral abstraction, we have the advan-

tages brought by both types; in addition, we compensate for the disadvantages of one

abstraction technique with another. The advantages of the behavioral abstraction process

are [42] :


1. A behavioral abstraction model generally needs less time than the corresponding

simulation model to provide the results. In a structural abstraction model, the

time necessary to obtain the results typically depends upon such criteria as

the depth of the abstraction hierarchy and the complexity associated with each

level of the abstraction hierarchy. In a behavioral abstraction method, the









computing time needed is constant; it only depends on its architecture, for

instance, the number of layers and number of cells in each layer in the case

of backpropagation neural network. The backpropagation model, for instance,

does not depict the dynamic behavior of the system, but only transfers the

inputs into outputs using simple arithmetic operations, so that the resulting

computing time is generally much lower. This advantage is directly used for

modeling and simulating real-time systems. When one structural abstraction

model is much simpler than the behavioral abstraction model, the amount of

speedup might be negligible or even negative. However, the more components

of the structural abstraction model we abstract, the more speedup we generally

get. Therefore, the speedup advantage of behavioral abstraction tends to

increase as the modeled system becomes more complex.

2. When some characteristics of the system can change over time, the simulation

model should be modified in order to remain valid. Due to their prediction

capability and learning capability, the behavioral abstraction model is capable

of updating its weights according to new examples coming directly from the

system, so that it can integrate certain changes in the system without changing

the model structure. However, if more inputs or outputs had to be taken into account,

the behavioral abstraction structure would also have to be modified.


Several inconveniences of using behavioral abstraction models have been pointed out.


1. The development of behavioral abstraction models may be time consuming. De-

pending on the case (number of inputs and complexity of the system behavior)

and the precision needed, numerous simulations may be necessary to build the

training set. Moreover the selection of the neural network parameters and the

learning phase, for instance, are also likely to be time consuming.









2. Precision is lost compared to the results of the simulation model. The preci-

sion that can be obtained by a neural network depends on such criteria as the

"regularity" of the behavior of the system under study, and the variance of the

outputs.

3. Behavioral abstraction techniques are used in order to gain insight into the

relationships between inputs and outputs of the system under study. In a neural

network behavioral abstraction model, for instance, the knowledge about how

the inputs are transformed into outputs is represented through the weights.

Hence, the analysis of these weights may provide useful insight about the system

behavior. However, the information contained in the neural network weights is

not explicit.


The first inconvenience of the behavioral abstraction can be overcome by com-

bining it with structural abstraction. We try to minimize the unit to which the behavioral

abstraction techniques need to be applied. The structural abstraction process orga-

nizes the system into a set of methods and defines the partial orderings among the

methods. Therefore, a method is the basic unit to which behavioral abstraction tech-

niques are applied. The number of input parameters and output parameters for

one method are fairly small; therefore, the development time of behavioral abstrac-

tion models may be reduced. The second inconvenience can be overcome from our

assumptions in Chapter 2. Our objective is to achieve the high level goal of the sys-

tem. Satisficing solutions are acceptable as long as the high level goal of the system is

achieved. Also, by applying behavioral abstraction techniques to a certain component

of the structural abstraction model, we minimize the propagation of the precision loss.

For example, if a method Mi of class Cj is behaviorally abstracted, the precision loss

from Mi is localized only to Cj. Therefore, the precision loss from Mi is minimized

when we look at the model as a whole.








In the next section, we show the implementation of the proposed abstraction

methodology in OOPM/RT.

3.3 Abstraction Mechanism of OOPM/RT

OOPM/RT is an implementation of the proposed abstraction methodology. Mod-

elers start from the structural abstraction phase and then apply behavioral abstrac-

tion as necessary. OOPM/RT supports 6 model types for the structural abstraction

phase : FBM, FSM, RBM, SD, EQM and CODE. CODE is used where a modeler

cannot characterize the system with a specific model type. Modelers can express the

knowledge about the system with C++ syntax in the CODE method. Each model type

has its own editor to help a modeler easily represent his/her mental model on the

computer, as shown in Figure 3.6. More detailed discussions on HCI components for

each of the model types are found in Ref. [8].

Modelers load the method name to be used for the behavioral abstraction. The se-

lected method will approximate all the structural methods inside with one behavioral

method. Therefore, the behavioral abstraction method will not go to the bottom level

to get the functional semantics associated with the level. The input and output pa-

rameters of the selected method are obtained through the MMR component of OOPM [7]

and collected for the first step of behavioral abstraction process.

The second step of behavioral abstraction process is to fit the model to the col-

lected data. Three model types are supported in OOPM/RT: (1) ADALINE (Adap-

tive Linear Element) neural network, (2) Backpropagation neural network, and (3)

Box-Jenkins ARMA model.

The ADALINE-based behavioral abstraction method is created as shown in Figure 3.7.

The adaptive linear element was developed by Widrow and Hoff. Their neural net-

work model differs from the perceptron in that its neurons have a linear transfer

function. This allows outputs to take on any value, as opposed to just the ones

and zeros of the perceptron. It also enables the Widrow-Hoff learning rule, also


















a) FBM editor    b) FSM editor    c) SD editor    d) RBM editor

Figure 3.6. Modeling editors for OOPM/RT structural abstraction process








known as the Least Mean Square (LMS) rule, to adjust weights and biases according

to the magnitude of errors, not just their presence [43]. A set of input vectors is

used to train a network (by adjusting its weights and biases) to produce the cor-

rect target vector upon presentation of any of input vectors. Our ADALINE has

delay parameter to compose the temporal input vectors. If a modeler specifies n

for the delay parameter, (I(t-(n-1)), I(t-(n-2)), ..., I(t-1), I(t)) is composed for the input set and

(O(t-(n-1)), O(t-(n-2)), ..., O(t-1), O(t)) is composed for the output set. Based on the com-
posed input/output signals, ADALINE's learning process is started. A learning rate

is determined by a modeler.

Application of the Widrow-Hoff rule may not result in a perfect network, but

the sum of the squared errors will be minimized, provided that the learning rate is

sufficiently small. Changes in the weights and biases are proportional to the weight's

effect on the sum of the squared error of the network. This is an example of a gradient

descent procedure. ADALINE networks can be trained to associate input vectors with

a desired output and to approximate any reasonable function linearly [43, 54].
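A minimal sketch of this LMS training loop; the sample format, learning rate, and epoch count are our assumptions, not OOPM/RT's interface:

```python
def adaline_train(samples, learning_rate=0.1, epochs=200):
    # samples: list of (input_vector, target) pairs.
    # Linear transfer function, so weight changes are proportional to the
    # magnitude of the error (Widrow-Hoff/LMS rule), not just its presence.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear output
            err = target - y
            w = [wi + learning_rate * err * xi for wi, xi in zip(w, x)]
            b += learning_rate * err
    return w, b
```

On a realizable linear target such as y = 2x, the weights converge close to the true values provided the learning rate is sufficiently small.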

The backpropagation-based behavioral abstraction method is generated through

a user interface, as shown in Figure 3.8. A training file is composed of input signals

and target signals, while a test file is composed of only input signals to test the gen-

erated backpropagation neural network. Weight vectors are initialized randomly or

with a pre-existing data file. Modelers specify the neural network structure: 1) the

number of layers, and 2) the number of nodes in each layer. A momentum parameter

and a noise factor can be specified for the learning process. The backpropagation

learning process continues until either the learning results reach the specified tol-

erance or the neural network has been trained for the specified number of learning

cycles. Backpropagation was created by generalizing the Widrow-Hoff learning rule

to multiple layer networks and nonlinear differentiable transfer functions. Networks









Figure 3.7. User interface for ADALINE behavioral abstraction process


with biases, at least one sigmoid neuron layer, and a linear output neuron layer are

capable of approximating any reasonable function [57].

The Box-Jenkins ARMA model is implemented for the behavioral abstraction process

as shown in Figure 3.8. Input signal files and output signal files are specified to start

the learning process. Each input/output signal has a minimum lag and a maximum

lag, each of which specifies how many past signals will be used for the prediction. If

the lag of an input, I, is 3, then I(t) is determined by I(t-3), I(t-2), I(t-1), and the

associated weights are generated through the Box-Jenkins learning process.
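Composing the lagged regressors can be sketched as follows; the helper name and row format are ours:

```python
def lagged_regressors(signal, min_lag, max_lag):
    # For each time t with enough history, pair the past values
    # [x(t-max_lag), ..., x(t-min_lag)] with the value x(t) to predict.
    rows = []
    for t in range(max_lag, len(signal)):
        past = [signal[t - k] for k in range(max_lag, min_lag - 1, -1)]
        rows.append((past, signal[t]))
    return rows
```

With min_lag 1 and max_lag 3, each x(t) is predicted from x(t-3), x(t-2), x(t-1), matching the example above.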

Figure 3.7 shows the GUI for the behavioral abstraction process. The learning

result is plotted in a plot window, while the text outputs are shown in the log window.

Based on the final weight vectors, a C++ based code is automatically generated











(a) Backpropagation model    (b) Box-Jenkins model

Figure 3.8. User interface for the behavioral abstraction process


from the behavioral abstraction translator discussed in Chapter 1. The behavioral

abstraction method takes the following structure :


MethodReturnType Classname::Methodname (Input/Output parameters)
{
    // 1. Declaration of local variables
    // 2. Assignment of the weight vectors that have been
    //    learned from the behavioral abstraction process
    // 3. Composition of input signals based on a delay or a lag
    //    parameter
    // 4. Determination of output parameters based on the
    //    output calculation algorithms of the behavioral
    //    abstraction techniques
}


The output of the simulation method is compared to the trained output from

behavioral abstraction method. The sum of the squared error or RMS, which is a









square root of the mean of the squared errors, is presented after the learning process.

The error measurements are listed in the log window to assess the validity of the

generated behavioral abstraction method.
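For reference, the two error measures can be computed as follows (a sketch with our own function names):

```python
import math

def sse(actual, predicted):
    # Sum of the squared errors between desired and trained outputs.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted))

def rms(actual, predicted):
    # Square root of the mean of the squared errors.
    return math.sqrt(sse(actual, predicted) / len(actual))
```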

3.4 Summary

As we model more complex systems, abstraction is an essential mechanism to

economically represent the system by omitting unnecessary details based on a given

simulation objective. The proposed abstraction methodology is constructed by model

engineering perspective: when a system is first being developed, one should use struc-

tural abstraction to organize the whole system hierarchically with simple model types

first, and then graduate to more complex model types. Below the structural abstrac-

tion, each component is a black-box with no detailed internal structure. Behavioral

abstraction is used to represent the black-box by approximating the behavior of the

system components. By combining structural and behavioral abstraction, each level

of abstraction becomes independent from the rest of the hierarchy, so a level can be

executed alone, unlike the traditional hierarchical modeling methodologies [39, 58].

We discussed several abstraction techniques for structural and behavioral abstrac-

tion process. The performance of each technique is shown in the Appendix. We

showed how to overcome the disadvantages of structural abstraction by behavioral

abstraction, and how to cope with the inconveniences of behavioral abstraction by

using structural abstraction. OOPM/RT is an implementation of the proposed ab-

straction methodology. Modelers capture a system through the GUI in the structural

abstraction process. Learning parameters for the behavioral abstraction are specified

on the GUI component. Structural abstraction translator accepts the pictures which

modelers draw in the structural abstraction process and automatically translates them

into C++ procedures. Behavioral abstraction translator receives the learning results

from the behavioral abstraction process. Based on the learned weights and bias,









behavioral abstraction translator automatically constructs C++ procedure for the

behavioral abstraction methods.

Multimodels constructed in OOPM/RT can simulate the system at any abstraction

level. The resulting model structure provides a basis where the timing requirements

inherent to real-time systems are naturally resolved. The basic idea is to control the

level of abstraction. When we have more time to simulate the model, we use a low

abstraction level for the constructed model. When we have little time to simulate

the model, we use a high abstraction level to deliver the simulation results by a given

deadline while sacrificing the accuracy of the model. By taking a high abstraction

level, we lose structural information to save simulation time.

In the next chapter, we present a methodology to control the abstraction degree in

a way to minimize the accuracy loss, while satisfying the given deadline for real-time

simulation.














CHAPTER 4
MODEL SELECTION METHODOLOGY

This chapter illustrates how we handle non-functional requirements, i.e., timing

constraints, inherent to real-time systems. In Chapter 3, we showed that the con-

structed model through our model generation methodology can simulate the system

at any abstraction level. Our idea to guarantee the model's timeliness is to control

the level of abstraction with which simulation can deliver the results by the given

amount of time. The determined abstraction level is optimal in the sense that the

resulting model will satisfy the timing constraint and minimize the accuracy loss that

the behavioral abstraction methods might yield during real-time simulation. A lower

abstraction level is used when we have enough time to simulate the base model, while

a higher abstraction level is needed when we have a tight deadline to meet. This

chapter covers the second and third phase of the OOPM/RT methodology, which are

(1) Arranging a set of models under the abstraction relationship and assessing the

quality/cost for each model, and (2) Executing model selection algorithms to find

the optimal model for a given deadline. Through these two phases, we guarantee a

model's timeliness at design time.

In Section 4.1, we first propose the AT (Abstraction Tree) structure to arrange the base model according

to the abstraction relationships. Three selection algorithms for the optimal abstrac-

tion model are proposed in Sections 4.2 through 4.4.








4.1 Construction of the Abstraction Tree

AT extends the tree structure to represent 1) all the methods that comprise the

base model and 2) refinement/homomorphism relationships among the methods. Ev-

ery method that comprises the base model is represented as a node. Each node takes

one of three types : Mi, Ai or Ii.

Mi : High resolution method. It takes the form of dynamic or static meth-

ods of OOPM. We have FBM (Functional Block Model), FSM (Finite State

Machine), SD (System Dynamics), EQM (EQuational Model) and RBM (Rule-

Based Model) choices for the dynamic method, and the CODE method for the

static method.

Ai : Low resolution method. It takes the form of a neural network or a system

identification model.

Ii : Intermediate node to connect two different resolution methods, Mi and Ai.

Ii appears where behavioral abstraction has been applied to a method i and
the corresponding behavioral abstraction method has been generated as a low

resolution method to speed up the simulation.

The refinement/homomorphism relationship is represented as an edge. If a method

Mi is refined into N1, N2, N3, ..., Nk methods, an edge (Mi, Nj), for j = 1, 2, ..., k, is

added to the AT. AND/OR information is added on the edge to represent how to

execute Mi for a given submethod Nj, for j = 1, 2,..., k.

AND : Mi is executed only if Nj is executed, ∀ j, j = 1, 2, ..., k

OR : Mi is executed if any one of the Nj, j = 1, 2, ..., k, is executed

The decision of AND/OR is made based on 1) the node type of Mi and 2) the model

type of Mi.








1. Node type : An intermediate node Ii is executed either by Hi or Li, where Hi

is a high resolution method and Li is the corresponding low resolution method.

Therefore, Ii is connected with the OR relationship.

2. Model type : If a method Mi takes the form of an FBM, and each of the

blocks that comprise Mi is refined into B1, B2,..., Bk, then the execution of Mi

is completed when Bj, ∀ j, j = 1, 2, ..., k, are executed. Therefore, an FBM

method Mi is connected with the AND relationship. However, other model

types can take the OR relationship. If a method Mi takes the form of an FSM,

and each state of the FSM is refined into S1, S2, ..., Sk, then the execution of

Mi is completed when any one of Sj, j = 1, 2, ..., k, is executed. The decision

of j is made according to the predicate that the FSM method, Mi, satisfies at

time t. Therefore, FSM method Mi is executed with the OR relationship. The

decision of AND/OR is made for all model types of OOPM:


(a) AND : FBM, SD, EQM, CODE methods

(b) OR : FSM, RBM methods
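These decision rules can be collected into a small helper; the node-kind and model-type encodings are our own illustration:

```python
AND_MODEL_TYPES = {"FBM", "SD", "EQM", "CODE"}
OR_MODEL_TYPES = {"FSM", "RBM"}

def edge_relationship(node_kind, model_type=None):
    # Intermediate nodes (I) are OR: executed by either the high- or the
    # low-resolution method. Otherwise the model type decides AND vs. OR.
    if node_kind == "I":
        return "OR"
    if model_type in AND_MODEL_TYPES:
        return "AND"
    return "OR"
```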

Each node, T, in AT has duration D(T) and quality Q(T). Q(T) summarizes three

properties of the quality associated with node T.

1. QA(T) : Degree of abstraction. Degree of abstraction represents how much

structural information would be lost if the execution occurs at node T. The base

model will not be executed at the associated leaf nodes when the method T is

selected for the behavioral abstraction. QA(T) is defined by how many methods

are being discarded if behavioral abstraction occurs at node T, compared to the

case where no behavioral abstraction is applied to the base model.

2. QI(T) : Degree of interest loss. A modeler may specify the classes which he/she

wants to observe with special interest throughout the simulation. Degree of








interest loss represents how much interesting information will be lost, if the

behavioral abstraction occurs at node T. As we minimize the use of behavioral

abstraction methods to the interesting methods, the more appropriate results

may be produced. QI(T) is defined by how many interesting methods are being

discarded if behavioral abstraction occurs at node T, compared to the case

where no behavioral abstraction is applied to node T.

3. QP(T) : Degree of precision loss. Degree of precision loss represents how ac-

curately the behavioral abstraction method approximates the high resolution

method for node T. The precision can be assessed by testing the trained neural

network or system identification models. Several techniques for estimating er-

ror rates have been developed in the fields of statistics and pattern recognition,

which include hold out, leave one out, cross validation, and bootstrapping [59].

The holdout method is a single train-and-test experiment where a data set is

broken into two disjoint subsets- one used for training and the other for testing.

A sufficient number of independent test cases are needed for reliable estima-

tion. Leave one out repeats n times for n cases, each time leaving one case out

for testing and the rest for training. The average test error rate over n trials

is the estimated error rate. This technique is time-consuming for large sam-

ples. K-fold cross-validation repeats k times for a sample set randomly divided

into k disjoint subsets, each time leaving one set out for testing and the others

for training. Thus, we may call this technique "leave some out", and "leave

one out" is a special case of this general class. Bootstrapping is a method for

random resampling and replacement for a number of times, and the estimated

error rate is the average error rate over the number of iterations. The basic idea

behind error estimation or validation is that the test set must be independent

of the training set, and the partition of a sample into these two subsets should

be unbiased. Moreover, the respective sample size should be adequate, and the








estimated error rate should refer to the test error rather than the training error
rate. To maximize the use of every sample, it is preferable to take each case for
training at one time and for testing at another [54]. Our focus is not to propose
a new error estimation method. Instead, we use the available error estimation
methods to properly assess QP(T) for the behavioral method of node T.

Based on the three properties, Q(T) is defined by :



QA(T) = N(T) / N

QI(T) = Ni(T) / Ni

QP(T) = E(T)

Q(T) = (QA(T) + QI(T) + QP(T)) / q


where N(T) is the number of nodes in the subtree that has T as its root node, Ni(T)
is the number of interesting nodes in that subtree, Ni is the total number of
interesting nodes in a given AT, and N is the total number of nodes in a given AT.
E(T) is the normalized error rate of the behavioral abstraction method for
node T, which is estimated with one of the techniques previously discussed. N(T)
in QA(T) is set to 1 for a leaf node. q represents the number of quality properties
specified for node T (1 ≤ q ≤ 3). For an intermediate node, Ii,



Q(Ii) = Q(Hi) - Q(Li)

where Hi is the high resolution method for node Ii and Li is the low resolution method
for node Ii.
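A sketch of computing Q(T) from these definitions; the argument names are ours, and only the quality properties actually specified contribute to q:

```python
def quality(n_subtree, n_total, ni_subtree=None, ni_total=None, error_rate=None):
    # QA(T) = N(T)/N; QI(T) = Ni(T)/Ni; QP(T) = E(T).
    # q is the number of quality properties specified (1 <= q <= 3).
    props = [n_subtree / n_total]
    if ni_total:
        props.append(ni_subtree / ni_total)
    if error_rate is not None:
        props.append(error_rate)
    return sum(props) / len(props)
```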
We assume that duration (cost, execution time) of the leaf methods has been well
assessed by the time estimation techniques available in the real-time systems literature [60,









61]. D(T) is defined based on AND/OR relationship and node types. For a node type,

Ii, and the corresponding two different resolution models, Hi and Li,



D(Ii) = D(Hi) - D(Li) + θ

where θ represents a system factor; θ accounts for the case where the target platform differs from

the environment in which the execution time has been measured. Therefore, D(Ii)

represents the amount of speedup that the behavioral abstraction method makes for

the base model in any platform. For other node types, duration of a node is defined

based on its AND/OR relationship. For an AND related node,


D(T) = Σ_{j=1}^{k} D(Nj) + δ(T) + θ
For an OR related node,

D(T) = Max(D(N1), D(N2), ..., D(Nk)) + δ(T) + θ

where Nj, for j = 1,..., k, is the method that T calls to complete its high resolution

execution and k is the number of children nodes that the node, T, has. δ(T) is the

amount of time that method T takes for its own execution. For example, in the

case of an FSM, checking the current state and determining the next state based on

the predicates might take δ(T) time, while the execution of each state is assessed

in the summation term. θ is a system factor, as in the case of the Ii node. D(T) of an

OR node is set to the worst case execution time by taking the maximum duration

of the possibilities. This worst case assignment securely guarantees the resulting

model's timeliness. The quality and duration function are constructed recursively

until individual methods at the leaf level are reached.
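The recursive computation of D(T) can be sketched as below; the dictionary-based node encoding is our own assumption, with delta standing for δ(T) and theta for θ:

```python
def duration(node, theta=0.0):
    # Leaf methods carry a measured execution time. AND nodes sum their
    # children's durations; OR nodes take the worst case (the maximum).
    if not node.get("children"):
        return node["delta"]
    child_durations = [duration(c, theta) for c in node["children"]]
    combined = (sum(child_durations) if node["kind"] == "AND"
                else max(child_durations))
    return combined + node.get("delta", 0.0) + theta
```

The worst-case (max) rule for OR nodes is what securely guarantees the resulting model's timeliness.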

Figure 4.1 shows an example of AT. M21 is an AND related method that calls

M31 and then M32 followed by M33. M22 is an AND related method that calls M34








Figure 4.1. Example of AT (Abstraction Tree)

and then M35. A method M11 calls M21 and then M22. We suppose that a behavioral

abstraction method {A11, A12, ..., A35} has been generated for each of the correspond-

ing high level method Mij. Each Aij may take different model type according to the

behavioral abstraction technique. The intermediate nodes relate a high resolution

method to a corresponding behavioral abstraction method. These nodes are sym-

bolized by {I11, I12, ..., I35}. If all {I11, I12, ..., I35} are executed by high resolution

methods only, the resulting structure is the base model which has been constructed

through the structural abstraction process. The quality and duration of each node is

determined by recursively applying the quality and duration equations. δ and θ are

assumed to be 0 for each internal node in Figure 4.1.

4.2 Selection of the Optimal Abstraction Level

Under uniprocessor scheduling, a schedule is an assignment of the CPU to the real-

time tasks such that at most one task is assigned to the CPU at any given moment.

More precisely, a schedule is a set A of execution intervals described as :


A = {(si, fi, ti) | i = 1, ..., n}








where si is the start time of the interval, fi is the finish time of the interval, and

ti is the task executed during the interval. The schedule is valid if:


1. For every i = 1, ..., n, si < fi

2. For every i = 1, ..., n-1, fi ≤ si+1

3. If ti = k, then Sk ≤ si and fi ≤ Dk


where Sk is the release time and Dk is the deadline of task k. Condition 1 requires that

execution intervals are really intervals. Condition 2 requires that the intervals be

time ordered. Condition 3 requires that a task only be executed between its release

time and its deadline. A task set is feasible if every task Tk receives at least Ck seconds

of CPU execution in the schedule. More precisely, let


A(Tk) = {a = (si, fi, ti) | a ∈ A and ti = k}

Then, the schedule is feasible if, for every Tk ∈ V,



Σ_{(si, fi, k) ∈ A(Tk)} (fi - si) ≥ Ck


A set of tasks is feasible if there is a feasible schedule for the tasks. The goal of

real-time scheduling algorithms is to find a feasible schedule whenever one exists.
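Conditions 1-3 and the feasibility test can be checked mechanically; this sketch assumes a schedule represented as a list of (s, f, task) triples, with release times, deadlines, and costs keyed by task name:

```python
def is_valid(schedule, release, deadline):
    for i, (s, f, k) in enumerate(schedule):
        if not s < f:                                   # condition 1
            return False
        if i + 1 < len(schedule) and f > schedule[i + 1][0]:
            return False                                # condition 2
        if s < release[k] or f > deadline[k]:           # condition 3
            return False
    return True

def is_feasible(schedule, release, deadline, cost):
    # Valid, and every task k receives at least cost[k] seconds of CPU.
    if not is_valid(schedule, release, deadline):
        return False
    received = {}
    for s, f, k in schedule:
        received[k] = received.get(k, 0.0) + (f - s)
    return all(received.get(k, 0.0) >= c for k, c in cost.items())
```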

Finding a schedule that satisfies the 0/1 constraint (the system must either execute a

task to completion before its deadline or not schedule it at all) and the timing constraints,

while minimizing the total error, is NP-complete when the tasks have arbitrary

processing time. A task is a granule of computation treated by the scheduler as a

unit of work to be allocated processing time. In our case, a task corresponds to a method

that comprises the base model. Similarly, a task group corresponds to a method that








calls other submethods to complete its execution. The problem of selecting the

optimal abstraction level then translates to the scheduling problem: finding an

ordered set of methods that satisfies the conditions of a valid schedule

and maximizes the tradeoff of quality for time. We select a subset V' of the methods V

in a way that achieves all given deadlines, as stated in conditions 1 - 3. Moreover,

the selected task set, V', achieves the best tradeoff of quality for time (it minimizes the

total quality loss for a given amount of time). Therefore, the selection of the optimal

abstraction level is as hard as the scheduling problem. A more detailed analysis of the

complexity of the optimal scheduling problem can be found in [62, 17, 63].

Sections 4.3 and 4.4 present three algorithms for scheduling a set of real-time tasks;

equivalently, algorithms for determining the optimal abstraction level that

produces an optimal schedule. We assume that there are no interactions between task

groups. The optimal scheduling set V' is constructed as follows:

1. Apply the proposed scheduling algorithms to each independent task group,

t1, ..., tn. Let ti produce the scheduling result Ji.

2. Collect the result from each task group and form the complete schedule,

(J1, ..., Jn).

Without loss of generality, we assume that the base model is composed of one task

group. Therefore, the proposed algorithms will be applied to the entire AT.

4.3 IP (Integer Programming)-Based Selection

Two Integer Programming (IP)-based techniques for selecting the optimal abstraction level

are presented: IP1 and IP2. Operations Research (OR) is a field that is concerned

with deciding how to design and operate systems optimally under conditions

requiring the allocation of scarce resources. Our optimal level selection problem falls

under the OR umbrella, since the selection should produce the best model, one that has

an optimal abstraction level for simulating a given system under conditions requiring








the allocation of scarce resources, such as time and accuracy. The essence of the

operations research activity lies in the construction and the use of mathematical

models. The term linear programming defines a particular class of programming

problems in which the problem is defined by a linear function of the decision variables

(referred to as the objective function), and the operating rules governing the process
can be expressed as a set of linear equations or linear inequalities (referred to as the

constraint set) [64]. Integer programming refers to the class of linear programming

problems wherein some or all of the decision variables are restricted to integers. Pure

integer programming is the problem where all the decision variables are restricted to

integer values. Our problem is a special case of IP in which the decision variables take

binary values to indicate whether or not to select each node of a given AT

for behavioral abstraction. Several algorithms have been developed to determine the

optimal integer solution. In practice, the Branch and Bound algorithm [64] is the

most widely used method for solving integer programming problems. Basically, the

branch and bound algorithm is an efficient enumeration procedure for examining all

possible integer feasible solutions.

We formulate the problem as two IP models in Section 4.3.1. A simple example

is given to illustrate the proposed methods in Section 4.3.2.

4.3.1 Formulation

Let a binary integer variable Lij denote the decision to select or not to select the

node ij for the behavioral abstraction.

Lij = 1 if behavioral abstraction occurs at node Iij, and Lij = 0 otherwise.

Then, the objective function for IP1 selection is defined in equation 4.1:

Minimize Σ(i=1..l) Σ(j=1..ni) Lij        (4.1)









subject to


Σ(i=1..l) Σ(j=1..ni) aij Lij ≤ ac        (4.2)

Σ(i=1..l) Σ(j=1..ni) tij Lij ≥ tc        (4.3)

and, for each parent node Iij of a given AT,


Σk Li+1,k ≤ Cij (1 - Lij)        (4.4)


where the sum in equation 4.4 runs over the children of Iij and Cij is the number of

children of Iij, l is the maximum level of the AT, and ni is the number of nodes at level i.

aij represents the quality loss and tij represents the expected duration, which are

assessed by the equations in Section 4.1. ac defines the accuracy constraint given to

the system, while tc is the amount of speedup needed to meet a given deadline, D.

Therefore, tc = (execution time of the base model) - D.

The objective function reflects the fact that a smaller number of behavioral

abstraction methods is desirable as long as the resulting model satisfies the given

time deadline and accuracy constraint. IP1 is useful when the simulation objective is

to minimize the loss of structural information as long as the desired speedup and

accuracy are guaranteed.
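As an illustration of the formulation, the following sketch applies equations 4.1 - 4.4 directly by enumerating all 0/1 assignments. This is a hypothetical brute-force stand-in of ours for the branch and bound solvers discussed in Section 4.3.3 (Excel Solver, CPLEX), workable only because the ATs here are small; all identifiers are our own.

```python
from itertools import product

# Illustrative brute-force solver for the IP1 formulation (equations 4.1-4.4).
# `children` maps each I-node to its child I-nodes; `a` and `t` map nodes to
# quality loss (a_ij) and time savings (t_ij). Exhaustive enumeration is used
# here only for illustration; real implementations use branch and bound.

def ip1_select(nodes, a, t, children, ac, tc):
    best = None
    for bits in product([0, 1], repeat=len(nodes)):
        L = dict(zip(nodes, bits))
        if sum(a[n] * L[n] for n in nodes) > ac:        # eq 4.2: accuracy
            continue
        if sum(t[n] * L[n] for n in nodes) < tc:        # eq 4.3: speedup
            continue
        # eq 4.4: no abstraction below an abstracted parent
        if any(L[p] and any(L[c] for c in cs) for p, cs in children.items()):
            continue
        cost = sum(bits)                                # eq 4.1: objective
        if best is None or cost < best[0]:
            best = (cost, L)
    return None if best is None else best[1]
```

On a small hypothetical two-level tree, the function returns the feasible assignment that uses the fewest behavioral abstraction methods, or None when no assignment meets both constraints.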

The objective function for IP2 selection is defined in equation 4.5:

Minimize Σ(i=1..l) Σ(j=1..ni) aij Lij        (4.5)

subject to


Σ(i=1..l) Σ(j=1..ni) tij Lij ≥ tc        (4.6)









and, for each parent node Iij of a given AT,


Σk Li+1,k ≤ Cij (1 - Lij)        (4.7)

where the sum runs over the children of Iij.



The objective function of IP2 minimizes the expected quality loss while satisfying

the timing constraint defined in equation 4.6. IP2 is useful when the objective of

simulation is to minimize the total quality loss, not necessarily

the loss of structural information. For instance, if a model A, which cuts out three

structural components, is expected to produce more accurate results than a model B,

which cuts out only one structural component, IP2 keeps A as a candidate for the

optimal abstraction model.

4.3.2 Analysis

Consider the AT in Figure 4.1. We associate each Iij node with a binary variable Lij,

as discussed in the previous section. Then the objective function of IP1 for the given AT

is defined as:



Minimize (L11 + L21 + L22 + L31 + L32 + L33 + L34 + L35)        (4.8)

For simplicity, we assume that any behavioral abstraction method takes 20 units.

θ is assumed to be 0, which means real-time simulation will be performed on the same

platform on which the expected duration was measured. Then, tij is defined

as follows:



t11 = 240 - 20 = 220

t21 = 130 - 20 = 110

t22 = 110 - 20 = 90

t31 = 30 - 20 = 10

t32 = 60 - 20 = 40

t33 = 40 - 20 = 20

t34 = 40 - 20 = 20

t35 = 70 - 20 = 50




For simplicity of illustration, we consider only QA(Iij) to assess the quality

loss, aij, when the corresponding node Iij is selected for behavioral abstraction.

Then, aij is simplified as follows:



aij = (Cij + Σk Ci+1,k) / N



where Cij is the number of children that Iij has and the sum runs over those children.

Since the behavioral abstraction at this level discards all the structural information of

the lower levels, we believe that the accuracy loss is proportional to the number of

descendants that a node has; the numerator of aij counts the descendants of node Iij. For

the AT given in Figure 4.1, aij is defined as follows.




a31 = 1/8 = 0.125

a32 = 1/8 = 0.125

a33 = 1/8 = 0.125

a34 = 1/8 = 0.125

a35 = 1/8 = 0.125

a21 = 3/8 = 0.375

a22 = 2/8 = 0.25

a11 = (2 + 2 + 3)/8 = 7/8 = 0.875

Then, for a given accuracy loss constraint, ac, the accuracy constraint is defined as:



(a11L11 + a21L21 + a22L22 + a31L31 + a32L32 + a33L33 + a34L34 + a35L35) ≤ ac        (4.9)

Also, for a given time constraint, tc, the speedup constraint is defined as:



(t11L11 + t21L21 + t22L22 + t31L31 + t32L32 + t33L33 + t34L34 + t35L35) ≥ tc        (4.10)

To define the parent and child relationships in the given AT, we have a set of

inequalities for all the child nodes of the given AT:



L31 + L32 + L33 ≤ 3(1 - L21)

L34 + L35 ≤ 2(1 - L22)

L21 + L22 ≤ 2(1 - L11)

which are equivalent to:


3L21 + L31 + L32 + L33 ≤ 3
2L22 + L34 + L35 ≤ 2        (4.11)
2L11 + L21 + L22 ≤ 2

Then, the IP1 selection of the optimal abstraction level for a given AT is to solve

the objective function defined in equation 4.8 subject to the constraints defined in

equations 4.9- 4.11.
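The worked example can be checked numerically. The sketch below is ours, not the dissertation's Solver setup: it enumerates subsets of the eight Lij variables using the tij and aij values derived above, keeps those that satisfy the speedup, accuracy, and parent/child constraints, and returns the smallest such sets.

```python
from itertools import combinations

# Numerical check of the worked example (equations 4.8-4.11), using the
# t_ij and a_ij values derived in the text. We enumerate node subsets,
# keep those meeting the speedup (>= t_c) and accuracy (<= a_c) constraints
# and the parent/child rule, and report the smallest feasible sets.

t = {"L11": 220, "L21": 110, "L22": 90, "L31": 10,
     "L32": 40, "L33": 20, "L34": 20, "L35": 50}
a = {"L11": 0.875, "L21": 0.375, "L22": 0.25, "L31": 0.125,
     "L32": 0.125, "L33": 0.125, "L34": 0.125, "L35": 0.125}
children = {"L11": ["L21", "L22"], "L21": ["L31", "L32", "L33"],
            "L22": ["L34", "L35"]}

def optimal_sets(tc, ac):
    for size in range(len(t) + 1):          # smallest subsets first (eq 4.8)
        found = []
        for combo in combinations(sorted(t), size):
            sel = set(combo)
            if sum(t[n] for n in sel) < tc or sum(a[n] for n in sel) > ac:
                continue
            if any(p in sel and set(cs) & sel for p, cs in children.items()):
                continue                    # an abstracted parent excludes its children
            found.append(sel)
        if found:
            return found
    return []
```

For tc = 120 and ac = 0.5 this confirms that two behavioral abstraction methods are needed, and that both the L21/L34 solution and the L22/L32 alternative are among the feasible minimal sets.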

Figure 4.2 shows the result when we have 120 units for the deadline (tc = 120) and

50% for the accuracy constraint (ac = 0.5). The answer suggests that applying be-

havioral abstraction at L21 and L34 gives the optimal abstraction level that simulates









[Solver output: L21 = L34 = 1 and all other Lij = 0; achieved speedup 130 ≥ tc = 120; achieved quality loss 0.5 ≤ ac = 0.5.]

Figure 4.2. Solver solution when a given deadline is 120 and accuracy loss is 50%


the system within the given deadline while satisfying the accuracy constraint. An alter-

native solution is L22 and L32, which also achieves the given time and accuracy constraints,

as shown in Table 4.1. Figure 4.3 shows the corresponding AT for these constraints.

Node I21 is executed by its behavioral abstraction method, A21. Also, node I34 is

executed by its behavioral abstraction method, A34, instead of the high-resolution

method, M34. The other intermediate nodes (I35, I22, I11) are executed by their high-res-

olution methods. The intermediate nodes I31, I32, and I33 are not considered, since

the behavioral abstraction occurs at their parent node, I21. The execution time of the

AT is reduced by the speedup amounts of A21 and A34.

We omit the analysis of IP2, since the execution steps are basically the same as for IP1,

except for the objective function and the constraint. The objective function of IP2 is

defined as:


Minimize (a11L11 + a21L21 + a22L22 + a31L31 + a32L32 + a33L33 + a34L34 + a35L35)












Figure 4.3. Optimal abstraction tree decided from the proposed integer programming
approach

subject to

(t11L11 + t21L21 + t22L22 + t31L31 + t32L32 + t33L33 + t34L34 + t35L35) ≥ tc

The worst case running time of IP1 and IP2 is determined by the magnitude of the

values of variables in an optimal solution to the pure-integer programming problem.

An instance of the IP1 and IP2 problems is defined as:



min{cx : Ax ≤ b, x ∈ B^m}

The instance is specified by an integral n × (m + 1) matrix (A, b) and an integral m-

vector c, where m is the number of integer variables in the objective function and n

is the number of inequality constraints. The problem can be solved by a brute-force

enumerative algorithm (such as the branch-and-bound method) in O(2^m mn) [65].

Therefore, the time complexity of IP1 and IP2 is defined as:



O(2^m mn)

where m is the number of I-nodes in the given AT. n is determined by:











n = k + 2 for IP1, and n = k + 1 for IP2,

where k is the number of parent and child relationships among the given I-nodes. IP1

has two additional equations for accuracy and time constraints, while IP2 has one

additional equation for time.

4.3.3 Experiments

We implemented the proposed integer programming solution with the Solver tool in Excel.

For a small problem space, as in the case of Figure 4.1, the Excel Solver may be a

good choice. However, if the problem size is large, we can use CPLEX [66], an

optimization callable library designed for large-scale problems.

For the exact solution method, we use the branch and bound method, a classical

approach to finding exact integer solutions. Through the branch and bound method, the

binary variable Lij is constrained to be either 0 or 1.

Table 4.1 shows some results from IP1 selection. Except for cases 10 and 12, the solver

found the optimal solution for the given time and accuracy constraints. Cases 10 and 12

represent situations in which the given accuracy constraint is too tight to be achieved

within the given deadline. Relaxing the accuracy constraint yields an answer, as we

can see in cases 11 and 13. When the base model meets the given deadline, as in case 1,

no behavioral abstraction is suggested. Also, when the given deadline is immediate, the

entire AT is behaviorally abstracted into one method, as in case 14. The other cases employ

one or two behavioral abstraction methods to meet the given accuracy constraint while

satisfying the time constraint.

Table 4.2 shows some results from IP2 selection. Note that the number of selected

nodes is equal to or greater than in the corresponding cases of IP1. This is because IP2

minimizes the quality loss without also minimizing the loss of structural information. Case








Table 4.1. Experiment results from IP1 for the selection of the optimal abstraction level


No  Deadline  ac  tc  Selected node  Achieved ac  Achieved deadline
1 240 0.5(50%) 0 n/a 0 240
2 220 0.5(50%) 20 L34 0.125(13%) 220
3 200 0.5(50%) 40 L35 0.125(13%) 190
4 180 0.5(50%) 60 L22 0.25(25%) 150
5 160 0.5(50%) 80 L22 0.25(25%) 150
6 140 0.5(50%) 100 L21 0.375(38%) 130
7 120 0.5(50%) 120 L21, L34 0.5(50%) 110
8 100 0.5(50%) 140 L21, L35 0.5(50%) 80
9 80 0.5(50%) 160 L21, L35 0.5(50%) 80
10 60 0.5(50%) 180 infeasible -
11 60 0.7(70%) 180 L21, L22 0.625(63%) 40
12 40 0.7(70%) 200 infeasible -
13 40 0.9(90%) 200 L11 0.875(88%) 20
14 20 0.9(90%) 220 L11 0.875(88%) 20


11 shows a different result from IP1 selection, as the objective of IP2 is to minimize the
expected quality loss rather than the amount of structural information lost.

4.4 Search-Based Selection

Our hypothesis is that as time constraints tighten, we tend to employ
more behavioral abstraction methods. The search heuristic therefore increases the number
of behavioral abstraction methods as deadlines become tighter. The selection
algorithm starts from one behavioral abstraction. If this abstraction satisfies the
time constraint, we stop and do not go further to examine other possibilities, with
the understanding that increasing the number of behavioral abstraction methods will
result only in a less accurate model. If the time cannot be met by one behavioral
abstraction method, we examine how many behavioral abstraction methods will be
needed for a given time constraint. This is done by examining r fast behavioral ab-
straction methods. If combining r fast behavioral abstraction methods satisfies the









Table 4.2. Experiment results from IP2 for the selection of the optimal abstraction level


No  Deadline  tc  Selected node  Achieved ac  Achieved deadline
1 240 0 n/a 0 240
2 220 20 L34 0.125(13%) 220
3 200 40 L35 0.125(13%) 190
4 180 60 L32, L34 0.25(25%) 180
5 160 80 L22 0.25(25%) 150
6 140 100 L22, L33, L35 0.375(38%) 80
7 120 120 L22, L32 0.375(38%) 110
8 100 140 L22, L32, L33 0.5(50%) 90
9 80 160 L21, L35 0.5(50%) 80
10 60 180 L21, L22 0.625(63%) 40
11 40 200 L21, L22 0.625(63%) 40
12 20 220 L11 0.875(88%) 20


time constraint, then the optimal abstraction level will be determined by r behavioral

abstraction functions. At this point, we pick r behavioral abstraction functions until

the most accurate combination is found which still satisfies the given time constraint.

Algorithm 1 shows the overall method in detail.

4.4.1 Analysis

This algorithm reads abstraction information about a given base model. The

information contains methods, parent/child relationships between the nodes, and

duration/quality information for each method. Based on the given information, the

algorithm constructs an AT as in the first line. The execution time of the base

model that has no behavioral abstraction methods is calculated in Line 2. Lines 3

- 5 examine whether we need to employ behavioral abstraction methods to meet a

given deadline. If the calculated duration of the base model is less than or equal

to the given deadline, we do not have to employ behavioral abstraction methods;

thus, the algorithm terminates. If the duration of the base model is greater than









Algorithm 1. Optimal Abstraction Level Selection
1: nodes ⇐ at.ConstructAT(fid)
2: baseCost ⇐ nodes[0].getCost()
3: if baseCost ≤ deadline then
4: return(0)
5: end if
6: at.CollectOrNodes(nodes, orNodes)
7: size ⇐ at.SelectOneByDeadline(orNodes, deadline)
8: if size > 0 then
9: at.SelectByAccuracy(orNodes)
10: OptimalAbstraction ⇐ orNodes[0].getName()
11: else
12: degree ⇐ OptimalAbsNumber(orNodes, deadline, baseCost)
13: if degree == -1 then
14: return(-1)
15: else
16: OptimalSet ⇐ OptimalCombination(orNodes, deadline, baseCost, degree)
17: OptimalCost ⇐ OptimalSet.cost
18: OptimalQualityLoss ⇐ OptimalSet.qualityloss
19: end if
20: end if
21: return(0)


the given deadline, then it becomes necessary to use behavioral abstraction methods.

Upon recognizing the necessity of using behavioral abstraction methods, the algorithm

collects OR nodes that contain the information about the behavioral abstraction

methods. Line 7-8 examine whether one behavioral abstraction method will resolve

the timeliness requirement. If the returned size is greater than 0, as in Line 9, we

know that one behavioral abstraction is enough to meet a given deadline. Then, the

algorithm looks up the most accurate method to ensure that the selected method will

have the best quality while satisfying the given deadline. If one behavioral abstraction

is not enough (size ≤ 0), the algorithm determines how many behavioral abstraction

methods can achieve the given deadline. If we increase the number of behavioral

abstraction methods to be used in the base model, then the simulation time of the

base model will be reduced. Our objective is to minimize the usage of behavioral









abstraction methods as long as the resulting model meets the given deadline. A simple

method is :

1. i = 2

2. select i behavioral methods that will bring the maximum time savings to the

given base model;

3. if the use of i behavioral methods can not resolve the required speedup, increase

i by 1 and go to step 2;

degree in Line 12 is determined by this method.
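The simple method above can be sketched as follows. This is a hypothetical illustration of ours; `savings` (each behavioral abstraction method's time saving) and `required` (baseCost minus the deadline) are our own names.

```python
# Sketch of the simple method for computing `degree`: starting from i = 2
# (the single-method case was already ruled out in Lines 7-8 of Algorithm 1),
# greedily take the i behavioral methods with the largest time savings and
# check whether they meet the required speedup. Returns -1 when even all
# methods together cannot meet the deadline (as in Line 13 of Algorithm 1).

def optimal_abs_number(savings, required):
    ordered = sorted(savings.values(), reverse=True)   # best savings first
    if sum(ordered) < required:
        return -1
    i = 2
    while sum(ordered[:i]) < required:
        i += 1                      # step 3: one more method, back to step 2
    return i
```

The greedy choice of the fastest methods only decides how many methods are needed; the subsequent nCr search (Lines 16 - 18) decides which combination of that size is most accurate.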

At this point, the algorithm knows how many behavioral abstraction methods

will be needed for a given deadline. If the returned degree is -1, it means the given

deadline cannot be met even if we use all available behavioral abstraction methods.

Lines 16 - 18 look for the best combination, the one that leads to the most accurate model.

The algorithm examines all nCr combinations, where n represents the number of

behavioral abstractions available to the given base model (the number of I-nodes in

the given AT) and r is the degree calculated in Line 12, 0 < r ≤ n.

4.4.2 Experiments

We use Figure 4.1 for the experiments. The accuracy constraint ac is set to 1.0 for

all cases; therefore, we do not obtain infeasible results such as cases 10 and 12 of Table 4.1.

The results are the same as for the IP1 approach except in case 7. Since the search-

based approach always finds the minimum accuracy loss while minimizing the number

of behavioral abstraction methods, it selects L22 and L32. Table 4.3 summarizes the

results.

4.5 Composition of the Optimal Abstraction Model

Given an optimal abstraction level determined by the selection algorithms, the

Optimal Abstraction Model Composer looks at the methods that will be used for the








Table 4.3. Experiment results from the search-based approach for the selection of the
optimal abstraction level


No  Deadline  ac  tc  Selected node  Achieved ac  Achieved deadline
1 240 0.5(50%) 0 n/a 0 240
2 220 0.5(50%) 20 L34 0.125(13%) 220
3 200 0.5(50%) 40 L35 0.125(13%) 190
4 180 0.5(50%) 60 L22 0.25(25%) 150
5 160 0.5(50%) 80 L22 0.25(25%) 150
6 140 0.5(50%) 100 L21 0.375(38%) 130
7 120 0.5(50%) 120 L22, L32 0.375(38%) 110
8 100 0.5(50%) 140 L21, L35 0.5(50%) 80
9 80 0.5(50%) 160 L21, L35 0.5(50%) 80
10 60 0.7(70%) 180 L21, L22 0.625(63%) 40
11 40 0.9(90%) 200 L11 0.875(88%) 20
12 20 0.9(90%) 220 L11 0.875(88%) 20


optimal abstraction model. The optimal abstraction model is composed by observing
the partial temporal orderings of the selected methods. For the determined sequence
of methods, the composer assigns a start time, si, and a finish time, fi, to each of
the methods. By determining si and fi for each method, the optimal schedule is
determined.
For a given AT in Figure 4.4, let the execution sequence of the methods be O:



O = (Ni,1, Ni,2, ..., Ni,k-1, Ni,k)

Then, si and fi for each method in O are determined by:


(si,1, fi,1) = (s0, s0 + t(Ni,1))


(si,2, fi,2) = (fi,1, si,2 + t(Ni,2))


...


















Figure 4.4. Scheduling example: Ni-1,l is executed by Ni,1, Ni,2, ..., Ni,k

(si,k-1, fi,k-1) = (fi,k-2, si,k-1 + t(Ni,k-1))

(si,k, fi,k) = (fi,k-1, si,k + t(Ni,k))

where t(Ni,j) denotes the duration of the corresponding method Ni,j, which has been

assessed through the discussion in Section 4.1, and s0 is the start execution time of

the given AT.

Consider the optimal abstraction level shown in Figure 4.3. The optimal

model, O, is composed of:



O = (A21, A34, M35)

Suppose D(Mij) and D(Aij) are properly assessed. For a starting time s0, the

schedule of the optimal model, O, is determined by:



S = ((sA21, fA21), (sA34, fA34), (sM35, fM35))

where



sA21 = s0, fA21 = sA21 + tA21

sA34 = fA21, fA34 = sA34 + tA34

sM35 = fA34, fM35 = sM35 + tM35

The resulting scheduling diagram is shown in Figure 4.5.









[Scheduling diagram: the base model executes M31, M32, M33, M34, M35 starting at times 0, 30, 90, 130, 170 and finishing at 240; the optimal model executes A21, A34, M35 starting at times 0, 20, 40 and finishing at 110.]

Figure 4.5. Optimal model composition and resulting scheduling diagram
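The interval chaining above can be sketched as follows. This is a minimal illustration of ours using the durations from this example, not the composer's actual implementation.

```python
# Small sketch of the interval assignment: given the method sequence O and
# a duration function t, chain each method's start time to the previous
# method's finish time, i.e. (s, f) = (previous f, s + t(m)).

def compose_schedule(methods, duration, s0=0):
    schedule, start = [], s0
    for m in methods:
        finish = start + duration[m]
        schedule.append((m, start, finish))
        start = finish
    return schedule
```

With the durations tA21 = 20, tA34 = 20, and tM35 = 70 from the example, the sequence O = (A21, A34, M35) yields the intervals (0, 20), (20, 40), and (40, 110) shown in Figure 4.5.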

4.6 Summary

Most CASE tools are useful for analyzing the logical requirements given to the

system, for instance, by debugging code and providing well-defined specifications to capture

knowledge about the system. However, most CASE tools fail to analyze the schedu-

lability of the system. In this chapter, we discussed how to analyze the schedulability

of the system and guarantee the model's timeliness at design time. We handle

the timeliness issue by determining the optimal abstraction level at which to simulate the sys-

tem in a given amount of time. By finding the places where behavioral abstraction

methods are to be used, we discard unnecessary structural information that cannot

be simulated within a given amount of time. The resulting abstraction level is optimal

in the sense that it guarantees delivery of simulation results by the

deadline and also has the maximum accuracy among the possible combinations.

We illustrated two phases for the problem statement: (1) constructing the AT struc-

ture to organize the methods of the base model in a way that facilitates the selection

process, and assessing the runtime/quality of each method; and (2) applying the selection

algorithms to determine the optimal abstraction level of the base model. Quality is

assessed by (1) degree of abstraction, (2) degree of interest loss, and (3) precision loss









to reflect the simulation objective. The quality of each node in AT is determined ac-

cording to AND/OR relationship and the node type (A, I, M). The duration of each

node in AT is assessed by the method type (FBM, FSM, RBM, EQM, CODE), and

the node type. Based on the constructed AT, we proposed two possible approaches

to select the optimal abstraction level. In Section 4.3, we mapped the selection prob-

lem to Operations Research and proposed two IP-based solutions. IP-based solutions

provided a mathematical foundation for the selection problem. Search-based solution

in Section 4.4 minimized the number of behavioral abstraction methods to be used

for the real-time simulation, while delivering simulation results by the given dead-

line. By minimizing the number of behavioral abstraction methods, the determined

abstraction level is expected to achieve the minimum quality loss. We showed some

selection results from the proposed approaches. The optimal abstraction model is the

model that simulates the system at the optimal abstraction level. We showed how

to compose the optimal abstraction model from the determined abstraction level. A

scheduling process is done to assign a start time and a finish time for each of the

methods that comprise the optimal abstraction model. By determining the sched-

ule of the optimal model, we illustrate how the resulting model delivered simulation

results by the given deadline.

In Chapter 5, we will illustrate the OOPM/RT methodology through an example, from

the model generation phase to the optimal model selection phase.













CHAPTER 5
FULTON EXAMPLE : A STEAMSHIP MODELING

Consider a steam-powered propulsion ship model named FULTON shown in Fig-

ure 5.1. In FULTON, the furnace heats water in a boiler: when the fuel valve is

OPEN, fuel flows and the furnace heats; when the fuel valve is CLOSED, no fuel

flows and the furnace remains at ambient temperature. Heat from the furnace is

added to the water to form high-pressure steam. The high-pressure steam enters

the turbine and performs work by expanding against the turbine blades. After the

high-pressure steam is exhausted in the turbine, it enters the condenser and is con-

densed again into liquid by circulating sea water [67, 68]. At that point, the water

can be pumped back to the boiler. Figure 5.1 shows a high-level view of the FULTON

example.

5.1 Model Generation

A conceptual model is constructed on OOPM. It is designed in terms of classes,

attributes, methods and relationships between classes (inheritance and composition).

Figure 5.2 shows the class hierarchy of FULTON, which basically follows the physical

composition of a steamship. Classes are connected by a composition relationship as

denoted by rectangular boxes in Figure 5.2. V denoted in the white box specifies 1

for the cardinality of the associated class.

In Figure 5.2, class Ship has a Boiler, a Turbine, a Condenser and a Pump. Class

Boiler has a Pot and a Knob. Each class has attributes and methods to specify its

dynamic behaviors.





















CONDENSER


Figure 5.1. High-level view of a shipboard steam-powered propulsion plant, FULTON

5.1.1 Structural Abstraction of FULTON

Figures 5.3, 5.4 and 5.5 show structural abstractions of FULTON. Since FULTON

can be configured with 4 distinct physical components and a functional directionality

between them, we start with an FBM. The FBM is located in class Ship, as shown in

Figure 5.3. Figure 5.3 has 4 blocks: L1 for modeling the functional requirements of a

boiler, L2 for turbine, L3 for condenser, and L4 for pump. Boiler assembly (L1) has

distinct states according to the temperature of the water. L1 is refined into :


1. B1 : CODE method of class Knob, which provides fuels to the boiler

2. B2 : FSM, M7, as shown in Figure 5.4, which determines the temperature of

the boiler and makes state transitions according to the temperature

3. B3 : CODE method of class Boiler, which provides high-pressure steam


Each state of Figure 5.4 (Cold (M13), Heating (M14), Boiling (M15), and Cool-

ing (M16)) is refined into an algebraic equation, which calculates the temperature

based on the position of the knob (Open, Closed).






















Figure 5.2. Conceptual model of FULTON: FULTON is modeled within OOPM. Con-
ceptual model is designed in terms of classes, attributes, methods (dynamic method
and static method) and relationships of classes (inheritance and composition)


Figure 5.3. Top level : structural abstraction for FULTON


L2 is refined into two functional blocks: M9 and M10. M9 gets the high-pressure

steam from the boiler. M10 is decomposed into two temporal phases as shown in Fig-

ure 5.5: Exhausting (M17) and Waiting (M18). If there is no steam to exhaust, M10

resides in the waiting state. Otherwise, M10 exhausts steam to generate the work of

the steamship. L3 is also refined into two functional blocks: M11 and M12. M11

gets the exhausted steam from the turbine. M12 has two distinct temporal phases:

Condensing (M19) and Cooldown (M20), in Figure 5.5. Condenser decreases the


























Figure 5.4. Structural abstraction of M7: the FSM has 4 states (Cold, Cooling, Heating
and Boiling)

temperature in Cooldown state, waiting for the turbine to send more steam. Other-

wise, M12 resides in Condensing state where the steam from the turbine turns into

liquid again.

5.1.2 Behavioral Abstraction of FULTON

We start with the observed data set of (input, output) from the simulation of the

base model. With this prior knowledge, the method of behavioral abstraction is to


(a) Structural abstraction of M10


(b) Structural abstraction of M12


Figure 5.5. Structural abstraction of M10 and M12









Table 5.1. New multimodels of FULTON with behaviorally abstracted components: a
capital letter represents a full-resolution model, while a lowercase letter represents a
low-resolution model. The low-resolution model is generated through behavioral
abstraction



No  Model  Abstracted Method  Abstracted Component
C1  BTCP  N/A  N/A
C2  BTcP  M12  Condenser
C3  BtCP  M10  Turbine
C4  BtcP  M10, M12  Turbine, Condenser
C5  bTCP  M7  Boiler
C6  bTcP  M7, M12  Boiler, Condenser
C7  btCP  M7, M10  Boiler, Turbine
C8  btcP  M7, M10, M12  Boiler, Turbine, Condenser



generate a C++ procedure that encodes the input/output functional relationship using a

neural network model (ADALINE, Backpropagation) or a Box-Jenkins ARMA model.

We abstract the multimodel methods of M7, M10 and M12 with the Box-Jenkins

model. Given three behavioral abstraction methods for M7, M10 and M12, 8 (2^3)

new models can be generated with different degrees of abstraction. Table 5.1 shows

the possible combinations of the behavioral abstraction methods. The upper- and

lowercase letters in each model name indicate the level of detail of the corresponding

component. A capital letter indicates that the model incorporates the high-resolution

version; a lowercase letter indicates that the model incorporates the low-resolution

version. For example, C6 uses two behavioral abstraction methods, for M7 and M12.

Therefore, the structural information associated with the M7 and M12 methods, which

are both FSMs, is abstracted.

When modelers create a behavioral abstraction method, they pick the dynamic

function to abstract; then, the learning process begins. Figure 5.6 shows the Box-

Jenkins abstraction process for M7. Based on the learning parameters of the Box-

Jenkins model, we learn the input/output functional relationship of M7. Once the








performance of the Box-Jenkins model is accurate enough, we generate a behavioral

abstraction method based on the resulting weight and offset vector. One important

observation is that the execution time of any behavioral abstraction method is pre-

dictable. The contents associated with it represent an algebraic-based routine that

calculates an output based on the output calculation algorithm of the Box-Jenkins

model. As long as the output calculation algorithm is the same, the execution time

of the behavioral abstraction method differs only by the size of the weight and bias

vectors. The RMS error is produced after the learning process to help a modeler

assess the precision loss of the Box-Jenkins behavioral abstraction method.
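The learning step can be illustrated with a hedged sketch: an ADALINE-style LMS fit of a one-lag input/output surrogate, standing in for the Box-Jenkins learner (the actual OOPM learner, its parameters, and its lag structure differ; all names and the training signal below are our own).

```python
# Hypothetical sketch of the behavioral abstraction step: learn a surrogate
# y[t] ~ w0*u[t] + w1*y[t-1] + b with ADALINE-style LMS weight updates, then
# report the RMS error used to assess precision loss. Illustrative only.

def learn_surrogate(u, y, rate=0.05, epochs=2000):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for t in range(1, len(y)):
            err = y[t] - (w0 * u[t] + w1 * y[t - 1] + b)
            w0 += rate * err * u[t]          # LMS updates proportional to error
            w1 += rate * err * y[t - 1]
            b += rate * err
    return w0, w1, b

def rms_error(u, y, w0, w1, b):
    errs = [y[t] - (w0 * u[t] + w1 * y[t - 1] + b) for t in range(1, len(y))]
    return (sum(e * e for e in errs) / len(errs)) ** 0.5
```

Once the RMS error is small enough, the learned weight and offset values are frozen into a fast algebraic routine, which is why the execution time of the generated behavioral abstraction method is predictable.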

5.2 Assessment of Execution Time and Precision

The model set in Table 5.1 provides alternatives that vary in the degree of ab-

straction; it thus allows us to investigate the effects of the model structure on model
accuracy and on computation time. We measured the execution time of each model

by varying the simulation logical clock from 50 to 500 using a time step of 50. As

shown in Figure 5.7, the most detailed model, C1, takes the longest time, and the

least detailed model, C8, runs faster than the other models. The output of each model

is compared to that of the most detailed model, BTCP, and the precision loss relative to

BTCP is measured by the sum of squared errors through a testing process. Input

signals different from the training signals are used for the testing process, and the

precision loss is accumulated as we increase the simulation logical clock from 50 to 500

using a time step of 50. The least detailed model, C8, has the maximum accuracy loss,

while C2 shows the minimum accuracy loss over time.

Figure 5.8 and Figure 5.9 show the speedup and accuracy loss according to the

structure of each model when the simulation logical clock time is 500. Speedup and

precision loss are measured relative to the most detailed model. The least detailed

model, btcp (C8), has the least structural information among the models and takes

the least time to simulate the system.
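The relative precision loss described above can be sketched as a cumulative sum of squared errors against the base model's output (a hypothetical helper; the actual signals come from the FULTON model runs):

```python
# Sketch: accumulate the squared error of an abstract model's output
# against the most detailed model's output over the simulation clock.

def cumulative_sse(base_outputs, abstract_outputs):
    return sum((b - a) ** 2 for b, a in zip(base_outputs, abstract_outputs))

# Invented output trajectories for illustration.
base = [1.0, 2.0, 3.0, 4.0]
abstract = [1.1, 1.8, 3.0, 4.3]
loss = cumulative_sse(base, abstract)   # 0.01 + 0.04 + 0.0 + 0.09
```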





































Figure 5.6. Behavioral abstraction : A modeler selects a dynamic function to ab-
stract, Pot::Heatpot, which is an FSM. States and their transitions will be lost, but
behavioral information will be preserved to some level of fidelity. In the learning
process, a modeler gives several parameters, for example, the lags for the input and
output signals.


5.3 Construction of the Abstraction Tree


Figure 5.10 summarizes the structural abstraction of the FULTON. The resulting

hierarchical structure becomes the base model which simulates the FULTON at the

most detailed level. Each method is connected to other methods through refinement

relationships.

By applying the behavioral abstraction technique to three methods (M7, M10,

M12), and producing (A7, A10, A12), respectively, we generate the AT as shown in














(a) Execution time of multimodels

(b) Accuracy loss of multimodels


Figure 5.7. Execution time/ accuracy loss of models: Behavioral abstraction yields
shorter elapsed times. As we increase the level of abstraction by using more behavioral
abstraction methods, the model execution time decreases. Accuracy is lost as we
increase the number of behavioral abstraction methods in the base model









Figure 5.8. Speedup for the set of FULTON models relative to the most detailed
model, BTCP, for a simulation time of 500. Execution times are reported by clock(),
running on Windows NT, x86 family, Intel 133 MHz

















Figure 5.9. Accuracy loss for the set of FULTON models relative to the most detailed
model, BTCP, for a simulation time of 500


Figure 5.11. An Ii node is positioned where the two associated methods of different

resolution reside. Intermediate nodes Ii are connected to their children by an OR

relationship as discussed in Chapter 4.

The execution time and accuracy loss of internal nodes are determined by recur-

sively applying the cost/quality assessment equations discussed in Chapter 4. Many

studies have been done to estimate the execution time of program code before actually

running it, for real-time applications. Most of the methods analyze the program

structure and assign different time scales according to the type of program state-

ments [69]. Our focus does not lie in proposing a new process for assessing the

execution time of a method. Instead, we use available methods to assess the execu-

tion time for each of the leaf methods in the AT. We assume that a time estimation

method is applied to each leaf method of the AT (M6, M13, M14, M15, M16, M8,

M9, M17, M18, M11, M19, M20, M5) and that (2, 2, 8, 4, 6, 2, 2, 4, 3, 2, 2, 4, 2) is

assessed, respectively, as the execution time of each method. Also, the execution time

of (A7, A10, A12) is assumed to be (4, 2, 2), respectively. Then, each non-leaf node

iteratively looks up its children's execution times and calculates its own execution

time by applying the time











[Diagram: the top-level FBM of the Ship class is refined through FBMs of the
Boiler, Turbine, Condenser, and Pump classes into methods M5-M12; three of these
(M7, M10, M12) are further refined into FSMs whose states (for example, cold and
heating) correspond to methods M13-M20]


Figure 5.10. Abstraction hierarchy of FULTON example

M1

M2   M3   M4   M5

M6   I7   M8   M9   I10   M11   I12

M7  A7    M10  A10   M12  A12

M13 M14 M15 M16    M17 M18    M19 M20

Figure 5.11. Abstraction tree for the FULTON example: the AT has AND/OR nodes.
When there is a behavioral abstraction method, an OR node exists in order to specify
that either one of the two methods is executed for a simulation


assessment equations discussed in Chapter 4. The system factor, δ, is assumed to be 0;

therefore, the target platform on which the model will be executed is the same one on

which the model execution time is measured. By iteratively applying this procedure,

the root node obtains 26 for its execution time. We assume that there is no class of

special interest for the simulation; thus, a modeler would weigh every class equally.
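The bottom-up time assessment can be sketched as follows. This is an assumed simplification rather than the Chapter 4 equations verbatim (an AND node sums its children; an OR node takes the time of whichever child is selected), and the tree fragment is hypothetical:

```python
# Minimal sketch of recursive execution-time assessment over an
# AND/OR abstraction tree. Leaves carry measured times; an AND node
# sums its children; an OR node (an I_i) takes the time of the child
# chosen for the run: the detailed method M_i or the abstraction A_i.

def exec_time(node, use_abstraction):
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "AND":
        return sum(exec_time(child, use_abstraction) for child in payload)
    # OR node: payload = (name, detailed_subtree, abstract_subtree)
    name, detailed, abstract = payload
    chosen = abstract if name in use_abstraction else detailed
    return exec_time(chosen, use_abstraction)

# Hypothetical fragment: I7 chooses between M7 (an AND over its four
# leaf methods) and the behavioral abstraction A7.
M7 = ("AND", [("leaf", 2), ("leaf", 8), ("leaf", 4), ("leaf", 6)])
A7 = ("leaf", 4)
tree = ("AND", [("leaf", 2), ("OR", ("I7", M7, A7)), ("leaf", 2)])

base_time = exec_time(tree, use_abstraction=set())    # 2 + 20 + 2 = 24
fast_time = exec_time(tree, use_abstraction={"I7"})   # 2 + 4 + 2 = 8
```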

We also assume that the precision loss of each behavioral abstraction method is as-

sessed properly. Currently, we assess the precision loss of each behavioral abstraction








method manually, through the hold-out technique discussed in Chapter 4; the assess-

ment process usually involves exhaustive testing of the methods. For the simplicity of

illustration, we assume that the precision loss of (A7, A10, A12) is properly assessed

to be (0.4, 0.2, 0.3). Then, the quality loss of each method, a7, a10 and a12, is defined

by 0.375, 0.125 and 0.175 respectively, according to the quality loss equation discussed

in Chapter 4. In this case, the simulation objective takes the form of : "Given a

deadline of d, simulate FULTON with no special interest in any of Boiler, Turbine,

Condenser and Pump classes. Try to minimize the overall quality loss, and simulate

FULTON model at the lowest abstraction level, provided that the given deadline d is

not violated."

5.4 Selection of the Optimal Abstraction Model

The base model of FULTON takes 26 units to complete the simulation. Suppose

we have 20 units for a deadline. Upon receiving the time constraint, we immediately

know that the behavioral abstraction is needed to make the simulation faster. The

optimal abstraction level is determined by the algorithm discussed in Section 4.4 and

the IP-based approaches (IP1 and IP2) discussed in Section 4.3.

For a given AT in Figure 5.11, the objective function of IP1 is defined as:


Minimize (I7 + I10 + I12)                                  (5.1)

subject to

a7 I7 + a10 I10 + a12 I12 <= ac

t7 I7 + t10 I10 + t12 I12 >= tc

a7 = 0.375, a10 = 0.125, a12 = 0.175                       (5.2)

t7 = 4, t10 = 2, t12 = 2








Then, the selection of the optimal abstraction level solves the objective function

defined in equation 5.1 with the constraints defined by equation 5.2.

The objective function of IP2 is defined as :


Minimize (I7 a7 + I10 a10 + I12 a12)                       (5.3)

subject to

t7 I7 + t10 I10 + t12 I12 >= tc

a7 = 0.375, a10 = 0.125, a12 = 0.175                       (5.4)

t7 = 4, t10 = 2, t12 = 2

Then, the selection of the optimal abstraction level solves the objective function

defined in equation 5.3 with the constraints defined by equation 5.4. Since the desired

speedup to be achieved for the given deadline is 26 - 20 = 6, we assign 6 to tc. To

find out the most accurate combination, we assign 1.0 to ac. The accuracy is not

constrained to a certain bound; rather the algorithm has the freedom to find the

most accurate combination.
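For an AT this small, both formulations can be solved by enumerating subsets of the I variables. The sketch below assumes t_i denotes the time saving and a_i the quality loss of choosing abstraction i, with the values from equations 5.2 and 5.4; the helper names are invented:

```python
from itertools import combinations

# Brute-force sketch of IP1/IP2 selection for the FULTON AT.
a = {"I7": 0.375, "I10": 0.125, "I12": 0.175}   # quality losses
t = {"I7": 4, "I10": 2, "I12": 2}               # time savings

def feasible(subset, tc, ac=1.0):
    return (sum(t[i] for i in subset) >= tc and
            sum(a[i] for i in subset) <= ac)

def select(tc, objective):
    candidates = [s for r in range(len(a) + 1)
                  for s in combinations(sorted(a), r) if feasible(s, tc)]
    return min(candidates, key=objective)

# IP1: fewest abstractions (ties broken by total quality loss).
ip1 = select(6, lambda s: (len(s), sum(a[i] for i in s)))
# IP2: minimum total quality loss, regardless of how many are used.
ip2 = select(6, lambda s: sum(a[i] for i in s))
```

With tc = 6 (a deadline of 20) both objectives pick (I7, I10); lowering tc to 4 (a deadline of 22) makes IP1 return I7 alone while IP2 returns (I10, I12), matching Table 5.2.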

The search-based algorithm increases the number of behavioral abstraction meth-

ods to be used for the deadline. The algorithm examines whether one behavioral

abstraction method alone will resolve the time constraint. None of the candidates meets

the deadline. Therefore, the algorithm increases the number of behavioral abstrac-

tion methods to use for the simulation. When two behavioral abstraction methods

are used, the fastest behavioral abstraction, A7, achieves the deadline when either A10

or A12 is combined with it. Therefore, the algorithm concludes that using two behav-

ioral abstraction methods will resolve the timeliness requirement. At this point, the

algorithm begins to find the most accurate combination. Possible combinations are

(A7, Alo), (A7, A12) and (Alo, A12). The most accurate combination is (Alo, A12);







[Diagram: the AT of Figure 5.11 with I7 resolved to A7, I10 resolved to A10, and
I12 resolved to M12]
Figure 5.12. Optimal abstraction degree for a deadline of 20
Base Model
M6 M7 M8 M9 M10 M11 M12 M5


2 4 6 8 10 12 14 16 18 20 22 24 26

Optimal Abstraction Model for deadline = 20
M6 M8 M9 M11 M12 M5


2 4 6 8 10 12 14 16 18 20 22 24 26

A7 A10 deadline

Figure 5.13. Scheduling diagram for a deadline of 20


however, it cannot achieve the timeliness requirement. The next most accurate combi-

nation, which is (A7, A10), achieves the timeliness requirement. So, the algorithm

declares (A7, A10) as the optimal abstraction degree for the given AT and a deadline of

20. Figure 5.12 shows the optimal abstraction level of the given AT. The execution

of I7 is made by A7, that of I10 by A10, and that of I12 by M12. Then, the optimal abstraction

model is composed of the sequence (M6, A7, M8, M9, A10, M11, M12, M5). Note that

the FSM models of the Boiler and Turbine classes have been cut off to save simulation time.

The corresponding scheduling diagram is shown in Figure 5.13.
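The search just described can be reconstructed as follows (an assumed sketch, not the dissertation's implementation; a_i and t_i denote the quality losses and time savings of the behavioral abstraction methods):

```python
from itertools import combinations

# Sketch of the search-based algorithm: grow the number k of
# behavioral abstractions until some k-combination meets the required
# speedup, then return the most accurate feasible k-combination.
a = {"A7": 0.375, "A10": 0.125, "A12": 0.175}   # quality losses
t = {"A7": 4, "A10": 2, "A12": 2}               # time savings

def search(tc):
    for k in range(1, len(a) + 1):
        combos = list(combinations(a, k))
        if not any(sum(t[m] for m in c) >= tc for c in combos):
            continue          # no k-combination meets the deadline; grow k
        # among size-k combinations, try them in order of accuracy
        for c in sorted(combos, key=lambda c: sum(a[m] for m in c)):
            if sum(t[m] for m in c) >= tc:
                return c

result = search(6)            # required speedup for a deadline of 20
```

For a required speedup of 6 this yields (A7, A10), as in the narrative above; for a speedup of 4 (a deadline of 22) it yields A7 alone.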

Table 5.2 shows other selection examples. The IP2 method produces a different

answer for a deadline of 22. The IP1 and the search-based methods minimize the









Table 5.2. Selection examples of three algorithms for FULTON


deadline    20         22          24
IP1         I7, I10    I7          I10
IP2         I7, I10    I10, I12    I10
Search      I7, I10    I7          I10



number of behavioral abstraction methods during the selection in order to minimize

the loss of structural information. When modelers want to minimize the loss of

structural information and preserve the base model structure as much as possible, the

optimal abstraction occurs at I7. However, if the simulation objective is to minimize

the expected quality loss, we apply behavioral abstraction methods at I10 and I12, as

suggested by IP2.

5.5 Summary

In this Chapter, we illustrated the OOPM/RT methodology with an actual ex-

ample, FULTON. FULTON has four distinct physical units, which are connected

in a directed order. FBM is a natural representation frame to model functional

relationships between four distinct components. Therefore, we start the structural

abstraction process with a simple four-block FBM. Each of the four blocks is refined

into another model type, FSM, since the functional aspects associated with each of

the four blocks can be viewed with states and the transitions between the states.

Multimodeling methodology provides a mechanism to group all heterogeneous model

types together under one hierarchical structure. By allowing different model types to

capture the system, modelers better capture the functional requirements given to the

system. The structural abstraction process constructs a base model that simulates

the system at the highest detail. Behavioral abstraction process is applied to the base

model by abstracting any level with a generic Box-Jenkins model. We showed how








execution time of the base model could be reduced by using behavioral abstraction

methods.

When the base model cannot simulate the system by a given deadline, the level

of abstraction is controlled to deliver simulation results by the given deadline. Three

algorithms were applied to find the optimal abstraction level of the base model by

considering both timing constraints and quality loss. The optimal abstraction model

is determined by either 1) minimizing the number of behavioral abstraction methods

to be used for real-time simulations, 2) minimizing quality loss of the base model, or

3) minimizing the number of behavioral abstraction methods and quality loss for real-

time simulations, all of which must be achieved while still satisfying a given timing

constraint.













CHAPTER 6
APPLESNAIL EXAMPLE : A POPULATION MODEL OF APPLE SNAILS

The Across Trophic Level System Simulation (ATLSS) is a set of ecological mod-

els designed to evaluate the ecological effects of different water management plans for

the Everglades National Park (ENP) and Big Cypress National Preserve (BCNP).

The South Florida Everglades are characterized by complex patterns of spatial het-

erogeneity and temporal variability; water flow is the major factor controlling the

trophic dynamics of the system. A key objective of modeling studies for the system

is to compare the future effects of alternative hydrologic scenarios on the biotic com-

ponents of the system [70]. The goal of APPLESNAIL is to predict the population

levels of apple snails under a time-varying temperature of the hydrology. Figure 6.1 shows

the overall apple snail life cycle in a system dynamics flow graph. System Dynam-

ics (SD) is a specific methodology for engineering simulation models when a system

can be viewed as variables (or levels) and dependencies between the variables [15]. In

Figure 6.1, the symbols resembling water valves are examples of rates: oviposition rate,

hatch rate, egg loss rate, growth rate, death rate, maturation rate, and senescence

rate. The rectangles are examples of levels: number of eggs, number of juveniles,

number of preadults, number of reproadults, and number of postadults. The symbols

of the system dynamics flow graph provide us with a hydraulic (or fluidic) metaphor

for systems problem solving. Water represents individual entities passing through the

system (in this case, number of eggs, number of juveniles etc). The water valve (or

rate) can be adjusted just like a water tap, and the rectangles can be seen as contain-

ers (the levels) for the water flow. Other symbols, such as circles, represent auxiliary
























Figure 6.1. APPLESNAIL life cycle model


variables for determining rates. The SD flow graph is translated into a set of differen-

tial equations and difference equations for simulation purposes. Refer to reference [15]

for the general algorithm for taking a system dynamics flow graph and producing a

set of equations.
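The flavor of that translation can be sketched with a generic Euler update: every level is a container updated by level(t) = level(t-dt) + (inflows - outflows) * dt, with rates recomputed from the current levels each step. The two-tank example and all names below are invented for illustration, not taken from reference [15]:

```python
# Generic difference-equation update for a system dynamics flow graph:
# rates (valves) are functions of the current levels; each level
# (container) accumulates its net flow over one time step.

def step(levels, rate_fns, flows, dt):
    rates = {name: fn(levels) for name, fn in rate_fns.items()}
    new_levels = dict(levels)
    for level, (inflows, outflows) in flows.items():
        net = sum(rates[r] for r in inflows) - sum(rates[r] for r in outflows)
        new_levels[level] = levels[level] + net * dt
    return new_levels

# Invented two-level chain: water drains from tank A into tank B.
flows = {"A": ([], ["drain"]), "B": (["drain"], [])}
rate_fns = {"drain": lambda lv: lv["A"] * 0.5}
levels = step({"A": 10.0, "B": 0.0}, rate_fns, flows, dt=1.0)
```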

6.1 Model Generation

A conceptual model of APPLESNAIL is constructed on OOPM. It is designed

in terms of classes, attributes, methods and relationships between classes using in-

heritance and composition. Figure 6.2 shows the class hierarchy of APPLESNAIL,

which basically follows the age class of apple snails: Egg (E), Juvenile (J), Pread-

ult (P), Reproadult (R) and Postadult (A). Classes are connected by a composition

relationship as denoted by rectangular boxes in Figure 6.2. V and 1, denoted in the

white boxes, specify the cardinality of the associated class. In Figure 6.2,

class Marsh is composed of 8 by 12 Patch. Each Patch has one SnailPop and one

Hydrology. SnailPop is composed of EggPop, JuvPop, PreadultPop, ReproPop, and

PostPop, each of which determines the population of its age class. In the following

section, we discuss attributes and methods of each class to describe the dynamic

behaviors associated with the life cycle of apple snails.
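The containment relationships just described can be mirrored in a skeletal sketch (a hypothetical Python mock-up of the conceptual model, not OOPM output):

```python
# Skeleton mirroring the APPLESNAIL composition: Marsh contains an
# 8-by-12 grid of Patch; each Patch holds one SnailPop and one
# Hydrology; SnailPop holds one population object per age class.

class EggPop: pass
class JuvPop: pass
class PreadultPop: pass
class ReproPop: pass
class PostPop: pass
class Hydrology: pass

class SnailPop:
    def __init__(self):
        self.egg_pop, self.juv_pop = EggPop(), JuvPop()
        self.preadult_pop, self.repro_pop = PreadultPop(), ReproPop()
        self.post_pop = PostPop()

class Patch:
    def __init__(self):
        self.snail_pop = SnailPop()    # cardinality 1
        self.hydrology = Hydrology()   # cardinality 1

class Marsh:
    def __init__(self):
        self.patches = [[Patch() for _ in range(12)] for _ in range(8)]

marsh = Marsh()
```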




























Figure 6.2. Conceptual model of APPLESNAIL


6.1.1 Structural Abstraction of APPLESNAIL

Figures 6.3-6.9 show the structural abstraction of APPLESNAIL. The starting

method, M1, is located in the Patch class as shown in Figure 6.3. M1 reads water tem-

perature from the Hydrology class. The water temperature of a month is determined

by:

temperature = 32 SINWAVE(1, 12)

Based on the water temperature, M2 calculates the population for each age class of

apple snails. M2 is an FBM as shown in Figure 6.4. M2 specifies how the popu-

lation of each age class is determined based on the causal relationships among the

APPLESNAIL classes. M4 calculates 1) the population of the egg class, and 2) the

hatch rate based on the current temperature and reproadult population.










eggs(t) = eggs(t-dt) + (oviposition - hatch rate - egg loss rate) * dt
initial condition: eggs = 0
inflows: oviposition = (reproadult * 0.5) * 120 * mate index          (6.1)
outflows:
hatch rate = eggs * hatch fraction / incubation time
egg loss rate = eggs * (1 - hatch fraction) / incubation time

Figure 6.5 shows an SD model for equation 6.1. Each symbol of M4 is refined into

another model type; for instance, M11 is an RBM which determines the mate index

by the following rule:



IF (temperature > 20) THEN 1 ELSE 0

Other symbols are refined into CODE methods, M10, M12, M13, and M14 to represent

the detailed egg behaviors discussed in equation 6.1.

M5 determines 1) the growth rate and 2) the population of the juvenile class,

based on the hatch rate from M4.


juv(t) = juv(t-dt) + (hatch rate - growth rate - juvenile death rate) * dt
initial condition: juv = 0
inflows: hatch rate = eggs * hatch fraction / incubation time         (6.2)
outflows:
growth rate = juv / growth time
juvenile death rate = juv * 0.4


Rate symbols are refined into CODE methods, M16, M17, and M18, to represent the

behaviors in equation 6.2. Figure 6.6 shows the SD model of M5.

The growth rate is sent to M6 and used to calculate 1) the population of the

preadult class, and 2) the maturation rate of the preadult class.










preadult(t) = preadult(t-dt) +
(growth rate - maturation rate - preadult death rate) * dt
initial condition: preadult = 0
inflows: growth rate = juv / growth time                              (6.3)
outflows:
maturation rate = preadult / maturation time
preadult death rate = preadult * 0.2


Rate symbols are refined into CODE methods to describe the detailed behaviors in

equation 6.3: M20 defines the growth rate, M21 defines the preadult death rate, and

M22 defines the maturation rate. Figure 6.7 shows the SD model of M6.
M7 receives the maturation rate from M6 and calculates 1) the senescence rate,

and 2) the population of the reproadult class.


reproadult(t) = reproadult(t-dt) +
(maturation rate - senescence rate - reproadult death rate) * dt
initial condition: reproadult = 1000
inflow: maturation rate = preadult / maturation time                  (6.4)
outflow:
senescence rate = reproadult / senescence time
reproadult death rate = reproadult * 0.2


Rate symbols are refined into CODE methods to describe the detailed behaviors in

equation 6.4: M24 defines the maturation rate, M25 defines the reproadult death rate,

and M26 defines the senescence rate. Figure 6.8 shows the SD model of M7.

The determined senescence rate goes to M8, where it is used to

calculate the population of postadults. The population information of each class is

collected in M8 to be reported on OOPM.


postadult(t) = postadult(t-dt) + (senescence rate - postadult death rate) * dt
initial condition: postadult = 0                                      (6.5)
inflows: senescence rate = reproadult / senescence time
outflows: postadult death rate = postadult * 0.9





















Figure 6.3. Top level FBM, M1: starting method

Rate symbols are refined into CODE methods to describe the detailed behaviors in

equation 6.5: M28 defines the senescence rate, and M29 defines the postadult death

rate. Figure 6.9 shows the SD model of M8.
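Equations 6.1 through 6.5 can be run together in one fixed-step loop. The sketch below is a hedged illustration, not the OOPM-generated simulator: the constants HATCH_FRACTION, INCUBATION_TIME, GROWTH_TIME, MATURATION_TIME, and SENESCENCE_TIME are invented placeholders (the text does not give their values), and temperature() only approximates the SINWAVE builtin as an annual sinusoid.

```python
import math

# Euler integration of the five APPLESNAIL levels (eqs. 6.1-6.5).
# Placeholder constants -- NOT taken from the dissertation.
HATCH_FRACTION, INCUBATION_TIME = 0.8, 2.0
GROWTH_TIME, MATURATION_TIME, SENESCENCE_TIME = 3.0, 4.0, 12.0

def temperature(t):
    # assumed annual sinusoid; only whether it exceeds 20 matters here
    return 25 + 10 * math.sin(2 * math.pi * t / 12)

def simulate(months, dt=0.1):
    eggs, juv, preadult, reproadult, postadult = 0.0, 0.0, 0.0, 1000.0, 0.0
    t = 0.0
    while t < months:
        mate_index = 1 if temperature(t) > 20 else 0
        oviposition = (reproadult * 0.5) * 120 * mate_index        # eq 6.1
        hatch = eggs * HATCH_FRACTION / INCUBATION_TIME
        egg_loss = eggs * (1 - HATCH_FRACTION) / INCUBATION_TIME
        growth = juv / GROWTH_TIME                                 # eq 6.2
        juv_death = juv * 0.4
        maturation = preadult / MATURATION_TIME                    # eq 6.3
        preadult_death = preadult * 0.2
        senescence = reproadult / SENESCENCE_TIME                  # eq 6.4
        reproadult_death = reproadult * 0.2
        postadult_death = postadult * 0.9                          # eq 6.5
        eggs += (oviposition - hatch - egg_loss) * dt
        juv += (hatch - growth - juv_death) * dt
        preadult += (growth - maturation - preadult_death) * dt
        reproadult += (maturation - senescence - reproadult_death) * dt
        postadult += (senescence - postadult_death) * dt
        t += dt
    return eggs, juv, preadult, reproadult, postadult

pops = simulate(12)   # one year of dynamics
```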

6.1.2 Behavioral Abstraction of APPLESNAIL

The process of the behavioral abstraction starts with the observed data set of

(input,output) from the simulation of the base model. Simulation (input,output)

data are collected for a simulation logical clock of 100 with a time step of 0.1. With

this prior knowledge, we abstract the multimodel methods of M4, M5, M6, M7, and

M8 with the Box-Jenkins models.

Figure 6.10 shows the behavioral abstraction process of M5. Inputs for the Box-

Jenkins model are 1) water temperature, and 2) reproadult population. Based on

the learning parameters specified in the parameter window, the Box-Jenkins learning

algorithm reads the input and output data and produces weights and offsets that

explain the input/output functional relationship. Though the output from the Box-

Jenkins model cannot exactly follow the trajectory of the original growth rate, the

overall behaviors are well-learned and predicted. (A4, A5, A6, A7, A8) are generated

from the behavioral abstraction process of (M4, M5, M6, M7, M8). We double the

simulation time and obtain the prediction performance for each of the behavioral































Figure 6.4. Structural abstraction of M3: M3 describes how the population level is
determined for each age class of apple snails. Each block of M3 is refined into an SD
model


Figure 6.5. Structural abstraction of M4: M4 defines the egg population in an SD
model


































Figure 6.6. Structural abstraction of M5: M5 defines the juvenile population in an SD
model


Figure 6.7. Structural abstraction of M6: M6 defines the preadult population in an SD
model