TWO METHODS FOR EXPLOITING ABSTRACTION IN
SYSTEMS
Paul A. Fishwick and Kangsun Lee
Computer and Information Science and Engineering Department
University of Florida
Gainesville, FL 32601
Abstract
As complex models are used in practice, modelers
require ways of abstracting their models and having the
ability to traverse levels of abstraction. The use of ab
straction in modeling is spread over many disciplines,
and it is often difficult to locate an abstraction method
ology or a set of practical techniques to help the mod
eler to perform the abstraction. Several approaches
have been discussed in the general simulation litera
ture: (1) variable resolution modeling; (2) combined
modeling; (3) multimodeling; and (4) metamodeling.
Our premise is that there are two different approaches
to abstraction: behavioral and structural. We present
one physical example of heat transfer and apply the
different abstraction approaches on this example. The
structural approach to abstraction is an important design
approach to break a system into hierarchical levels.
Behavioral abstraction serves to simplify the dynamics
of a system without gaining the kind of reductionist
knowledge one obtains through hierarchical decompo
sition. This work provides a comprehensive approach
to system abstraction, while including many practical
behavioral methods to achieve abstract system descrip
tions.
1 Introduction
Dynamical systems can be treated with only one
kind of model, depending on the level of information
that one expects to receive from analysis. Increas
ingly, however, models must be multi-leveled so that
different abstraction levels of the model respond to
different needs of the analyst. These levels are of
ten best defined using modeling approaches which are
specifically targeted for a level. For instance, one
would not choose to model low-level physical behavior
with a Petri net, since a Petri net is an appropriate
model type for a particular sort of condition within
a system: one where there is contention for resources
by discretely-defined moving entities. To incorporate
different levels together, we have constructed a mul
timodeling methodology [8, 13, 9, 10, 12], which pro
vides a way of structuring a heterogeneous set of model
types together in a way so that each type performs
its part, and the behavior is preserved as levels are
mapped [7, 20, 23].
While the multimodel approach is sound for well-
structured models defined in terms of state space func
tions and set-theoretic components, there are signifi
cant problems with it. The multimodeling approach
provides for a hierarchical model structure, but selected
components in each level are dependent on the
next-lowest level. This implies that we are unable to
run each level independently. It is possible to ob
tain output for any abstraction level but, nevertheless,
the entire model must be executed for the hierar
chy as a whole. A new definition and methodology
is needed to better handle abstraction of systems and
components. We define system abstraction to be one
of two types: behavioral and structural. The concepts
of behavior and structure are fundamental to systems
theory [15, 17], and have also been formalized within
discrete event simulation [21, 22]. We need a way to
augment multimodeling so that abstraction can take
place on these two fronts. Behavioral abstraction is
one where a system is abstracted by its behavior;
that is, we replace a system component, which may
be currently subdefined structurally, with something
more generic which produces similar behavior. Struc
tural abstraction is equivalent to multimodeling. We
abstract structurally using homomorphic relationships
of one level to another.
We will use a simple system of boiling water to
demonstrate both methodologies. This system was
chosen since it represents the simplest form of low-
level dynamics: first order (exponential) behavior. At
the same time, the system allows for time-dependent
input and output trajectories, so one can characterize
the system using transient or steady state behavior.
This is not true of some other simple examples, such
as modeling the flight of a ballistic shell, where the in
puts of orientation and position are not time-varying
(or if they were, they would have no effect on the shell
after it leaves the gun barrel).
The outline of the paper is as follows. We specify
the domain example in Sec. 2. After the example, we
briefly overview the current multimodeling approach,
which captures the structural method of system ab
straction, in Sec. 3. The bulk of the paper relates to
experiments on the behavioral approach (Sec. 4). We
close with a summary in Sec. 5.
2 Sample System
Figure 1: Boiling Water System (water in a copper pot on a heating element; knob input trajectory and temperature output trajectory)
Consider a pot of water in Fig. 1. Here we show
a picture of the boiling pot along with an input and
output trajectory. The input reflects the state of the
knob, which serves to specify external events for the
system. The output defines the temperature of the
water over time. Newton's law of cooling states that
$Rq_h = \Delta T = T_1 - T_2$, where $T_1$ is the temperature of
the source (heating element), $T_2$ is the tempera
ture of the water, and $q_h$ is the heat flow. Since $T_2$ is our state
variable we let $T = T_2$ for convenience. By combining
Newton's law with the capacitance law, and using the
law of capacitors in series, we arrive at:

$$k = \frac{C_1 + C_2}{R C_1 C_2} \qquad (1)$$

$$\dot{T} = k(T_1 - T) \qquad (2)$$
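For concreteness, the following is a minimal numerical sketch of this first-order model in Python; the rate constant k, the element and ambient temperatures, and the knob schedule are invented placeholder values rather than parameters taken from the multimodel in [11].

```python
import numpy as np

def knob(t):
    """Illustrative input trajectory: the knob is on from t = 10 to t = 120 s."""
    return 10.0 <= t <= 120.0

def simulate(k=0.05, t_element=100.0, t_ambient=20.0, dt=0.1, t_end=200.0):
    """Euler integration of Eq. (2), dT/dt = k*(T1 - T).

    T1 is the heating-element temperature while the knob is on and the
    ambient temperature otherwise, so the water first heats and then cools.
    """
    times = np.arange(0.0, t_end, dt)
    temps = np.empty_like(times)
    temp = t_ambient
    for i, t in enumerate(times):
        t1 = t_element if knob(t) else t_ambient
        temp += dt * k * (t1 - temp)     # first-order (exponential) response
        temp = min(temp, 100.0)          # water cannot exceed the boiling point
        temps[i] = temp
    return times, temps

times, temps = simulate()
print(temps[-1])                         # temperature at the end of the run
```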
3 Structural Abstraction
The structural approach to system abstraction for
the boiling water is defined in a recent text [11], where
the boiling water is included as a subsystem within a
system of two flasks and a human operator who mixes
the flasks once the liquids are boiling. In the struc
tural abstraction approach to modeling, we first need to
define our levels of abstraction and then choose which
model types to use at each level. In [8, 13, 11], we
have the following model levels:
1. Level 1: Petri net, defines the action of the human
operator and the mixing process.
2. Level 2: FSA, defines the phases of water
during heating and cooling (a sketch of this phase
controller appears after the list).
(a) Sublevel 2.1: FSA, defines two states: cold
and not cold.
(b) Sublevel 2.2: FSA, defines four states un
derneath not cold: heating, cooling, boiling
and exception.
(c) Sublevel 2.3: FSA, defines two states un
derneath exception: underflow and overflow.
3. Level 3: Block model, defines Newton's Law
of Cooling subdefining both cooling and heating
phases.
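As a concrete illustration of the Level 2 phase hierarchy just listed, here is a minimal FSA-style sketch in Python; it is not the multimodel implementation from [11], and the event names, thresholds, and exception-state labels are illustrative assumptions.

```python
# Phases of the six-state controller (Fig. 2): cold, heating, cooling,
# boiling, and the two exception sub-states.  Transitions are driven by
# external knob events and by threshold crossings of the water temperature.
TRANSITIONS = {
    ("cold",    "knob_on"):    "heating",
    ("heating", "knob_off"):   "cooling",
    ("heating", "at_boiling"): "boiling",
    ("boiling", "knob_off"):   "cooling",
    ("cooling", "at_ambient"): "cold",
    ("boiling", "water_low"):  "underflow",   # exception sub-states (assumed names)
    ("boiling", "water_high"): "overflow",
}

def step(phase, event):
    """Return the next phase; unrecognized events leave the phase unchanged."""
    return TRANSITIONS.get((phase, event), phase)

phase = "cold"
for event in ["knob_on", "at_boiling", "water_low"]:
    phase = step(phase, event)
print(phase)   # -> underflow
```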
We show part of the multimodel in Figs. 2 and 3.
The first model is a compressed version of all of the hi
erarchy specified in Sublevels 2.1-2.3 above. Fig. 3 shows
Newton's law of cooling in a functional block form.

Figure 2: Six-State Automaton Controller for the Boiling Water Multimodel

Figure 3: Decomposition of the Heating Phase

Figure 4: Elapsed time vs. Temperature
4 Behavioral Abstraction
Behavioral abstraction is one where we approxi
mate the behavior of one system component using
a generic model or network of some type. We have
two approaches to recording the system behavior:
(1) a static approach and (2) a dynamic approach. In the
static approach, one takes a system and captures only
the steady state output value instead of a complete
output trajectory. Moreover, in the static approach,
we have single values for input as well. In the dynamic
approach, we need to associate time-dependent input
trajectories with output trajectories.
4.1 Static Approach
In the static approach, we're interested only in the
final temperature of the water. Our two inputs are:
(1) the total amount of elapsed time for the input tra
jectory and (2) the integral time value of the input tra
jectory. The output is the temperature. A graph of
elapsed time versus temperature is shown in Fig. 4.
This information is obtained directly from the under
lying simulation of the boiling water system. We chose
a subset of all possible input time trajectories in such
a way that some nonlinearity was introduced into the
graph in Fig. 4. This was done to challenge the behav
ioral parameter estimation methods in creating a good
fit. This explains why Fig. 4 contains a small area of
discontinuity in the region between steady state tem
perature values of 20 and 40.
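As a small sketch of how the two static inputs can be computed from a sampled knob trajectory (the sampling step and trajectory below are hypothetical, not the data behind Fig. 4):

```python
import numpy as np

dt = 1.0                                  # sampling step of the trajectory (s)
knob = np.array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0], dtype=float)   # placeholder input

elapsed_time = dt * len(knob)             # (1) total elapsed time of the trajectory
integral_value = np.trapz(knob, dx=dt)    # (2) integral of the input over time
print(elapsed_time, integral_value)
```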
4.1.1 Linear Regression
In general, a polynomial fit to data in vectors x and y
is a function p of the form:

$$p(x) = c_1 x^n + c_2 x^{n-1} + \cdots + c_d$$

The degree is n and the number of coefficients is
d = n + 1. The regression coefficients $c_1, c_2, \ldots, c_d$ are
determined by solving a system of simultaneous linear
equations: $Ac = y$ [16]. Fig. 8 shows the result. The
approximation is poor in the graph's central region be
cause linear regression is done by polynomial fit, and
so it generates a monotonically increasing function.
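As an illustration of this fitting step, a minimal polynomial-regression sketch in Python follows; it is not the fit used to produce Fig. 8, and the (elapsed time, steady-state temperature) pairs are invented placeholders for the data described above.

```python
import numpy as np

# Hypothetical training data: elapsed time of the knob-on input (s)
# versus the final (steady-state) water temperature (deg C).
elapsed_time = np.array([10, 20, 40, 60, 90, 120, 160, 200], dtype=float)
final_temp   = np.array([22, 30, 46, 58, 74, 85, 94, 99],   dtype=float)

degree = 3                                             # n; d = n + 1 coefficients
coeffs = np.polyfit(elapsed_time, final_temp, degree)  # least-squares solve of Ac = y
p = np.poly1d(coeffs)

print(p(75.0))   # predicted steady-state temperature for a 75 s input
```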
4.1.2 Backpropagation Network
One of the traditional uses of a neural network is func
tion approximation. Abstraction can be considered to
approximate an output function given input samples.
The typical two-layer architecture used for a function
approximation network is shown in Fig. 5. It has one
hidden layer of sigmoidal neurons which receive in
puts directly and then broadcast their outputs to a
layer of linear neurons, which compute the network
output [2, 14]. Fig. 9 shows the approximation result.
This also shows poor performance in abstracting the
sharply changing part, as did the linear regression model.
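A hedged sketch of such a two-layer function-approximation network follows, using scikit-learn's MLPRegressor as a stand-in for whatever tool produced Fig. 9; the logistic hidden layer feeding a linear output matches the architecture described above, while the hidden-layer size and training settings are arbitrary choices, and the data are the same invented placeholders as before.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented (elapsed time, steady-state temperature) placeholder pairs.
X = np.array([[10], [20], [40], [60], [90], [120], [160], [200]], dtype=float)
y = np.array([22, 30, 46, 58, 74, 85, 94, 99], dtype=float)

# One hidden layer of sigmoidal (logistic) neurons feeding a linear output
# neuron, as in Fig. 5; the hidden-layer size and solver are arbitrary.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[75.0]]))              # interpolated steady-state estimate
```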
4.2 Dynamic Approach
In the dynamic approach, we're interested in time-
dependent behavior. In this case, we are concerned
not only with the steady state temperature but also with
the way in which the temperature changes over time.

Figure 5: Backpropagation Network

For this approach, we chose a system with just one input
and one output, both time-varying trajectories. The
input is the knob (off/on) trajectory and
the output is the temperature trajectory.
4.2.1 Linear System Identification
The system identification problem is to estimate a
model of a system based on observed input-output
data. This parameter estimation procedure provides a
search through parameter space, effectively, to achieve
a close-to-optimal mapping between the actual simu
lated values of the system and the approximate ab
stract system. The Box-Jenkins method is a method
used frequently in time series analysis [19, 1]. Its struc
ture is given by
$$y(t) = \frac{B(q)}{F(q)}\, u(t - n_k) + \frac{C(q)}{D(q)}\, e(t) \qquad (4)$$

with

$$B(q) = b_1 + b_2 q^{-1} + \cdots + b_{n_b} q^{-n_b+1}$$
$$C(q) = 1 + c_1 q^{-1} + \cdots + c_{n_c} q^{-n_c}$$
$$D(q) = 1 + d_1 q^{-1} + \cdots + d_{n_d} q^{-n_d}$$
$$F(q) = 1 + f_1 q^{-1} + \cdots + f_{n_f} q^{-n_f}$$

The numbers $n_b$, $n_c$, $n_d$ and $n_f$ are the orders of the
respective polynomials and $q$ is the shift operator. The
number $n_k$ is the number of delays from input to out
put. Fig. 10 shows the approximation result.
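As a rough illustration of this style of parameter estimation, the sketch below fits a simplified ARX structure by ordinary least squares rather than the full Box-Jenkins form (the separate noise model C(q)/D(q) is omitted); the knob and temperature trajectories are invented placeholders.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2, nk=1):
    """Simplified ARX-style least-squares fit, a stand-in for Box-Jenkins:
    y(t) = -a1*y(t-1) - ... + b1*u(t-nk) + ... + e(t)."""
    start = max(na, nb + nk - 1)
    rows, targets = [], []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]     # autoregressive terms
        past_u = [u[t - nk - i] for i in range(nb)]         # delayed input terms
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta   # estimated [a1..a_na, b1..b_nb]

# Invented knob/temperature trajectories standing in for the simulated data.
u = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
y = np.array([20, 20, 28, 36, 43, 49, 54, 50, 46, 43, 40, 46, 52, 57, 53, 49],
             dtype=float)
print(fit_arx(u, y))
```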
4.2.2 ADALINE Neural Network
The adaptive linear element (ADALINE) was devel
oped by Widrow and Hoff [3]. Their neural network
model differs from the perceptron in that their neu
rons have a linear transfer function. The ADALINE
network also enables the Widrow-Hoff learning rule,
known as the Least Mean Square (LMS) rule, to ad
just weights and biases according to the magnitude of
errors, not just their presence. The ADALINE net
work for our example is shown in Fig. 6 with one layer
of S neurons connected to R inputs through a matrix
of weights W.

Figure 6: ADALINE Network (R inputs, S neurons)

Fig. 12 shows the output signal of the linear neuron
with the target signal.
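A minimal sketch of the Widrow-Hoff (LMS) update for a single linear neuron follows; the inputs, targets, and learning rate below are invented placeholders (the rate is kept small for stability with unscaled temperature values), and this is not the network configuration behind Fig. 12.

```python
import numpy as np

def lms_train(inputs, targets, lr=0.0005, epochs=1000):
    """Widrow-Hoff (LMS) rule: the weights and bias of a linear neuron are
    adjusted in proportion to the magnitude of the error, not just its sign."""
    w = np.zeros(inputs.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = np.dot(w, x) + b          # linear transfer function
            e = t - y                     # error magnitude drives the update
            w += lr * e * x
            b += lr * e
    return w, b

# Invented example: predict the next temperature sample from the current
# knob setting and the current temperature.
X = np.array([[1, 20], [1, 28], [1, 36], [0, 43], [0, 40], [1, 38]], dtype=float)
t = np.array([28, 36, 43, 40, 38, 44], dtype=float)
w, b = lms_train(X, t)
print(X @ w + b)                          # network outputs vs. the targets
```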
4.2.3 Gamma Network
The Gamma network can be regarded as an exten
sion of the Multilayer Perceptron (MLP). It includes
memory structures so that temporal patterns can be
converted into static input patterns. A Gamma net
work focuses on a network architecture whose mem
ory structures are implemented only at the input layer,
to alleviate the difficulty in determining the memory
order [4, 18]. A schematic diagram of the Gamma network
for our example is shown in Fig. 7.

Figure 7: Gamma Network

The abstraction result for the Gamma network is
shown in Fig. 14.
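A minimal sketch of the gamma memory stage alone (the leaky-integrator cascade of [4]) follows, with the downstream feedforward layers omitted; the number of taps, the memory parameter mu (which would normally be adapted during training), and the knob trajectory are illustrative assumptions.

```python
import numpy as np

def gamma_memory(u, taps=3, mu=0.4):
    """Gamma memory stage (de Vries and Principe [4]): a cascade of leaky
    integrators that turns the temporal input u(t) into a static pattern
    x_1(t)..x_K(t), which can then feed an ordinary MLP.
        x_0(t) = u(t)
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1)
    """
    x = np.zeros(taps + 1)
    out = []
    for sample in u:
        prev = x.copy()                  # values from the previous time step
        x[0] = sample
        for k in range(1, taps + 1):
            x[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out.append(x[1:].copy())
    return np.array(out)                 # shape (len(u), taps)

# Invented knob trajectory; each row of the result is the static input
# pattern the downstream network would see at that time step.
u = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0], dtype=float)
print(gamma_memory(u))
```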
5 Summary and Conclusions
We have provided two practical approaches to sys
tem abstraction. The first is to use structural abstrac
tion and the second is to use behavioral abstraction.
In most cases, one should explore both of these meth
ods when constructing systems. For instance, when a
system is first being designed, one should construct it
hierarchically, with simple model types at first, and
graduating to more complex model types later. We
term this iterative procedure model engineering. The
more complex types are refinements of model compo
nents at the next highest abstraction level. Hierarchi
cal system development has been commonly used for
many years in software and systems engineering [5, 6],
but the multimodeling approach makes it more flexi
ble by allowing for heterogeneous, interlevel coupling
of model types. After creating the hierarchy, we may want
to isolate abstraction levels, so a level can be executed
apart from the rest of the hierarchy. This is where the
behavioral approaches are helpful: they use the hier
archy in order to "learn" or estimate model param
eters. Still, with behavioral abstraction, an a priori
model type needs to be postulated. This is true of any
identification or estimation procedure, unless the au
tomated search involves an additional search through
variable structure models. We discussed several ap
proximation methods and further subdefined the be
havioral approach into static and dynamic. Our re
search so far has been to characterize system abstrac
tion in a clean way and to illustrate alternatives, but
we have not yet promoted any one type. For instance,
is it good to use a recurrent neural network such as
the Gamma network, or is it sufficient to use a non-
recurrent structure such as backpropagation? There is
a problem with choice of model, but the problems do
not end there. There is also the consideration of the
search through parameter space. Some of the param
eters, especially for neural networks, are chosen by
hand and others are chosen during parameter training.
Is this effective? A completely automated solution to
system abstraction is not apparent at this point, but at
least we can categorize what is available, along with
some relative advantages and disadvantages of each
technique.
Acknowledgements
We would like to acknowledge the following fund
ing sources which have contributed towards our study
of modeling and implementation of the MOOSE sys
tem: (1) Rome Laboratory, Griffiss Air Force Base,
New York under contract F30602-95-C-0267 and grant
F30602-95-1-0031; (2) Department of the Interior
under grant 14-45-0009-1544-154; and (3) the Na
tional Science Foundation Engineering Research Cen
ter (ERC) in Particle Science and Technology at the
University of Florida (with Industrial Partners of the
ERC) under grant EEC-94-02989. In particular, we
would like to thank Alex Sisti, Rome Laboratory, for
his encouragement of producing a more comprehen
sive study of the problem of abstraction in system
modeling and simulation.
References
[1] System Identification Toolbox. The MathWorks,
Inc., 1991.
[2] Neural Network Toolbox. The MathWorks, Inc.,
1992.
[3] B. Widrow and S. D. Stearns. Adaptive Signal Pro
cessing. Prentice-Hall, 1985.
[4] Bert de Vries and Jose C. Principe. The Gamma
Model: A New Neural Model for Temporal Pro
cessing. Neural Networks, 5:565-576, 1992.
[5] Paul A. Fishwick. A Taxonomy for Simulation
Modeling Based on a Computational Framework.
IIE Transactions on IE Research, Submitted Au
gust 1995.
[6] Paul A. Fishwick. Dynamic Models as Patterns
for System and Software Development. Commu
nications of the ACM, Submitted January 1996;
Special issue on Design Patterns and Pattern
Languages.
[7] Paul A. Fishwick. The Role of Process Abstrac
tion in Simulation. IEEE Transactions on Sys
tems, Man and Cybernetics, 18(1):18-39, Jan
uary/February 1988.
[8] Paul A. Fishwick. Heterogeneous Decomposi
tion and Coupling for Combined Modeling. In
1991 Winter Simulation Conference, pages 1199-
1208, Phoenix, AZ, December 1991.
[9] Paul A. Fishwick. An Integrated Approach to
System Modelling using a Synthesis of Artificial
Intelligence, Software Engineering and Simula
tion Methodologies. ACM Transactions on Mod
eling and Computer Simulation, 1992. (Submitted
for review).
[10] Paul A. Fishwick. A Simulation Environment
for Multimodeling. Discrete Event Dynamic Sys
tems: Theory and Applications, 3:151-171, 1993.
[11] Paul A. Fishwick. Simulation Model Design and
Execution: Building Digital Worlds. Prentice
Hall, 1995.
[12] Paul A. Fishwick, Hari Narayanan, Jon Sticklen,
and Andrea Bonarini. Multimodel approaches for
system reasoning and simulation. IEEE Transac
tions on Systems, Man and Cybernetics, 1994. To
be published.
[13] Paul A. Fishwick and Bernard P. Zeigler. A Mul
timodel Methodology for Qualitative Model En
gineering. ACM Transactions on Modeling and
Computer Simulation, 2(1):52-81, 1992.
[14] Li Min Fu. Neural Networks in Computer Intel
ligence. McGrawHill, 1994.
[15] R. E. Kalman, P. L. Falb, and M. A. Arbib. Top
ics in Mathematical System Theory. McGraw-
Hill, New York, 1969.
[16] Averill M. Law and W. David Kelton. Simulation
Modeling and Analysis. McGraw-Hill, 1991.
[17] Louis Padulo and Michael A. Arbib. System
Theory: A Unified State-Space Approach to Con
tinuous and Discrete Systems. W. B. Saunders,
Philadelphia, PA, 1974.
[18] Jose C. Principe and Pedro G. de Oliveira. The
Gamma Filter: A New Class of Adaptive IIR Fil
ters with Restricted Feedback. IEEE Transactions
on Signal Processing, 41(2):649-656, 1993.
[19] Zaiyong Tang and Paul A. Fishwick. Feedforward
Neural Nets as Models for Time Series Forecast
ing. TR91-008, Computer and Information Sci
ences, University of Florida, 1991.
[20] Bernard P. Zeigler. Towards a Formal Theory of
Modelling and Simulation: Structure Preserving
Morphisms. Journal of the Association for Com
puting Machinery, 19(4):742-764, 1972.
[21] Bernard P. Zeigler. Theory of Modelling and Sim
ulation. John Wiley and Sons, 1976.
[22] Bernard P. Zeigler. MultiFacetted Modelling
and Discrete Event Simulation. Academic Press,
1984.
[23] Bernard P. Zeigler. Object Oriented Simulation
with Hierarchical, Modular Models: Intelligent
Agents and Endomorphic Systems. Academic
Press, 1990.
Figure 8: Linear Regression

Figure 9: Backpropagation Network

Figure 10: Box-Jenkins method

Figure 11: Abstraction error in Box-Jenkins method

Figure 12: ADALINE network

Figure 13: Abstraction error in ADALINE network

Figure 14: Gamma Network

Figure 15: Abstraction Error in Gamma Network