
INTELLIGENT AUTONOMOUS SYSTEMS:
CONTROLLING REACTIVE BEHAVIORS WITH
CONSISTENT WORLD MODELING AND REASONING
















By

AKRAM A. BOU-GHANNAM


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


1991







































(C) Copyright 1991 by Akram A. Bou-Ghannam. All rights
reserved.



































To my daughter Stephanie

Through her eyes I see peace on earth,
and receive my motivation.
















ACKNOWLEDGEMENTS

It takes lots of special people to make a dissertation.

First and foremost, I would like to thank my wife Nada and

my daughter Stephanie, for giving me lots of love and moral

support. I am also very grateful to the IBM Corporation for

the financial support and wonderful treatment. My IBM

managers and colleagues at the Boca Raton site were

extremely helpful and understanding. I could always count on

their support whenever I needed parts, equipment, technical

material, or attendance at technical conferences. Special thanks

go to the following IBMers: Rit VanDuren, Jerry Merckel, Tom

Kludt, Barbara Britt, Vic Moore, Rick Mendelson, Steve

Henderson, and Sharon Amato.

Within the university, I would like to thank my advisor

and all my committee members responsible for the

intellectual stimulation that shaped this dissertation.

Special thanks go to Dr. Carl Crane for sponsoring and

encouraging the cooperation between the robotics groups in

electrical, mechanical, and nuclear engineering. This

fruitful cooperation allowed me to put my ideas to practice

and run live experiments. Guido Reuter worked closely with

me on the "map builder" implementation and deserves much

thanks. Also, thanks go to Tom Heywood for his help in the

system setup and robot control.









TABLE OF CONTENTS


ACKNOWLEDGEMENTS

ABSTRACT

CHAPTER 1: INTRODUCTION
1.1 Philosophical Underpinnings and Overview
1.2 Contributions of the Research

CHAPTER 2: SENSOR DATA FUSION: MANAGING UNCERTAINTY
2.1 Motivation
2.2 Sensors and the Sensing Process
2.3 Classification of Sensor Data
2.4 Levels of Abstraction of Sensor Data
2.5 Sensor Data Fusion Techniques
2.5.1 Correlation or Consistency Checking
2.5.2 Fusion at Lower Levels of Abstraction
2.5.3 Fusion at Middle and High Levels of Abstraction
2.5.3.1 Bayesian Probability Theory
2.5.3.2 Certainty Theory
2.5.3.3 Fuzzy Set Theory
2.5.3.4 Belief Theory
2.5.3.5 Nonmonotonic Reasoning
2.5.3.6 Theory of Endorsements
2.6 Implementation Examples

CHAPTER 3: INTELLIGENT FULLY AUTONOMOUS MOBILE ROBOTS
3.1 Behavior-Based Approaches to Robot Autonomy
3.1.1 Lessons from Animal Behavior
3.1.2 Current Behavior-Based Approaches to Robot Autonomy
3.1.3 Limitations of the Subsumption Architecture
3.2 Control Architecture: Behavior-Based vs. Traditional Control
3.3 Issues in World Model Construction
3.3.1 Position Referencing for a Mobile Robot
3.3.2 World Model Representation
3.3.3 Managing World Model Inconsistencies
3.4 Direction of Proposed Research

CHAPTER 4: THE PROPOSED HYBRID CONTROL ARCHITECTURE
4.1 The Planning Module: A Knowledge-Based Approach
4.2 Lower-Level Behaviors
4.2.1 Avoid Obstacles Behavior
4.2.2 Random Wander Behavior
4.2.3 Boundary Following Behavior
4.2.4 Target-Nav Behavior
4.3 Arbitration Network
4.4 Map Builder: A Distributed Knowledge-Based Approach

CHAPTER 5: EXPERIMENTAL SETUP AND IMPLEMENTATION
5.1 Implementation of the Planning Module
5.2 Implementation of the Map Builder
5.2.1 The EOU Knowledge Source
5.2.2 The Filter-Raw Knowledge Source
5.2.3 The 2-D Line Finding Knowledge Source
5.2.4 The Consistency Knowledge Sources
5.2.4.1 The Match/Merge Knowledge Source
5.2.4.2 The Re-reference Knowledge Source

CHAPTER 6: EXPERIMENTAL RESULTS
6.1 Results from a Live Experimental Run
6.2 Discussion of Results

CHAPTER 7: SUMMARY AND CONCLUSIONS

APPENDIX: INTRODUCTION TO CLIPS

REFERENCES

BIOGRAPHICAL SKETCH










Abstract of Dissertation Presented to the Graduate School of
the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

INTELLIGENT AUTONOMOUS SYSTEMS:
CONTROLLING REACTIVE BEHAVIORS WITH
CONSISTENT WORLD MODELING AND REASONING

By

Akram Bou-Ghannam

August 1991

Chairman: Dr. Keith L. Doty
Major Department: Electrical Engineering



Based on the philosophical view of reflexive behaviors

and cognitive modules working in a complementary fashion,

this research proposes a hybrid decomposition of the control

architecture for an intelligent, fully autonomous mobile

robot. This architecture follows a parallel distributed

decomposition and supports a hierarchy of control with

lower-level reflexive type behaviors working in parallel

with higher-level planning and map building modules. The

behavior-based component of the system provides the basic

instinctive competence for the robot while the cognitive

part is characterized by knowledge representations and a

reasoning mechanism which performs higher machine

intelligence functions such as planning. The interface

between the two components utilizes motivated behaviors

implemented as part of the behavior-based system. A

motivated behavior is one whose response is dictated mainly

by the internal state (or the motivation state) of the










robot. Thus, the cognitive planning activity can execute its

plans by merely setting the motivation state of the robot

and letting the behavior-based subsystem worry about the

details of plan execution. The goal of such a hybrid

architecture is to gain the real-time performance of a

behavior-based system without losing the effectiveness of a

general purpose world model and planner. We view world

models as essential to intelligent interaction with the

environment, providing a "bigger picture" for the robot when

reactive behaviors encounter difficulty.

Another contribution of this research is the Sensory

Knowledge Integrator proposed as the underlying model for

the map builder. This proposed framework follows a

distributed knowledge-based approach to the fusion of sensor

data from the various sensors of a multi-sensor system in

order to provide a consistent interpretation of the

environment being observed. Within the various distributed

knowledge sources of the Sensory Knowledge Integrator, we

tackle the problems of building and maintaining a consistent

model of the world and robot position referencing.

We describe a live experimental run of our robot under

hybrid control in an unknown and unstructured lab

environment. This experiment demonstrated the validity of

the proposed hybrid control architecture and the Sensory

Knowledge Integrator for the task of mapping the

environment. Results of the emergent robot behavior and










different map representations of the environment are

presented and discussed.














CHAPTER 1
INTRODUCTION




Organisms live in a dynamic environment and tailor

their actions based on their internal state and on the

perceived state of the external environment. This

interaction with the environment becomes more complex as one

ascends the ladder of hierarchy of organisms, starting with

the simplest ones that follow a stimulus-response type of

interaction where actions are a direct response to the

sensory information, and ending with humans that are endowed

with intelligence. Intelligence enables humans to reason

with symbols, to make models of the world and to make plans

to favorably alter the relationship between themselves and

the environment. The ability to reason, however, does not mean

that people are devoid of the primitive instinctive type of

behaviors. As a matter of fact, reflexive responses account

for most of what people do when walking, eating, talking,

etc. Less than half the brain is dedicated to higher-

level thinking [Albus 81].

Relative to the development of a machine (robot) which

exhibits various degrees of autonomous behavior, which of

the following two approaches is most appropriate: 1) Should

one design machines to mimic human intelligence with

symbolic reasoning and symbolic models of the world? 2)










Should one design machines that mimic "insect intelligence"

with no central brain and no symbolic models of the world? This

deep difference in philosophy currently divides the

artificial intelligence community into two camps: 1) The

"traditionalists," constituting the majority of researchers

who have long assumed that robots, just like humans, should

have models of their world and should reason about the next

action based on the models and current sensor data [Ayache

87] [Crowley 85] [Giralt 84b] [Kriegman 89] [Shafer 86]. 2)

The behavior-based camp of researchers [Brooks 86a] [Connell

89] [Payton 86] [Anderson 88] [Agre 90] who avoid symbolic

representations and reasoning, and advocate the endowment of

a robot with a set of low-level behaviors that react to the

immediacy of sensory information in a noncognitive manner.

The main idea is that "the world is its own best model", and

complex behaviors emerge from the interaction of the simple

low-level behaviors as they respond to the various stimuli

provided by the real world. The number of researchers in

this camp is small but growing rapidly. In the next

paragraphs we discuss the characteristics of each approach,

starting with the behavior-based one.

Ethological observations of animal behavior [Gould 82]

[Manning 79][McFarland 87] provide the inspirational basis

for the behavior-based approach in robotics. The observation

is that animals use instinctive behaviors rather than

"reasoning" to survive in their ever-changing environment.

Apparently, their actions are the resultant of various










reactions to external stimuli. These reactions are

apparently due to animal instinct and are not a consequence

of sophisticated processing or reasoning. Braitemberg

[Braitemberg 84] elaborates on the observation that animal

behavior could be an outcome of simple primitive behaviors

and such behavior could be reproduced in mobile robots whose

motors are driven directly by the output of the appropriate

sensors. Rodney Brooks' introduction of the subsumption

architecture [Brooks 86a] has given the behavior-based

approach a great push forward and has forced researchers in

the robotics community to reexamine their methods and basic

philosophy of robot control architecture. For example,

researchers have always taken for granted that a robot needs

to model its environment. Now, alerted by the main thesis of

the behavior-based approach of no global internal model and

no global planning activity, they ask questions of why and

for what specific tasks does one need to model the

environment. Brooks' subsumption architecture uses a menu of

primitive behaviors such as avoid obstacles, wander, track

prey, etc., each acting as an individual intelligence that

competes for control of the robot. There is no central brain

that chooses and combines these simple behaviors, instead,

the robot sensors and what they detect at that particular

moment determine the winning behavior. All other behaviors

at that point are temporarily subsumed. Surprisingly, the

conduct of Brooks' brainless "insect" robots often seems

clever. The simple responses end up working together in










surprisingly complex ways. These "insects" never consult a

map or make plans; instead, their action is a direct

response to sensory information. The payoff for eliminating

symbolic models of the environment and the central planner

is speed. Real time operation becomes possible since the

computational burden is distributed and greatly reduced.

Another advantage of the subsumption architecture is its

modularity and flexibility. In principle, more behaviors may

easily be added until the desired level of competence is

reached. A drawback of the behavior-based approach is that

one cannot simply tell the various behaviors how to achieve

a goal. Instead, in an environment which has the expected

properties, one must find an interaction loop between the

system and that environment which will converge towards the

desired goal [Maes 90]. Thus, the designer of a behavior-

based system has to "pre-wire" the arbitration strategy or

the priorities of the various behaviors. This inflexibility,

coupled with the inability to handle explicitly specified

goals, makes it hard for such behavior-based systems to be

useful for different types of missions over a wide range of

domains. Additionally, an efficient behavior to assure

reaching a specified goal cannot always be guaranteed. So,

it is possible for a robot using the behavior-based approach

to take a certain pathway many times over, even though

traversing this pathway might not be desirable for many

reasons. For example, the path might lead the robot away

from the target or into danger. This is possible because the










robot does not build or have a map of its environment. In

effect, the robot does not remember what it has seen or

where it has been. Anderson and Donath [Anderson 88]

describe some cyclical behavior exhibited by a reflexive

behavior-based robot and attribute such behavior to the lack

of internal state within each behavior. They also report

that this cyclic behavior was observed by Culberston

[Culberston 63]. Brooks and Connell [Brooks 86b] have also

observed cyclical behavior in their wandering and wall

following behaviors. To avoid such problems, later work by

Mataric [Mataric 89], a member of Brooks' group, experimented

with map building and use under the subsumption

architecture.

The traditional approach to robot control architecture

is derived from the standard AI model of human cognition

proposed by Newell and Simon in the mid-fifties. It follows

the Deliberative Thinking paradigm where intelligent tasks

can be implemented by a reasoning process operating on a

symbolic internal model. Thus, it emphasizes cognition or a

central planner with a model or map of the environment as

essential to robot intelligence. Sensory confirmation of

that model is equally important. Such symbolic systems

demand from the sensor systems complete descriptions of the

world in symbolic form. Action, in this case, is not a

direct result of sensor data but rather is the outcome of a

series of stages of sensing, modelling, and then planning. A

desirable feature of such systems is the general ability to










handle explicit high-level user specific goals. Given a set

of goals and constraints, the planning module advances the

overall mission by deciding the robot's next move based on

an analysis of the local model of the environment

(constructed from current sensor data) and the existing

global model. The global or world model is obtained either

directly from the user, if the robot is operating in a known

environment, or it is autonomously constructed over time

from the various local models when operating in an unknown

environment. The world model representation employed in the

traditional approach is general purpose and, thus, useful

for a variety of situations and planning tasks. Without such

a general purpose model, features critical to plan execution

may not be discovered. But, a general purpose world model

puts some unrealistic demands on the perception task and has

the disadvantage of an unavoidable delay in the sensor to

actuator loop. Such delay is due to the computational

bottleneck caused by cognition and the generation of a

symbolic model of the world. Lack of real time operation

(speed) and inflexibility are the two major complaints from

the behavior-based camp about the cognitive approach. These

claims are supported by the fact that the few autonomous

mobile robot projects implemented with the traditional

approach suffer from slow response times and inflexibility

when operating in complex dynamic environments. The response

of the traditionalists is that while reflexive behavior can

keep a robot from crashing into a wall, a higher-level










intelligence is needed to decide whether to turn left or

right when the robot comes to an intersection.



1.1 Philosophical Underpinnings and Overview

Our goal is to develop a general purpose robot that is

useful for a variety of tasks (explicitly stated by a user)

in various types of dynamically changing environments. The

philosophical view of our research is that such a goal could

only be accomplished by combining the two approaches

mentioned above, and that these two approaches complement

each other just as reflexive responses and higher-level

thought complement each other in human beings. For example,

while one does not think about how to articulate the joints

in one's legs when walking down a sidewalk (the reflexive

behaviors take care of the walking function), higher-level

thinking and planning is needed when one, for example,

remembers that the sidewalk is not passable further down due

to construction noticed earlier. At this moment one has to

plan a suitable alternative route to the destination.

Adhering to this philosophical view of reflexive behaviors

and cognitive modules working in a complementary fashion

where the advantages of one approach compensate for the

limitations of the other, this research proposes a hybrid

decomposition of the control architecture for an intelligent

fully autonomous mobile robot. This architecture follows a

parallel distributed decomposition and supports a hierarchy

of control with lower-level reflexive type behaviors working










in parallel with higher-level planning and map building

modules. Thus, our architecture includes a cognitive

component and a behavior-based component. The cognitive

component is characterized by knowledge representations and

a reasoning mechanism which performs higher mental functions

such as planning. The behavior-based component hosts

the cognitive component and provides the basic instinctive

competence for the robot. The level of competence of the

behavior-based component determines the degree of complexity

of the planner in the cognitive component. Thus, the higher

the competence level of the behavior-based system, the

simpler the planning activity. Once the behavior-based

system is built to the desired level of competence, it can

then host the cognitive part. The interface between the two

components utilizes motivated behaviors implemented as part

of the behavior-based system. We define a motivated behavior

as one whose response is driven mainly by the associated

'motivation' or internal state of the robot. This is

analogous to motivated behavior exhibited by animals. For

example, the motivated behavior of feeding depends on the

internal motivation state of hunger in addition to the

presence of the external stimulus of food. Utilizing

motivated behaviors, the cognitive planning activity can

thus execute its plans by merely setting the motivation

state of the robot and letting the behavior-based subsystem

worry about the details of plan execution. In our approach,

the arbitration of the responses of lower-level behaviors is










partly hardwired in the behavior-based system, and partly

incorporated into a set of production rules as part of the

planning module of the cognitive system. These rules are

driven by the goals of the robot, and the current situation

facts provided by the world model and the status of the

behavior-based system. In addition, in the behavior-based

system, we use superposition in a potential force field

formulation (similar to [Arkin 87] and [Khatib 85]) to

combine the responses of the various complementary behaviors

that are active at any one time. The goal for the hybrid

architecture is to gain the real-time performance of a

behavior-based system without losing the general goal

handling capability of a general purpose world model and

planner. We view world models as essential to intelligent

interaction with the environment, providing a "bigger

picture" for the robot when reflexive behaviors encounter

difficulty.
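As a rough illustration of such superposition of complementary behaviors (the function and gains below are our own assumptions for this sketch, not the arbitration network detailed in chapter 4):

    import numpy as np

    def combine_behaviors(responses, gains):
        """Superpose the force vectors of the active behaviors.

        responses: one 2-D force vector per active behavior
                   (e.g., avoid-obstacles, target-nav).
        gains:     designer-chosen relative weights (illustrative).
        """
        return sum(g * np.asarray(r, dtype=float)
                   for g, r in zip(gains, responses))

    # Example: a repulsive force away from an obstacle superposed
    # with an attractive force toward the target.
    resultant = combine_behaviors([(0.8, -0.2), (-0.1, 1.0)], [1.5, 1.0])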

In our framework we tackle the behavior fusion problem

with the lower-level behaviors, while higher-level modules

such as the map builder tackle the sensor fusion problem in

attempting to build a general purpose representation.

Theoretical concepts and mathematical tools for sensor data

fusion are presented in chapter 2; issues in designing

intelligent fully autonomous mobile robots are presented in

chapter 3, while details of the workings of our proposed

architecture are explained in chapter 4. This architecture

is implemented and tested in a dynamic, unknown, and










unstructured environment in our lab for controlling a K2A

Cybermotion mobile robot. Chapter 5 covers the experimental

setup and implementation issues, while chapter 6 presents

and discusses the results obtained. We share the belief that

a world representation and sensory confirmation of that

representation are essential to the intelligence of an

autonomous mobile robot. Thus, the map builder is an

important part of the hybrid control architecture. We

propose, also in chapter 4, a distributed knowledge-based

framework called the Sensory Knowledge Integrator (SKI) as

the underlying model for the map builder. The SKI framework

organizes the domain knowledge needed to describe the

environment being observed into data-driven and model-driven

knowledge sources, and provides a strategy for applying that

knowledge. The theoretical concepts of the SKI model are

presented in section 4.3, while the implementation of the

map builder is discussed in chapter 5. The results of

implementing the various knowledge sources of the map

builder are also presented in chapter 5. These results show

two types of representations: an occupancy grid

representation and a 2-D line representation generated from

sonar sensor data. Results of position correction or re-

referencing of the robot are also presented. Chapter 7

concludes this dissertation and discusses limitations and

future research trends.









1.2 Contributions of the Research

We see the contributions of this work as:

1. The development and implementation of a hybrid control

architecture that combines both traditional and behavior-

based approaches.

2. The development and implementation of the Sensory

Knowledge Integrator framework which provides a parallel

distributed model for sensor data fusion and consistent

world modeling.

3. The development and implementation of a new approach for

consistent world modeling. This approach involves the

interactive use of the occupancy grid and the 2-D line

representations for filtering out unsupported raw input data

points to the line-finder knowledge source and thus

providing a better 2-D line representation.

4. A fast algorithm for position referencing of a mobile

platform using the 2-D line representation.

5. Addressing the important question of combining both the

behavior-based and the traditional approach and whether it

provides better performance.














CHAPTER 2
SENSOR DATA FUSION: MANAGING UNCERTAINTY




In this chapter we concentrate on sensor fusion, an

essential issue of the traditional approach (discussed in

chapter 1) concerned with generating meaningful and

consistent interpretations (in terms of symbolic models) of

the environment being observed. We discuss sensor data

fusion techniques and methods for managing uncertainty at

various levels of abstraction, and illustrate the advantages

and disadvantages of each. Some of the techniques presented

in this chapter are used in the various knowledge sources of

the map builder module, an essential part of our hybrid

architecture. The implementation of the map builder

(including the sensor fusion techniques used) is presented

in detail in chapter 5.



2.1 Motivation

The increasing demand for robot systems to perform

complex tasks such as autonomous operation and flexible

manufacturing has spurred research aimed at upgrading robot

intelligence and capabilities. These complex tasks are often

associated with unstructured environments where information

is often uncertain, incomplete, or in error. This problem

calls for the development of a robot system capable of using










many different sources of sensory information in order to

overcome the limitations of single sensory robot systems.

Single sensory robot systems are limited in their ability to

resolve and interpret unknown environments, since they are

only capable of supplying partial information. The need for

multi-sensor robot systems is evident in the literature:

[Giralt 84a], [Durrant-Whyte 86a], [Henderson 84],

[Ruokangas 86], [Flynn 88], [Luo 88], [Mitiche 86], [Shafer

86]. The motivation is to obtain from a set of several

different and/or similar sensors, information that would be

impossible or impractical to obtain from any one of the

sensors alone. This is often possible since different

sensors are sensitive to different properties of the

environment. Thus, each sensor type offers unique attributes

and contextual information in interpreting the environment.

The goal of a multi-sensor system is to combine information

from the various sensors, with a priori knowledge about the

environment, the sensors, the task, etc., into a meaningful

and consistent interpretation of the environment. In this

manner, the system maintains an internal description of the

world which represents its "best guess" about the external

world.

Sensor data fusion combines information from various

sensors into one representative set of data that provides a

more accurate description of the observed environment (an

improved world model) than the description provided by any

of the sensors acting alone. The objective is to reduce










uncertainty about the observed environment. However, in

addition to the fusion of information from multiple sensory

sources, the problem of generating an accurate world model

representation involves the fusion of sensory data with

object models and a priori knowledge about the environment.



2.2 Sensors and the Sensing Process

The field of robotic sensor design is rapidly growing

and undergoing a great deal of research activity. A variety

of sensors are available for robotic applications. These

include TV cameras, infrared cameras, ranging devices such

as acoustic, infrared, and laser range finders, touch

sensors, proximity sensors, force/torque sensors,

temperature sensors, etc. An assessment of robotic sensors is

presented in [Nitzan 81]. Nitzan defines sensing as "the

translation of relevant physical properties of surface and

volume elements into the information required for a given

application." [Nitzan 81, p. 2]. Thus, physical properties

such as optical, mechanical, electrical, magnetic, and

temperature properties are translated by the sensing process

into the information required for the specific application.

For example, a parts inspection application might require

information about dimensions, weights, defect labeling,

etc. The basic steps of sensing are shown in the block

diagram of figure 2.1 (from [Nitzan 81]).












[Figure 2.1 here: block diagram of the sensing steps, in which
physical properties are transduced into a signal, the signal is
preprocessed into an improved signal, and the improved signal is
interpreted into the information required by the application.]

Figure 2.1 Block diagram of sensing steps.
From [Nitzan 81].



In this chapter we are concerned with the sensing

process where information from a variety of sensors is

combined and analyzed to form a consistent interpretation of

the observed environment. As we will discuss later, the

interpretation process is complex and involves processing of

sensor data at various levels of abstraction using domain

specific knowledge.



2.3 Classification of Sensor Data

The fusion technique of choice depends on the level of

abstraction and on the classification of the sensor data. In

multi-sensor systems, data from the various sensors are

dynamically exchanged. The use of these data in the fusion

or integration process falls under one of the following

classes:



Competitive. In this case the sensors' information is

redundant. This occurs when the observations of the

sensors intersect; that is, they supply the same type of










information about the same feature or property of the same

object. The following sensor configurations and scenarios

produce competitive information interaction:

a) Two or more sensors of the same type measuring the

value of the same feature of an object. For example, two

sonar sensors measuring the depth of an object from a fixed

frame of reference.

b) Different sensors measuring the value of a specific

feature. For example, depth information could also be

provided using stereo vision as well as a sonar range

finder. Another example of how different sensing modalities

produce the same feature is the generation of edge features

of an object from either intensity images or range images.

c) A single sensor measuring the same feature at

different times. For example, a sonar sensor continuously

acquiring depth measurements from the same position and

viewpoint.

d) A single sensor measuring the same feature from a

different viewpoint or operating configuration or state.

e) Sensors measuring different features but, when

transformed to a common description, the information becomes

competitive. For example, the speed of a mobile robot could

be measured by using a shaft encoder, or it could be deduced

from dead-reckoning information from fixed external beacons.



Complementary. In this case the observations of the

sensors are not overlapping, i.e., the sensors supply











different information about the same or different feature.

The measurements are added (set union) to the total

environment description without concern for conflict. A good

example of complementary sensor data fusion is Flynn's

combining of sonar and IR sensor data [Flynn 88]. The sonar

can measure the distance to an object but has poor angular

resolution, while the IR sensor has good angular resolution

but is not able to measure the distance accurately. By using

both sensors to scan a room, and combining their information

in a complementary manner where the advantages of one sensor

compensate for the disadvantages of the other, the robot

was able to build a better map of the room.



Cooperative. This occurs when one sensor's information

is used to guide the search for another's new observations.

In other words, one sensor relies on another for information

prior to observations. For example, the guiding of a tactile

sensor by initial visual inspection [Allen 88].



Independent. In this case one sensor or another is used

independently for a particular task. Here fusion is not

performed, but the system as a whole employs more than one

sensor for a particular task and uses one particular sensor

type at a time while the others are completely ignored, even

though they may be functional. For example, in an

environment where the lighting conditions are very poor, a

mobile robot may depend solely on a sonar sensor for










obstacle avoidance, while in good lighting conditions both

vision and sonar sensing modalities could be employed.

Independent sensor data interaction occurs in the natural

world where redundant sensors are abundant. For example,

pigeons have more than four independent orientation sensing

systems that do not seem to be combined; rather,

depending on the environmental conditions, the data from one

sensory subsystem tends to dominate [Kriethen 1983].



2.4 Levels of Abstraction of Sensor Data

Levels of abstraction of sensor data are application

and task dependent. These levels of abstraction vary from

the signal level (lowest level) where the raw response of

the sensor is present, to the symbolic level (highest level)

where symbolic descriptions of the environment exist for use

by the planning subsystem. This model is based upon

psychological theories of human perceptual system that

suggest a collection of processors that are hierarchically

structured and modular [Fodor 83]. These processors create a

series of successively more abstract representations of the

world ranging from the low-level transducer outputs to the

highly abstract representations available to the cognitive

system. Thus, in order to bridge the wide gap between raw

sensory data and understanding of what those data mean, a

variety of intermediate representations are used. These

representations make various kinds of knowledge explicit and

expose various kinds of constraints [Winston 84]. For










complicated problem domains such as the problem of

understanding and interpreting a robot's environment based

mainly on data from its sensors, it becomes important to be

able to work on small pieces of the problem separately, and

then combine the partial solutions at the end into a

complete problem solution. This task of understanding the

environment is thus accomplished at various levels of

analysis or abstraction. Thus, sensor data exist at the

various levels of knowledge abstraction in the solution

space and appropriate fusion techniques are applied at the

various levels for an improved solution. Though the specific

abstraction levels are task dependent, they could be

generalized as follows:



Signal level. This level contains data that are close

to the signal or unprocessed physical data level. At this

level data are usually contaminated with random noise and

are generally probabilistic in nature. Therefore statistical

inferencing techniques are appropriate for data fusion on

this level.



Feature level. Data at this level consist of

environmental/object features derived from the signal. The

various features describe the object or solution. Often,

incomplete descriptions of the features must be used. This

calls for a type of reasoning that is subjective and based










on the body of evidence associated with each feature. This

type of reasoning is called evidential reasoning.



Symbol level. At this level symbolic descriptions of

the environment exist and propositions about the environment

are either true or false. Therefore, logical (Boolean)

reasoning about these descriptions is most appropriate.



2.5 Sensor Data Fusion Techniques

A variety of techniques for combining sensor data have

been proposed. Most approaches concentrate on Bayesian or

statistical combination techniques [Richardson 88], [Luo

88], [Porrill 88], [Ayache 88], [Durrant-Whyte 86a]. Some

researchers followed a heuristic approach [Flynn 88], [Allen

88]. Garvey et al. [Garvey 82] proposed evidential reasoning

as a combination technique and claim it is more general

than either Bayesian or Boolean approaches. The fusion

technique of choice depends on the classification of the

sensor data involved, and on the level of abstraction of the

data. For example, at the signal level, data is generally

probabilistic in nature and hence a probabilistic approach

is most appropriate. At higher levels such as the symbolic

feature level, a Boolean approach is generally appropriate.

Sensor data classification and levels of abstraction have

been discussed in the previous sections. For complementary

data the sensors' data are not overlapping and there is no

concern for conflict. In this case, at any level of










abstraction, the technique of choice is to simply add the

sensor descriptions to the total environment description.

Similarly, for cooperative sensor data there is no concern

for conflict since the data of one sensor guides the

observations of the other. In the case of competitive sensor

data when two or more sensors provide information about the

same property value of the same object, a fusion technique

is called for. But, before choosing a fusion technique to

combine the data, how does one determine if the data is

truly competitive? That is, how does one determine if the

data represent the same physical entity? This correlation or

consistency check is discussed in the next section. The

choice of fusion technique in the competitive case depends

on the level of abstraction of the data. For example, at the

raw data level the problem becomes that of estimating a

parameter x from the observations of the sensors involved.

This could be resolved by either using a deterministic or

nonrandom approach (like the least-squares method for

example), or by using a probabilistic or random approach

(like the minimum mean squared error method). Dealing with

uncertain information is still a problem at higher levels of

abstraction and a variety of methods have been proposed. In

the following sections we will discuss fusion techniques in

more detail.










2.5.1 Correlation or Consistency Checking

To determine whether sensor data, or features derived

from that data could be classified as competitive, a

consistency check is performed on the data. This is a common

difficulty in robotic perception where often the correlation

between perceived or observed data and model data has to be

determined. The well known problem of correlating between

what the robot actually sees and what it expects to see is

an appropriate example. Following is an example that

illustrates one form of consistency checking:

Let pe_i and pe_j be parametric primitive vectors estimated by sensors i and j respectively. We desire to test the competitive hypothesis H0 that these estimates are for the same primitive p of an object. Let δe_ij = pe_i - pe_j be the estimate of δ_ij = p_i - p_j, where p_i and p_j are the corresponding true parameter vectors; then H0: δ_ij = 0. We want to test H0 vs. H1: δ_ij <> 0. Let the corresponding estimation errors be e_i = p - pe_i and e_j = p - pe_j, and define e_ij = δ_ij - δe_ij. Then, under H0 we have

e_ij|H0 = (p - pe_i) - (p - pe_j) = e_i - e_j,

and the covariance of the error under H0 is given by

C|H0 = E[(e_ij|H0)(e_ij|H0)^T]
     = E[(e_i - e_j)(e_i - e_j)^T]
     = E[e_i e_i^T] - E[e_i e_j^T] - E[e_j e_i^T] + E[e_j e_j^T]
     = C_i - C_ij - C_ji + C_j.

If the errors e_i and e_j are independent then C_ij = C_ji = 0, and

C|H0 = C_i + C_j.

For Gaussian estimation errors the test of H0 vs. H1 is as follows: accept H0 if

d = (δe_ij)^T (C|H0)^-1 (δe_ij) <= θ,

where θ is a threshold such that P{d > θ | H0} = α, and α is a small number such as 0.05 for example.

If H0 is accepted, then pe_i and pe_j are competitive and can thus be fused to obtain pe_ij, the combined estimate of p. Using the standard Kalman filter equations, and letting the prior mean of p be pe_i, we obtain the following combined estimate and corresponding error covariance:

pe_ij = pe_i + K (pe_j - pe_i)

Cov = C_i - K C_i,

where K is the Kalman filter gain, which for independent errors e_i and e_j is given by

K = C_i (C_i + C_j)^-1
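To make the test and fusion step concrete, the following minimal sketch (our illustration, not part of the original formulation) assumes Gaussian, independent errors and draws the threshold θ from the chi-square distribution, one common way to satisfy P{d > θ | H0} = α:

    import numpy as np
    from scipy.stats import chi2

    def consistent(pe_i, C_i, pe_j, C_j, alpha=0.05):
        """Accept H0 (same primitive) if d = de^T C^-1 de <= theta."""
        de = np.asarray(pe_i) - np.asarray(pe_j)    # estimated difference
        C = C_i + C_j                               # C|H0 for independent errors
        d = de @ np.linalg.solve(C, de)             # Mahalanobis distance
        theta = chi2.ppf(1.0 - alpha, df=de.size)   # P{d > theta | H0} = alpha
        return d <= theta

    def fuse(pe_i, C_i, pe_j, C_j):
        """Combine two competitive estimates using the Kalman gain."""
        pe_i, pe_j = np.asarray(pe_i), np.asarray(pe_j)
        K = C_i @ np.linalg.inv(C_i + C_j)          # K = C_i (C_i + C_j)^-1
        return pe_i + K @ (pe_j - pe_i), C_i - K @ C_i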












2.5.2 Fusion at Lower Levels of Abstraction

At the lower levels of abstraction where the sensor

data is close to the signal level, the data contains signal

components random in nature, and hence probabilistic

reasoning or inference is most appropriate. Most of the

literature on sensor data fusion involves problems at this

level with the majority of the researchers adopting

probabilistic (Bayesian) inference. Next we will briefly

discuss the commonly used assumption, advantages, and

disadvantages of statistical sensor fusion techniques.

Most researchers treat the sensor fusion problem, at

this level of abstraction, as an optimal estimation problem.

This problem is briefly stated as follows: Given a system of

interest (e.g. the environment of a robot) represented by an

n-dimensional state vector x, and the measured quantities

(output of sensor systems) represented by the m-dimensional

measurement vector z, what is the best way to estimate the

value of x given z according to some specified optimality

criterion. A general measurement model is given by



z = h(x, v)



where h(x, v) is an m-dimensional vector which represents

the ideal operation of the sensor system, and v represents

the m-dimensional random noise or error vector.

Unfortunately, most problems with this general measurement










model have not been solved in a practical sense [Richardson

88]. Instead, the following measurement model with additive

noise is usually considered:



z = h(x) + v



Moreover, it is commonly assumed that x and v are

statistically independent, and that the probability density

functions f(x) and f(v) are known a priori. In addition,

most methods often assume f(x) and f(v) to be Gaussian with

the following statistical properties:



E[v] = 0, E[v v^T] = R

E[x] = Ex, E[(x - Ex)(x - Ex)^T] = M



where M and R are the state and noise covariance matrices

respectively. The above assumptions and equations are

sometimes written as:


x ~ N(Ex, M), v ~ N(0, R), and Cov(x, v) = 0



The measurement model is further simplified when the

function h(x) is linear. In this case, when the measurement

model is both linear and Gaussian, a closed form solution to

the estimation problem is obtained. The linear measurement

model is represented as










z = Hx + v



This is the measurement equation for the standard Kalman

filter. Given the above assumptions, the optimal estimate of

x given measurements z, xopt(z), is determined by minimizing

a loss or risk function. This risk function is the

optimality criterion which provides a measure of the

"goodness" of the estimate. A typical loss function is the

mean squared error given by:



L(x, xopt) = (x - xopt)^T W (x - xopt)

where W is a symmetric positive definite weighting matrix.



Now let us consider two sensor systems producing two

sets of measurements z1 and z2 of the state x. We are

interested in estimating x based on z1 and z2, that is,

computing xopt(z1, z2). In the general case, one cannot
compute xopt(z1, z2) based on the separate estimates xopt(z1)

and xopt(z2) of sensors 1 and 2 respectively [Richardson 88].

However, this is possible in the special case of a linear

Gaussian measurement model. For more information on

estimation in multi-sensor system the reader is referred to

Willner et al. [Willner 76] and Richardson and Marsh

[Richardson 88].
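As a simple instance of that linear Gaussian special case, two independent scalar measurements of the same state fuse by inverse-variance weighting; the sketch below is illustrative only:

    def fuse_scalar(z1, r1, z2, r2):
        """Fuse two noisy scalar measurements z = x + v, v ~ N(0, r),
        by inverse-variance weighting: the more certain sensor dominates."""
        w1, w2 = 1.0 / r1, 1.0 / r2
        return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

    # Example: sonar (variance 4.0) and stereo vision (variance 1.0)
    # measuring the same depth; the fused variance is below both.
    depth, var = fuse_scalar(10.2, 4.0, 9.8, 1.0)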

The major limitations of the above mentioned methods

stem from the assumption that f(x) is known a priori. This

requires a large number of experiments to be performed on











the sensor system in order to establish a model for f(x) and

f(v). Unfortunately, this is usually not done because it is

impractical or impossible. In such a case, the initial value

of Ex is usually set to zero, and the initial value of M is

set to a large multiple of the identity matrix indicating

our lack of knowledge of prior observations.



2.5.3 Fusion at Middle and High Levels of Abstraction

At intermediate and high levels of abstraction,

features derived from lower level sensory data are present.

These features are normally associated with some degree of

uncertainty. It is the task of the multi-sensor system to

apply domain knowledge to these features in order to produce

valid interpretations about the environment. Thus, the basic

methodology involves the application of symbolic reasoning

and artificial intelligence techniques to aid the

interpretation task. Moreover, because "knowledge is power",

a powerful multi-sensor perception system must rely on

extensive amounts of knowledge about both the domain and the

problem solving strategy effective in that domain

[Feigenbaum 77].

Uncertainty results from the use of inadequate

knowledge as well as from attempts to reason with missing or

unreliable data. For example, in a speech understanding

system, the two sources of uncertainty are: 1) noise in the

speech waveform (sensor noise and variability), and 2) the

application of incomplete and imprecise theories of speech










[Newell 75]. Several methods for managing uncertainty have

been proposed. These include the use of Bayesian probability

theory, certainty theory (developed at Stanford and employed

in the MYCIN system [Buchannan 84]), fuzzy set theory [Zadeh

83], the Dempster/Shafer theory of evidence, nonmonotonic

reasoning, and theory of endorsements. Ng and Abramson [Ng

90] provide a good reference that introduces and compares

these methods.



2.5.3.1 Bayesian Probability Theory

Bayes theorem, a very important result of probability

theory, allows the computation of the probability of a

hypothesis based on some evidence, given only the

probabilities with which the evidence follows from the

hypothesis. Let


P(Hi/E) = the probability that Hi is true given evidence E
P(E/Hi) = the probability of observing evidence E when Hi is true
P(Hi) = the probability that Hi is true
n = the number of possible hypotheses


Then, the theorem states that:

P(Hi/E) = P(E/Hi) P(Hi) / [ Σ (k=1 to n) P(E/Hk) P(Hk) ]
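As an illustration, the theorem translates directly into a normalization over the candidate hypotheses (the hypothesis names and numbers below are hypothetical):

    def bayes_posterior(priors, likelihoods):
        """P(Hi/E) from priors P(Hi) and likelihoods P(E/Hi)."""
        joint = [p * l for p, l in zip(priors, likelihoods)]
        total = sum(joint)                  # the normalizing denominator
        return [j / total for j in joint]

    # Three hypotheses about a sonar return: wall, door, noise.
    posterior = bayes_posterior([0.5, 0.3, 0.2], [0.9, 0.4, 0.1])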










In using Bayes's theorem, two major assumptions are

required: first that all the prior probabilities P(E/Hk) and

P(Hk) are known; second that all P(E/Hk) are independent.

These assumptions are difficult or impossible to meet in

many practical domains. In such situations, more heuristic

approaches are used. Another problem with statistical

methods in general is that these methods cannot distinguish

between the lack of belief and disbelief. This stems from

the observation that in traditional probability theory the

sum of confidence for a certain hypothesis and confidence

against the same hypothesis must add to 1. However, often

one might have a certain degree of confidence that a certain

hypothesis is true, yet have no knowledge about it being not

true. Certainty theory attempts to overcome this limitation.



2.5.3.2 Certainty Theory

Certainty theory splits the confidence for and the

confidence against a certain hypothesis by defining the

following two measures:



MB(H/E) is the measure of belief of a hypothesis H given
evidence E, with 0 <= MB(H/E) <= 1.
MD(H/E) is the measure of disbelief of a hypothesis H given
evidence E, with 0 <= MD(H/E) <= 1.


These two measures are tied together with the certainty

factor:


CF(H/E) = MB(H/E) - MD(H/E)












The certainty factor approaches 1 as the evidence for a

hypothesis becomes stronger, with 1 indicating absolute

truth. As the evidence against the hypothesis gets stronger

the certainty factor approaches -1, with -1 indicating

absolute denial. A certainty factor around 0 indicates that

there is little evidence for or against the hypothesis. To

combine the certainty factors of different hypothesis, the

following rules apply:



CF(H1 AND H2) = MIN[CF(H1), CF(H2)]

CF(H1 OR H2) = MAX[CF(H1), CF(H2)]



Another problem is how to compute the certainty factor of a

conclusion based on uncertain premises. That is, if P

implies Q with a certainty factor of CF1, and CF(P) is

given, then CF(Q) = CF(P) * CF1. Another question is how to combine

evidence when two or more rules produce the same result.

Assume that result Q produced by rule R1 has a certainty

factor CF(R1) and that rule R2 also produced Q with a

certainty factor CF(R2), then the resulting certainty factor

of Q, CF(Q), is calculated as follows:



1. When CF(R1) and CF(R2) are positive,

CF(Q) = CF(R1) + CF(R2) - CF(R1)*CF(R2)

2. When CF(R1) and CF(R2) are negative,

CF(Q) = CF(R1) + CF(R2) + CF(R1)*CF(R2)

3. Otherwise,

CF(Q) = [CF(R1) + CF(R2)] / [1 - MIN(|CF(R1)|, |CF(R2)|)]
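These combination rules translate directly into code; the following is a minimal sketch (it assumes |CF| < 1 in the mixed-sign case so the denominator stays nonzero):

    def combine_cf(cf1, cf2):
        """Combine two certainty factors supporting the same conclusion."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 - cf1 * cf2
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 + cf1 * cf2
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))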




Although certainty theory solves many of the problems

presented by an uncertain world (as in MYCIN), the meaning

of the certainty measures and how they are generated, is not

well defined. The assignment of numeric certainty measures

based on human terms such as "it is very likely that" is

subjective and considered by some as ad hoc.



2.5.3.3 Fuzzy Set Theory

Fuzzy set theory is yet another approach for dealing

with uncertainty. The main idea is that often information is

vague rather than random and hence a possibility theory must

be proposed as a measure of vagueness just as probability

theory measures randomness. The lack of precision is

expressed quantitatively by the notion of a fuzzy set. This

notion introduces a set membership function that takes on

real values between 0 and 1 and measures the degree to which

a set member belongs to the fuzzy set. To illustrate, let I

be the set of positive integers, and A be the fuzzy subset

of I that represents the fuzzy set of small integers. A

possibility distribution that defines the fuzzy membership

of various integer values in the set of small integers could

be characterized by:












mA(1)=1, mA(2)=1, mA(3)=0.9, mA(4)=0.7, ..., mA(30)=0.001



where mA(i) is the membership function that measures the

degree to which i (a positive integer) belongs to A.

To illustrate some of the combination rules of fuzzy

set theory, assume that both A and B are propositions with

C=A@B denoting the proposition that C is the combination of

A and B. Then, for



1. Conjunction: A AND B, is given by

m[A*B](a,b) = min[mA(a), mB(b)].

2. Disjunction: A OR B, is given by

m[A+B](a,b) = max[mA(a), mB(b)].

3. Implication: IF A THEN B, is given by

m[A/B](a,b) = min[1, (1 - mA(a) + mB(b))]
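A minimal sketch of these three operators, using the membership values of the small-integer example above (illustrative only):

    def fuzzy_and(ma, mb):
        return min(ma, mb)

    def fuzzy_or(ma, mb):
        return max(ma, mb)

    def fuzzy_implies(ma, mb):
        return min(1.0, 1.0 - ma + mb)

    # Degree to which 3 AND 4 are both "small integers", using the
    # membership values mA(3)=0.9 and mA(4)=0.7 from above.
    both_small = fuzzy_and(0.9, 0.7)    # -> 0.7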



For more details on possibility theory including evidence

propagation and truth quantification rules the reader is

referred to [Zadeh 78], while [Cheeseman 86] provides a

comparison between fuzzy and probabilistic reasoning.



2.5.3.4 Belief Theory

Belief theory, developed by Dempster and Shafer [Shafer

76] as an alternative to the theory of probability, makes a

fundamental distinction between uncertainty and ignorance.

As mentioned above, in probability theory the extent of










knowledge about a belief B is expressed in a single

probability number P(B). In cases where the prior

probabilities are not known, the choice of P(B) may not be

justified. Belief theory proposes belief functions where

each function distributes a unit of belief across a set of

propositions (called the "frame of discernment") for which

it has direct evidence, in proportion to the weight of that

evidence as it bears on each proposition. The frame of
discernment (Θ) is defined as an exhaustive set of mutually

exclusive propositions about the domain. The role of Θ in

belief theory resembles that of the sample space (U) in

probability theory, except that in belief theory the number
of possible hypotheses is 2^|Θ| while in probability theory it

is |U|. The basic probability assignment is a function m that

maps the power set of Θ into numbers between 0 and 1, that

is:


m: 2^Θ ---> [0, 1]


If A is a subset of Θ, then m satisfies:


1. m(∅) = 0, where ∅ is the null hypothesis.

2. Σ (over A ⊆ Θ) m(A) = 1



A belief function of a proposition A, BF(A) measures

the total amount of belief in A, and is defined as:











BF(A) = Σ (over B ⊆ A) m(B)



And satisfies the following:


1. BF(∅) = 0

2. BF(Θ) = 1

3. BF(A) + BF(~A) <= 1



The Dempster/Shafer theory is based on Shafer's

representation of belief and Dempster's rule of combination.

The Shafer representation expresses the belief in a

proposition A by the evidential interval [BF(A), p(A)],

where BF(A) denotes the support for a proposition and sets a

minimum value for its likelihood, while p(A) denotes the

plausibility of that proposition and establishes its maximum

likelihood. p(A) is equivalent to 1 - BF(~A), the degree with

which one fails to doubt 'A'. The uncertainty of 'A',

u(A)=p(A)-BF(A), is thus implicitly represented in the

interval [BF(A), p(A)]. The Dempster's rule of combination

is a method for integrating distinct bodies of evidence. To

combine the belief of two knowledge sources, suppose for

example that knowledge source 1 (KS1) commits exactly m1(A)

as a portion of its belief for proposition 'A', while KS2

commits m2(B) to proposition 'B'. Note that both 'A' and 'B'
are subsets of Θ, the frame of discernment. If we are










interested in computing the evidence for proposition C = A ∩ B, then


BF(C) = [1/(1-k)] Σ (over A ∩ B = C) m1(A)*m2(B)

u(C) = [1/(1-k)] m1(Θ)*m2(Θ)

where

k = Σ (over A ∩ B = ∅) m1(A)*m2(B)
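The rule can be sketched over an explicit frame of discernment; the two-proposition frame and masses below are hypothetical:

    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule for two basic probability assignments.

        m1, m2: dicts mapping frozenset propositions (subsets of the
        frame Theta) to mass; assumes the conflict k is less than 1.
        """
        combined, k = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                k += ma * mb                # mass lost to conflict
        return {c: v / (1.0 - k) for c, v in combined.items()}

    # Hypothetical frame {wall, door} with two knowledge sources.
    theta = frozenset({"wall", "door"})
    m1 = {frozenset({"wall"}): 0.6, theta: 0.4}
    m2 = {frozenset({"door"}): 0.3, theta: 0.7}
    fused = dempster_combine(m1, m2)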


The added complexity of the Dempster/Shafer theory

increases the computational cost. In addition, the

assumptions of independence required in the Bayesian

approach still apply here. Another criticism of this theory

is that it produces weaker conclusions due to the fact that

it avoids the assignment of stronger probability values, and

hence stronger conclusions may not be justified.



2.5.3.5 Nonmonotonic Reasoning

While all of the methods mentioned above use a numeric

model of uncertainty, nonmonotonic reasoning uses a non-

numeric approach. In this case the system starts by making

reasonable assumptions using the current uncertain

information, and proceeds with its reasoning as if the

assumptions were true. If at a later time these assumptions

were found to be false (by leading to an impossible

conclusion, for example), then the system must change these










assumptions and all the conclusions derived from them. Thus,

in contrast to the inference strategies discussed above

where knowledge can only be added (monotonic) and axioms do

not change, in nonmonotonic reasoning systems knowledge can

also be retracted. Truth Maintenance Systems [Doyle 79],

[deKleer 86], implement nonmonotonic reasoning. The argument

for nonmonotonic reasoning is that nonmonotonicity is an

important feature of human problem solving and reasoning. In

addition, numeric approaches to uncertainty do not consider

the problem of changing data, that is, what to do if a piece

of uncertain information is later found to be true or false.



2.5.3.6 Theory of Endorsements

Cohen's theory of endorsements [Cohen 85] is yet

another qualitative approach to managing uncertainty. The

basic philosophy of this theory is to make explicit the

knowledge about uncertainty and evidence. The motivation for

this theory stems from the limitation of the numerical

approaches which summarize all supporting and opposing

evidence into a single number. The semantics of this number,
which represents knowledge about uncertain information, is
often unclear. Thus, the basic idea is that knowledge about

uncertain situations should influence system behavior.

Hence, if a required piece of evidence is lacking, an

endorsement-based system allocates resources to the

resolution task whose execution will provide the most

information for reducing the uncertainty. The system










represents all reasons for believing or disbelieving a

hypothesis in structures called endorsements. These

endorsements are associated with propositions and inference

rules. The system uses endorsements to decide whether a
proposition at hand is certain enough to assert, by back-
chaining and determining if its sub-goals are well endorsed.
Cohen describes five classes of endorsements:


Rule endorsements.
Data endorsements.
Task endorsements.
Conclusion endorsements.
Resolution endorsements.


The main problem with this recently developed theory is

the exponential growth of the body of endorsements when

asserting a proposition based on endorsements of its sub-

goals and their associated sub-goals and so on. Thus, a

simple rule could lead to large bodies of endorsements after

a few inferences.



2.6 Implementation Examples

In this section we show how some of the techniques

described in this chapter are used in our research. As

mentioned in the introduction at the beginning of this

chapter, some of these techniques are used in the various

knowledge sources of the map builder module. The

implementation of the map builder is discussed in detail in

section 5.2. Here, we highlight with examples how some of
the sensor fusion techniques described in this chapter are
implemented in our work.

The first example uses the consistency checking

techniques of section 2.5.1 to match observed and model line

parameters. A line parameter vector consists of orientation,

collinearity, and overlap variables with associated

uncertainties. The normal distance between the two parameter

vectors is calculated as described in section 2.5.1 and

compared to a threshold for a consistency check. Section

5.2.4.1 illustrates the details of the matching operation.

If the match is successful, the line parameter vectors are
then merged, also using the estimation techniques described
in section 2.5.1. These techniques use the standard Kalman

filter equations. The merged estimate with reduced

uncertainty is then compared to the observed lines to

determine the error in robot position and orientation.

Section 5.2.4.2 details such operations.
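The following simplified Python sketch suggests the shape of this
match-and-merge step (the parameter vectors, covariances, and
threshold shown here are hypothetical; the actual operations are
described in section 5.2.4):

    import numpy as np

    def consistent(x1, P1, x2, P2, threshold=6.0):
        """Normal (Mahalanobis) distance test between two parameter
        vectors x1, x2 with covariances P1, P2."""
        d = x1 - x2
        dist = d @ np.linalg.inv(P1 + P2) @ d
        return dist < threshold

    def merge(x1, P1, x2, P2):
        """Standard (static) Kalman update fusing two estimates of the
        same line parameters; the merged covariance is smaller than either."""
        K = P1 @ np.linalg.inv(P1 + P2)   # Kalman gain
        x = x1 + K @ (x2 - x1)            # merged estimate
        P = P1 - K @ P1                   # reduced uncertainty
        return x, P

    # Hypothetical line parameters: (orientation, distance to origin)
    x_obs = np.array([0.52, 2.1]);  P_obs = np.diag([0.01, 0.04])
    x_mod = np.array([0.50, 2.0]);  P_mod = np.diag([0.02, 0.05])
    if consistent(x_obs, P_obs, x_mod, P_mod):
        x_merged, P_merged = merge(x_obs, P_obs, x_mod, P_mod)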

Another example illustrates a symbolic uncertainty

management technique similar to the theory of endorsements

presented in section 2.5.3.6, and to uncertainty management

within the schema system proposed in [Draper 88]. Such a

technique is used for conflict resolution when no match

exists between observed features (what the sensors are

actually seeing) and model features (what the sensors should

be seeing). To resolve the inconsistencies, we propose a

knowledge-based approach within the framework of the Sensory

Knowledge Integrator. Using a priori domain-dependent and









domain-independent knowledge, the system reasons about the

conflict and generates resolution tasks to resolve it. These

tasks utilize symbolic endorsements which constitute a

symbolic record of the object-specific evidence supporting

or denying the presence of an object instance. By

accumulating endorsements from a priori expectations about

the object, and from sub-part and co-occurrence support for

the object, the system deals with reasons for believing or

disbelieving a certain hypothesis. To illustrate this
approach, consider the following example: suppose that the
system's model of a coffee mug includes a handle.

So, the absence of a handle in a particular view of the mug

reduces the confidence rating of the mug hypothesis. Rather

than just lower the numeric confidence value, the "mug

detector" knowledge source also records the absence of the

handle. This is a source of negative support weakening the

confidence in that hypothesis. The system then takes steps

to remove this negative evidence, invoking another behavior,

for example the "curiosity" behavior, to scan the object

from a variety of view points to account for the missing

piece of evidence. If a hypothesis is subsequently posted

for the handle, the mug hypothesis regains its higher

confidence. The system thus arrives at more reliable

conclusions by reasoning about the sources of uncertainty.

The symbolic representation of uncertainty facilitates this.
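A sketch of how such a symbolic record might be organized (the
class and field names are illustrative only, not the actual data
structures of our implementation):

    class Hypothesis:
        """Object hypothesis carrying symbolic endorsements rather
        than only a single numeric confidence."""
        def __init__(self, label):
            self.label = label
            self.positive = []   # evidence supporting the hypothesis
            self.negative = []   # evidence denying it

        def endorse(self, reason):
            self.positive.append(reason)

        def deny(self, reason):
            self.negative.append(reason)

        def resolution_tasks(self):
            # each piece of negative evidence suggests a task whose
            # execution could remove it, e.g. invoking a "curiosity"
            # behavior to view the object from new viewpoints
            return ["re-examine: " + r for r in self.negative]

    mug = Hypothesis("coffee_mug")
    mug.endorse("cylindrical body detected")
    mug.deny("handle not observed in current view")
    print(mug.resolution_tasks())   # drives the scan from new viewpoints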














CHAPTER 3
INTELLIGENT FULLY AUTONOMOUS MOBILE ROBOTS




One of the main goals of robotics research is the

development of autonomous robots. Autonomous robots are

desirable in many applications especially those where human

intervention is difficult. This chapter gives an overview

and analysis of the research issues associated with the

field of intelligent autonomous systems, and based on this

analysis presents the directions of the proposed research.

Traditional autonomous mobile robot projects [Crowley 85]

[Shafer 86] [Ayache 88] use some of the sensor fusion

techniques presented in chapter 2 in an attempt to build a

complete and accurate model of the environment. However,

despite the positive attributes of completeness and detail
of a global world model, some researchers [Brooks 86a]
[Connell 89] question the need for its existence. The task of

constructing this world model may conflict with the need to

provide timely information about the environment. Hence, a

tradeoff exists between immediacy and assimilation [Payton

86]. For control purposes, immediacy considerations give a

higher value to sensor data that can be used to effect

action more quickly. This is because in many real time

situations the time between receiving sensor data and acting

on it is very critical. The disadvantage of immediacy is the
difficulty of obtaining information or features critical to

plan execution from sensor data that has not undergone

sufficient assimilation. In addition, the extracted data may

be inconsistent or in error. To effect immediacy, Brooks

[Brooks 86a] proposed a parallel behavior-based

decomposition of the control system of a mobile robot. His

approach deviates from the traditional serial decomposition

approach which allows for greater sensor data assimilation.

The traditional approach decomposes the sensor to actuator

path into a few large processing modules in series, figure
3.1a. In the behavior-based approach, figure 3.1b, the path

is divided into many small parallel modules each with its

own specialized task and complete path from the sensor to

the actuator. A general characteristic difference between

the two approaches is that the parallel approach requires

behavior fusion, while the traditional approach requires

sensor fusion. Section 3.2 compares these two approaches and

illustrates the advantages and disadvantages of each.

In this chapter we discuss the important issues

concerning the design of intelligent autonomous agents that

are capable of interacting with a dynamic environment. These

issues include consistent world modeling and control

architectures. We present a brief survey of the current

research in the field and highlight the approaches and

methodologies used by the various researchers in













[Figure: a. Serial decomposition: sensors feed a chain of
processing modules in series ending at the actuators.
b. Parallel decomposition: sensors feed concurrent behavior
layers (avoid objects, wander, explore, build maps, monitor
changes, identify objects, plan changes to the world, reason
about behavior of objects), each with its own path to the
actuators.]

Figure 3.1 Control architectures, from [Brooks 86a]:
a. Serial vs. b. Parallel decomposition.










tackling the important issues. On the issue of autonomous

robot control architecture, researchers are split between

the behavior-based decomposition and the traditional

decomposition. A brief survey of current research in the

behavior-based approach is presented in section 3.1.2, while

a survey of research in the traditional approach is embedded

in section 3.3 on world model construction issues since such

research involves the classical problems of sensor data

fusion, consistent world modeling, and robot position

referencing. In section 3.2 we discuss and compare the two

approaches, listing the advantages and limitations of each.

Fundamental to the traditional approach is the issue of

consistent world modeling which is presented in section 3.3.

Finally, section 3.4 discusses the directions of this

research.



3.1 Behavior-Based Approaches to Robot Autonomy

In this section we begin by tracing the basis of the

behavior-based approach to concepts in animal behavior, then

we provide a survey of current behavior-based approaches to

robot autonomy, and finally discuss the limitations of the

subsumption architecture [Brooks 86a], and reactive systems

in general.



3.1.1 Lessons from animal behavior

In designing autonomous mobile robots, valuable

insights can be obtained and lessons learned from nature.










Nature provides us with a variety of examples of animals
successfully interacting with their environment in order to
survive. Animals survive by possessing the ability to feed,
avoid predators, reproduce, etc. It is believed that animals survive due to a

combination of inherited instinctive responses to certain

environmental situations, and the ability to adapt to new

situations. Ethologists, who study animal behavior in its
natural habitat, view animal behavior largely as a result of
innate responses to certain environmental stimuli.

Behavioral psychologists, on the other hand, study animal

behavior under controlled laboratory settings, and believe

that animal behavior is mainly learned and not an innate

response. Observations and experiments by [Manning 79]

support the existence of both learned and innate behaviors

in animals. Animals with a short life span and small body

size such as insects seem to depend mostly on innate

behaviors for interacting with the environment, while

animals with a longer life span and larger body size

(capable of supporting large amounts of brain tissue

required for learning capacity) seem to develop learned

behavior.

Reflexive behavior is perhaps the simplest form of

animal behavior. A reflexive behavior is defined as having a

stereotyped response triggered by a certain class of

environmental stimuli. The intensity and duration of the

response of a reflexive behavior depends only on the










intensity and duration of the stimulus. Reflexive responses

allow the animal to adjust quickly to sudden environmental
changes, and thus provide the animal with protective

behavior, postural control, and gait adaptation to uneven

terrain. Such reflexive responses are believed to be

instinctive and not learned since they have been observed in

animals which have been isolated from birth. Other reactive

types of behaviors include orientation responses where an

animal is oriented towards or away from some environmental

agent, and fixed-action patterns which are extended, largely

stereotyped responses to sensory stimulus [Beer 90].

The behaviors mentioned above are by no means solely

dependent on external stimuli. The internal state of the

animal plays an important role in the initiation,

maintenance, and modulation of a given behavior. Motivated
behaviors are those governed primarily by the internal state

of the animal with no simple or rigid dependence on external

stimuli. For example, the behavior of feeding does not only

depend on the presence of food (external stimuli) but also

upon the state of hunger (the internal motivational

variable). Thus the behavior exhibited by an animal at a
certain moment is the one that enjoys the highest
motivational potential along with the proper combination of
external stimuli detected at that moment. The motivational potential of a

motivated behavior varies with the level of arousal and

satiation. In addition, such behaviors can occur in the










complete absence of any external stimuli, and can greatly

outlast any external stimulus [Beer 90].

Given a diverse set of sensory information about the

environment, and a diverse behavioral repertoire, how does

an animal select which information to respond to, and

properly coordinate its many possible actions into a

coherent behavior needed for its long term survival? The

answer to the first part lies in the fact that many

different animals have sense organs specialized in detecting

certain environmental features. For example, [Anderson 90]

reports observations by [Lettvin 70] about the frog's visual

system as specialized in detecting movements of small, dark

circular objects at close range, while it is unable to

detect stationary food objects or large moving objects.

Other animals that possess a variety of specialized sensory

detectors can actively select a subset of these detectors to

initiate the response of the animal. For example,

experiments and observations by Lorenz reported by [Anderson

90], indicate that the herring gull can detect attributes of

shape, size, color, and pattern of speckles of its eggs.

Moreover, depending on the task performed by the gull,

certain attributes become important while others become

unimportant. For example, when herring gulls steal the eggs

of other herring gulls, the shape and size of the egg are

very important. But when retrieving its own eggs as they

roll from the nest during incubation, shape becomes

unimportant while attributes such as color and pattern of











speckles gain importance. Based on these observations Lorenz

formulated his idea of the "innate releasing mechanism" that

actively selects a subset of the available sensory

information to trigger the appropriate response. This leads

us to the second part of the question posed at the beginning

of this paragraph: How do animals handle behavior conflict

where certain stimuli and motivations cause the tendency to

simultaneously perform more than one activity? The

observation is that the various behaviors of an animal

exhibit a certain organization or hierarchy [Manning 79].

Some behaviors take precedence over others, while others are

mutually exclusive. Internal state of the animal and

environmental conditions determine the switches between

behaviors. Such switches may not be all or nothing switches,

and the relationship between behaviors may be non-

hierarchical. That is, behaviors can partially overlap

making it sometimes difficult to identify direct switches

between them [Beer 90]. Survival dictates that whatever

behavioral organization exists in animals, it must support

adaptive behavior, whereby, based on past interactions with

the environment, aspects of future behavior are modified.

Anderson and Donath [Anderson 90] summarize some of the

important observations in their research regarding animal

behavior as a model for autonomous robot control:


a) To some degree all animals possess a set of innate behaviors
which allow the animal to respond to different situations.
b) The type of behavior exhibited at any given time is the result
of some internal switching mechanism.










c) Complex behavior can occur as the result of the sequential
application of different sets of primitive behaviors with the
consequence of a given behavior acting as a mechanism which
triggers the next one.
d) Simple reflex types of behavior occur independent of
environmental factors and provide the animal with a set of
protective behaviors.
e) Activation of more complex types of behavior typically depend
upon external and internal constraints.
f) Animals typically only respond to a small subset of the total
amount of sensory information available to them at any given time.
Animals have developed specialized types of detectors which allow
them to detect specific events.
g) Behavior is often organized hierarchically with complex
behavioral patterns resulting from the integration of simpler
behavioral patterns.
h) Conflicting behaviors can occur in animals. These will require
either a method of arbitration between such behaviors or the
activation of alternate behaviors. Pages 151-152.



3.1.2 Current behavior-based approaches to robot autonomy

[Brooks 86a] follows a behavior-based decomposition and

proposes the famous subsumption architecture for behavior

arbitration. The main idea is that higher-level layers or

behaviors override (subsume) lower-level ones by inhibiting

their outputs or suppressing their inputs. His subsumption

architecture have been used in a variety of robots [Brooks

90], and proved robust in dynamic environments. Most of his

robots are designed to be "artificial insects" with a

deliberate avoidance of map or model building. Brooks' idea

is that "the world is it's own best model", and intelligent

action is the outcome of many simple behaviors working

concurrently and coordinated through the context of the

world. There is no explicit representation of goals or

plans; rather, the goals are implicitly designed into the

system by the pre-determined interactions between behaviors










through the environment. The next section discusses in

detail the advantages and limitations of the subsumption

architecture.
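A greatly simplified sketch of this kind of fixed,
designer-wired arbitration (our own illustration, not Brooks'
actual circuit-level wiring; the range values are hypothetical):

    def avoid(sonar):
        """Safety reflex: propose turning away if an obstacle is close."""
        return "turn_away" if min(sonar) < 0.3 else None

    def wander(sonar):
        """Proposes a random heading whenever it runs."""
        return "random_heading"

    def arbitrate(sonar):
        # The priority between the two behaviors is fixed at design time:
        # whenever the avoid reflex fires, its output reaches the
        # actuators and the wander output is suppressed.
        return avoid(sonar) or wander(sonar)

    print(arbitrate([0.2, 1.5, 2.0]))  # 'turn_away'
    print(arbitrate([1.2, 1.5, 2.0]))  # 'random_heading'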

[Payton 86] also follows a behavior-based

decomposition, and describes a collection of reflexive

strategies in a hierarchy of control, all competing for the

control of the vehicle. The winning behavior is determined

by a winner take all arbitration mechanism. Later work by

[Payton 90] describes methods of compiling world knowledge

such as mission constraints, maps, landmarks, etc., into a

form of "internalized plans" that would have maximal utility

for guiding the action of the vehicle. He proposes a

gradient description to implicitly represent the

"internalized plans". Using this representational technique,

a priori knowledge such as a map can be treated by the

behavior-based system as if it were sensor data.

[Arkin 87] proposes a schema-based approach to the

navigation of a mobile robot. His motor schemas are

processes that run concurrently and independently, each

operating in conjunction with its associated perceptual

schemas. No arbitration mechanism is required, instead the

outputs of the various motor schemas are mapped into a

potential field and combined to produce the resultant

heading and velocity of the robot. Arkin demonstrates that

strategies for path following and obstacle avoidance can be

implemented with potential field methods by assigning










repulsive fields around observed obstacles, and by

appropriately adjusting the strength of the fields.
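The essential point, that no arbitration mechanism is needed and
schema outputs are simply summed as vectors, can be sketched as
follows (a simplification of Arkin's formulation; the gains and
radii are hypothetical):

    import math

    def attract(pos, goal, gain=1.0):
        """Attractive field toward the goal."""
        return (gain * (goal[0] - pos[0]), gain * (goal[1] - pos[1]))

    def repel(pos, obstacle, radius=2.0, gain=1.0):
        """Repulsive field around an observed obstacle, zero outside radius."""
        dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
        d = math.hypot(dx, dy)
        if d >= radius or d == 0.0:
            return (0.0, 0.0)
        s = gain * (radius - d) / (radius * d)  # grows as the robot nears
        return (s * dx, s * dy)

    def resultant(pos, goal, obstacles):
        # schemas run independently; their outputs are simply summed
        vx, vy = attract(pos, goal)
        for ob in obstacles:
            rx, ry = repel(pos, ob)
            vx, vy = vx + rx, vy + ry
        return vx, vy

    print(resultant((0, 0), (5, 0), [(1, 0.3)]))  # heading bends away from the obstacle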



3.1.3 Limitations of the subsumption architecture

At first glance the subsumption architecture appears

modular. Theoretically, a new layer could simply be added on

top of the previous layers to achieve a new level of

competence. In reality, upper layers or behaviors interfere
with the internal states of lower-level layers, and thus
cannot be designed independently. In fact, the whole controller

must be redesigned when even small changes to lower-level

behaviors are implemented [Hartley 91]. The reason for the

lack of modularity is that in the subsumption architecture,

the behavior arbitration mechanism is not separated from the

actual stimulus/response behavioral function. Moreover, the

arbitration strategy is further complicated by the use of

timeouts, or temporal ordering of behaviors [Anderson 90].

Another limitation of the subsumption architecture stems

from its rigid hierarchy of behaviors. In such a hierarchy,

a behavior is either higher or lower than another, with the

higher behavior inhibiting or suppressing the one below it.

For many non-trivial real-life applications, such a hierarchy
cannot be established. Many behaviors are mutually

exclusive and not hierarchically organized. For example, in

our implementation the "target-nav" behavior which guides

the robot towards a specified location, is not higher or

lower than the "boundary-following" behavior. Rather, the










two are mutually exclusive. In addition, it is possible that

lower-level behaviors need to inhibit higher-level ones.

Examples of such situations are numerous in biological
systems. In the subsumption architecture, only higher-level

behaviors inhibit lower ones.

In general, the subsumption architecture shares the
limitations of behavior-based systems. Such systems

require some method of arbitration such as the subsumption

architecture. The subsumption architecture is specifically

implemented as part of behavior-based systems motivated by

the desire of their designers to produce artificial insects.

Hence, such systems avoid world modeling and the explicit

representation of goals. Instead, as mentioned earlier, the

driving philosophy is that "the world is its own best

model", and goals are implicitly designed within the system

by the designer establishing, a priori, the interactions

between behaviors through the environment. This is a serious

limitation of such behavior-based systems since the designer

is required to predict the best action for the system to

take under all situations. Obviously, there is a limit to

how far one can foresee the various interactions and

constraints in order to precompile the optimum arbitration

strategy [Maes 90].









3.2 Control Architecture: Behavior-Based vs. Traditional
Control

As mentioned before, the control architecture of an

autonomous intelligent mobile robot can be modeled as either

a serial or a parallel decomposition of the perception-

action control path, figure 3.1. The more traditional

approaches [Crowley 85] [Kriegman 89] [Moravec 85] are

serial in nature where the control path is decomposed into a

few modules in series such as: 1) Sensing, 2) Modelling, 3)

Planning, 4) Actuation. In the parallel decomposition

approach [Brooks 86a] [Connell 89] [Payton 86] multiple

parallel control paths or layers exist such that the

perceptual load is distributed. Each layer performs a

specialized goal or behavior and processes data and issues

control commands in a manner specific to its own goals. In

[Brooks 86a] these layers of control correspond to levels of

competence or behaviors with the lower layers achieving

simple tasks such as avoiding obstacles and the higher

layers incrementally achieving more complex behaviors such

as identifying objects and planning changes to the world. In

this hierarchical set of layers, each layer is independent

from the others in performing its own tasks even though

higher level layers may influence lower level layers by

inhibiting their output or suppressing their input. Unlike

the traditional approach which has the disadvantage of

imposing an unavoidable delay in the sensor to actuator

loop, the layered approach enjoys direct perception to

action through concurrency where individual layers can be










working on individual goals concurrently. In this case

perception is distributed and customized to the sensor-
set/task-set pair of each layer. This eliminates the need for the

robot to make an early decision on which goals to pursue.

Another advantage of the parallel approach is flexibility.

Since modules or layers are independent, each having its
own specialized behavior and goals, each may have its own
specialized interface to the sensors and

actuators. Flexibility stems from the fact that the

interface specification of a module is part of that module's

design and does not affect the other modules' interfaces.

This contrasts with the traditional serial approach, where a

modification to a module's interface might require

modification of at least the previous and the following

modules if not the whole system. Moreover, all of the

modules in the serial approach must be complete and working

before the system is operational, while on the other hand, a

behavior-based system can still produce useful behavior

before all the modules are complete. The main disadvantage
of a behavior-based approach is that the robot may exhibit
cyclical behavior patterns due to the lack of memory within
some behaviors; that is, such behaviors do not remember
previous events and base their decisions solely on the
latest sensor stimuli. This prevents the robot from
responding to events which happen over several time periods
and can trap it in cycles. In

our initial simulation of a robot wandering with a fixed










control algorithm, our robot exhibited cyclical behavior.

Anderson and Donath [Anderson 88] presented an approach

based upon the use of multiple primitive reflexive

behaviors, and came to the conclusion that "... cyclical

behavior may indicate that an essential characteristic of a

truly autonomous robot is the possession of memory and

reactive behavior (i.e., the ability to react to events

which occur over a number of intervals of time and the

ability to alter the behavior depending upon the previous

behavior)." [Anderson 88, p. 205]. Brooks and Connell

[Brooks 86b] have also observed cyclical behavior in their

wandering and wall following behaviors.

The majority of mobile robot projects follow somewhat

the traditional approach. In this approach, world

representation and sensory confirmation of that

representation are essential to the intelligence of an

autonomous mobile robot. A composite world representation is

generated through the integration of the various local

representations which are themselves formed by the fusion of

data from the multiple sensors onboard. Fusion at various

levels of abstraction is performed in order to produce the

representation useful to the planning subsystem. Unlike the

behavior-based approach where each behavior task employes

its own specialized representation, the representation

employed in the traditional approach is general purpose and

thus is useful for a variety of situations and planning

tasks. In contrast, the behavior-based approach employs a










variety of specialized representations (each derived from a

small portion of the sensors data) for use by a number of

concurrent planning tasks resulting in many distinct,

possibly conflicting, behaviors. Thus, the need to perform

"behavior fusion" arises in this case as opposed to sensor

fusion in the traditional approach. Behavior fusion

sacrifices the generality obtained by sensor fusion in order

to achieve immediate vehicle response, while sensor fusion

sacrifices immediacy for generality. The immediacy versus

assimilation tradeoff issue is adequately presented by

[Payton 86].



3.3 Issues in World Model Construction

In this section we examine the issues of constructing a

world model of an autonomous mobile robot. The environment

of a mobile robot is often unstructured and contains objects

either as obstacles to avoid or as items to be examined or

manipulated. In the traditional approach, a mobile robot

must build and use models of its environment. This model

must be accurate and must remain consistent as the robot

explores new areas or revisits old ones [Chatila 85].

Handling inconsistencies in world model construction for a
multi-sensor system is one of the main problems tackled by

the proposed research. In order to construct an accurate and

consistent model of the environment, the robot must be able

to correctly determine its position and orientation. This is

a difficult task given that sensors are imprecise.












3.3.1 Position Referencing for a Mobile Robot

A mobile robot can achieve position referencing by any

of the following methods:



Trajectory Integration Referencing. Uses odometric

devices such as shaft encoders without external reference.

These methods are prone to errors (due to wheel slippage)

that are cumulative and cause position drift.



Absolute position referencing. Uses fixed known

external beacons throughout the environment. The more

external beacons we place at known absolute positions in the

environment, the more structured this environment becomes.

In this case the errors in the robot's position and

orientation are related to the beacon system measurement

accuracy.



Relative position referencing. Is performed with

respect to objects with characteristic features whose

positions in the environment are known with good accuracy.

This method is very desirable yet it introduces considerable

complexity. A challenging task in this case is for the robot

to define its own references.











3.3.2 World Model Representation

Various world model representations have been proposed.

The choice of an adequate representation depends on the

domain (indoor, outdoor, factory environment, etc.), the
task (navigation, manipulation, identification, etc.), and

the sensors used. Crowley [Crowley 87, 85] suggested a

representation in terms of line segments in a 2-D floor plan

world. Kak et al [Kak 87] also used a model represented as a

2-D line drawing of the expected scene. Chatila and Laumond

[Chatila 85] used three types of models: geometrical,

topological, and a semantic model. The geometrical model is

a 2-D model containing the position in absolute coordinates

of the ground projection of vertices of polyhedral objects.

The topological model is represented by a connectivity graph

of places where a place is defined as an area that is a

functional unit such as a workstation or a topological unit

such as a room or a corridor. The semantic model is a

symbolic model containing information about objects, space

properties, and relationships. Kent et al [Kent 87] proposed

a representation of the world that consists of both a

spatial and an object or feature-based representation. The

spatial representation classifies the world space as

occupied, empty, or unknown, and explicitly represents

spatial relationships between objects. The feature-based

representation associates each object with the set of

features that verify its identity. Elfs [Elfs 89]
described the occupancy grid representation which employs a










2-D or 3-D tessellation of space into cells, where each cell

stores a probabilistic estimate of its state. The state

associated with a cell is defined as a discrete random

variable with two states, occupied and empty.

This research adopts a representation similar to that

proposed by Kent but the spatially-indexed representation

employs a 2-D tessellation of space where each cell in the
grid not only contains its state of occupied, empty, or
unknown, but also contains information such as what
object the cell (if occupied) belongs to and whether it is a
boundary point or an edge point, etc. This representation

is useful for navigation in computing free paths, and for

determining the identity of objects or features in a given

location. The object-indexed representation is linked to the

spatial representation and contains entries such as the

object's name, vertices, bounding edges, and other

discriminating features. This representation is suited to

responding to inquiries about objects or features by name or

by description. In addition to the two representations

mentioned above, we also construct a 2-D line representation

from sonar data. All these representations are implemented

under the proposed Sensory Knowledge Integrator framework

described in section 4.5.1. We describe the implementation

of the spatially-indexed representation in section 5.2.1,

while section 5.2.3 describes the implementation of the 2-D

line representation. This representation is used for









position referencing of the robot as described in section

5.2.4.
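The flavor of the combined spatially-indexed and object-indexed
representation can be sketched as follows (the field names are
illustrative only; the actual structures are described in
chapter 5):

    from typing import Optional

    class GridCell:
        """One cell of the 2-D tessellation: occupancy state plus
        object-level annotations."""
        def __init__(self):
            self.state = "unknown"                 # 'occupied', 'empty', or 'unknown'
            self.object_id: Optional[int] = None   # object an occupied cell belongs to
            self.boundary = False                  # boundary point of an object
            self.edge = False                      # edge point

    # spatially-indexed: a grid of cells covering the floor plan
    grid = [[GridCell() for _ in range(100)] for _ in range(100)]
    grid[10][42].state = "occupied"
    grid[10][42].object_id = 3
    grid[10][42].boundary = True

    # object-indexed: entries that link back into the spatial grid
    objects = {3: {"name": "green_drum", "vertices": [], "cells": [(10, 42)]}}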



3.3.3 Managing World Model Inconsistencies

At any point in time the robot has a global and a local

model of its environment. The local model is robot-centered

and represents the environment as perceived at the moment by

the various sensors onboard. The robot is first faced with

the challenge of constructing the local model by integrating

information from the various sensor systems with a priori

knowledge about the environment. Next it must update the

global model using the local model and its position and

orientation information. The difficult problem is to

maintain a consistent model given inaccuracies in position

measurements and in sensor data and its processing elements.

Inconsistencies between the global and the local models must

be resolved in order to avoid model degradation. Resolving

these inconsistencies should improve the accuracy of the

position and orientation information.

A variety of methods have been proposed for model

consistency checks. Crowley [Crowley 85] defines a function

named CORRESPOND that is called for every line in the sensor

model (most recent sensor data described in terms of line

segments) to be matched with every line segment in the

composite local model (an integration of recent sensor

models from different viewing angles). This function tests

the correspondence of two line segments by checking: (1) If










the angle between the 2 segments is less than a certain

threshold, (2) If the perpendicular distance from the

midpoint of one segment to the next is less than a

determined threshold, and (3) If one segment passes through

a bounding box (tolerance) around the other. The outcome of

this test gives five types of correspondence. In a later

paper Crowley [Crowley 87] uses the normal distribution to

represent spatial uncertainty, and the normal distance as a

measure for matching the model parametric primitives (ex.

lines, edge segments) to the observed ones. He defines the

function SIMILAR which returns true if the normal distance
between the two primitives is less than a certain threshold,
else it returns false. The function CORRESPOND now

consists of a simple set of attribute tests using the

function SIMILAR.
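A sketch of such a correspondence test in Python (our paraphrase
of Crowley's three checks; the thresholds are hypothetical, and
the bounding-box test is simplified to a second
perpendicular-distance check):

    import math

    def correspond(seg_a, seg_b, ang_tol=0.15, dist_tol=0.2):
        """Crude version of Crowley's CORRESPOND: each segment is
        ((x1, y1), (x2, y2)); returns True if the three checks pass."""
        def angle(s):
            (x1, y1), (x2, y2) = s
            return math.atan2(y2 - y1, x2 - x1)

        def midpoint(s):
            (x1, y1), (x2, y2) = s
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

        def perp_dist(p, s):
            (x1, y1), (x2, y2) = s
            dx, dy = x2 - x1, y2 - y1
            return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / math.hypot(dx, dy)

        # (1) angle between the two segments below a threshold
        if abs(angle(seg_a) - angle(seg_b)) > ang_tol:
            return False
        # (2) perpendicular distance from one midpoint to the other segment
        if perp_dist(midpoint(seg_a), seg_b) > dist_tol:
            return False
        # (3) overlap test, simplified here to the symmetric distance check
        return perp_dist(midpoint(seg_b), seg_a) <= dist_tol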

Andress and Kak [Andress 87] define a COLLINEARITY and

a NONCOLLINEARITY function as a measure of compatibility and

incompatibility respectively between a model edge segment

and an observed one. These compatibility measures form the

initial "basic probability assignment" for the observed

segments. The Dempster-Shafer formalism is used for belief

update in the face of new evidence.

Our approach to consistent world modeling is similar to

that of [Crowley 87]. It is implemented as part of the

Sensory Knowledge Integrator framework. Section 5.2.4 of

chapter five details the implementation of our method. The

parameters of observed and model lines are matched using the










correlation and consistency checking techniques described in

section 2.5.1 of chapter 2. A successful match indicates

that all orientation, collinearity, and overlap tests have

been satisfied. Next, we merge the parameters of the two

matched lines using estimation techniques also described in

section 2.5.1. These techniques use the standard Kalman

filter equations. The merged estimate with reduced

uncertainty is then compared to the observed lines to

determine the error in robot position and orientation.

Section 5.2.4.2 details such operations.

In case no match exists between a model feature (what

the sensors should be seeing) and an observed feature (what

the sensors are actually seeing), then a conflict exists and

must be resolved. In our implementation we did not encounter

such conflicts and we did not setup our experiments in order

to obtain a conflict. Instead, in what follows, we propose

how inconsistencies could be approached under the Sensory

Knowledge Integrator framework. We propose a knowledge-based

approach for resolving the inconsistencies where the system

generates resolution tasks to resolve the conflict, and

reasons about the conflict using a priori domain-dependent

and domain-independent knowledge. At higher levels of

abstraction, the conflict resolution task utilizes symbolic

endorsements which constitute a symbolic record of the

object-specific evidence supporting or denying the presence

of an object instance. Thus, we deal with reasons for

believing or disbelieving a certain hypothesis by











accumulating endorsements from a priori expectations about

the object, and from sub-part and co-occurrence support for

the object.



3.4 Direction of Proposed Research

As mentioned in chapter 1, this research follows a

hybrid (behavior-based and cognitive) approach to the

problem of controlling an autonomous mobile robot. The goal

is to enable autonomous operation in a dynamic, unknown, and

unstructured indoor environment. The robot knows about

certain objects to expect in the environment, but does not

have a map of it. The robot has considerable general

information about the structure of the environment, but

cannot assume that such information is complete. It

constructs a model of the environment using onboard

sensors. We aim to develop a general purpose robot useful

for a variety of explicitly stated user tasks. The tasks

could either be general or specific including such tasks as

"do not crash or fall down", "build a map", "locate all

green drums" etc..

Unlike most behavior-based approaches which avoid

modelling of the world and the use of world knowledge, our

view is that while world models are unnecessary for low-

level actions such as wandering around while avoiding

obstacles, they are essential to intelligent interaction

with the world. General, efficient, and flexible navigation

of a mobile robot requires world models. The world models










provide a "bigger picture" for the robot when

reflexive/reactive behaviors encounter difficulty. Such

difficulties include trap situations due to local minima

causing cyclic behavior, oscillations in narrow passages or

in the presence of obstacles, and inability to pass between

closely spaced obstacles [Koren 91][Brooks 86b] [Anderson

88]. Trap situations are caused by various topologies of the

environment and by various obstacle configurations such as a

U-shaped configuration. [Arkin 90] gives a vivid example of

the difficulties encountered by reactive control as

resembling the "fly-at-the-window" problem. This situation

arises when the insect expends all of its energy trying to

go towards the sunlight outside through the glass of the

window. Our work dynamically builds a map of the environment

within the Sensory Knowledge Integrator framework. As we

will see in section 4.3.1, this framework contains a variety

of knowledge sources including different types of "trap

detector" knowledge sources that use map data to look for

traps. This information is used by the planning module to

avoid or recover from traps. Map data is used to reconfigure

the lower-level behaviors and not to replace the function of

these behaviors. In this manner the environment is more

efficiently explored while the robot still enjoys the real-

time operation of the reflexive behaviors. Additionally, the

construction of a general purpose world model makes use of

the available world knowledge. For example, the Sensory

Knowledge Integrator, the underlying framework of our map










builder module, exploits a priori knowledge about the

environment such as objects to be encountered or

manipulated. The a priori knowledge gives the robot an idea

about its relationship to the world and allows it to

efficiently use its resources.

The architecture for planning and control of a mobile

robot should be a synthesis of the traditional serial

approach and the parallel behavior-based approach. This

architecture consists of a hierarchy of control in which

lower level modules perform "reflexive" tasks while higher

level modules perform tasks requiring greater processing of

sensor data. We call our architecture a hybrid one because

it includes a cognitive component and a behavior-based

component, figure 3.2. A robot controlled by such a hybrid

architecture gains the real-time performance of a behavior-

based system while maintaining the effectiveness and goal

handling capabilities of a planner with a general purpose

world model. The basic instinctive competence for the robot

such as avoiding obstacles, maintaining balance, wandering,

moving forward, etc..are provided by the behavior-based

component of the system, while the cognitive part performs

higher mental functions such as planning. The higher the

competence level of the behavior-based system, the simpler

the planning activity. Motivated behaviors implemented as

part of the behavior-based system, and the associated

motivation state, form the interface between the two

components. A motivated behavior is triggered mainly by the











































associated motivation state. By merely setting the

motivation state of the robot, the cognitive module

activates selected motivated behaviors in order to bias the

response of the behavior-based system towards achieving the

desired goals. The details of plan execution are left to the

behavior-based subsystem. The motivation state consists of a

variety of representations, each associated with the

corresponding motivation-driven behavior. It is the means of

communication between the cognitive and the behavior-based

subsystems, and could be thought of as a collection of










virtual sensor data. The "target-nav" behavior, discussed in

section 4.2.4 of our implementation, is an example of a

motivation-driven behavior. Its motivation state is

represented as a location that is set by the planning

module. In addition to its virtual sensor data input, it

also acquires data from the real position shaft encoder

sensors in order to generate a vector heading towards the

target. The arbitration of the various behaviors in the

behavior-based system competing for control of the robot

actuators, is partly hardwired in the behavior-based system,

and partly encoded in a flexible arbitration strategy in the

cognitive system. The flexible strategy changes during the

operation of the robot depending on the current situation

and the task at hand. In section 4.3 of the next chapter, we

discuss various arbitration strategies including the one

used in our implementation.

As mentioned earlier, we view world models as essential

to intelligent interaction with the environment, providing a

"bigger picture" for the robot when reflexive behaviors

encounter difficulty. Thus, the research proposed in this

paper adopts a knowledge-based approach to the problem of

constructing an accurate and consistent world model of the

environment of an autonomous mobile robot. We propose to

construct this model within the framework of the Sensory

Knowledge Integrator proposed in [Bou-Ghannam 90a,b] and

described in chapter 4. A more accurate model is obtained

not only through combining information from multiple sensory










sources, but also by combining this sensory data with a

priori knowledge about the domain and the problem solving

strategy effective in that domain. Thus, multiple sensory

sources provide the added advantages of redundancy and

compensation (where the advantages of one sensor compensates

for the disadvantages or limitations of the other) while

domain knowledge is needed to compensate for the

inadequacies of low-level processing, as well as to generate

reasonable assumptions for the interpretations of features

derived from lower-level sensory data. In addition we

propose to achieve a consistent model by making explicit the

knowledge about the inconsistencies or conflict, and using

this knowledge to reason about the conflict. This is similar

to the theory of endorsement proposed by Cohen [Cohen 85]

where resolution tasks are generated, and positive and

negative endorsements are accumulated in order to resolve

the conflict.















CHAPTER 4
THE PROPOSED HYBRID CONTROL ARCHITECTURE




We propose an architecture that is a synthesis of the

parallel decomposition and the traditional serial

decomposition of a mobile robot control system. Both of

these architectures were discussed and compared earlier.

Figure 4.1 depicts the general framework of our hybrid

control architecture while figure 4.2 shows a block diagram

of a specific implementation of the proposed architecture.

The functions of the various blocks in the figures will

become clear as we proceed through the chapter. The hybrid

architecture supports a hierarchy of control in which the

various lower level modules (such as "avoid-obstacles",

"target-nav", "follow-wall", etc..) perform reflexive

actions or behaviors providing a direct perception to action

link, while the higher level modules (such as the map

builder and the planning modules) perform tasks requiring

greater processing of sensor data such as modelling. It is

important to emphasize parallelism or concurrency in the

hybrid architecture. For example, in figure 4.2, the

planning module, the map builder, and the lower-level

behaviors are all running concurrently. The lower-level

behaviors constitute the behavior-based subsystem and

provide the basic instinctive competence, while the higher-












level modules provide the cognitive function. Note that

action takes place only through the behavior-based

subsystem. When needed, the cognitive (planning) module

effects action by reconfiguring the behavior-based system.

Reconfiguration involves arbitration, and changing the

motivation state of the behavior-based subsystem, as will

become clear later in this chapter. We call this

decomposition a hybrid parallel/serial decomposition because

even though the whole system follows a parallel layered

decomposition, the higher level modules (namely the map

builder and planning modules) follow somewhat the

traditional serial approach of sensing, modelling, and

planning before task execution or actuation occurs. However,

unlike the traditional approach, the planning module does

not have to wait on the map builder for a highly processed

representation of the environment before it can effect

action. Instead, based on its current knowledge and the

status provided by the lower level behaviors, the planning

module can select from a variety of lower level behaviors or

actions. Thus, unlike the subsumption architecture proposed

by [Brooks 86a] where any behavior subsumes the function of

all the behaviors below it (by inhibiting their outputs or

suppressing their inputs), in our implementation the

arbitration strategy is incorporated in a set of rules in

the planning module. In what follows we discuss the

individual blocks of our proposed architecture starting with

the planning module.




























[Figure: hierarchy of competence levels (competence level 0
upward) connected to the actuators.]

Figure 4.1 A general framework of the proposed hybrid
control architecture.
















































Figure 4.2 A specific implementation of the hybrid control
architecture.























4.1 The Planning Module: A Knowledge-Based Approach

The planning module performs reasoning and task

planning to accomplish user specific tasks, such as locate

all green drums in a warehouse for example. In order to

accomplish such tasks, the planning module performs

various tasks including map-based planning (such as route

planning) and behavior reconfiguration. For example, in our

implementation of the planning module, discussed in detail

in section 5.1, the task is to efficiently map the

environment without crashing into things. Thus our planning

module performs some simple map-based planning functions,

but deals mainly with behavior arbitration. The arbitration

strategy that effects the behavior reconfiguration is

embedded into a set of production rules in the planning

module. In determining the current behavior arbitration, the

arbitration rules utilize knowledge about the goals, the

environment, the individual behaviors, the current situation

status, etc., and thus create a flexible, goal-driven

arbitration strategy that is useful for a variety of

missions in different types of domains. We will learn more

about arbitration and the arbitration network in section

4.3. It is important to note that behavior reconfiguration

is only used when the behavior-based system encounters

difficulties such as a trap situation as described in

section 3.4. Various trap situations analogous to the "fly-

at-the-window" situation can be detected in the map builder

module, and recovered from or avoided using heuristic rules










in the planning module. Reconfiguration is accomplished by

changing the motivation state of the motivation-driven

behaviors, and by performing some behavior arbitration.

Thus, the planning module selects (enables and disables)

from a variety of behaviors (such as avoid-obstacles,

target-nav, follow wall, etc.) the appropriate set of

behaviors at the appropriate time for the task at hand. For

example, in our implementation given the task of mapping the

environment, when the 2-D line representation becomes

available, the corner detector knowledge source in the map

builder examines this representation and posts a corner

hypothesis on the hypothesis panel, which in turn causes a

specific rule of the arbitration strategy rules in the

planning module to fire, selecting as a result the target-nav

behavior with the location of the corner to be investigated

as its target. Thus, the behavior selection is based upon

status inputs from the various behaviors and from the map

builder module which, in time, will also provide a high-

level spatially-indexed and object-indexed representation of

the environment. In addition, the planning module uses a

priori knowledge about the task, the environment, and the

problem solving strategy effective in that domain. For

example, if the robot's task is to locate all green drums of

a certain size, a priori knowledge (such as "drums of this

size are usually located on the floor in corners") can

greatly help the task of searching for the drums. This

knowledge about the task and objects in the environment,










coupled with knowledge about the environment (such as in an

indoor environment, corners are the intersection of two

walls, etc.) can be brought to bear to improve the

efficiency of the search. Other types of a priori knowledge

include self knowledge such as the diameter and height of

the robot, the physical arrangement of the sensors onboard

the robot, and types and characteristics of sensors used.

For example, knowing the diameter of the robot helps in

making the decision of whether to let the robot venture into

a narrow pathway. Note that having a priori knowledge about

the environment does not mean that the environment is known,

instead it means that the robot knows about certain objects

to expect in the environment (for example, walls, doors and

corners in an indoor environment), but does not have a map

of it.

One type of reasoning performed by the planning module

includes the detection and prevention of cyclic behavior

patterns exhibited by the robot when driven by the lower-

level reflexive behaviors. For example, if in the wander-

while-avoiding-obstacles behavior, the robot gets stuck in a

cyclic pattern (similar to the fly-at-the-window situation)

giving no new information to the map builder, the map

builder forwards its status to the planning module including

the location of the centers of various frontiers that form

the boundary between empty and unknown space. The planning

module enables the target-nav behavior to choose one of

these centers as its target. When this target is reached










with the robot oriented towards the yet unknown area, the

planning module might enable the wander behavior again to

allow discovery of unknown areas. The actual implementation

of the planning module is presented in chapter 5 and follows

a knowledge-based approach. We use CLIPS, a knowledge-based

system shell, as an implementation tool to represent and

reason with the knowledge used to accomplish the tasks of

the planning module. A review of CLIPS is given in the

appendix.



4.2 Lower-Level Behaviors

As mentioned earlier, the behavior-based subsystem

provides the robot with the basic instinctive competences
which are taken for granted by the planning module. The

subsystem consists of a collection of modular, task-

achieving units called behaviors that react directly to

sensory data, each producing a specific response to detected

stimuli. These behaviors are independently running modules

that perform specific tasks based on the latest associated

sensor data. Each of these modules is associated with a set

of sensors needed to perform the specialized behavior of the

module. Sensor data is channelled directly to the individual

behavior allowing for "immediate" reaction. Thus each module

constructs from its sensor data a specialized local

representation necessary to effect the behavior of the

module. The responses from multiple behaviors compete for

the control of the robot actuators, and the winner is










determined by an arbitration network. The behaviors

comprising our behavior-based subsystem are reactive,

characterized by a rigid stimulus-response relationship with

the environment. The response of a reactive behavior is

deterministic and strictly depends on the sensory stimulus

(external environment and internal state). It is not

generated by a cognitive process with representational

structures. The model of our reactive behavior is given in

figure 4.3. The function F(Si) represents the deterministic
stimulus-response function, while the threshold T is

compared to F(Si) before an output response is triggered. In

our implementation, the various thresholds are

experimentally determined. In general, the threshold

represents one of the important parameters for behavior

adaptation and learning. Thus, adhering to the biological

analogy, the threshold can be set by other behaviors

depending on the environmental context and the internal

state. We further divide our reactive behaviors into

reflexive and motivation-driven behaviors. The motivation-

driven behaviors are partly triggered by the motivation

state of the robot (set by the planning module in our

implementation, but could theoretically be set by other

behaviors), while the response of a reflexive behavior is

driven only by external stimuli. Reflexive behaviors

constitute the protective instincts for the robot such as











[Figure: stimuli Si feed the stimulus-response function
F(Si), whose output is gated by the threshold T to produce
the response.]

Figure 4.3 Model of a reactive behavior.

[Figure: a sonar range reading r is modeled as a repulsive
force on the robot with magnitude proportional to (1/r)^2.]

Figure 4.4 Sonar sensor repulsive force model.











avoiding obstacles and maintaining balance, while motivated

behaviors execute goal-driven tasks triggered by the

associated motivation state set by the planning module in an

effort to bias the response of the behavior-based system

towards achieving the overall mission. Such motivated tasks

include moving to a specified target location as in the

"target-nav" behavior.

In addition to providing the specific task-achieving

functionality, the behaviors also serve as abstraction

devices by providing status information to the planning

module. Such status information includes error status
variables and operating conditions variables. The
error status variables indicate errors such as

robot communications error, or sonar data error. The

operating conditions variables represent a behavior's

operating conditions such as "target-reached" or "break-

detected" for example.

Some of the behaviors we are interested in include

"avoid-obstacles", "wander", "target-nav", "boundary-

follower", and "path follower".



4.2.1 Avoid obstacles behavior

This behavior uses sonar information from a ring of 12

sonar sensors placed around the robot. Each sonar hit (range

data) is modeled as the site of a repulsive force whose
magnitude decays with the square of the range reading
(distance to the obstacle) of that sonar [Khatib 85],
figure 4.4. The










avoid behavior determines the resultant repulsive force

acting on the robot by summing the forces sensed by each

sonar sensor, as follows:


    Fres = Σ (1/ri)^2 · ei,  summed over i = 1 to 12

where ri is the range reading of sonar i and ei is the unit
vector directed from the sonar hit toward the robot.

The magnitude of the repulsive force is then compared to an

experimentally determined threshold value. If the magnitude

exceeds the threshold value then the repulsive force

represents the response of the avoid behavior as a vector

heading for the robot. This threshold is fixed

(experimentally determined and normalized to outputs of

other behaviors) in our implementation, but within an

adaptive behavior-based system it is not fixed and

constitutes a variable parameter which is adjusted by the

outputs of other behaviors as the situation demands. For

example, a "charge-battery" behavior may tune down the

effect of the "avoid-obstacles" behavior by raising its

threshold as the robot approaches the battery charging

station, allowing the robot to dock and connect to the

charging pole.
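A minimal sketch of this computation, assuming twelve evenly
spaced sonars and illustrative names (not the actual code),
might read:

    import math

    def avoid_response(ranges, T):
        """Sketch of the avoid-obstacles behavior: each sonar hit at
        range r repels the robot along that sonar's axis with
        magnitude (1/r)^2; the twelve forces are summed and the
        resultant is output only if its magnitude exceeds T."""
        fx = fy = 0.0
        for i, r in enumerate(ranges):            # 12 range readings
            bearing = i * (2.0 * math.pi / 12.0)  # sonar axis, robot frame
            magnitude = (1.0 / r) ** 2            # decays with range squared
            fx -= magnitude * math.cos(bearing)   # force points away from hit
            fy -= magnitude * math.sin(bearing)
        if math.hypot(fx, fy) > T:
            return math.atan2(fy, fx)             # heading of resultant force
        return None                               # below threshold: no output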



4.2.2 Random Wander Behavior

This behavior generates a random heading for the robot.

This behavior, coupled with the avoid obstacles behavior,

forms an emergent behavior with the following functionality:

"wander at random without bumping into things". Such an










emergent behavior is used in totally unknown environments

for random wandering. This is also useful for leading the

robot out of trap situations.



4.2.3 Boundary Following Behavior

This behavior uses sonar scan data to determine the

direction for safe wandering or the nav-vector as we call

it. In this behavior mode the robot keeps moving forward as

long as the range of the forward pointing sonar is larger

than a certain threshold, and the range values of the rest

of the sonar sensors are also larger than their respective

thresholds. The range data is dynamically read from a set of
12 onboard sonar sensors forming a ring around the robot.

When the conditions for a clear (no obstacle in the way)

forward motion are no longer satisfied, this behavior

determines the next clear nav-vector from the latest sonar

scan. Usually, many nav-vectors are valid and the one

closest to a forward direction is chosen, thus minimizing

the degree of turns for the robot. This also has the effect

of following the boundary of the indoor environment such as

walls. The algorithm involved is straightforward, fast, and

utilizes sonar range data directly with minimal processing

of that data. This behavior could also be used as a

generalized wandering behavior for map building. Note that

while this behavior is active, the map builder is busy

(concurrently) assimilating sensor data from the various

locations visited by the robot.
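The selection of the nav-vector can be summarized by the
following sketch (Python; names illustrative, with one
threshold per sonar as described above):

    import math

    def next_nav_vector(ranges, thresholds):
        """Sketch of the boundary-following choice: from the latest
        scan of the 12-sonar ring, collect the clear directions
        (range above that sonar's threshold) and pick the one
        closest to straight ahead, minimizing the robot's turn."""
        clear = [i * (2.0 * math.pi / 12.0)
                 for i, (r, t) in enumerate(zip(ranges, thresholds))
                 if r > t]                  # no obstacle in this direction
        if not clear:
            return None                     # boxed in: no clear nav-vector
        # Smallest turn away from the forward direction (angle 0).
        def turn(a):
            return abs((a + math.pi) % (2.0 * math.pi) - math.pi)
        return min(clear, key=turn)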












4.2.4 Target-Nav Behavior

This behavior is a location attraction behavior that

generates as its response a vector proportional to the

vector between the current location of the robot and the

specified target location. Robot position is provided by

wheel encoder sensors, while the target position is provided

by the planning module as part of the motivation state of

the robot. So, this is a motivation-driven behavior, the

motivation being to reach the target location. A robot R at

(xr, yr) is attracted to a target T at (xt, yt) by the

following heading vector:


V = RT = (xt - xr)i + (yt - yr)j



The magnitude of this vector decreases as the robot

approaches the target, and the target-reached flag is set

when the magnitude drops below a certain threshold. In

addition, our implementation normalizes the magnitude and

sets an experimentally determined threshold value as its

maximum amplitude. As a result, the avoid-obstacles behavior

gains higher priority as the robot encounters obstacles on

its path to the target. This behavior, coupled with the
avoid-obstacles behavior, attempts to navigate the robot towards the

target without bumping into obstacles. The robot actually

tries to go around obstacles towards the target. Results of

our experimentation show the robot successfully navigating
around obstacles placed in its path. Keep in mind that some

topologies of the environment will trap the robot in

unproductive cyclic maneuvers when operating under the

target-nav behavior.
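Following the formula above, the response can be sketched as
(Python; the constants are illustrative stand-ins for the
experimentally determined values):

    import math

    def target_nav(xr, yr, xt, yt, vmax=1.0, reach_T=0.1):
        """Sketch of the target-nav response: the heading vector RT
        from robot (xr, yr) to target (xt, yt), with its amplitude
        capped at a maximum so avoid-obstacles can dominate near
        obstacles; target-reached is flagged when the magnitude
        drops below a threshold."""
        vx, vy = xt - xr, yt - yr
        magnitude = math.hypot(vx, vy)
        target_reached = magnitude < reach_T
        if magnitude > vmax:                    # normalize the amplitude
            vx, vy = vx * vmax / magnitude, vy * vmax / magnitude
        return (vx, vy), target_reached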



4.3 Arbitration Network

Behavior-based systems require some mechanism for

combining or arbitrating the outputs of the different

behaviors. In the case of a behavior-based mobile robot,

each behavior generates a heading for the robot, but the

robot can only accept one heading at a time. The resultant

or winning heading is determined by the arbitration

mechanism. One method of arbitration is the subsumption

architecture [Brooks 86a] where higher-level behaviors

subsume lower-level ones by suppressing their inputs or

inhibiting their outputs. Arbitration, however, can be

achieved in many ways. The simplest method uses a priority

list where the behavior with the highest priority on the

list gets control of the actuators. Another method

implements a strategy where once a certain behavior gets

control, it remains in control as long as it is active

regardless of other behaviors. Once the stimulus for

activating the controlling behavior disappears, the behavior

releases control of the actuators. Other strategies involve

combining the outputs of the various behaviors in some kind

of formulation, such as the potential field method [Arkin

87].










Our approach for behavior arbitration implements simple

binary all-or-nothing switches in the arbitration network,

with the arbitration control strategy incorporated into a

set of production rules, figure 4.5. In our implementation,

the rules reside in the planning module running under CLIPS,

a knowledge-based systems shell. CLIPS implementation of the

arbitration strategy is discussed in section 5.1. We believe

that by encapsulating the arbitration strategy in a set of

rules under the control of the cognitive system, a robot

useful for a wide range of missions can be created. The

rules incorporate knowledge about the goals, the individual

behaviors, the environment, and the current situation

status. Note that the real-time performance of reactive

control is still maintained, since the behavior-based system

is always enabled. The arbitration rules in the cognitive

system reconfigure the arbitration switches of the behavior-

based system depending on the goal at hand, and the current

situation. For example, in our experiments, we initially

configure the behavior-based system into an "explore" mode

allowing the robot to wander around without bumping into

things. After the initial configuration, the cognitive

system leaves the behavior-based system alone. Later, when a

knowledge source in the map builder module discovers large

boundaries between the empty and the unknown areas of the

occupancy grid representation being constructed, and that

such boundaries are not being traversed by the robot, this

information is made available to the arbitration rules in
the planning module and brought to bear on the behavior-

based system configuration by activating the target-nav

behavior with the center of the discovered boundaries as

target location. When the robot arrives at its target and
crosses the boundary, it is then positioned to discover the
new unknown areas.
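The flavor of this reconfiguration can be suggested by the
following sketch; the actual rules are CLIPS productions
(section 5.1), so the Python rendering and the fact names
here are illustrative only:

    def arbitration_rules(facts, switches):
        """Sketch of rule-driven arbitration: tests on the current
        situation reconfigure the binary switches of the
        behavior-based system."""
        if facts.get("mission") == "explore":
            # Initial configuration: wander without bumping into things.
            switches.update(avoid=True, wander=True,
                            boundary_follow=True, target_nav=False)
        if "untraversed-boundary-center" in facts:
            # The map builder found a large empty/unknown boundary the
            # robot is not traversing: send the robot to its center.
            facts["target"] = facts["untraversed-boundary-center"]
            switches.update(wander=False, boundary_follow=False,
                            target_nav=True)
        if facts.get("target-reached"):
            # The boundary was crossed: resume exploring the new area.
            switches.update(target_nav=False, wander=True,
                            boundary_follow=True)
        return switches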

Adhering to the biological analogy, the switches in an

arbitration network of a behavior-based system are not

purely binary all-or-nothing switches, but are active

elements with adjustable gains and thresholds, as modeled in

figure 4.6 by the operational amplifier circuit. The

programmable gain of the switch varies from a minimum to a

maximum allowable value. When the gain is either 0 or 1,

then a simple binary switch is obtained. The threshold is

another adjustable value above which the response of the

input behavior will produce an output. The programmable gain

and threshold constitute the basic parameters for adaptive

behavior and learning. The best values for such parameters

are not readily apparent, but can be fine tuned by some

adaptive learning algorithm or neural network. [Hartley 91]

proposes the use of "Genetic Algorithms" to accomplish such

tasks. Using the switch model of figure 4.6, the

implementation of an arbitration network involves the

knowledge of which behavior affects (adjusts the parameters

of) which other behavior, by how much, and in which

situations or context. The wiring of such an arbitration

network depends upon the goals or the desired competence of
the robot.

Figure 4.5 Arbitration by production rules and superposition.

Figure 4.6 Model of an arbitration network switch.

In addition, within a single system many

arbitration networks may exist, each servicing a sub-group

of behaviors that determine a certain competence level as

shown in figure 4.1. In our view, each competence level is

developed separately, and the thresholds and gains

determined experimentally (by using a neural network, for

example). The behaviors shown in figure 4.7 give a simple

example of one competence level, the "explore" competence

level used by the robot to explore unknown environments for

map building tasks and local navigation. As shown in figure

4.7, this level includes an "avoid", a "random-wander", a

"boundary-following", and a "target-nav" behavior. The

latter three behaviors are mutually exclusive while either

of the three is complementary to the "avoid" behavior. One

arbitration scheme for this subset of behaviors could give

the "target-nav" the highest priority of the three behaviors

with the "boundary-following" behavior having a higher

priority than the "random-wander" behavior. Thus, for

example, once the "boundary-following" behavior is

triggered, it inhibits the "random-wander" behavior causing

the output of the "boundary-following" behavior to be

combined (superimposed) with the output of the "avoid"

behavior which has the highest priority.
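The switch model and the explore-level wiring just described
can be sketched together as follows (Python; the priority
ordering is as described in the text, all names are
illustrative):

    class Switch:
        """Sketch of the arbitration switch of figure 4.6: an active
        element with a programmable gain and threshold; gains of 0
        and 1 recover the simple binary switch."""
        def __init__(self, gain=1.0, threshold=0.0):
            self.gain = gain            # adaptation parameter, min..max
            self.threshold = threshold  # input must exceed this to pass

        def output(self, magnitude):
            return self.gain * magnitude if magnitude > self.threshold else 0.0

    def explore_level(avoid, target_nav, boundary_follow, wander):
        """Explore competence level of figure 4.7: target-nav,
        boundary-following, and random-wander are mutually exclusive,
        with priority in that order; the winner is superimposed on
        the complementary avoid output. Responses are (x, y) vectors
        or None when a behavior is silent."""
        winner = target_nav or boundary_follow or wander or (0.0, 0.0)
        base = avoid or (0.0, 0.0)
        return (base[0] + winner[0], base[1] + winner[1])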























Figure 4.7 Behaviors of the explore competence level.


4.4 Map Builder: A Distributed Knowledge-Based Approach

The map builder generates two linked representations of

the world: a spatially indexed and an object or feature

indexed representation. The spatially indexed representation

consists of a 2-D tessellation of space where each cell in

the grid contains information about its state whether

occupied, empty, or unknown, in addition to information such

as what object the cell (if occupied) belongs to and whether

it is a boundary point or an edge point, etc... This

representation is useful for navigation in computing free

path, and for determining the identity of objects or

features in a given location. The object-indexed

representation is linked to the spatial representation and

contains entries such as the object's name, vertices,

bounding edges, and other discriminating features. This

representation is suited to responding to inquiries about

objects or features by name or by description.
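The two linked representations can be pictured with the
following data-structure sketch (Python; the field names are
illustrative, not the actual implementation described in
chapter 5):

    class GridCell:
        """One cell of the spatially indexed (2-D grid) representation."""
        def __init__(self):
            self.state = "unknown"    # "occupied", "empty", or "unknown"
            self.object_id = None     # object an occupied cell belongs to
            self.is_boundary = False  # boundary point of an object
            self.is_edge = False      # edge point

    class WorldObject:
        """One entry of the object-indexed representation, linked
        back to the grid cells it occupies."""
        def __init__(self, name):
            self.name = name
            self.vertices = []        # discriminating features
            self.bounding_edges = []
            self.cells = []           # links into the 2-D grid

    # Spatial queries (free-path computation, "what is at this cell?")
    # go through the grid; queries by name or description go through
    # the object index, following the links back into the grid.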










We follow a knowledge-based approach as a framework for

the map builder, since the goal of a multi-sensor system
such as the map builder is not only to combine information from

multiple sensory sources, but also to combine this sensory

data with a priori domain-dependent and domain-independent

knowledge. Thus, a powerful multi-sensor system must rely on

extensive amounts of knowledge about both the domain and the

problem solving strategy effective in that domain

[Feigenbaum 77]. This knowledge is needed to compensate for

the inadequacies of low-level processing, as well as to

generate reasonable assumptions for the interpretations of

features derived from lower level sensory data. Domain-

dependent knowledge consists of knowledge about domain

objects and ways of recognizing them. This involves semantic

descriptions of objects, semantic relationships between

objects, the use of interpretation context, experimentally

derived classification functions, and knowledge about the

task and the sensor. Domain-independent knowledge involves

general principles such as perspective distortion,

occlusion, and varying points of view.

The challenge of multi-sensory perception requires a

flexible inference strategy that supports both forward and

backward chaining. This flexibility allows the system to

dynamically alternate between data-driven and model-driven

strategies as the situation requires. The blackboard

framework allows for this flexibility [Nii 86a,b]. In a

blackboard framework, forward-chaining and backward-chaining
steps can be arbitrarily interleaved. In addition, the many

knowledge sources have continual access to the current state

of the blackboard, and thus, can contribute

opportunistically by applying the right knowledge at the

right time.

Our proposed Sensory Knowledge Integrator [Bou-Ghannam

90b] follows a blackboard framework. The Sensory Knowledge

Integrator is described in detail in a technical report
[Bou-Ghannam 90a]. Its highlights will be discussed in the

remainder of this section.

An intelligent multi-sensor system maintains an

internal description of the world which represents its "best

guess" about the external world. This world model is built

using sensory input and a priori knowledge about the

environment, the sensors, and the task. Thus, the problem of

constructing a world model representation involves two types

of information fusion: 1) Fusion of information from

multiple sensory sources, and 2) Fusion of sensory data with

a priori knowledge and object models [Kent 87]. This

representation of the world consists of both a spatial and

an object or feature-based representation. The spatial

representation classifies the world space as occupied,

empty, or unknown, and explicitly represents spatial

relationships between objects. The feature-based

representation associates each object with the set of

features that verifies the identity of the object.










In previous work [Bou-Ghannam 90a,b] we introduced the

Sensory Knowledge Integrator (SKI), a knowledge-based

framework for sensor data fusion. What follows in this

section is a brief review of SKI, figure 4.8. SKI organizes

the domain knowledge and provides a strategy for applying

that knowledge. This knowledge, needed to describe the

environment being observed in a meaningful manner, is

embedded in data-driven and model-driven knowledge sources

at various levels of abstraction. These knowledge sources

are modular and independent, emphasizing parallelism in the

SKI model. The knowledge sources use algorithmic procedures

or heuristic rules to transform information (observations

and hypotheses) at one level of abstraction into information

at the same or other levels. Processed sensory data in the

observations database cause the execution of the data-driven

knowledge sources while model data in the hypothesis

database cause the execution of model-driven knowledge

sources. This execution produces new data (on the

observations or hypothesis database) which in turn cause the

execution of further knowledge sources and the production of new data

until a high level description of the environment under

observation is incrementally reached. These high level

descriptions comprise the robot's local world model that is

continually updated by new sensor observations.
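The incremental, opportunistic character of this process can
be summarized by the following control-loop sketch (Python;
the actual knowledge sources and control module are described
in chapter 5, so every name here is illustrative):

    def ski_step(observations, hypotheses, knowledge_sources, control):
        """Sketch of one cycle of the Sensory Knowledge Integrator:
        new data on the observations database triggers data-driven
        knowledge sources, data on the hypothesis database triggers
        model-driven ones; the control module resolves conflicts and
        selects the next source to execute (focus of attention)."""
        triggered = [ks for ks in knowledge_sources
                     if ks.is_triggered(observations, hypotheses)]
        if not triggered:
            return False              # quiescent: description up to date
        ks = control.select(triggered)
        ks.execute(observations, hypotheses)   # posts new data, which may
        return True                            # trigger further sources

    # The loop runs continually as new sensor observations arrive:
    #     while ski_step(obs, hyp, sources, control):
    #         pass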

Data in the observations database range from intensity

and depth arrays at lower levels of abstraction, to lines,

edges, regions, surfaces, at intermediate levels, to objects
and their relationships at higher levels of abstraction. The

partitioning of the data into application-dependent

hierarchies or levels of abstraction is essential because it

makes it easy to modularize the knowledge base. Thus,

certain knowledge sources become associated with a certain

level of data abstraction and can only be triggered by

data on that level of abstraction. This eliminates the need

to match all the knowledge sources to all the data. The

hypothesis database contains hypothesized high level goals

with hypothesized sub-goals derived from these goals in a

backward (top-down) reasoning scheme. The sub-goals are

matched to facts at lower levels of abstraction to assert

the validity of the hypothesis.

The control module handles the conflict resolution

among knowledge sources and thus determines what knowledge

source or group of knowledge sources to apply next. The

control module monitors the changes in the observations/

hypothesis database along with the potential contributions

of the related knowledge sources in the knowledge base and

determines the next processing steps or actions to pursue.

In other words, the control module determines the focus of

attention of the system. It contains knowledge about the

"big picture" of the solution space and, hence, can resolve

conflict among knowledge sources triggered by the current

situation data. We implement the control module in CLIPS.

To clarify the theoretical concepts introduced above,

section 5.2 of the next chapter discusses the implementation
of the map builder.

r




Full Text
58
2-D or 3-D tessellation of space into cells, where each cell
stores a probabilistic estimate of its state. The state
associated with a cell is defined as a discrete random
variable with two states, occupied and empty.
This research adopts a representation similar to that
proposed by Kent but the spatially-indexed representation
employs a 2-D tessellation of space where each cell in the
grid, not only contains its state of occupied, empty, or
unknown, but it also contains information such as what
object the cell (if occupied) belongs to and whether it is a
boundary point or an edge point, etc.. This representation
is useful for navigation in computing free paths, and for
determining the identity of objects or features in a given
location. The object-indexed representation is linked to the
spatial representation and contains entries such as the
object's name, vertices, bounding edges, and other
discriminating features. This representation is suited to
responding to inquiries about objects or features by name or
by description. In addition to the two representations
mentioned above, we also construct a 2-D line representation
from sonar data. All these representations are implemented
under the proposed Sensory Knowledge Integrator framework
described in section 4.5.1. We describe the implementation
of the spatially-indexed representation in section 5.2.1,
while section 5.2.3 describes the implementation of the 2-D
line representation. This representation is used for


150
Knowledge with Sensory Information for Mobile Robots." Proc.
IEEE Int'1 Conf. on Robotics and Automation, pp 734-740.
[Kent 87]
Kent, E. W., Shneier, M. 0., and Hong, T. H. 1987. "Building
Representations from Fusions of Multiple Views." Proc. IEEE
Int'1 Conf. on Robotics and Automation, pp. 1634-1639.
[Khatib 85]
Khatib, 0. 1985. "Real-time Obstacle Avoidance for
Manipulators and Mobile Robots." Proc. IEEE Int11 Conf. on
Robotics and Automation, St. Louis, MO., pp.500-505.
[Koren 91]
Koren, Y., and Borenstein, J. 1991. "Potential Field Methods
and their Inherent Limitations for Mobile Robot Navigation."
Proc. IEEE Int'l Conf. on Robotics and Automation,
Sacramento, CA., pp.1398-1404.
[Kreithen 83]
Kreithen, M. L. 1983. "Orientation Strategies in Birds: a
tribute to W. T. Keeton. In Behavioral Energetics: The Cost
of Survival in Vertebrates. Ohio State University, Columbus,
pp.3-28.
[Kriegman 89]
Kriegman, D. J., Triendl, E., and Binford, T. 0. 1989.
"Stereo Vision and Navigation in Buildings for Mobile
Robots, IEEE Transactions on Robotics and Automation,
5(6):792-803.
[Lettvin 70]
Lettvin, J. Y., et al 1970. "What the frog's eye tells the
frog's brain." In W. McCullock, Embodiments of Mind. MIT
Press, Cambridge, MA. pp.230-255.
[Luo 88]
Luo, R. C., and Lin, M. H. 1988. "Robot Multi-Sensor Fusion
and Integration: Optimum Estimation of Fused Sensor Data."
Proc. IEEE Int'l Conf. on Robotics and Automation, pp.1076-
1081.
[Maes 90]
Maes, P. 1990. "Situated Agents Can Have Goals." In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA, pp.49-70.
[Manning 79]
Manning, A. 1979. An Introduction to Animal Behavior.
Addison-Wesley Publishers, Reading, MA.


147
[Buchannan 84]
Buchannan, B. G., and Shortliffe, E. H. (eds.) 1984. Rule-
Based Expert Systems: The MYCIN experiments of the Stanford
Heuristic Programming Project. Addison-Wesley, Reading, MA.
[Chatergy 85]
Chatergy, R. 1985. "Some Heuristics for the Navigation of a
Robot, Int11 J. Robotics Res. 4(l):59-66.
[Chatila 85]
Chatila, R., Laumond, J. P. 1985. "Posistion Referencing and
Consistent World Modeling for Mobile Robots." Proc. IEEE
Int' 1 Conf. on Robotics and Automation, St. Louis, MO. pp
138-145.
[Cheeseman 86]
Cheeseman, P. 1986. "Probabilistic vs. Fuzzy Reasoning." In
Uncertainty in AI, L. N. Kanal, and J. F. Lemmer, eds.,
Elsevier Science Publishers, New York, pp.85-102.
[Cohen 85]
Cohen, P. R. 1985. Heuristic Reasoning about Uncertainty: An
Artificial Intelligence Approach. Pitman/Morgan Kaufmann
publishers.
[Connell 89]
Connell, J. H. 1989. "A Behavior-Based Arm Controller," IEEE
Transactions on Robotics and Automation, 5(6):784-791.
[Crowley 87]
Crowley, J. L., and Ramparany, F. 1987. "Mathematical Tools
for Representing Uncertainty in Perception." In Proc. of the
1987 workshop on Spatial Reasoning and Multi-sensor Fusion,
A. Kak, and S. Chen eds. Morgan Kaufmann publishers, pp 293-
302.
[Crowley 85]
Crowley, J. L. 1985. "Dynamic World Modeling for an
Intelligent Mobile Robot Using a Rotating Ultra-Sonic
Ranging Device," Proc. IEEE Int'1 Conf. on Robotics and
Automation, St. Louis, MO.,pp. 128-135.
[Culbertson 63]
Culbertson, J. 1963. The Minds of Robots: Sense Data, Memory
Images, and Behavior in Conscious Automata. University of
Illinois Press, Urbana.
[DeKleer 86]
DeKleer, J. 1986. "An Assumption Based Truth Maintenance
System." Artificial Intelligence 28:127-162.
[Doyle 79]
Doyle, J. 1979. "AAAA Truth Maintenance System." Artificial
Intelligence 12:231-272.


INTELLIGENT AUTONOMOUS SYSTEMS:
CONTROLLING REACTIVE BEHAVIORS WITH
CONSISTENT WORLD MODELING AND REASONING
By
AKRAM A. BOU-GHANNAM
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1991

(c) Copyright 1991 by Akram A. Bou-Ghannam. All rights
reserved.

To my daughter Stephanie
Through her eyes I see peace on earth,
and receive my motivation.

ACKNOWLEDGEMENTS
It takes lots of special people to make a dissertation.
First and foremost, I would like to thank my wife Nada and
my daughter Stephanie, for giving me lots of love and moral
support. I am also very grateful to the IBM Corporation for
the financial support and wonderful treatment. My IBM
managers and colleagues at the Boca Raton site were
extremely helpful and understanding. I could always count on
their support whenever I needed parts, equipment, technical
material, or attended technical conferences. Special thanks
go to the following IBMers: Rit VanDuren, Jerry Merckel, Tom
Kludt, Barabra Britt, Vic Moore, Rick Mendelson, Steve
Henderson, and Sharon Amato.
Within the university, I would like to thank my advisor
and all my committee members responsible for the
intellectual stimulation that shaped this dissertation.
Special thanks go to Dr. Carl Crane for sponsoring and
encouraging the cooperation between the robotics groups in
electrical, mechanical, and nuclear engineering. This
fruitful cooperation allowed me to put my ideas to practice
and run live experiments. Guido Reuter worked closely with
me on the "map builder" implementation and deserves much
thanks. Also, thanks go to Tom Heywood for his help in the
system setup and robot control.
IV

TABLE OF CONTENTS
ACKNOWLEDGEMENTS iv
ABSTRACT vii
CHAPTER 1: INTRODUCTION 1
1.1 Philosophical Underpinnings and Overview 7
1.2 Contributions of the Research 11
CHAPTER 2: SENSOR DATA FUSION: MANAGING UNCERTAINTY 12
2.1 Motivation 12
2.2 Sensors and the Sensing Process 14
2.3 Classification of Sensor Data 15
2.4 Levels of Abstraction of Sensor Data 18
2.5 Sensor Data Fusion Techniques 20
2.5.1 Correlation or Consistency Checking 22
2.5.2 Fusion at Lower Levels of Abstraction 24
2.5.3 Fusion at Middle and High Levels of
Abstraction 27
2.5.3.1 Bayesian Probability Theory 28
2.5.3.2 Certainty Theory 2 9
2.5.3.3 Fuzzy Set Theory 31
2.5.3.4 Belief Theory 32
2.5.3.5 Nonmonotonic Reasoning 35
2.5.3.6 Theory of endorsements 36
2.6 Implementation Examples 37
CHAPTER 3: INTELLIGENT FULLY AUTONOMOUS MOBILE ROBOTS ....40
3.1 Behavior-Based Approaches to Robot Autonomy 4 3
3.1.1 Lessons from animal behavior 43
3.1.2 Current behavior-based approaches to robot
autonomy 4 8
3.1.3 Limitations of the subsumption architecture ... 50
3.2 Control Architecture: Behavior-Based vs.
Traditional Control 52
3.3 Issues in World Model Construction 55
3.3.1 Position Referencing for a Mobile Robot 56
3.3.2 World Model Representation 57
3.3.3 Managing World Model Inconsistencies 59
3.4 Direction of Proposed Research 62
CHAPTER 4: THE PROPOSED HYBRID CONTROL ARCHITECTURE 68
4.1 The Planning Module: A Knowledge-Based Approach 72
4.2 Lower-Level Behaviors 75
4.2.1 Avoid Obstacles Behavior 78
4.2.2 Random Wander Behavior 7 9
4.2.3 Boundary Following Behavior 80
4.2.4 Target-Nav Behavior 81
4.3 Arbitration Network 82
4.4 Map Builder: A Distributed Knowledge-Based
Approach 87
v

CHAPTER 5: EXPERIMENTAL SETUP AND IMPLEMENTATION 93
5.1 Implementation of the Planning Module 97
5.2 Implementation of the Map Builder ..100
5.2.1 The EOU Knowledge Source 102
5.2.2 The Filter-Raw Knowledge Source 107
5.2.3 The 2-D Line Finding Knowledge Source 107
5.2.4 The Consistency Knowledge Sources 112
5.2.4.1 The Match/Merge Knowledge Source 113
5.2.4.2 The Re-reference Knowledge Source 117
CHAPTER 6: EXPERIMENTAL RESULTS 120
6.1 Results From A Live Experimental Run 120
6.2 Discussion of Results 128
CHAPTER 7: SUMMARY AND CONCLUSIONS 132
APPENDIX: INTRODUCTION TO CLIPS 139
REFERENCES 145
BIOGRAPHICAL SKETCH 153
*
vi

Abstract of Dissertation Presented to the Graduate School of
the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
INTELLIGENT AUTONOMOUS SYSTEMS:
CONTROLLING REACTIVE BEHAVIORS WITH
CONSISTENT WORLD MODELING AND REASONING
By
Akram Bou-Ghannam
August 1991
Chairman: Dr. Keith L. Doty
Major Department: Electrical Engineering
Based on the philosophical view of reflexive behaviors
and cognitive modules working in a complementary fashion,
this research proposes a hybrid decomposition of the control
architecture for an intelligent, fully autonomous mobile
robot. This architecture follows a parallel distributed
decomposition and supports a hierarchy of control with
lower-level reflexive type behaviors working in parallel
with higher-level planning and map building modules. The
behavior-based component of the system provides the basic
instinctive competences for the robot while the cognitive
part is characterized by knowledge representations and a
reasoning mechanism which performs higher machine
intelligence functions such as planning. The interface
between the two components utilizes motivated behaviors
implemented as part of the behavior-based system. A
motivated behavior is one whose response is dictated mainly
by the internal state (or the motivation state) of the
Vll

robot. Thus, the cognitive planning activity can execute its
plans by merely setting the motivation state of the robot
and letting the behavior-based subsystem worry about the
details of plan execution. The goal of such a hybrid
architecture is to gain the real-time performance of a
behavior-based system without losing the effectiveness of a
general purpose world model and planner. We view world
models as essential to intelligent interaction with the
environment, providing a "bigger picture" for the robot when
reactive behaviors encounter difficulty.
Another contribution of this research is the Sensory
Knowledge Integrator proposed as the underlying model for
the map builder. This proposed framework follows a
distributed knowledge-based approach to the fusion of sensor
data from the various sensors of a multi-sensor system in
order to provide a consistent interpretation of the
environment being observed. Within the various distributed
knowledge sources of the Sensory Knowledge Integrator, we
tackle the problems of building and maintaining a consistent
model of the world and robot position referencing.
We describe a live experimental run of our robot under
hybrid control in an unknown and unstructured lab
environment. This experiment demonstrated the validity of
the proposed hybrid control architecture and the Sensory
Knowledge Integrator for the task of mapping the
environment. Results of the emergent robot behavior and
viii

environment are
different map representations of
presented and discussed.
the
IX

CHAPTER 1
INTRODUCTION
Organisms live in a dynamic environment and tailor
their actions based on their internal state and on the
perceived state of the external environment. This
interaction with the environment becomes more complex as one
ascends the ladder of hierarchy of organisms, starting with
the simplest ones that follow a stimulus-response type of
interaction where actions are a direct response to the
sensory information, and ending with humans that are endowed
with intelligence. Intelligence enables humans to reason
with symbols, to make models of the world and to make plans
to favorably alter the relationship between themselves and
the environment. Given the ability to reason does not mean
that people are devoid of the primitive instinctive type of
behaviors. As a matter of fact, reflexive responses account
for most of what people do when walking, eating, talking,
etc.... Less than half the brain is dedicated to higher-
level thinking [Albus 81].
Relative to the development of a machine (robot) which
exhibits various degrees of autonomous behavior, which of
the following two approaches is most appropriate: 1) Should
one design machines to mimic human intelligence with
symbolic reasoning and symbolic models of the world?, 2)
1

2
Should one design machines that mimic "insect intelligence"
with no central brain and symbolic models of the world? This
deep difference in philosophy currently divides the
artificial intelligence community into two camps: 1) The
"traditionalists," constituting the majority of researchers
who have long assumed that robots, just like humans, should
have models of their world and should reason about the next
action based on the models and current sensor data [Ayache
87] [Crowley 85] [Giralt 84b] [Kriegman 89] [Shafer 86]. 2)
The behavior-based camp of researchers [Brooks 86a] [Connell
89] [Payton 86] [Anderson 88] [Agre 90] who avoid symbolic
representations and reasoning, and advocate the endowment of
a robot with a set of low-level behaviors that react to the
immediacy of sensory information in a noncognitive manner.
The main idea is that "the world is its own best model", and
complex behaviors emerge from the interaction of the simple
low-level behaviors as they respond to the various stimuli
provided by the real world. The number of researchers in
this camp is small but growing rapidly. In the next
paragraphs we discuss the characteristics of each approach,
starting with the behavior-based one.
Ethological observations of animal behavior [Gould 82]
[Manning 79] [McFarland 87] provide the inspirational basis
for the behavior-based approach in robotics. The observation
is that animals use instinctive behaviors rather than
"reasoning" to survive in their ever-changing environment.
Apparently, their actions are the resultant of various

3
reactions to external stimuli. These reactions are
apparently due to animal instinct and are not a consequence
of sophisticated processing or reasoning. Braitemberg
[Braitemberg 84] elaborates on the observation that animal
behavior could be an outcome of simple primitive behaviors
and such behavior could be reproduced in mobile robots whose
motors are driven directly by the output of the appropriate
sensors. Rodney Brooks' introduction of the subsumption
architecture [Brooks 86a] has given the behavior-based
approach a great push forward and has forced researchers in
the robotics community to reexamine their methods and basic
philosophy of robot control architecture. For example,
researchers have always taken for granted that a robot needs
to model its environment. Now, alerted by the main thesis of
the behavior-based approach of no global internal model and
no global planning activity, they ask questions of why and
for what specific tasks does one need to model the
environment. Brooks' subsumption architecture uses a menu of
primitive behaviors such as avoid obstacles, wander, track
prey, etc., each acting as an individual intelligence that
competes for control of the robot. There is no central brain
that chooses and combines these simple behaviors, instead,
the robot sensors and what they detect at that particular
moment determine the winning behavior. All other behaviors
at that point are temporarily subsumed. Surprisingly, the
conduct of Brooks' brainless "insect" robots often seems
clever. The simple responses end up working together in

4
surprisingly complex ways. These "insects" never consult a
map or make plans; instead, their action is a direct
response to sensory information. The payoff for eliminating
symbolic models of the environment and the central planner
is speed. Real time operation becomes possible since the
computational burden is distributed and greatly reduced.
Another advantage of the subsumption architecture is its
modularity and flexibility. In principle, more behaviors may
easily be added until the desired level of competence is
reached. A drawback of the behavior-based approach is that
one cannot simply tell the various behaviors how to achieve
a goal. Instead, in an environment which has the expected
properties, one must find an interaction loop between the
system and that environment which will converge towards the
desired goal [Maes 90] Thus, the designer of a behavior-
based system has to "pre-wire" the arbitration strategy or
the priorities of the various behaviors. This inflexibility,
coupled with the inability to handle explicitly specified
goals, makes it hard for such behavior-based systems to be
useful for different types of missions over a wide range of
domains. Additionally, an efficient behavior to assure
reaching a specified goal cannot always be guaranteed. So,
it is possible for a robot using the behavior-based approach
to take a certain pathway many times over, even though
traversing this pathway might not be desirable for many
reasons. For example, the path might lead the robot away
from the target or into danger. This is possible because the

5
robot does not build or have a map of its environment. In
effect, the robot does not remember what it has seen or
where it has been. Anderson and Donath [Anderson 88]
describe some cyclical behavior exhibited by a reflexive
behavior-based robot and attribute such behavior to the lack
of internal state within each behavior. They also report
that this cyclic behavior was observed by Culberston
[Culberston 63]. Brooks and Connell [Brooks 86b] have also
observed cyclical behavior in their wandering and wall
following behaviors. To avoid such problems, later work by
Mataric [Mataric 89] a member of Brooks' group experimented
with map building and use under the subsumption
architecture.
The traditional approach to robot control architecture
is derived from the standard AI model of human cognition
proposed by Newell and Simon in the mid-fifties. It follows
the Deliberative Thinking paradigm where intelligent tasks
can be implemented by a reasoning process operating on a
symbolic internal model. Thus, it emphasizes cognition or a
central planner with a model or map of the environment as
essential to robot intelligence. Sensory confirmation of
that model is equally important. Such symbolic systems
demand from the sensor systems complete descriptions of the
world in symbolic form. Action, in this case, is not a
direct result of sensor data but rather is the outcome of a
series of stages of sensing, modelling, and then planning. A
desirable feature of such systems is the general ability to

6
handle explicit high-level user specific goals. Given a set
of goals and constraints, the planning module advances the
overall mission by deciding the robot's next move based on
an analysis of the local model of the environment
(constructed from current sensor data) and the existing
global model. The global or world model is obtained either
directly from the user, if the robot is operating in a known
environment, or it is autonomously constructed over time
from the various local models when operating in an unknown
environment. The world model representation employed in the
traditional approach is general purpose and, thus, useful
for a variety of situations and planning tasks. Without such
a general purpose model, features critical to plan execution
may not be discovered. But, a general purpose world model
puts some unrealistic demands on the perception task and has
the disadvantage of an unavoidable delay in the sensor to
actuator loop. Such delay is due to the computational
bottleneck caused by cognition and the generation of a
symbolic model of the world. Lack of real time operation
(speed) and inflexibility are the two major complaints from
the behavior-based camp about the cognitive approach. These
claims are supported by the fact that the few autonomous
mobile robot projects implemented with the traditional
approach suffer from slow response times and inflexibility
when operating in complex dynamic environments. The response
of the traditionalists is that while reflexive behavior can
keep a robot from crashing into a wall, a higher-level

7
intelligence is needed to decide whether to turn left or
right when the robot comes to an intersection.
1.1 Philosophical Underpinnings and Overview
Our goal is to develop a general purpose robot that is
useful for a variety of tasks (explicitly stated by a user)
in various types of dynamically changing environments. The
philosophical view of our research is that such a goal could
only be accomplished by combining the two approaches
mentioned above, and that these two approaches complement
each other just as reflexive responses and higher-level
thought complement each other in human beings. For example,
while one does not think about how to articulate the joints
in one's legs when walking down a sidewalk (the reflexive
behaviors take care of the walking function) higher-level
thinking and planning is needed when one, for example,
remembers that the sidewalk is not passable further down due
to construction noticed earlier. At this moment one has to
plan a suitable alternative route to the destination.
Adhering to this philosophical view of reflexive behaviors
and cognitive modules working in a complementary fashion
where the advantages of one approach compensates for the
limitations of the other, this research proposes a hybrid
decomposition of the control architecture for an intelligent
fully autonomous mobile robot. This architecture follows a
parallel distributed decomposition and supports a hierarchy
of control with lower-level reflexive type behaviors working

8
in parallel with higher-level planning and map building
modules. Thus, our architecture includes a cognitive
component and a behavior-based component. The cognitive
component is characterized by knowledge representations and
a reasoning mechanism which performs higher mental functions
such as planning. The behavior-based component system hosts
the cognitive component and provides the basic instinctive
competences for the robot. The level of competence of the
behavior-based component determines the degree of complexity
of the planner in the cognitive component. Thus, the higher
the competence level of the behavior-based system, the
simpler the planning activity. Once the behavior-based
system is built to the desired level of competence, it can
then host the cognitive part. The interface between the two
components utilizes motivated behaviors implemented as part
of the behavior-based system. We define a motivated behavior
as one whose response is driven mainly by the associated
'motivation' or internal state of the robot. This is
analogous to motivated behavior exhibited by animals. For
example, the motivated behavior of feeding depends on the
internal motivation state of hunger in addition to the
presence of the external stimulus of food. Utilizing
motivated behaviors, the cognitive planning activity can
thus execute its plans by merely setting the motivation
state of the robot and letting the behavior-based subsystem
worry about the details of plan execution. In our approach,
the arbitration of the responses of lower-level behaviors is

9
partly hardwired in the behavior-based system, and partly
incorporated into a set of production rules as part of the
planning module of the cognitive system. These rules are
driven by the goals of the robot, and the current situation
facts provided by the world model and the status of the
behavior-based system. In addition, in the behavior-based
system, we use superposition in a potential force field
formulation (similar to [Arkin 87] and [Khatib 85]) to
combine the responses of the various complementary behaviors
that are active at any one time. The goal for the hybrid
architecture is to gain the real-time performance of a
behavior-based system without loosing the general goal
handling capability of a general purpose world model and
planner. We view world models as essential to intelligent
interaction with the environment, providing a "bigger
picture" for the robot when reflexive behaviors encounter
difficulty.
In our framework we tackle the behavior fusion problem
with the lower-level behaviors, while higher-level modules
such as the map builder tackle the sensor fusion problem in
attempting to build a general purpose representation.
Theoretical concepts and mathematical tools for sensor data
fusion are presented in chapter 2; issues in designing
intelligent fully autonomous mobile robots are presented in
chapter 3, while details of the workings of our proposed
architecture are explained in chapter 4. This architecture
is implemented and tested in a dynamic,
unknown, and

10
unstructured environment in our lab for controlling a K2A
Cybermotion mobile robot. Chapter 5 covers the experimental
setup and implementation issues, while chapter 6 presents
and discusses the results obtained. We share the belief that
a world representation and sensory confirmation of that
representation are essential to the intelligence of an
autonomous mobile robot. Thus, the map builder is an
important part of the hybrid control architecture. We
propose, also in chapter 4, a distributed knowledge-based
framework called the Sensory Knowledge Integrator (SKI) as
the underlying model for the map builder. The SKI framework
organizes the domain knowledge needed to describe the
environment being observed into data-driven and model-driven
knowledge sources, and provides a strategy for applying that
knowledge. The theoretical concepts of the SKI model are
presented in section 4.3, while the implementation of the
map builder is discussed in chapter 5. The results of
implementing the various knowledge sources of the map
builder are also presented in chapter 5. These results show
two types of representations: an occupancy grid
representation and a 2-D line representation generated from
sonar sensor data. Results of position correction or re
referencing of the robot are also presented. Chapter 7
concludes this dissertation and discusses limitations and
future research trends.

11
1.2 Contributions of the Research
We see the contributions of this work as:
1. The development and implementation of a hybrid control
architecture that combines both traditional and behavior-
based approaches.
2. The development and implementation of the Sensory
Knowledge Integrator framework which provides a parallel
distributed model for sensor data fusion and consistent
world modeling.
3. The development and implementation of a new approach for
consistent world modeling. This approach involves the
interactive use of the occupancy grid and the 2-D line
representations for filtering out unsupported raw input data
points to the line-finder knowledge source and thus
providing a better 2-D line representation.
4. A fast algorithm for position referencing of a mobile
platform using the 2-D line representation.
5. Addressing the important question of combining both the
behavior-based and the traditional approach and whether it
provides better performance.

CHAPTER 2
SENSOR DATA FUSION: MANAGING UNCERTAINTY
In this chapter we concentrate on sensor fusion, an
essential issue of the traditional approach (discussed in
chapter 1) concerned with generating meaningful and
consistent interpretations (in terms of symbolic models) of
the environment being observed. We discuss sensor data
fusion techniques and methods for managing uncertainty at
various levels of abstraction, and illustrate the advantages
and disadvantages of each. Some of the techniques presented
in this chapter are used in the various knowledge sources of
the map builder module, an essential part of our hybrid
architecture. The implementation of the map builder
(including the sensor fusion techniques used) is presented
in detail in chapter 5.
2.1 Motivation
The increasing demand for robot systems to perform
complex tasks such as autonomous operation and flexible
manufacturing has spurred research aimed at upgrading robot
intelligence and capabilities. These complex tasks are often
associated with unstructured environments where information
is often uncertain, incomplete, or in error. This problem
calls for the development of a robot system capable of using
12

13
many different sources of sensory information in order to
overcome the limitations of single sensory robot systems.
Single sensory robot systems are limited in their ability to
resolve and interpret unknown environments, since they are
only capable of supplying partial information. The need for
multi-sensor robot systems is evident in the literature:
[Giralt 84a], [Durrant-Whyte 86a], [Henderson 84],
[Ruokangas 86], [Flynn 88], [Luo 88], [Mitiche 86], [Shafer
86] The motivation is to obtain from a set of several
different and/or similar sensors, information that would be
impossible or impractical to obtain from any one of the
sensors alone. This is often possible since different
sensors are sensitive to different properties of the
environment. Thus, each sensor type offers unique attributes
and contextual information in interpreting the environment.
The goal of a multi-sensor system is to combine information
from the various sensors, with a priori knowledge about the
environment, the sensors, the task, etc., into a meaningful
and consistent interpretation of the environment. In this
manner, the system maintains an internal description of the
world which represents its "best guess" about the external
world.
Sensor data fusion combines information from various
sensors into one representative set of data that provides a
more accurate description of the observed environment (an
improved world model) than the description provided by any
of the sensors acting alone. The objective is to reduce

14
uncertainty about the observed environment. However, in
addition to the fusion of information from multiple sensory
sources, the problem of generating an accurate world model
representation involves the fusion of sensory data with
object models and a priori knowledge about the environment.
2.2 Sensors and the Sensing Process
The field of robotic sensor design is rapidly growing
and undergoing a great deal of research activity. A variety
of sensors are available for robotic applications. These
include TV cameras, infrared cameras, ranging devices such
as acoustic, infrared, and laser range finders, touch
sensors, proximity sensors, force/torque sensors,
temperature sensor, etc. An assessment of robotic sensors is
presented in [Nitzan 81] Nitzan defines sensing as "the
translation of relevant physical properties of surface and
volume elements into the information required for a given
application." [Nitzan 81, p. 2]. Thus, physical properties
such as optical, mechanical, electrical, magnetic, and
temperature properties are translated by the sensing process
into the information required for the specific application.
For example, a parts inspection application might require
information about dimensions, weights, defects labeling,
etc.. The basic steps of sensing are shown in the block
diagram of figure 1.1 (from [Nitzan 81]).

15
In this chapter we are concerned with the sensing
process where information from a variety of sensors is
combined and analyzed to form a consistent interpretation of
the observed environment. As we will discuss later, the
interpretation process is complex and involves processing of
sensor data at various levels of abstraction using domain
specific knowledge.
2.3 Classification of Sensor Data
The fusion technique of choice depends on the level of
abstraction and on the classification of the sensor data. In
multi-sensor systems, data from the various sensors are
dynamically exchanged. The use of these data in the fusion
or integration process falls under one of the following
classes:
Competitive. In this case the sensors1 information is
redundant. This occurs when the observations of the
sensor (s) intersect; that is, they supply the same type of

16
information about the same feature or property of the same
object. The following sensor configurations and scenarios
produce competitive information interaction:
a) Two or more sensors of the same type measuring the
value of the same feature of an object. For example, two
sonar sensors measuring the depth of an object from a fixed
frame of reference.
b) Different sensors measuring the value of a specific
feature. For example, depth information could also be
provided using stereo vision as well as a sonar range
finder. Another example of how different sensing modalities
produce the same feature, is the generation of edge features
of an object from either intensity images or range images.
c) A single sensor measuring the same feature at
different times. For example, a sonar sensor continuously
acquiring depth measurements from the same position and
viewpoint.
d) A single sensor measuring the same feature from a
different viewpoint or operating configuration or state.
e) Sensors measuring different features but, when
transformed to a common description, the information becomes
competitive. For example, the speed of a mobile robot could
be measured by using a shaft encoder, or it could be deduced
from dead-reckoning information from fixed external beacons.
Complementary. In this case the observations of the
sensors are not overlapping, i.e., the sensors supply

17
different information about the same or different feature.
The measurements are added (set union) to the total
environment description without concern for conflict. A good
example of complementary sensor data fusion is Flynn's
combining of sonar and IR sensor data [Flynn 88]. The sonar
can measure the distance to an object but has poor angular
resolution, while the IR sensor has good angular resolution
but is not able to measure the distance accurately. By using
both sensors to scan a room, and combining their information
in a complementary manner where the advantages of one sensor
compensates for the disadvantages of the other, the robot
was able to build a better map of the room.
Cooperative. This occurs when one sensor's information
is used to guide the search for another's new observations.
In other words, one sensor relies on another for information
prior to observations. For example, the guiding of a tactile
sensor by initial visual inspection [Allen 88].
Independent. In this case one sensor or another is used
independently for a particular task. Here fusion is not
performed, but the system as a whole employs more than one
sensor for a particular task and uses one particular sensor
type at a time while the others are completely ignored, even
though they may be functional. For example, in an
environment where the lighting conditions are very poor, a
mobile robot may depend solely on a sonar sensor for

18
obstacle avoidance, while in good lighting conditions both
vision and sonar sensing modalities could be employed.
Independent sensor data interaction occurs in the natural
world where redundant sensors are abundant. For example,
pigeons have more than four independent orientation sensing
systems, that do not seem to be combined but rather,
depending on the environmental conditions, the data from one
sensory subsystem tends to dominate [Kriethen 1983].
2.4 Levels of Abstraction of Sensor Data
Levels of abstraction of sensor data are application
and task dependent. These levels of abstraction vary from
the signal level (lowest level) where the raw response of
the sensor is present, to the symbolic level (highest level)
where symbolic descriptions of the environment exist for use
by the planning subsystem. This model is based upon
psychological theories of human perceptual system that
suggest a collection of processors that are hierarchically
structured and modular [Fodor 83]. These processors create a
series of successively more abstract representations of the
world ranging from the low-level transducer outputs to the
highly abstract representations available to the cognitive
system. Thus, in order to bridge the wide gap between raw
sensory data and understanding of what those data mean, a
variety of intermediate representations are used. These
representations make various kinds of knowledge explicit and
expose various kinds of constraints [Winston 84], For

19
complicated problem domains such as the problem of
understanding and interpreting a robot's environment based
mainly on data from its sensors, it becomes important to be
able to work on small pieces of the problem separately, and
then combine the partial solutions at the end into a
complete problem solution. This task of understanding the
environment is thus accomplished at various levels of
analysis or abstraction. Thus, sensor data exist at the
various levels of knowledge abstraction in the solution
space and appropriate fusion techniques are applied at the
various levels for an improved solution. Though the specific
abstraction levels are task dependent, they could be
generalized as follows:
Signal level. This level contains data that are close
to the signal or unprocessed physical data level. At this
level data are usually contaminated with random noise and
are generally probabilistic in nature. Therefore statistical
inferencing techniques are appropriate for data fusion on
this level.
Feature level.
Data
at this
level
consist
of
environmental/object
features
derived
from the
signal.
The
various features describe the object or solution. Often,
incomplete descriptions of the features must be used. This
calls for a type of reasoning that is subjective and based

20
on the body of evidence associated with each feature. This
type of reasoning is called evidential reasoning.
Symbol level. At this level symbolic descriptions of
the environment exist and propositions about the environment
are either true or false. Therefore, logical (Boolean)
reasoning about these descriptions is most appropriate.
2.5 Sensor Data Fusion Techniques
A variety of techniques for combining sensor data have
been proposed. Most approaches concentrate on Bayesian or
statistical combination techniques [Richardson 88], [Luo
88], [Porrill 88], [Ayache 88], [Durrant-Whyte 86a]. Some
researchers followed a heuristic approach [Flynn 88], [Allen
88]. Garvey et al, [Garvey 82] proposed evidential reasoning
as a combination technique and claims it is more general
than either Bayesian or Boolean approaches. The fusion
technique of choice depends on the classification of the
sensor data involved, and on the level of abstraction of the
data. For example, at the signal level, data is generally
probabilistic in nature and hence a probabilistic approach
is most appropriate. At higher levels such as the symbolic
feature level, a Boolean approach is generally appropriate.
Sensor data classification and levels of abstraction have
been discussed in the previous sections. For complementary
data the sensors' data are not overlapping and there is no
concern for conflict. In this case, at any level of

21
abstraction, the technique of choice is to simply add the
sensor descriptions to the total environment description.
Similarly, for cooperative sensor data there is no concern
for conflict since the data of one sensor guides the
observations of the other. In the case of competitive sensor
data when two or more sensors provide information about the
same property value of the same object, a fusion technique
is called for. But, before choosing a fusion technique to
combine the data, how does one determine if the data is
truly competitive? That is, how does one determine if the
data represent the same physical entity? This correlation or
consistency check is discussed in the next section. The
choice of fusion technique in the competitive case depends
on the level of abstraction of the data. For example, at the
raw data level the problem becomes that of estimating a
parameter x from the observations of the sensors involved.
This could be resolved by either using a deterministic or
nonrandom approach (like the least-squares method for
example), or by using a probabilistic or random approach
(like the minimum mean squared error method). Dealing with
uncertain information is still a problem at higher levels of
abstraction and a variety of methods have been proposed. In
the following sections we will discuss fusion techniques in
more detail.
2.5.1 Correlation or Consistency Checking
To determine whether sensor data, or features derived
from that data could be classified as competitive, a
consistency check is performed on the data. This is a common
difficulty in robotic perception where often the correlation
between perceived or observed data and model data has to be
determined. The well known problem of correlating between
what the robot actually sees and what it expects to see is
an appropriate example. Following is an example that
illustrates one form of consistency checking:
Let p_ei and p_ej be parametric primitive vectors estimated by sensors i and j respectively. We desire to test the competitive hypothesis H0 that these estimates are for the same primitive p of an object. Let d_ij = p_ei - p_ej be the estimate of δ_ij = p_i - p_j, where p_i and p_j are the corresponding true parameter vectors; then H0: δ_ij = 0. We want to test H0 vs. H1: δ_ij ≠ 0. Let the corresponding estimation errors be represented by e_i = p_i - p_ei and e_j = p_j - p_ej. Define e_ij = δ_ij - d_ij; it follows that e_ij = p_i - p_j - p_ei + p_ej. Then, under H0 (where p_i = p_j = p) we have

e_ij/H0 = (p - p_ei) - (p - p_ej) = e_i - e_j,

and the covariance of the error under H0 is given by

C_ij/H0 = E[(e_ij/H0)(e_ij/H0)^T]
        = E[(e_i - e_j)(e_i - e_j)^T]
        = E[e_i e_i^T] - E[e_i e_j^T] - E[e_j e_i^T] + E[e_j e_j^T]
        = C_i - C_ij - C_ji + C_j.

If the errors e_i and e_j are independent then C_ij = C_ji = 0, and

C/H0 = C_i + C_j.

For Gaussian estimation errors the test of H0 vs. H1 is as follows: Accept H0 if

d = (d_ij)^T C^-1 (d_ij) < θ,

where θ is a threshold such that P{d > θ / H0} = α, and α is a small number such as 0.05 for example.

If H0 is accepted, then p_ei and p_ej are competitive and can thus be fused to obtain p_eij, the combined estimate of p. Using the standard Kalman filter equations, and letting the prior mean of p be p_ei, we obtain the following combined estimate and corresponding error covariance:

p_eij = p_ei + K (p_ej - p_ei)
Cov = C_i - K C_i,

where K is the Kalman filter gain, which for independent errors e_i and e_j is given by

K = C_i (C_i + C_j)^-1
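To make the test concrete, the following minimal sketch (illustrative only; it assumes Gaussian estimation errors, independent sensors, and the availability of the numpy and scipy libraries, and all function names are our own) implements the consistency check and the subsequent merge:

import numpy as np
from scipy.stats import chi2

def consistent(p_i, C_i, p_j, C_j, alpha=0.05):
    """Accept H0 (same primitive) if the normal distance is below threshold."""
    delta = p_i - p_j                             # d_ij, the estimate of p_i - p_j
    C = C_i + C_j                                 # covariance of delta under H0
    d = delta @ np.linalg.inv(C) @ delta          # normal (Mahalanobis) distance
    theta = chi2.ppf(1.0 - alpha, df=len(delta))  # P{d > theta | H0} = alpha
    return d < theta

def fuse(p_i, C_i, p_j, C_j):
    """Merge two competitive estimates using the gain K = C_i (C_i + C_j)^-1."""
    K = C_i @ np.linalg.inv(C_i + C_j)
    return p_i + K @ (p_j - p_i), C_i - K @ C_i

A pair of line-parameter estimates would first be passed through consistent(); only if the test accepts are they handed to fuse() to obtain the combined estimate with reduced uncertainty.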

2.5.2 Fusion at Lower Levels of Abstraction
At the lower levels of abstraction where the sensor
data is close to the signal level, the data contains signal
components random in nature, and hence probabilistic
reasoning or inference is most appropriate. Most of the
literature on sensor data fusion involves problems at this
level with the majority of the researchers adopting
probabilistic (Bayesian) inference. Next we will briefly
discuss the commonly used assumption, advantages, and
disadvantages of statistical sensor fusion techniques.
Most researchers treat the sensor fusion problem, at
this level of abstraction, as an optimal estimation problem.
This problem is briefly stated as follows: Given a system of
interest (e.g. the environment of a robot) represented by an
n-dimensional state vector x, and the measured quantities
(output of sensor systems) represented by the m-dimensional
measurement vector z, what is the best way to estimate the
value of x given z according to some specified optimality
criterion? A general measurement model is given by
z = h(x, v)
where h(x, v) is an m-dimensional vector which represents
the ideal operation of the sensor system, and v represents
the m-dimensional random noise or error vector.
Unfortunately, most problems with this general measurement
model have not been solved in a practical sense [Richardson
88]. Instead, the following measurement model with additive
noise is usually considered:
z = h(x) + v
Moreover, it is commonly assumed that x and v are
statistically independent, and that the probability density
functions f (x) and f (v) are known a priori. In addition,
most methods often assume f(x) and f(v) to be Gaussian with
the following statistical properties:
E[v] = 0, E[v v^T] = R
E[x] = Ex, E[(x - Ex)(x - Ex)^T] = M
where M and R are the state and noise covariance matrices
respectively. The above assumptions and equations are
sometimes written as:
x ~ N(Ex, M), v ~ N(0, R), and Cov(x, v) = 0
The measurement model is further simplified when the
function h(x) is linear. In this case, when the measurement
model is both linear and Gaussian, a closed form solution to
the estimation problem is obtained. The linear measurement
model is represented as
z = Hx + v
This is the measurement equation for the standard Kalman
filter. Given the above assumptions, the optimal estimate of
x, given z measurements, xopt(z) is determined by minimizing
a loss or risk function. This risk function is the
optimality criterion which provides a measure of the
"goodness" of the estimate. A typical loss function is the
mean squared error given by:
L(x, xopt) = (x - xopt)^T W (x - xopt)
where W is a symmetric positive definite weighting matrix.
Now let us consider two sensor systems producing two sets of measurements z1 and z2 of the state x. We are interested in estimating x based on z1 and z2, that is, computing xopt(z1, z2). In the general case, one cannot compute xopt(z1, z2) from the separate estimates xopt(z1) and xopt(z2) of sensors 1 and 2 respectively [Richardson 88]. However, this is possible in the special case of a linear Gaussian measurement model. For more information on estimation in multi-sensor systems the reader is referred to Willner et al. [Willner 76] and Richardson and Marsh [Richardson 88].
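To illustrate the linear Gaussian case, the sketch below (our own illustrative code, assuming numpy is available; it is not taken from the references above) computes the optimal estimate in information form, where each sensor simply adds its information H^T R^-1 H to that of the prior:

import numpy as np

def fuse_linear_gaussian(Ex, M, measurements):
    """MMSE estimate of x from linear Gaussian sensors z = Hx + v, v ~ N(0, R).

    Ex, M        : prior mean and covariance of the state x
    measurements : list of (z, H, R) triples, one per sensor
    """
    info = np.linalg.inv(M)        # prior information matrix
    vec = info @ Ex                # prior information vector
    for z, H, R in measurements:
        Rinv = np.linalg.inv(R)
        info += H.T @ Rinv @ H     # each sensor adds its information
        vec += H.T @ Rinv @ z
    cov = np.linalg.inv(info)
    return cov @ vec, cov          # posterior mean and covariance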
The major limitations of the above mentioned methods
stem from the assumption that f(x) and f(v) are known a priori. This
requires a large number of experiments to be performed on
the sensor system in order to establish a model for f(x) and
f(v). Unfortunately, this is usually not done because it is
impractical or impossible. In such a case, the initial value
of Ex is usually set to zero, and the initial value of M is
set to a large multiple of the identity matrix indicating
our lack of knowledge of prior observations.
2.5.3 Fusion at Middle and High Levels of Abstraction
At intermediate and high levels of abstraction,
features derived from lower level sensory data are present.
These features are normally associated with some degree of
uncertainty. It is the task of the multi-sensor system to
apply domain knowledge to these features in order to produce
valid interpretations about the environment. Thus, the basic
methodology involves the application of symbolic reasoning
and artificial intelligence techniques to aid the
interpretation task. Moreover, because "knowledge is power",
a powerful multi-sensor perception system must rely on
extensive amounts of knowledge about both the domain and the
problem solving strategy effective in that domain
[Feigenbaum 77].
Uncertainty results from the use of inadequate
knowledge as well as from attempts to reason with missing or
unreliable data. For example, in a speech understanding
system, the two sources of uncertainty are: 1) noise in the
speech waveform (sensor noise and variability), and 2) the
application of incomplete and imprecise theories of speech
[Newell 75]. Several methods for managing uncertainty have
been proposed. These include the use of Bayesian probability
theory, certainty theory (developed at Stanford and employed
in the MYCIN system [Buchannan 84]), fuzzy set theory [Zadeh
83], the Dempster/Shafer theory of evidence, nonmonotonic
reasoning, and theory of endorsements. Ng and Abramson [Ng
90] provide a good reference that introduces and compares
these methods.
2.5.3.1 Bayesian Probability Theory
Bayes theorem, a very important result of probability
theory, allows the computation of the probability of a hypothesis based on some evidence, given only the probabilities with which the evidence follows from the hypothesis. Let
P(Hi/E) = the probability that Hi is true given evidence E
P(E/Hi) = the probability of observing evidence E when Hi is true
P(Hi) = the probability that Hi is true
n = the number of possible hypotheses
Then, the theorem states that:
P(Hi/E) = P(E/Hi) P(Hi) / [ Σ (k=1..n) P(E/Hk) P(Hk) ]
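A small illustrative computation (hypothetical numbers, not drawn from our experiments) shows the theorem at work for two hypotheses about a sonar return:

def bayes_posterior(priors, likelihoods):
    """P(Hi/E) from priors P(Hi) and likelihoods P(E/Hi), given as dictionaries."""
    total = sum(likelihoods[h] * priors[h] for h in priors)   # normalizing sum
    return {h: likelihoods[h] * priors[h] / total for h in priors}

priors = {"wall": 0.7, "corner": 0.3}
likelihoods = {"wall": 0.4, "corner": 0.9}     # P(strong echo / Hi)
print(bayes_posterior(priors, likelihoods))    # wall: 0.509..., corner: 0.490...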

In using Bayes' theorem, two major assumptions are required: first, that all the probabilities P(E/Hk) and the priors P(Hk) are known; second, that all P(E/Hk) are independent.
These assumptions are difficult or impossible to meet in
many practical domains. In such situations, more heuristic
approaches are used. Another problem with statistical
methods in general is that these methods cannot distinguish
between the lack of belief and disbelief. This stems from
the observation that in traditional probability theory the
sum of confidence for a certain hypothesis and confidence
against the same hypothesis must add to 1. However, often
one might have a certain degree of confidence that a certain
hypothesis is true, yet have no knowledge about whether it is false. Certainty theory attempts to overcome this limitation.
2.5.3.2 Certainty Theory
Certainty theory splits the confidence for and the
confidence against a certain hypothesis by defining the
following two measures:
MB(H/E) is the measure of belief in a hypothesis H given evidence E, with 0 <= MB(H/E) <= 1.
MD(H/E) is the measure of disbelief in a hypothesis H given evidence E, with 0 <= MD(H/E) <= 1.
These two measures are tied together with the certainty
factor:
CF(H/E) = MB(H/E) - MD(H/E)

The certainty factor approaches 1 as the evidence for a
hypothesis becomes stronger, with 1 indicating absolute
truth. As the evidence against the hypothesis gets stronger
the certainty factor approaches -1, with -1 indicating
absolute denial. A certainty factor around 0 indicates that
there is little evidence for or against the hypothesis. To
combine the certainty factors of different hypotheses, the
following rules apply:
CF(H1 AND H2) = MIN[CF(H1), CF(H2)]
CF(H1 OR H2) = MAX[CF(H1), CF(H2)]
Another problem is how to compute the certainty factor of a
conclusion based on uncertain premises. That is, if P
implies Q with a certainty factor of CF1, and CF(P) is
given, then CF(Q) = CF(P) * CF1. A further question is how to combine the evidence when two or more rules produce the same result.
Assume that result Q produced by rule R1 has a certainty
factor CF(R1) and that rule R2 also produced Q with a
certainty factor CF(R2), then the resulting certainty factor
of Q, CF(Q), is calculated as follows:
1. When CF(R1) and CF(R2) are positive,
CF(Q) = CF(R1) + CF(R2) - CF(R1)*CF(R2)
2. When CF(R1) and CF(R2) are negative,
CF(Q) = CF(R1) + CF(R2) + CF(R1)*CF(R2)

3. Otherwise,
CF(Q) = [CF(R1) + CF(R2)] / [1 - MIN(|CF(R1)|, |CF(R2)|)]
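The following sketch (illustrative, with made-up certainty values) encodes these three combination rules directly:

def combine_cf(cf1, cf2):
    """Combine two certainty factors produced for the same conclusion."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 - cf1 * cf2
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 + cf1 * cf2
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(combine_cf(0.6, 0.5))    # 0.8   : two positive sources reinforce each other
print(combine_cf(0.6, -0.4))   # 0.333 : conflicting sources partially cancel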
Although certainty theory solves many of the problems
presented by an uncertain world (as in MYCIN), the meaning
of the certainty measures, and how they are generated, is not well defined. The assignment of numeric certainty measures based on human terms such as "it is very likely that" is considered by some to be ad hoc.
2.5.3.3 Fuzzy Set Theory
Fuzzy set theory is yet another approach for dealing
with uncertainty. The main idea is that often information is
vague rather than random and hence a possibility theory must
be proposed as a measure of vagueness just as probability
theory measures randomness. The lack of precision is
expressed quantitatively by the notion of a fuzzy set. This
notion introduces a set membership function that takes on
real values between 0 and 1 and measures the degree to which
a set member belongs to the fuzzy set. To illustrate, let I
be the set of positive integers, and A be the fuzzy subset
of I that represents the fuzzy set of small integers. A
possibility distribution that defines the fuzzy membership
of various integer values in the set of small integers could
be characterized by:

mA(1) = 1, mA(2) = 1, mA(3) = 0.9, mA(4) = 0.7, ..., mA(30) = 0.001
where mA(i) is the membership function that measures the
degree to which i (a positive integer) belongs to A.
To illustrate some of the combination rules of fuzzy
set theory, assume that both A and B are propositions with
C = A⊕B denoting the proposition that C is the combination of
A and B. Then, for
1. Conjunction: A AND B, is given by
m[A*B] (a, b)=min[mA(a),mB(b)].
2. Disjunction: A OR B, is given by
m[A+B] (a, b)=max[mA(a),mB(b)].
3. Implication: IF A THEN B, is given by
m[A/B](a,b)=min[1,(1-mA(a)+mB(b))].
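These rules translate directly into code; the sketch below (illustrative, with made-up membership values for the fuzzy set of small integers) shows the three combinations:

def fuzzy_and(ma, mb):       # conjunction: A AND B
    return min(ma, mb)

def fuzzy_or(ma, mb):        # disjunction: A OR B
    return max(ma, mb)

def fuzzy_implies(ma, mb):   # implication: IF A THEN B
    return min(1.0, 1.0 - ma + mb)

m_small = {1: 1.0, 2: 1.0, 3: 0.9, 4: 0.7, 30: 0.001}   # membership in "small integer"
print(fuzzy_and(m_small[3], m_small[4]))                 # 0.7
print(fuzzy_implies(m_small[4], m_small[3]))             # 1.0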
For more details on possibility theory including evidence
propagation and truth quantification rules the reader is
referred to [Zadeh 78], while [Cheeseman 86] provides a
comparison between fuzzy and probabilistic reasoning.
2.5.3.4 Belief Theory
Belief theory, developed by Dempster and Shafer [Shafer
76] as an alternative to the theory of probability, makes a
fundamental distinction between uncertainty and ignorance.
As mentioned above, in probability theory the extent of
knowledge about a belief B is expressed in a single
probability number P(B). In cases where the prior
probabilities are not known, the choice of P (B) may not be
justified. Belief theory proposes belief functions where
each function distributes a unit of belief across a set of
propositions (called the "frame of discernment") for which
it has direct evidence, in proportion to the weight of that
evidence as it bears on each proposition. The frame of
discernment (Θ) is defined as an exhaustive set of mutually exclusive propositions about the domain. The role of Θ in belief theory resembles that of the sample space (Ω) in probability theory, except that in belief theory the number of possible hypotheses is |2^Θ| while in probability theory it is |Ω|. The basic probability assignment is a function m that maps the power set of Θ into numbers between 0 and 1, that is:

m: 2^Θ → [0, 1]

If A is a subset of Θ, then m satisfies:
1. m(∅) = 0, where ∅ is the null hypothesis.
2. Σ m(A) = 1, where the sum runs over all A ⊆ Θ.
A belief function of a proposition A, BF(A) measures
the total amount of belief in A, and is defined as:

BF(A) = Σ m(B), where the sum runs over all B ⊆ A,

and satisfies the following:
1. BF(∅) = 0
2. BF(Θ) = 1
3. BF(A) + BF(~A) <= 1
The Dempster/Shafer theory is based on Shafer's
representation of belief and Dempster's rule of combination.
The Shafer representation expresses the belief in a
proposition A by the evidential interval [BF(A), P(A)], where BF(A) denotes the support for a proposition and sets a minimum value for its likelihood, while P(A) denotes the plausibility of that proposition and establishes its maximum likelihood. P(A) is equivalent to 1 - BF(~A), the degree to which one fails to doubt 'A'. The uncertainty of 'A', u(A) = P(A) - BF(A), is thus implicitly represented in the interval [BF(A), P(A)]. Dempster's rule of combination
is a method for integrating distinct bodies of evidence. To
combine the belief of two knowledge sources, suppose for
example that knowledge source 1 (KS1) commits exactly m1(A) as a portion of its belief for proposition 'A', while KS2
commits m2(B) to proposition 'B'. Note that both 'A' and 'B'
are subsets of Θ, the frame of discernment. If we are interested in computing the evidence for proposition C = A ∩ B, then

BF(C) = [1/(1-k)] Σ m1(A)*m2(B), summed over all A, B with A ∩ B = C,

u(C) = [1/(1-k)] m1(Θ)*m2(Θ),

where

k = Σ m1(A)*m2(B), summed over all A, B with A ∩ B = ∅.
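The sketch below (our own illustrative code; the frame, masses, and names are hypothetical) implements this combination rule for basic probability assignments represented as dictionaries keyed by subsets of the frame:

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments.

    m1, m2 map frozensets of propositions to masses that each sum to 1.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb           # k: mass falling on the empty set
    return {c: w / (1.0 - conflict) for c, w in combined.items()}

theta = frozenset({"wall", "door"})       # a two-proposition frame of discernment
m1 = {frozenset({"wall"}): 0.6, theta: 0.4}
m2 = {frozenset({"wall"}): 0.5, frozenset({"door"}): 0.3, theta: 0.2}
print(dempster_combine(m1, m2))           # wall: 0.756, door: 0.146, theta: 0.098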
The added complexity of the Dempster/Shafer theory
increases the computational cost. In addition, the
assumptions of independence required in the Bayesian
approach still apply here. Another criticism of this theory
is that it produces weaker conclusions because it avoids the assignment of stronger probability values; hence stronger conclusions may not be justified.
2.5.3.5 Nonmonotonic Reasoning
While all of the methods mentioned above use a numeric
model of uncertainty, nonmonotonic reasoning uses a non-numeric approach. In this case the system starts by making
reasonable assumptions using the current uncertain
information, and proceeds with its reasoning as if the
assumptions were true. If at a later time these assumptions
were found to be false (by leading to an impossible
conclusion, for example), then the system must change these
assumptions and all the conclusions derived from them. Thus,
in contrast to the inference strategies discussed above
where knowledge can only be added (monotonic) and axioms do
not change, in nonmonotonic reasoning systems knowledge can
also be retracted. Truth Maintenance Systems [Doyle 79],
[deKleer 86], implement nonmonotonic reasoning. The argument
for nonmonotonic reasoning is that nonmonotonicity is an
important feature of human problem solving and reasoning. In
addition, numeric approaches to uncertainty do not consider
the problem of changing data, that is, what to do if a piece
of uncertain information is later found to be true or false.
2.5.3.6 Theory of Endorsements
Cohen's theory of endorsements [Cohen 85] is yet
another qualitative approach to managing uncertainty. The
basic philosophy of this theory is to make explicit the
knowledge about uncertainty and evidence. The motivation for
this theory stems from the limitation of the numerical
approaches which summarize all supporting and opposing
evidence into a single number. The semantics of this number
that represents knowledge about uncertain information is
often unclear. Thus, the basic idea is that knowledge about
uncertain situations should influence system behavior.
Hence, if a required piece of evidence is lacking, an
endorsement-based system allocates resources to the
resolution task whose execution will provide the most
information for reducing the uncertainty. The system
represents all reasons for believing or disbelieving a
hypothesis in structures called endorsements. These
endorsements are associated with propositions and inference
rules. The system uses endorsements to decide whether a
proposition at hand is certain enough by back chaining and
determining if its sub-goals are well endorsed in order to
assert it. Cohen describes five classes of endorsements:
Rule endorsements.
Data endorsements.
Task endorsements.
Conclusion endorsements.
Resolution endorsements.
The main problem with this recently developed theory is
the exponential growth of the body of endorsements when
asserting a proposition based on endorsements of its sub-goals and their associated sub-goals and so on. Thus, a
simple rule could lead to large bodies of endorsements after
a few inferences.
2.6 Implementation Examples
In this section we show how some of the techniques
described in this chapter are used in our research. As
mentioned in the introduction at the beginning of this
chapter, some of these techniques are used in the various
knowledge sources of the map builder module. The
implementation of the map builder is discussed in detail in
section 5.2. Here, we highlight with some examples how some of the sensor fusion techniques described in this chapter are implemented in our work.
The first example uses the consistency checking
techniques of section 2.5.1 to match observed and model line
parameters. A line parameter vector consists of orientation,
collinearity, and overlap variables with associated
uncertainties. The normal distance between the two parameter
vectors is calculated as described in section 2.5.1 and
compared to a threshold for a consistency check. Section
5.2.4.1 illustrates the details of the matching operation.
If the match is successful, the line parameter vectors are
now merged using the estimation techniques described in
section 2.5.1 also. These techniques use the standard Kalman
filter equations. The merged estimate with reduced
uncertainty is then compared to the observed lines to
determine the error in robot position and orientation.
Section 5.2.4.2 details such operations.
Another example illustrates a symbolic uncertainty
management technique similar to the theory of endorsements
presented in section 2.5.3.6, and to uncertainty management
within the schema system proposed in [Draper 88]. Such a
technique is used for conflict resolution when no match
exists between observed features (what the sensors are
actually seeing) and model features (what the sensors should
be seeing). To resolve the inconsistencies, we propose a
knowledge-based approach within the framework of the Sensory
Knowledge Integrator. Using a priori domain-dependent and
domain-independent knowledge, the system reasons about the
conflict and generates resolution tasks to resolve it. These
tasks utilize symbolic endorsements which constitute a
symbolic record of the object-specific evidence supporting
or denying the presence of an object instance. By
accumulating endorsements from a priori expectations about
the object, and from sub-part and co-occurrence support for
the object, the system deals with reasons for believing or
disbelieving a certain hypothesis. To illustrate this
approach, suppose
that the system's model of a coffee mug includes a handle.
So, the absence of a handle in a particular view of the mug
reduces the confidence rating of the mug hypothesis. Rather
than just lower the numeric confidence value, the "mug
detector" knowledge source also records the absence of the
handle. This is a source of negative support weakening the
confidence in that hypothesis. The system then takes steps
to remove this negative evidence, invoking another behavior,
for example the "curiosity" behavior, to scan the object
from a variety of view points to account for the missing
piece of evidence. If a hypothesis is subsequently posted
for the handle, the mug hypothesis regains its higher
confidence. The system thus arrives at more reliable
conclusions by reasoning about the sources of uncertainty.
The symbolic representation of uncertainty facilitates this.
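A minimal sketch of this bookkeeping (a hypothetical class with made-up weights, not our actual knowledge source code) might represent endorsements as signed, annotated records attached to a hypothesis:

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A hypothesis carrying a symbolic record of evidence for and against it."""
    name: str
    endorsements: list = field(default_factory=list)   # (kind, note, weight) tuples

    def endorse(self, kind, note, weight):
        self.endorsements.append((kind, note, weight))

    def confidence(self):
        return sum(w for _, _, w in self.endorsements)

    def negative_evidence(self):
        """Reasons for disbelief; each one can spawn a resolution task."""
        return [(k, n) for k, n, w in self.endorsements if w < 0]

mug = Hypothesis("coffee mug")
mug.endorse("sub-part", "cylindrical body detected", +0.5)
mug.endorse("sub-part", "handle missing in current view", -0.3)
for reason in mug.negative_evidence():
    print("resolution task needed for:", reason)

Because the record is symbolic rather than a single number, the system can act on the specific missing evidence, for example by invoking the "curiosity" behavior described above.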

CHAPTER 3
INTELLIGENT FULLY AUTONOMOUS MOBILE ROBOTS
One of the main goals of robotics research is the
development of autonomous robots. Autonomous robots are
desirable in many applications especially those where human
intervention is difficult. This chapter gives an overview
and analysis of the research issues associated with the
field of intelligent autonomous systems, and based on this
analysis presents the directions of the proposed research.
Traditional autonomous mobile robot projects [Crowley 85]
[Shafer 86] [Ayache 88] use some of the sensor fusion
techniques presented in chapter 2 in an attempt to build a
complete and accurate model of the environment. However,
despite the positive attributes of completeness and detail
of a global world model, some researchers [Brooks 86a] [Connell 89] argue against the need for its existence. The task of
constructing this world model may conflict with the need to
provide timely information about the environment. Hence, a
tradeoff exists between immediacy and assimilation [Payton
86]. For control purposes, immediacy considerations give a
higher value to sensor data that can be used to effect
action more quickly. This is because in many real time
situations the time between receiving sensor data and acting
on it is very critical. The disadvantage of immediacy is the difficulty of obtaining information or features critical to
plan execution from sensor data that has not undergone
sufficient assimilation. In addition, the extracted data may
be inconsistent or in error. To effect immediacy, Brooks
[Brooks 86a] proposed a parallel behavior-based
decomposition of the control system of a mobile robot. His
approach deviates from the traditional serial decomposition
approach which allows for greater sensor data assimilation.
The traditional approach decomposes the sensor to actuator
path into a few large processing modules in series, figure
3.1a. In the behavior-based approach, figure 3.1b, the path
is divided into many small parallel modules each with its
own specialized task and complete path from the sensor to
the actuator. A general characteristic difference between
the two approaches is that the parallel approach requires
behavior fusion, while the traditional approach requires
sensor fusion. Section 3.2 compares these two approaches and
illustrates the advantages and disadvantages of each.
In this chapter we discuss the important issues
concerning the design of intelligent autonomous agents that
are capable of interacting with a dynamic environment. These
issues include consistent world modeling and control
architectures. We present a brief survey of the current
research in the field and highlight the approaches and
methodologies used by the various researchers in tackling the important issues.

[Figure 3.1: Control architectures, from [Brooks 86]: (a) serial decomposition of the sensor-to-actuator path; (b) parallel decomposition into task-achieving behaviors (avoid objects, wander, explore, build maps, monitor changes, identify objects, plan changes to the world, reason about behavior of objects).]

On the issue of autonomous
robot control architecture, researchers are split between
the behavior-based decomposition and the traditional
decomposition. A brief survey of current research in the
behavior-based approach is presented in section 3.1.2, while
a survey of research in the traditional approach is embedded
in section 3.3 on world model construction issues since such
research involves the classical problems of sensor data
fusion, consistent world modeling, and robot position
referencing. In section 3.2 we discuss and compare the two
approaches, listing the advantages and limitations of each.
Fundamental to the traditional approach is the issue of
consistent world modeling which is presented in section 3.3.
Finally, section 3.4 discusses the directions of this
research.
3.1 Behavior-Based Approaches to Robot Autonomy
In this section we begin by tracing the basis of the
behavior-based approach to concepts in animal behavior, then
we provide a survey of current behavior-based approaches to
robot autonomy, and finally discuss the limitations of the
subsumption architecture [Brooks 86a], and reactive systems
in general.
3.1.1 Lessons from animal behavior
In designing autonomous mobile robots, valuable
insights can be obtained and lessons learned from nature.
Nature provides us with a variety of examples of animal
behavior as they successfully interact with their
environment in order to survive. Animals survive by
possessing the ability to feed, avoid predators, reproduce,
etc.. It is believed that animals survive due to a
combination of inherited instinctive responses to certain
environmental situations, and the ability to adapt to new
situations. Ethologists, who study animal behavior in its natural habitat, view animal behavior as largely a result of
the innate responses to certain environmental stimuli.
Behavioral psychologists, on the other hand, study animal
behavior under controlled laboratory settings, and believe
that animal behavior is mainly learned and not an innate
response. Observations and experiments by [Manning 79]
support the existence of both learned and innate behaviors
in animals. Animals with a short life span and small body
size such as insects seem to depend mostly on innate
behaviors for interacting with the environment, while
animals with a longer life span and larger body size
(capable of supporting large amounts of brain tissue
required for learning capacity) seem to develop learned
behavior.
Reflexive behavior is perhaps the simplest form of
animal behavior. A reflexive behavior is defined as having a
stereotyped response triggered by a certain class of environmental stimuli. The intensity and duration of the response of a reflexive behavior depend only on the intensity and duration of the stimulus. Reflexive responses allow the animal to quickly adjust to sudden environmental changes, and thus provide the animal with protective
behavior, postural control, and gait adaptation to uneven
terrain. Such reflexive responses are believed to be
instinctive and not learned since they have been observed in
animals which have been isolated from birth. Other reactive
types of behaviors include orientation responses where an
animal is oriented towards or away from some environmental
agent, and fixed-action patterns which are extended, largely
stereotyped responses to sensory stimulus [Beer 90].
The behaviors mentioned above are by no means solely
dependent on external stimuli. The internal state of the
animal plays an important role in the initiation,
maintenance, and modulation of a given behavior. Motivated
behaviors are those governed primarily by the internal state
of the animal with no simple or rigid dependence on external
stimuli. For example, the behavior of feeding does not only
depend on the presence of food (external stimuli) but also
upon the state of hunger (the internal motivational
variable). Thus the behavior exhibited by an animal at a certain moment is the one enjoying the highest motivational potential
along with the proper combination of external stimuli
detected at that moment. The motivational potential of a
motivated behavior varies with the level of arousal and
satiation. In addition, such behaviors can occur in the
complete absence of any external stimuli, and can greatly
outlast any external stimulus [Beer 90].
Given a diverse set of sensory information about the
environment, and a diverse behavioral repertoire, how does
an animal select which information to respond to, and
properly coordinate its many possible actions into a
coherent behavior needed for its long term survival? The
answer to the first part lies in the fact that many
different animals have sense organs specialized in detecting
certain environmental features. For example, [Anderson 90]
reports observations by [Lettvin 70] about the frog's visual
system as specialized in detecting movements of small, dark
circular objects at close range, while it is unable to
detect stationary food objects or large moving objects.
Other animals that possess a variety of specialized sensory
detectors can actively select a subset of these detectors to
initiate the response of the animal. For example,
experiments and observations by Lorenz reported by [Anderson
90], indicate that the herring gull can detect attributes of
shape, size, color, and pattern of speckles of its eggs.
Moreover, depending on the task performed by the gull,
certain attributes become important while others become
unimportant. For example, when herring gulls steal the eggs
of other herring gulls, the shape and size of the egg are
very important. But when retrieving its own eggs as they
roll from the nest during incubation, shape becomes
unimportant while attributes such as color and pattern of
speckles gain importance. Based on these observations Lorenz
formulated his idea of the "innate releasing mechanism" that
actively selects a subset of the available sensory
information to trigger the appropriate response. This leads
us to the second part of the question posed at the beginning
of this paragraph: How do animals handle behavior conflict
where certain stimuli and motivations cause the tendency to
simultaneously perform more than one activity? The
observation is that the various behaviors of an animal
exhibit a certain organization or hierarchy [Manning 79].
Some behaviors take precedence over others, while others are
mutually exclusive. Internal state of the animal and
environmental conditions determine the switches between
behaviors. Such switches may not be all or nothing switches,
and the relationship between behaviors may be non-
hierarchical. That is, behaviors can partially overlap
making it sometimes difficult to identify direct switches
between them [Beer 90]. Survival dictates that whatever
behavioral organization exists in animals, it must support
adaptive behavior, whereby, based on past interactions with
the environment, aspects of future behavior are modified.
Anderson and Donath [Anderson 90] summarize some of the
important observations in their research regarding animal
behavior as a model for autonomous robot control:
a) To some degree all animals possess a set of innate behaviors
which allow the animal to respond to different situations.
b) The type of behavior exhibited at any given time is the result
of some internal switching mechanism.

c) Complex behavior can occur as the result of the sequential
application of different sets of primitive behaviors with the
consequence of a given behavior acting as a mechanism which
triggers the next one.
d) Simple reflex types of behavior occur independent of
environmental factors and provide the animal with a set of
protective behaviors.
e) Activation of more complex types of behavior typically depend
upon external and internal constraints.
f) Animals typically only respond to a small subset of the total
amount of sensory information available to them at any given time.
Animals have developed specialized types of detectors which allow
them to detect specific events.
g) Behavior is often organized hierarchically with complex
behavioral patterns resulting from the integration of simpler
behavioral patterns.
h) Conflicting behaviors can occur in animals. These will require
either a method of arbitration between such behaviors or the
activation of alternate behaviors. (pp. 151-152)
3.1.2 Current behavior-based approaches to robot autonomy
[Brooks 86a] follows a behavior-based decomposition and
proposes the famous subsumption architecture for behavior
arbitration. The main idea is that higher-level layers or
behaviors override (subsume) lower-level ones by inhibiting
their outputs or suppressing their inputs. His subsumption
architecture have been used in a variety of robots [Brooks
90], and proved robust in dynamic environments. Most of his
robots are designed to be "artificial insects" with a
deliberate avoidance of map or model building. Brooks' idea
is that "the world is it's own best model", and intelligent
action is the outcome of many simple behaviors working
concurrently and coordinated through the context of the
world. There is no explicit representation of goals or
plans, rather, the goals are implicitly designed into the
system by the pre-determined interactions between behaviors
through the environment. The next section discusses in
detail the advantages and limitations of the subsumption
architecture.
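A drastically simplified sketch of the arbitration idea (ours, not Brooks' implementation; real subsumption wires suppression and inhibition into the network itself rather than using a central loop) reduces it to a fixed priority ordering in which the first layer producing an output wins:

def avoid_obstacles(sonar):
    """Protective layer: turn away when any reading is too close (made-up units)."""
    if min(sonar) < 0.3:
        return ("turn", 90)
    return None            # no opinion; defer to other layers

def wander(sonar):
    """Lower-priority layer: just keep moving."""
    return ("forward", 1)

def subsumption_step(sonar, layers):
    """Layers listed first subsume (suppress) those listed after them."""
    for layer in layers:
        command = layer(sonar)
        if command is not None:
            return command

print(subsumption_step([0.2, 1.5], [avoid_obstacles, wander]))   # ('turn', 90)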
[Payton 86] also follows a behavior-based
decomposition, and describes a collection of reflexive
strategies in a hierarchy of control, all competing for the
control of the vehicle. The winning behavior is determined
by a winner take all arbitration mechanism. Later work by
[Payton 90] describes methods of compiling world knowledge
such as mission constraints, maps, landmarks, etc., into a
form of "internalized plans" that would have maximal utility
for guiding the action of the vehicle. He proposes a
gradient description to implicitly represent the
"internalized plans". Using this representational technique,
a priori knowledge such as a map can be treated by the
behavior-based system as if it were sensor data.
[Arkin 87] proposes a schema-based approach to the
navigation of a mobile robot. His motor schemas are
processes that run concurrently and independently, each
operating in conjunction with its associated perceptual
schemas. No arbitration mechanism is required, instead the
outputs of the various motor schemas are mapped into a
potential field and combined to produce the resultant
heading and velocity of the robot. Arkin demonstrates that
strategies for path following and obstacle avoidance can be
implemented with potential field methods by assigning
repulsive fields around observed obstacles, and by
appropriately adjusting the strength of the fields.
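A minimal sketch of this combination (with illustrative gains and field shapes of our own choosing, not Arkin's exact formulation; assumes numpy) adds an attractive vector toward the goal to a repulsive vector around an obstacle:

import numpy as np

def attract(pos, goal, gain=1.0):
    """Attractive field pulling the robot toward the goal."""
    return gain * (goal - pos)

def repulse(pos, obstacle, gain=0.5, radius=2.0):
    """Repulsive field active only within `radius` of the obstacle."""
    v = pos - obstacle
    d = np.linalg.norm(v)
    if d >= radius or d == 0.0:
        return np.zeros(len(pos))
    return gain * (1.0 / d - 1.0 / radius) * v / d

pos, goal, obstacle = np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([1.0, 0.5])
heading = attract(pos, goal) + repulse(pos, obstacle)   # schema outputs simply add
print(heading / np.linalg.norm(heading))                # resultant unit heading

Because the outputs are vectors, no winner-take-all arbitration is needed; conflicting schemas are resolved by vector summation.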
3.1.3 Limitations of the subsumption architecture
At first glance the subsumption architecture appears
modular. Theoretically, a new layer could simply be added on
top of the previous layers to achieve a new level of
competence. In reality, upper layers or behaviors interfere
with the internal states of lower-level layers, and thus cannot be designed independently. In fact, the whole controller
must be redesigned when even small changes to lower-level
behaviors are implemented [Hartley 91]. The reason for the
lack of modularity is that in the subsumption architecture,
the behavior arbitration mechanism is not separated from the
actual stimulus/response behavioral function. Moreover, the
arbitration strategy is further complicated by the use of
timeouts, or temporal ordering of behaviors [Anderson 90].
Another limitation of the subsumption architecture stems
from its rigid hierarchy of behaviors. In such a hierarchy,
a behavior is either higher or lower than another, with the
higher behavior inhibiting or suppressing the one below it.
For many non-trivial real life applications, such hierarchy
cannot be established. Many behaviors are mutually
exclusive and not hierarchically organized. For example, in
our implementation the "target-nav" behavior which guides
the robot towards a specified location, is not higher or
lower than the "boundary-following" behavior. Rather, the

51
two are mutually exclusive. In addition, it is possible that
lower-level behaviors need to inhibit higher-level ones.
Examples of such situations are numerous in biological
systems. In the subsumption architecture, only higher-level
behaviors inhibit lower ones.
In general, the subsumption architecture suffers the
same limitations of behavior-based systems. Such systems
require some method of arbitration such as the subsumption
architecture. The subsumption architecture is specifically
implemented as part of behavior-based systems motivated by
the desire of their designers to produce artificial insects.
Hence, such systems avoid world modeling and the explicit
representation of goals. Instead, as mentioned earlier, the
driving philosophy is that "the world is its own best
model", and goals are implicitly designed within the system
by the designer establishing, a priori, the interactions
between behaviors through the environment. This is a serious
limitation of such behavior-based systems since the designer
is required to predict the best action for the system to
take under all situations. Obviously, there is a limit to
how far one can foresee the various interactions and constraints in order to precompile the optimum arbitration strategy [Maes 90].

3.2 Control Architecture: Behavior-Based vs. Traditional Control
As mentioned before, the control architecture of an
autonomous intelligent mobile robot can be modeled as either
a serial or a parallel decomposition of the perception-
action control path, figure 3.1. The more traditional
approaches [Crowley 85] [Kriegman 89] [Moravec 85] are
serial in nature where the control path is decomposed into a
few modules in series such as: 1) Sensing, 2) Modelling, 3)
Planning, 4) Actuation. In the parallel decomposition
approach [Brooks 86a] [Connell 89] [Payton 86] multiple
parallel control paths or layers exist such that the
perceptual load is distributed. Each layer performs a
specialized goal or behavior and processes data and issues
control commands in a manner specific to its own goals. In
[Brooks 86a] these layers of control correspond to levels of
competence or behaviors with the lower layers achieving
simple tasks such as avoiding obstacles and the higher
layers incrementally achieving more complex behaviors such
as identifying objects and planning changes to the world. In
this hierarchical set of layers, each layer is independent
from the others in performing its own tasks even though
higher level layers may influence lower level layers by
inhibiting their output or suppressing their input. Unlike
the traditional approach which has the disadvantage of
imposing an unavoidable delay in the sensor to actuator
loop, the layered approach enjoys direct perception to
action through concurrency where individual layers can be
working on individual goals concurrently. In this case
perception is distributed and customized to the sensor-set/task-set pair of each layer. This eliminates the need for the
robot to make an early decision on which goals to pursue.
Another advantage of the parallel approach is flexibility.
Since modules or layers are independent, each having its
own specialized behavior and goals, it is possible that each
may have its own specialized interface to the sensors and
actuators. Flexibility stems from the fact that the
interface specification of a module is part of that module's
design and does not affect the other modules' interfaces.
This contradicts the traditional serial approach where a
modification to a module's interface might require
modification of at least the previous and the following
modules if not the whole system. Moreover, all of the
modules in the serial approach must be complete and working
before the system is operational, while on the other hand, a
behavior-based system can still produce useful behavior
before all the modules are complete. The main disadvantage
of a behavior-based approach is that the robot may exhibit cyclical behavior patterns due to the lack of memory within some behaviors, that is, such
behaviors do not remember previous events and base their
decisions solely on the latest sensor stimuli. This prevents
the robot from responding to events which happen over
several time periods and could cause cyclical behavior. In
our initial simulation of a robot wandering with a fixed
control algorithm, our robot exhibited cyclical behavior.
Anderson and Donath [Anderson 88] presented an approach
based upon the use of multiple primitive reflexive
behaviors, and came to the conclusion that "... cyclical
behavior may indicate that an essential characteristic of a
truly autonomous robot is the possession of memory and
reactive behavior (i.e., the ability to react to events
which occur over a number of intervals of time and the
ability to alter the behavior depending upon the previous
behavior)." [Anderson 88, p. 205]. Brooks and Connell
[Brooks 86b] have also observed cyclical behavior in their
wandering and wall following behaviors.
The majority of mobile robot projects follow somewhat
the traditional approach. In this approach, world
representation and sensory confirmation of that
representation are essential to the intelligence of an
autonomous mobile robot. A composite world representation is
generated through the integration of the various local
representations which are themselves formed by the fusion of
data from the multiple sensors onboard. Fusion at various
levels of abstraction is performed in order to produce the
representation useful to the planning subsystem. Unlike the
behavior-based approach where each behavior task employs
its own specialized representation, the representation
employed in the traditional approach is general purpose and
thus is useful for a variety of situations and planning
tasks. In contrast, the behavior-based approach employs a
variety of specialized representations (each derived from a
small portion of the sensors data) for use by a number of
concurrent planning tasks resulting in many distinct,
possibly conflicting, behaviors. Thus, the need to perform
"behavior fusion" arises in this case as opposed to sensor
fusion in the traditional approach. Behavior fusion
sacrifices the generality obtained by sensor fusion in order
to achieve immediate vehicle response, while sensor fusion
sacrifices immediacy for generality. The immediacy versus
assimilation tradeoff issue is adequately presented by
[Payton 86].
3.3 Issues in World Model Construction
In this section we examine the issues of constructing a
world model of an autonomous mobile robot. The environment
of a mobile robot is often unstructured and contains objects
either as obstacles to avoid or as items to be examined or
manipulated. In the traditional approach, a mobile robot
must build and use models of its environment. This model
must be accurate and must remain consistent as the robot
explores new areas or revisits old ones [Chatila 85].
Handling inconsistencies in world model construction of a
multi-sensor system is one of the main problems tackled by
the proposed research. In order to construct an accurate and
consistent model of the environment, the robot must be able
to correctly determine its position and orientation. This is
a difficult task given that sensors are imprecise.

3.3.1 Position Referencing for a Mobile Robot
A mobile robot can achieve position referencing by any
of the following methods:
Trajectory Integration Referencing. Uses odometric devices such as shaft encoders without external reference. These methods are prone to errors (due to wheel slippage) that are cumulative and cause position drift (see the sketch following this list).
Absolute position referencing. Uses fixed known
external beacons throughout the environment. The more
external beacons we place at known absolute positions in the
environment, the more structured this environment becomes.
In this case the errors in the robot's position and
orientation are related to the beacon system measurement
accuracy.
Relative position referencing. Is performed with
respect to objects with characteristic features whose
positions in the environment are known with good accuracy.
This method is very desirable yet it introduces considerable
complexity. A challenging task in this case is for the robot
to define its own references.
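The sketch below (a standard differential-drive approximation with hypothetical names and units, not our robot's actual odometry code) illustrates trajectory integration referencing and why its errors accumulate:

import math

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """Integrate one pair of wheel-encoder distances into the robot pose.

    Slippage errors enter at every call, so the estimate drifts over time.
    """
    d = (d_left + d_right) / 2.0              # distance travelled by the center
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

print(dead_reckon(0.0, 0.0, 0.0, 0.10, 0.12, 0.4))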

3.3.2 World Model Representation
Various world model representations have been proposed.
The choice of an adequate representation depends on the
domain (indoor, outdoor, factory environment, etc..), the
task (navigation, manipulation, identification, etc..), and
the sensors used. Crowley [Crowley 87, 85] suggested a
representation in terms of line segments in a 2-D floor plan
world. Kak et al. [Kak 87] also used a model represented as a
2-D line drawing of the expected scene. Chatila and Laumond
[Chatila 85] used three types of models: geometrical,
topological, and a semantic model. The geometrical model is
a 2-D model containing the position in absolute coordinates
of the ground projection of vertices of polyhedral objects.
The topological model is represented by a connectivity graph
of places where a place is defined as an area that is a
functional unit such as a workstation or a topological unit
such as a room or a corridor. The semantic model is a
symbolic model containing information about objects, space
properties, and relationships. Kent et al. [Kent 87] proposed
a representation of the world that consists of both a
spatial and an object or feature-based representation. The
spatial representation classifies the world space as
occupied, empty, or unknown, and explicitly represents
spatial relationships between objects. The feature-based
representation associates each object with the set of
features that verifies its identity. Elfs [Elfs 89]
described the occupancy grid representation which employs a
2-D or 3-D tessellation of space into cells, where each cell
stores a probabilistic estimate of its state. The state
associated with a cell is defined as a discrete random
variable with two states, occupied and empty.
This research adopts a representation similar to that
proposed by Kent but the spatially-indexed representation
employs a 2-D tessellation of space where each cell in the
grid, not only contains its state of occupied, empty, or
unknown, but it also contains information such as what
object the cell (if occupied) belongs to and whether it is a
boundary point or an edge point, etc.. This representation
is useful for navigation in computing free paths, and for
determining the identity of objects or features in a given
location. The object-indexed representation is linked to the
spatial representation and contains entries such as the
object's name, vertices, bounding edges, and other
discriminating features. This representation is suited to
responding to inquiries about objects or features by name or
by description. In addition to the two representations
mentioned above, we also construct a 2-D line representation
from sonar data. All these representations are implemented
under the proposed Sensory Knowledge Integrator framework
described in section 4.5.1. We describe the implementation
of the spatially-indexed representation in section 5.2.1,
while section 5.2.3 describes the implementation of the 2-D
line representation. This representation is used for
position referencing of the robot as described in section
5.2.4.
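The following sketch (field names are illustrative only; the actual implementation is described in section 5.2.1) conveys the flavor of one cell in the spatially-indexed grid:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class State(Enum):
    UNKNOWN = 0
    EMPTY = 1
    OCCUPIED = 2

@dataclass
class Cell:
    """One cell of the spatially-indexed 2-D tessellation."""
    state: State = State.UNKNOWN
    object_id: Optional[int] = None   # which object an occupied cell belongs to
    boundary: bool = False            # is it a boundary point of that object?
    edge: bool = False                # is it an edge point?

grid = [[Cell() for _ in range(64)] for _ in range(64)]   # a 64 x 64 grid
grid[10][12] = Cell(State.OCCUPIED, object_id=3, boundary=True)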
3.3.3 Managing World Model Inconsistencies
At any point in time the robot has a global and a local
model of its environment. The local model is robot-centered
and represents the environment as perceived at the moment by
the various sensors onboard. The robot is first faced with
the challenge of constructing the local model by integrating
information from the various sensor systems with a priori
knowledge about the environment. Next it must update the
global model using the local model and its position and
orientation information. The difficult problem is to
maintain a consistent model given inaccuracies in position
measurements and in sensor data and its processing elements.
Inconsistencies between the global and the local models must
be resolved in order to avoid model degradation. Resolving
these inconsistencies should improve the accuracy of the
position and orientation information.
A variety of methods have been proposed for model
consistency checks. Crowley [Crowley 85] defines a function
named CORRESPOND that is called for every line in the sensor
model (most recent sensor data described in terms of line
segments) to be matched with every line segment in the
composite local model (an integration of recent sensor
models from different viewing angles). This function tests
the correspondence of two line segments by checking: (1) If
the angle between the 2 segments is less than a certain
threshold, (2) If the perpendicular distance from the
midpoint of one segment to the next is less than a
determined threshold, and (3) If one segment passes through
a bounding box (tolerance) around the other. The outcome of
this test gives five types of correspondence. In a later
paper Crowley [Crowley 87] uses the normal distribution to
represent spatial uncertainty, and the normal distance as a
measure for matching the model parametric primitives (e.g.,
lines, edge segments) to the observed ones. He defines the
function SIMILAR which returns true if the normal distance between the two primitives is less than a certain threshold, and false otherwise. The function CORRESPOND now
consists of a simple set of attribute tests using the
function SIMILAR.
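As an illustration of the first two tests (our own sketch with made-up thresholds; the third, bounding-box test is omitted for brevity), a crude CORRESPOND might look as follows for segments given by their endpoints:

import math

def correspond(seg_a, seg_b, max_angle=0.2, max_dist=0.15):
    """Approximate the first two correspondence tests for two line segments.

    A segment is ((x1, y1), (x2, y2)); thresholds are illustrative.
    """
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)

    def midpoint_to_line(seg, line):
        (x1, y1), (x2, y2) = line
        mx = (seg[0][0] + seg[1][0]) / 2.0
        my = (seg[0][1] + seg[1][1]) / 2.0
        num = abs((y2 - y1) * mx - (x2 - x1) * my + x2 * y1 - y2 * x1)
        return num / math.hypot(x2 - x1, y2 - y1)   # perpendicular distance

    da = abs(angle(seg_a) - angle(seg_b)) % math.pi
    da = min(da, math.pi - da)                      # segments are undirected
    return da < max_angle and midpoint_to_line(seg_a, seg_b) < max_dist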
Andress and Kak [Andress 87] define a COLLINEARITY and
a NONCOLLINEARITY function as a measure of compatibility and
incompatibility respectively between a model edge segment
and an observed one. These compatibility measures form the
initial "basic probability assignment" for the observed
segments. The Dempster-Shafer formalism is used for belief
update in the face of new evidence.
Our approach to consistent world modeling is similar to
that of [Crowley 87]. It is implemented as part of the
Sensory Knowledge Integrator framework. Section 5.2.4 of
chapter five details the implementation of our method. The
parameters of observed and model lines are matched using the
correlation and consistency checking techniques described in
section 2.5.1 of chapter 2. A successful match indicates
that all orientation, collinearity, and overlap tests have
been satisfied. Next, we merge the parameters of the two
matched lines using estimation techniques also described in
section 2.5.1. These techniques use the standard Kalman
filter equations. The merged estimate with reduced
uncertainty is then compared to the observed lines to
determine the error in robot position and orientation.
Section 5.2.4.2 details such operations.
In case no match exists between a model feature (what
the sensors should be seeing) and an observed feature (what
the sensors are actually seeing), then a conflict exists and
must be resolved. In our implementation we did not encounter
such conflicts and we did not set up our experiments in order
to obtain a conflict. Instead, in what follows, we propose
how inconsistencies could be approached under the Sensory
Knowledge Integrator framework. We propose a knowledge-based
approach for resolving the inconsistencies where the system
generates resolution tasks to resolve the conflict, and
reasons about the conflict using a priori domain-dependent
and domain-independent knowledge. At higher levels of
abstraction, the conflict resolution task utilizes symbolic
endorsements which constitute a symbolic record of the
object-specific evidence supporting or denying the presence
of an object instance. Thus, we deal with reasons for
believing or disbelieving a certain hypothesis by
accumulating endorsements from a priori expectations about
the object, and from sub-part and co-occurrence support for
the object.
3.4 Direction of Proposed Research
As mentioned in chapter 1, this research follows a
hybrid (behavior-based and cognitive) approach to the
problem of controlling an autonomous mobile robot. The goal
is to enable autonomous operation in a dynamic, unknown, and
unstructured indoor environment. The robot knows about
certain objects to expect in the environment, but does not
have a map of it. The robot has considerable general
information about the structure of the environment, but
cannot assume that such information is complete. It
constructs a model of the environment using onboard
sensors. We aim to develop a general purpose robot useful
for a variety of explicitly stated user tasks. The tasks
could either be general or specific including such tasks as
"do not crash or fall down", "build a map", "locate all
green drums" etc..
Unlike most behavior-based approaches which avoid
modelling of the world and the use of world knowledge, our
view is that while world models are unnecessary for low-
level actions such as wandering around while avoiding
obstacles, they are essential to intelligent interaction
with the world. General, efficient, and flexible navigation
of a mobile robot requires world models. The world models
provide a "bigger picture" for the robot when
reflexive/reactive behaviors encounter difficulty. Such
difficulties include trap situations due to local minima
causing cyclic behavior, oscillations in narrow passages or
in the presence of obstacles, and inability to pass between
closely spaced obstacles [Koren 91][Brooks 86b][Anderson
88]. Trap situations are caused by various topologies of the
environment and by various obstacle configurations such as a
U-shaped configuration. [Arkin 90] gives a vivid example of
the difficulties encountered by reactive control as
resembling the "fly-at-the-window" problem. This situation
arises when the insect expends all of its energy trying to
go towards the sunlight outside through the glass of the
window. Our work dynamically builds a map of the environment
within the Sensory Knowledge Integrator framework. As we
will see in section 4.3.1, this framework contains a variety
of knowledge sources including different types of "trap
detector" knowledge sources that use map data to look for
traps. This information is used by the planning module to
avoid or recover from traps. Map data is used to reconfigure
the lower-level behaviors and not to replace the function of
these behaviors. In this manner the environment is more
efficiently explored while the robot still enjoys the real
time operation of the reflexive behaviors. Additionally, the
construction of a general purpose world model makes use of
the available world knowledge. For example, the Sensory
Knowledge Integrator, the underlying framework of our map
builder module, exploits a priori knowledge about the
environment such as objects to be encountered or
manipulated. The a priori knowledge gives the robot an idea
about its relationship to the world and allows it to
efficiently use its resources.
The architecture for planning and control of a mobile
robot should be a synthesis of the traditional serial
approach and the parallel behavior-based approach. This
architecture consists of a hierarchy of control in which
lower level modules perform "reflexive" tasks while higher
level modules perform tasks requiring greater processing of
sensor data. We call our architecture a hybrid one because
it includes a cognitive component and a behavior-based
component, figure 3.2. A robot controlled by such a hybrid
architecture gains the real-time performance of a behavior-
based system while maintaining the effectiveness and goal
handling capabilities of a planner with a general purpose
world model. The basic instinctive competences for the robot
such as avoiding obstacles, maintaining balance, wandering,
moving forward, etc., are provided by the behavior-based
component of the system, while the cognitive part performs
higher mental functions such as planning. The higher the
competence level of the behavior-based system, the simpler
the planning activity. Motivated behaviors implemented as
part of the behavior-based system, and the associated
motivation state, form the interface between the two
components. A motivated behavior is triggered mainly by the
associated motivation state.

Figure 3.2 Model of hybrid control.

By merely setting the
motivation state of the robot, the cognitive module
activates selected motivated behaviors in order to bias the
response of the behavior-based system towards achieving the
desired goals. The details of plan execution are left to the
behavior-based subsystem. The motivation state consists of a
variety of representations, each associated with the
corresponding motivation-driven behavior. It is the means of
communication between the cognitive and the behavior-based
subsystems, and could be thought of as a collection of
virtual sensor data. The "target-nav" behavior, discussed in
section 4.2.4 of our implementation, is an example of a
motivation-driven behavior. Its motivation state is
represented as a location that is set by the planning
module. In addition to its virtual sensor data input, it
also acquires data from the real position shaft encoder
sensors in order to generate a vector heading towards the
target. The arbitration of the various behaviors in the
behavior-based system competing for control of the robot
actuators, is partly hardwired in the behavior-based system,
and partly encoded in a flexible arbitration strategy in the
cognitive system. The flexible strategy changes during the
operation of the robot depending on the current situation
and the task at hand. In section 4.3 of the next chapter, we
discuss various arbitration strategies including the one
used in our implementation.
As mentioned earlier, we view world models as essential
to intelligent interaction with the environment, providing a
"bigger picture" for the robot when reflexive behaviors
encounter difficulty. Thus, the research proposed in this
paper adopts a knowledge-based approach to the problem of
constructing an accurate and consistent world model of the
environment of an autonomous mobile robot. We propose to
construct this model within the framework of the Sensory
Knowledge Integrator proposed in [Bou-Ghannam 90a,b] and
described in chapter 4. A more accurate model is obtained
not only through combining information from multiple sensory
sources, but also by combining this sensory data with a
priori knowledge about the domain and the problem solving
strategy effective in that domain. Thus, multiple sensory
sources provide the added advantages of redundancy and
compensation (where the advantages of one sensor compensate
for the disadvantages or limitations of the other) while
domain knowledge is needed to compensate for the
inadequacies of low-level processing, as well as to generate
reasonable assumptions for the interpretations of features
derived from lower-level sensory data. In addition we
propose to achieve a consistent model by making explicit the
knowledge about the inconsistencies or conflict, and using
this knowledge to reason about the conflict. This is similar
to the theory of endorsement proposed by Cohen [Cohen 85]
where resolution tasks are generated, and positive and
negative endorsements are accumulated in order to resolve
the conflict.

CHAPTER 4
THE PROPOSED HYBRID CONTROL ARCHITECTURE
We propose an architecture that is a synthesis of the
parallel decomposition and the traditional serial
decomposition of a mobile robot control system. Both of
these architectures were discussed and compared earlier.
Figure 4.1 depicts the general framework of our hybrid
control architecture while figure 4.2 shows a block diagram
of a specific implementation of the proposed architecture.
The functions of the various blocks in the figures will
become clear as we proceed through the chapter. The hybrid
architecture supports a hierarchy of control in which the
various lower level modules (such as "avoid-obstacles",
"target-nav", "follow-wall", etc..) perform reflexive
actions or behaviors providing a direct perception to action
link, while the higher level modules (such as the map
builder and the planning modules) perform tasks requiring
greater processing of sensor data such as modelling. It is
important to emphasize parallelism or concurrency in the
hybrid architecture. For example, in figure 4.2, the
planning module, the map builder, and the lower-level
behaviors are all running concurrently. The lower-level
behaviors constitute the behavior-based subsystem and
provide the basic instinctive competences, while the higher-
level modules provide the cognitive function. Note that
action takes place only through the behavior-based
subsystem. When needed, the cognitive (planning) module
effects action by reconfiguring the behavior-based system.
Reconfiguration involves arbitration, and changing the
motivation state of the behavior-based subsystem, as will
become clear later in this chapter. We call this
decomposition a hybrid parallel/serial decomposition because
even though the whole system follows a parallel layered
decomposition, the higher level modules (namely the map
builder and planning modules) follow somewhat the
traditional serial approach of sensing, modelling, and
planning before task execution or actuation occurs. However,
unlike the traditional approach, the planning module does
not have to wait on the map builder for a highly processed
representation of the environment before it can effect
action. Instead, based on its current knowledge and the
status provided by the lower level behaviors, the planning
module can select from a variety of lower level behaviors or
actions. Thus, unlike the subsumption architecture proposed
by [Brooks 86a] where any behavior subsumes the function of
all the behaviors below it (by inhibiting their outputs or
suppressing their inputs), in our implementation the
arbitration strategy is incorporated in a set of rules in
the planning module. In what follows we discuss the
individual blocks of our proposed architecture starting with
the planning module.

Figure 4.1 A general framework of the proposed hybrid
control architecture.

Figure 4.2 A specific implementation of the hybrid control
architecture.

4.1 The Planning Module: A Knowledge-Based Approach
The planning module performs reasoning and task
planning to accomplish user specific tasks, such as locate
all green drums in a warehouse, for example. In order to
accomplish such tasks, the planning module performs
various tasks including map-based planning (such as route
planning) and behavior reconfiguration. For example, in our
implementation of the planning module, discussed in detail
in section 5.1, the task is to efficiently map the
environment without crashing into things. Thus our planning
module performs some simple map-based planning functions,
but deals mainly with behavior arbitration. The arbitration
strategy that effects the behavior reconfiguration is
embedded into a set of production rules in the planning
module. In determining the current behavior arbitration, the
arbitration rules utilize knowledge about the goals, the
environment, the individual behaviors, the current situation
status, etc., and thus create a flexible, goal-driven
arbitration strategy that is useful for a variety of
missions in different types of domains. We will learn more
about arbitration and the arbitration network in section
4.3. It is important to note that behavior reconfiguration
is only used when the behavior-based system encounters
difficulties such as a trap situation as described in
section 3.4. Various trap situation analogous to the "fly-
at-the-window" situation can be detected in the map builder
module, and recovered from or avoided using heuristic rules
in the planning module. Reconfiguration is accomplished by
changing the motivation state of the motivation-driven
behaviors, and by performing some behavior arbitration.
Thus, the planning module selects (enables and disables)
from a variety of behaviors (such as avoid-obstacles,
target-nav, follow-wall, etc.) the appropriate set of
behaviors at the appropriate time for the task at hand. For
example, in our implementation, given the task of mapping the
environment, when the 2-D line representation becomes
available, the corner detector knowledge source in the map
builder examines this representation and posts a corner
hypothesis on the hypothesis panel. This in turn causes a
specific arbitration rule in the planning module to fire,
selecting the target-nav behavior with the location of the
corner to be investigated as its target. Thus, the behavior
selection is based upon
status inputs from the various behaviors and from the map
builder module which, in time, will also provide a high-
level spatially-indexed and object-indexed representation of
the environment. In addition, the planning module uses a
priori knowledge about the task, the environment, and the
problem solving strategy effective in that domain. For
example, if the robot's task is to locate all green drums of
a certain size, a priori knowledge (such as "drums of this
size are usually located on the floor in corners") can
greatly help the task of searching for the drums. This
knowledge about the task and objects in the environment,
coupled with knowledge about the environment (such as in an
indoor environment, corners are the intersection of two
walls, etc.) can be brought to bear to improve the
efficiency of the search. Other types of a priori knowledge
include self knowledge such as the diameter and height of
the robot, the physical arrangement of the sensors onboard
the robot, and types and characteristics of sensors used.
For example, knowing the diameter of the robot helps in
deciding whether to let the robot venture into
a narrow pathway. Note that having a priori knowledge about
the environment does not mean that the environment is known,
instead it means that the robot knows about certain objects
to expect in the environment (for example, walls, doors and
corners in an indoor environment) but does not have a map
of it.
One type of reasoning performed by the planning module
involves the detection and prevention of cyclic behavior
patterns exhibited by the robot when driven by the lower-
level reflexive behaviors. For example, if in the wander-
while-avoiding-obstacles behavior, the robot gets stuck in a
cyclic pattern (similar to the fly-at-the-window situation)
giving no new information to the map builder, the map
builder forwards its status to the planning module including
the location of the centers of various frontiers that form
the boundary between empty and unknown space. The planning
module enables the target-nav behavior to choose one of
these centers as its target. When this target is reached
with the robot oriented towards the yet unknown area, the
planning module might enable the wander behavior again to
allow discovery of unknown areas. The actual implementation
of the planning module is presented in chapter 5 and follows
a knowledge-based approach. We use CLIPS, a knowledge-based
system shell, as an implementation tool to represent and
reason with the knowledge used to accomplish the tasks of
the planning module. A review of CLIPS is given in the
appendix.
4.2 Lower-Level Behaviors
As mentioned earlier, the behavior-based subsystem
provides the robot with the basic instinctive competences
which are taken for granted by the planning module. The
subsystem consists of a collection of modular, task-
achieving units called behaviors that react directly to
sensory data, each producing a specific response to detected
stimuli. These behaviors are independently running modules
that perform specific tasks based on the latest associated
sensor data. Each of these modules is associated with a set
of sensors needed to perform the specialized behavior of the
module. Sensor data is channelled directly to the individual
behavior allowing for "immediate" reaction. Thus each module
constructs from its sensor data a specialized local
representation necessary to effect the behavior of the
module. The responses from multiple behaviors compete for
the control of the robot actuators, and the winner is
determined by an arbitration network. The behaviors
comprising our behavior-based subsystem are reactive,
characterized by a rigid stimulus-response relationship with
the environment. The response of a reactive behavior is
deterministic and strictly depends on the sensory stimulus
(external environment and internal state). It is not
generated by a cognitive process with representational
structures. The model of our reactive behavior is given in
figure 4.3. The function F(Si) represents the deterministic
stimulus-response function, while the threshold x is
compared to F(Si) before an output response is triggered. In
our implementation, the various thresholds are
experimentally determined. In general, the threshold
represents one of the important parameters for behavior
adaptation and learning. Thus, adhering to the biological
analogy, the threshold can be set by other behaviors
depending on the environmental context and the internal
state. We further divide our reactive behaviors into
reflexive and motivation-driven behaviors. The motivation-
driven behaviors are partly triggered by the motivation
state of the robot (set by the planning module in our
implementation, but could theoretically be set by other
behaviors), while the response of a reflexive behavior is
driven only by external stimuli. Reflexive behaviors
constitute the protective instincts for the robot, such as
avoiding obstacles and maintaining balance, while motivated
behaviors execute goal-driven tasks triggered by the
associated motivation state set by the planning module in an
effort to bias the response of the behavior-based system
towards achieving the overall mission. Such motivated tasks
include moving to a specified target location as in the
"target-nav" behavior.

Figure 4.3 Model of a reactive behavior.

Figure 4.4 Sonar sensor repulsive force model.
In addition to providing the specific task-achieving
functionality, the behaviors also serve as abstraction
devices by providing status information to the planning
module. Such status information includes error status
variables and operating condition variables. The
error status variables indicate errors such as
robot communications error, or sonar data error. The
operating conditions variables represent a behavior's
operating conditions such as "target-reached" or "break-
detected" for example.
Some of the behaviors we are interested in include
"avoid-obstacles", "wander", "target-nav", "boundary-
follower", and "path follower".
4.2.1 Avoid obstacles behavior
This behavior uses sonar information from a ring of 12
sonar sensors placed around the robot. Each sonar hit (range
data) is modeled as the site of a repulsive force whose
magnitude decays as the square of the range reading (distance
the obstacle) of that sonar [Khatib 85], figure 4.4. The

79
avoid behavior determines the resultant repulsive force
acting on the robot by summing the forces sensed by each
sonar sensor, as follows:
F_res = Σ_{i=1}^{12} (1/r_i)²·ê_i

where ê_i is the unit vector along the axis of sonar i,
directed from the sensed obstacle toward the robot.
The magnitude of the repulsive force is then compared to an
experimentally determined threshold value. If the magnitude
exceeds the threshold value then the repulsive force
represents the response of the avoid behavior as a vector
heading for the robot. This threshold is fixed
(experimentally determined and normalized to outputs of
other behaviors) in our implementation, but within an
adaptive behavior-based system it is not fixed and
constitutes a variable parameter which is adjusted by the
outputs of other behaviors as the situation demands. For
example, a "charge-battery" behavior may tune down the
effect of the "avoid-obstacles" behavior by raising its
threshold as the robot approaches the battery charging
station, allowing the robot to dock and connect to the
charging pole.
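The computation can be sketched as follows, assuming the 12 sonars are spaced 30 degrees apart with sensor i pointing at angle 30·i degrees; the function and parameter names are illustrative, not those of our implementation:

```python
import math

def avoid_obstacles(ranges, threshold):
    """Sum the repulsive forces of the 12 sonar hits; each force has
    magnitude 1/r_i^2 and points from the obstacle toward the robot
    (opposite to the direction sonar i faces)."""
    fx = fy = 0.0
    for i, r in enumerate(ranges):          # sonar i points at 30*i degrees
        angle = math.radians(30 * i)
        mag = 1.0 / (r * r)                 # decays as square of the range
        fx -= mag * math.cos(angle)         # repulsive: push the robot away
        fy -= mag * math.sin(angle)
    if math.hypot(fx, fy) > threshold:      # respond only above threshold
        return (fx, fy)                     # heading vector for the robot
    return None
```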
4.2.2 Random wander behavior
This behavior generates a random heading for the robot.
This behavior, coupled with the avoid obstacles behavior,
form an emergent behavior with the following functionality:
"wander at random without bumping into things".
Such an
emergent behavior is used in totally unknown environments
for random wandering. This is also useful for leading the
robot out of trap situations.
4.2.3 Boundary following behavior
This behavior uses sonar scan data to determine the
direction for safe wandering or the nav-vector as we call
it. In this behavior mode the robot keeps moving forward as
long as the range of the forward pointing sonar is larger
than a certain threshold, and the range values of the rest
of the sonar sensors are also larger than their respective
thresholds. The range data is dynamically read from a set of
12 sonar sensors onboard forming a ring around the robot.
When the conditions for a clear (no obstacle in the way)
forward motion are no longer satisfied, this behavior
determines the next clear nav-vector from the latest sonar
scan. Usually, many nav-vectors are valid and the one
closest to a forward direction is chosen, thus minimizing
the degree of turns for the robot. This also has the effect
of following the boundary of the indoor environment such as
walls. The algorithm involved is straightforward, fast, and
utilizes sonar range data directly with minimal processing
of that data. This behavior could also be used as a
generalized wandering behavior for map building. Note that
while this behavior is active, the map builder is busy
(concurrently) assimilating sensor data from the various
locations visited by the robot.
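A sketch of the nav-vector selection follows. It simplifies our implementation by using a single clear-distance threshold for all sonars (rather than per-sensor thresholds), and the names are ours; the returned value is a heading angle in degrees measured from the forward direction:

```python
def boundary_follow(ranges, clear_threshold):
    """Keep moving forward while all sonars read clear; otherwise
    pick the clear direction requiring the smallest turn, which in
    practice makes the robot follow walls and other boundaries."""
    if all(r > clear_threshold for r in ranges):
        return 0.0                           # keep heading straight ahead
    clear_dirs = [30 * i for i, r in enumerate(ranges)
                  if r > clear_threshold]    # candidate nav-vectors
    if not clear_dirs:
        return None                          # boxed in; avoid takes over
    # minimize the degree of turn (angles 0..330, wrapping at 360)
    return min(clear_dirs, key=lambda a: min(a, 360 - a))
```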

4.2.4 Target-Nav Behavior
This behavior is a location attraction behavior that
generates as its response a vector proportional to the
vector between the current location of the robot and the
specified target location. Robot position is provided by
wheel encoder sensors, while the target position is provided
by the planning module as part of the motivation state of
the robot. So, this is a motivation-driven behavior, the
motivation being to reach the target location. A robot R at
(xr, yr) is attracted to a target T at (xt, yt) by the
following heading vector:
V = RT = (x_t − x_r)·î + (y_t − y_r)·ĵ
The magnitude of this vector decreases as the robot
approaches the target, and the target-reached flag is set
when the magnitude drops below a certain threshold. In
addition, our implementation normalizes the magnitude and
sets an experimentally determined threshold value as its
maximum amplitude. As a result, the avoid-obstacles behavior
gains higher priority as the robot encounters obstacles on
its path to the target. This behavior, coupled with the avoid
obstacles behavior, attempts to navigate the robot towards the
target without bumping into obstacles. The robot actually
tries to go around obstacles towards the target. Results of
our experimentation show the robot successfully navigating
around obstacles placed in its path. Keep in mind that some
topologies of the environment will trap the robot in
unproductive cyclic maneuvers when operating under the
target-nav behavior.
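A sketch of this behavior, with hypothetical names, showing the amplitude cap and the target-reached flag:

```python
import math

def target_nav(robot_pos, target_pos, max_mag, reached_threshold):
    """Attraction vector V = (xt - xr, yt - yr), with its amplitude
    capped at max_mag and a target-reached flag raised when the raw
    magnitude drops below reached_threshold."""
    vx = target_pos[0] - robot_pos[0]
    vy = target_pos[1] - robot_pos[1]
    mag = math.hypot(vx, vy)
    target_reached = mag < reached_threshold
    if mag > max_mag:                        # cap the amplitude so that
        vx, vy = vx * max_mag / mag, vy * max_mag / mag  # avoid can win
    return (vx, vy), target_reached
```

Capping the amplitude is what lets the avoid-obstacles response dominate near obstacles while the attraction still pulls the robot toward the target in open space.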
4.3 Arbitration Network
Behavior-based systems require some mechanism for
combining or arbitrating the outputs of the different
behaviors. In the case of a behavior-based mobile robot,
each behavior generates a heading for the robot, but the
robot can only accept one heading at a time. The resultant
or winning heading is determined by the arbitration
mechanism. One method of arbitration is the subsumption
architecture [Brooks 86a] where higher-level behaviors
subsume lower-level ones by suppressing their input or
inhibiting their outputs. Arbitration, however, can be
achieved in many ways. The simplest method uses a priority
list where the behavior with the highest priority on the
list gets control of the actuators. Another method
implements a strategy where once a certain behavior gets
control, it remains in control as long as it is active
regardless of other behaviors. Once the stimulus for
activating the controlling behavior disappears, the behavior
releases control of the actuators. Other strategies involve
combining the outputs of the various behaviors in some kind
of formulation, such as the potential field method [Arkin
87].

Our approach for behavior arbitration implements simple
binary all or nothing switches in the arbitration network,
with the arbitration control strategy incorporated into a
set of production rules, figure 4.5. In our implementation,
the rules reside in the planning module running under CLIPS,
a knowledge-based systems shell. CLIPS implementation of the
arbitration strategy is discussed in section 5.1. We believe
that by encapsulating the arbitration strategy in a set of
rules under the control of the cognitive system, a robot
useful for a wide range of missions can be created. The
rules incorporate knowledge about the goals, the individual
behaviors, the environment, and the current situation
status. Note that the real-time performance of reactive
control is still maintained, since the behavior-based system
is always enabled. The arbitration rules in the cognitive
system reconfigure the arbitration switches of the behavior-
based system depending on the goal at hand, and the current
situation. For example, in our experiments, we initially
configure the behavior-based system into an "explore" mode
allowing the robot to wander around without bumping into
things. After the initial configuration, the cognitive
system leaves the behavior-based system alone. Later, when a
knowledge source in the map builder module discovers large
boundaries between the empty and the unknown areas of the
occupancy grid representation being constructed, and that
such boundaries are not being traversed by the robot, this
information is made available to the arbitration rules in
the planning module and brought to bear on the behavior-
based system configuration by activating the target-nav
behavior with the center of the discovered boundaries as
target location. When the robot arrives at its target and
crosses the boundary, it is now reconfigured to discover the
new unknown areas.
Adhering to the biological analogy, the switches in an
arbitration network of a behavior-based system are not
purely binary all or nothing switches, but are active
elements with adjustable gains and thresholds, as modeled in
figure 4.6 by the operational amplifier circuit. The
programmable gain of the switch varies from a minimum to a
maximum allowable value. When the gain is either 0 or 1,
then a simple binary switch is obtained. The threshold is
another adjustable value above which the response of the
input behavior will produce an output. The programmable gain
and threshold constitute the basic parameters for adaptive
behavior and learning. The best values for such parameters
are not readily apparent, but can be fine tuned by some
adaptive learning algorithm or neural network. [Hartley 91]
proposes the use of "Genetic Algorithms" to accomplish such
tasks. Using the switch model of figure 4.6, the
implementation of an arbitration network involves the
knowledge of which behavior affects (adjusts the parameters
of) which other behavior, by how much, and in which
situations or context. The wiring of such an arbitration
network depends upon the goals or the desired competences of
the robot.

Figure 4.5 Arbitration by production rules and superposition.

Figure 4.6 Model of an arbitration network switch.

In addition, within a single system many
arbitration networks may exist, each servicing a sub-group
of behaviors that determine a certain competence level as
shown in figure 4.1. In our view, each competence level is
developed separately, and the thresholds and gains
determined experimentally (by using a neural network, for
example). The behaviors shown in figure 4.7 give a simple
example of one competence level, the "explore" competence
level used by the robot to explore unknown environments for
map building tasks and local navigation. As shown in figure
4.7, this level includes an "avoid", a "random-wander", a
"boundary-following", and a "target-nav" behavior. The
latter three behaviors are mutually exclusive while either
of the three is complementary to the "avoid" behavior. One
arbitration scheme for this subset of behaviors could give
the "target-nav" the highest priority of the three behaviors
with the "boundary-following" behavior having a higher
priority than the "random-wander" behavior. Thus, for
example, once the "boundary-following" behavior is
triggered, it inhibits the "random-wander" behavior causing
the output of the "boundary-following" behavior to be
combined (superimposed) with the output of the "avoid"
behavior which has the highest priority.
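The switch model of figure 4.6 and the priority scheme just described can be sketched as follows. The names and the exact combination rule are illustrative assumptions rather than our precise implementation:

```python
import math

def switch(response, gain, threshold):
    """Arbitration switch of figure 4.6: pass a behavior's response
    only when its magnitude exceeds the adjustable threshold, scaled
    by the programmable gain (gain 0 or 1 gives a binary switch)."""
    if response is None or math.hypot(*response) <= threshold:
        return None
    return (gain * response[0], gain * response[1])

def explore_arbitrate(avoid, target_nav, boundary, wander):
    """Priority scheme for the 'explore' competence level: target-nav
    beats boundary-following beats random-wander; the winner is then
    superimposed on the avoid response, which has highest priority."""
    chosen = target_nav or boundary or wander   # highest active wins
    if chosen is None:
        return avoid
    if avoid is None:
        return chosen
    return (chosen[0] + avoid[0], chosen[1] + avoid[1])  # superposition
```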

4.4 Map Builder: A Distributed Knowledge-Based Approach
The map builder generates two linked representations of
the world: a spatially indexed and an object or feature
indexed representation. The spatially indexed representation
consists of a 2-D tessellation of space where each cell in
the grid contains information about its state whether
occupied, empty, or unknown, in addition to information such
as what object the cell (if occupied) belongs to and whether
it is a boundary point or an edge point, etc. This
representation is useful for navigation in computing free
paths, and for determining the identity of objects or
features in a given location. The object-indexed
representation is linked to the spatial representation and
contains entries such as the object's name, vertices,
bounding edges, and other discriminating features. This
representation is suited to responding to inquiries about
objects or features by name or by description.

We follow a knowledge-based approach as a framework for
the map builder since the goal of a multi-sensor system such
as the map builder, is not only to combine information from
multiple sensory sources, but also to combine this sensory
data with a priori domain-dependent and domain independent
knowledge. Thus, a powerful multi-sensor system must rely on
extensive amounts of knowledge about both the domain and the
problem solving strategy effective in that domain
[Feigenbaum 77]. This knowledge is needed to compensate for
the inadequacies of low-level processing, as well as to
generate reasonable assumptions for the interpretations of
features derived from lower level sensory data. Domain-
dependent knowledge consists of knowledge about domain
objects and ways of recognizing them. This involves semantic
descriptions of objects, semantic relationships between
objects, the use of interpretation context, experimentally
derived classification functions, and knowledge about the
task and the sensor. Domain-independent knowledge involves
general principles such as perspective distortion,
occlusion, and varying points of view.
The challenge of multi-sensory perception requires a
flexible inference strategy that supports both forward and
backward chaining. This flexibility allows the system to
dynamically alternate between data-driven and model-driven
strategies as the situation requires. The blackboard
framework allows for this flexibility [Nii 86a,b]. In a
blackboard framework forward chaining and backward chaining
steps can be arbitrarily interleaved. In addition, the many
knowledge sources have continual access to the current state
of the blackboard, and thus, can contribute
opportunistically by applying the right knowledge at the
right time.
Our proposed Sensory Knowledge Integrator [Bou-Ghannam
90b] follows a blackboard framework. The Sensory Knowledge
Integrator is described in detail in a technical report
[Bou-Ghannam 90a]. Its highlights will be discussed in the
remainder of this section.
An intelligent multi-sensor system maintains an
internal description of the world which represents its "best
guess" about the external world. This world model is built
using sensory input and a priori knowledge about the
environment, the sensors, and the task. Thus, the problem of
constructing a world model representation involves two types
of information fusion: 1) Fusion of information from
multiple sensory sources, and 2) Fusion of sensory data with
a priori knowledge and object models [Kent 87]. This
representation of the world consists of both a spatial and
an object or feature-based representation. The spatial
representation classifies the world space as occupied,
empty, or unknown, and explicitly represents spatial
relationships between objects. The feature-based
representation associates each object with the set of
features that verifies the identity of the object.

In previous work [Bou-Ghannam 90a,b] we introduced the
Sensory Knowledge Integrator (SKI), a knowledge-based
framework for sensor data fusion. What follows in this
section is a brief review of SKI, figure 4.8. SKI organizes
the domain knowledge and provides a strategy for applying
that knowledge. This knowledge, needed to describe the
environment being observed in a meaningful manner, is
embedded in data-driven and model-driven knowledge sources
at various levels of abstraction. These knowledge sources
are modular and independent emphasizing parallelism in the
SKI model. The knowledge sources use algorithmic procedures
or heuristic rules to transform information (observations
and hypotheses) at one level of abstraction into information
at the same or other levels. Processed sensory data in the
observations database cause the execution of the data-driven
knowledge sources while model data in the hypothesis
database cause the execution of model-driven knowledge
sources. This execution produces new data (on the
observations or hypothesis database) which in turn cause the
execution of new knowledge and the production of new data
until a high level description of the environment under
observation is incrementally reached. These high level
descriptions comprise the robot's local world model that is
continually updated by new sensor observations.
Data in the observations database range from intensity
and depth arrays at lower levels of abstraction, to lines,
edges, regions, surfaces, at intermediate levels, to objects
and their relationships at higher levels of abstraction. The
partitioning of the data into application dependent
hierarchies or levels of abstraction is essential because it
makes it easy to modularize the knowledge base. Thus,
certain knowledge sources become associated with a certain
level of data abstraction and could only be triggered by
data on that level of abstraction. This eliminates the need
to match all the knowledge sources to all the data. The
hypothesis database contains hypothesized high level goals
with hypothesized sub-goals derived from these goals in a
backward (top-down) reasoning scheme. The sub-goals are
matched to facts at lower levels of abstraction to assert
the validity of the hypothesis.
The control module handles the conflict resolution
among knowledge sources and thus determines what knowledge
source or group of knowledge sources to apply next. The
control module monitors the changes in the observations/
hypothesis database along with the potential contributions
of the related knowledge sources in the knowledge base and
determines the next processing steps or actions to pursue.
In other words, the control module determines the focus of
attention of the system. It contains knowledge about the
"big picture" of the solution space and, hence, can resolve
conflict among knowledge sources triggered by the current
situation data. We implement the control module in CLIPS.
To clarify the theoretical concepts introduced above,
section 5.2 of the next chapter discusses the implementation
of the map-builder under the Sensory Knowledge Integrator
framework. Specific knowledge sources for building
representations of the world are discussed. Figure 5.4 shows
a specific implementation of the Sensory Knowledge
Integrator.
Figure 4.8 Sensory Knowledge Integrator framework.

CHAPTER 5
EXPERIMENTAL SETUP AND IMPLEMENTATION
The ideal implementation of the hybrid architecture
presented in chapter four would include various specialized
hardware on-board the robot each dedicated to its special
task or behavior and directly controlling the actuators on
the robot. For example, the avoid-obstacles behavior should
ideally be implemented with some digital hardware such as
logic gates or a single-chip micro-controller generating an
output vector that will be fused with vectors from other
behaviors in the arbitration/superposition hardware
producing a resultant vector that directly controls the
actuators. Our implementation simulated these behaviors and
modules in software on general purpose computer workstations
(off-board the robot) that controlled a general purpose
mobile robot through the robot's dedicated PC controller
rather than directly through its actuators.
The robot system was setup in an indoor lab environment
(the Nuclear Engineering Robots Lab) with open space and
irregularly shaped boundaries consisting of walls, cabinets,
and computer stations. The dimensions of the lab were
approximately 8 meters by 5 meters. The floor consisted of
smooth tiles necessary for the operation of the wheeled
Cybermotion K2A robot. The system setup is shown in figure
5.1 with the computational load distributed over three
Silicon Graphics workstations, a personal computer dedicated
to the control of the mobile robot, and a sonar sensor
system with 12 sonar sensors arranged in a circle 30 degrees
apart on top of the mobile robot. Figure 5.2a shows a photo
of the robot with the sonar sensor arrangement on top, while
figure 5.2b shows a graphical simulation of the robot as it
appears on the displays of the graphics workstations. The
graphically simulated robot is a dynamic entity that follows
in real-time and great detail the motion of the actual robot
allowing a user to remotely monitor the activity of the
robot from any of the workstations. The graphical simulation
of the robot is a graphical object that can be manipulated
in different ways such as enlarging or shrinking its size,
moving or rotating it, or changing the viewing angle.
Each of the three workstations was physically located
in a different building and networked using a shared memory
networking software called HELIX (Shared Memory Emulation
System for Heterogeneous Multicomputing). HELIX was
developed at the Center for Engineering Systems Advanced
Research (CESAR) at Oak Ridge National Laboratory. A
description of HELIX is beyond the intentions of this paper,
and the reader is referred to a technical report by [Heywood
89]. The computational tasks were divided as follows: a
Personal IRIS workstation at the Machine Intelligence Lab in
Weil hall was dedicated to the CLIPS-based planning module,
an IRIS 4D workstation in Mechanical Engineering was

95
dedicated to the map builder implementation, and an IRIS
2400 in the Nuclear Engineering Robots Lab (where the robot
and its controller are physically located) performed the
lower-level behavior tasks. The latter workstation
communicated to the robot's dedicated PC controller via an
RS-232 serial link, and to the sonar sensors system via
another RS-232. The robot itself communicated to its PC
controller via a 2400 baud radio link. Later experiments
consolidated the functions of planning, map building, and
lower-level behaviors on one IRIS 4D workstation in the
Nuclear Engineering lab.
Figure 5.1 System setup.

a. Photo with arrangement of sonar sensors on top.
b. Graphical simulation of the robot.
Figure 5.2 The K2A Cybermotion robot.

5.1 Implementation of the Planning Module
As mentioned earlier the planning module follows a
knowledge-based approach to accomplish user specific tasks
by reasoning about task planning and behavior fusion using
current data and status provided by the map builder and the
lower-level behaviors, in addition to knowledge about the
environment and the task. We use CLIPS (C Language
Integrated Production System), a knowledge-based system
shell, as an implementation tool to represent and reason
with this knowledge. One of the main reasons for selecting
CLIPS is that it is written in and fully integrated with the
C language providing high portability and ease of
integration with external systems. We have described a CLIPS
implementation of a knowledge-based distributed control of
an autonomous mobile robot in [Bou-Ghannam 91], while a
brief review of CLIPS is given in appendix A. The general
structure of the CLIPS-based planning module is shown in
figure 5.3. Note that the planning module, the map builder,
and the lower-level behaviors (behaviors 0-n) are all
running concurrently. Only one of the lower-level behaviors
has control of the robot at any one time. The selection of
which of these behaviors to turn on is accomplished by the
planning module through the "behavior selector" user-defined
CLIPS external function. The planning module reasons about
behavior selection (or fusion) using the knowledge embedded
in CLIPS rules and the current situation facts provided by
status from the map builder (asserted through the "map

Figure 5.3 CLIPS implementation of the planning module.
98

99
builder interface" external function), and from the lower-
level behaviors (through the "status asserter" external
function). The CLIPS-based planning module communicates with
the lower-level behaviors through a locally setup shared
memory, while it communicates with the map builder using a
shared memory structure called HELIX.

Figure 5.3 CLIPS implementation of the planning module.
The "behavior selector" function is called from CLIPS
rules by a function call such as: (behavior-select "target-
nav" "on" ?xt ?yt) It simply posts in the local shared
memory the select state of the selected behavior (for
example shmptr->target_nav = 1), and waits for a select-
acknowledge from the selected behavior before returning to
CLIPS. If the select-acknowledge is not received within a
certain time interval the "behavior selector" asserts a
behavior communication error on the CLIPS facts list. This
function is simple and executes fairly quickly consuming on
the order of a tenth of a second from the time it is called
until it returns.
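The following sketch mimics this handshake, with a dictionary standing in for the local shared memory; the field names and the timeout value are illustrative assumptions, not those of our implementation:

```python
import time

def behavior_select(shm, name, on, timeout=0.5):
    """Post the select state of a behavior in the (simulated) local
    shared memory and wait for a select-acknowledge; on timeout,
    report a behavior communication error to be asserted as a fact."""
    shm[name + "_select"] = 1 if on else 0    # e.g. select target-nav
    deadline = time.time() + timeout
    while time.time() < deadline:
        if shm.get(name + "_ack") == shm[name + "_select"]:
            return "ok"                       # control returns to CLIPS
        time.sleep(0.01)                      # poll for the acknowledge
    return "behavior-communication-error"     # goes on the CLIPS facts list
```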
The "status asserter" function is called periodically
from CLIPS, and it checks the status on the local shared
memory posted by the various lower-level behaviors. Such
status include variables representing the select state of
the various behaviors, error status variables, operating
conditions variables, and performance measure variables. The
select state variables indicate which behaviors are on and
which are not. The error status include variables indicating
errors such as robot communications error, or sonar data
error. The operating conditions variables represent a
behavior's operating conditions such as "target-reached" or
"break-detected", for example. The performance measure
variables include such variables as a behavior's shared
memory access frequency. The "status asserter" function only
asserts the status variables that have changed since the last
inspection.
5.2 Implementation of the Map Builder
The map-builder with the Sensory Knowledge Integrator
(SKI) as its underlying framework, runs on an IRIS 4D
workstation and implements a variety of knowledge sources
and representations. Figure 5.4 shows our specific
implementation of the map builder under the SKI framework
with a variety of knowledge sources and representations. The
two representations used include an occupancy-grid-based
empty, occupied, or unknown (EOU) representation, and a 2-D line
representation. The EOU representation is a spatially-
indexed occupancy grid type of representation where space is
tessellated into cells each containing a value indicating
its state whether occupied, empty, or unknown. This
representation is described in section 3.3.2 and is useful
for map-based navigation strategies in computing free paths.
It is generated in our implementation by the EOU knowledge
source (KS) from sonar data and robot position and
orientation data, figure 5.4. Details of the EOU KS will be
discussed in the next section. The 2-D line representation
models the space in terms of 2-D line segments. This is
useful for outlining the boundaries of rooms and objects,
and it is mainly used in our implementation for robot
re-referencing, i.e., correcting the position and orientation
of the robot as it travels within its environment. A
composite local 2-D line model of the environment (the
observed lines in figure 5.4) is generated by the line
finder KS, while the match/merge KS (in the consistency KSs
group, figure 5.4) generates and updates the model lines
which are a composite global 2-D line model accumulated over
time from the various observed lines models. Some model-
driven KSs transform the global line model to the expected
local scene (i.e., what the robot should be seeing at the
moment). The match/merge KS compares the expected scene to
the observed lines (what the robot is actually seeing at the
moment) in order to update the global world model. If a
match exists, the re-reference KS uses the difference in
position and orientation of the two models to correct the
position and orientation of the robot. All the implemented
KSs will be discussed in greater detail in the following
sections.
5.2.1 The EOU Knowledge Source
This knowledge source (KS) generates the spatially-
indexed EOU representation from sonar sensor data and robot
position and orientation data. It takes advantage of all the
information available for an individual sonar scan including
the signal geometry of the sensor and its probabilistic
characteristics. The process of generating the EOU evolves
over several steps starting by first tessellating the space
into a finite grid of cells with each cell initialized to
the 'unknown' value. Then, sonar readings are analyzed, each
corresponding to a cone (figure 5.5), similar to the sound
energy burst of the sonar, which is overlaid onto the
tessellated space. At this point, cells that fall within the
cone are decremented (increasing the confidence that those
cells are in the empty state), while cells that fall at the
end of the cone are incremented (increasing the occupied
state of those cells).

Figure 5.5 Translation and rotation of a sonar template cone.

These steps will now be discussed in
greater detail:
Step 1. Grid initialization:
An initial 2-D occupancy grid array of 1000x1000
declared as short integer (1 byte) is set up, and each cell
initialized with a neutral value of 128 representing the
unknown state of the cell. The declaration of one byte per
cell was chosen for reasons of memory conservation. Thus,
the value of each cell ranges from 0 or definitely empty to
255 or definitely occupied, and the mid value of 128
indicating the unknown state. This grid of one million cells
represented an actual floor area of 10m by 10m with each
cell having an area of 1 cm². Thus, the length of an
individual grid cell was 0.1% of the total grid
length, and its area was 0.0001% of the total grid area. This resolution
was much finer than needed for navigation tasks, so a later
version used a 5cm by 5cm cell causing an increase in the
speed of generating the EOU representation, with less
memory, and without any degradation in task performance. The
resolution or the area of a grid cell is determined by many
factors such as memory capacity, size of the environment,
and the intended use of the EOU representation. If the
intention is to use the EOU to determine large empty or
occupied space, then a rough (low) resolution will do. On
the other hand, if we want to identify doorways and tight

105
places the robot could maneuver through, then a higher
resolution is needed.
Step 2. Sonar cone simulation:
Once the occupancy grid is initialized, we generate a
template 2-D cone with the same geometric dimensions of the
actual physical cone of sound waves emitted by the sonar
sensor. The template cone shown in figure 5.5 is 10m long
with a 12 degree beam angle (similar to the characteristics
of the Polaroid sonar sensor cone, see [Elfs 87] and
[Borenstein 88]), located at the origin in the x-direction,
and is tessellated into 105,000 1 cm² cells.
Step 3. Analysis of sonar readings:
The template cone generated in the previous step is
translated and rotated from its initial position and
orientation to the position and orientation of the sonar
sensor at the time of the scan with its length reduced from
10m to the actual range returned by the sonar sensor. This
is illustrated in figure 5.5. Next, we take into
consideration the probabilistic characteristics of the sonar
sensor (figure 5.6): the probability of space being empty is
high near the sensor and decreases as the target is
approached, while the probability of space being occupied
increases near the target. Accordingly, the value of each
cell within the resultant cone is decremented according to
its likelihood of being empty, while the values of cells at
the end of the cone are increased according to their
likelihood of being occupied. We use a simplified probabilistic model with
discrete steps, as shown in figure 5.6, to determine the
amount of increment or decrement.
Thus, this updating approach makes use of all the
information available for an individual sonar scan including
the sensor geometry and its probabilistic characteristics.
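A minimal sketch of the cone overlay follows. The grid dimensions, cell size, and increment/decrement amounts are illustrative assumptions, and a real implementation would translate and rotate the precomputed template cone of figure 5.5 rather than scan the whole grid as done here:

```python
import numpy as np

CELL = 0.05                                       # assumed 5 cm cells
grid = np.full((200, 200), 128, dtype=np.uint8)   # unknown everywhere

def update_eou(grid, sx, sy, heading, rng, beam=np.radians(12.0)):
    """Overlay one sonar cone on the grid: cells inside the cone
    become more likely empty (decrement), cells at the far end more
    likely occupied (increment), per the discrete steps of fig 5.6."""
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            dx, dy = i * CELL - sx, j * CELL - sy
            d = float(np.hypot(dx, dy))
            if d == 0.0 or d > rng + CELL:
                continue                          # beyond the range reading
            # signed angular offset of the cell from the sonar axis
            off = (np.arctan2(dy, dx) - heading + np.pi) % (2 * np.pi) - np.pi
            if abs(off) > beam / 2:
                continue                          # outside the beam angle
            v = int(grid[i, j])
            if d < rng - CELL:                    # inside the cone: empty
                grid[i, j] = max(v - 3, 0)
            else:                                 # end of the cone: occupied
                grid[i, j] = min(v + 10, 255)
```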

5.2.2 The Filter-Raw Knowledge Source
This KS overlays the raw sonar data over the EOU
representation and eliminates all raw points that are not
supported by the EOU representation. That is, if a sonar hit
(x, y) corresponds to a grid cell covering the same (x, y)
location of that hit, and if that cell is considered empty
in the EOU representation, then that raw data hit is not
supported by the EOU representation and is therefore not
passed as a valid hit to the line finding KS.
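The test reduces to a simple lookup; in this sketch the cell size and the cutoff below which a cell counts as empty are assumed values:

```python
def filter_raw(hits, grid, cell=0.05, empty_below=100):
    """Pass a raw sonar hit (x, y) to the line finder only when the
    EOU cell covering it is not confidently empty."""
    kept = []
    for x, y in hits:
        i, j = int(x / cell), int(y / cell)
        if grid[i, j] >= empty_below:      # cell not considered empty
            kept.append((x, y))
    return kept
```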
5.2.3 The 2-D Line Finding Knowledge Source
This KS generates a 2-D line representation of the
basic outlines of the environment. It is based mainly on the
Hough transform [Hough 62] [Gonzalez 87] and on a max-min
clustering algorithm [Tou 74]. A line is represented in its
normal (ρ, α) form (figure 5.7) as:

x·cos(α) + y·sin(α) = ρ

The following steps show how these algorithms are
implemented to find the observed lines from sonar and
position and orientation data.
Step 1. Initialize the Hough transform grid:
The (ρ, α) parameter space is subdivided into a grid of
accumulator cells as shown in figure 5.8. In our
implementation, α varies from α_min = 0 degrees to α_max = 359
degrees with Δα = 1 degree, and ρ varies between 0 and 5
meters with Δρ = 5 cm. The accumulator cells in the
resulting grid were all initialized with a zero value. That
is, A[i, j] = 0 for all i and j.

Figure 5.7 (ρ, α) representation of a 2-D line.

Figure 5.8 The parameter space grid.
Step 2. Perform the Hough transform:
For every sonar data point (x_k, y_k), where k = 0 to N,
the total number of data points, we find:

ρ_ki = x_k·cos(α_i) + y_k·sin(α_i)

for each of the allowed subdivisions α_i, that is, α_i = 0 to
359 degrees. Then, if ρ_min ≤ ρ_ki ≤ ρ_max, we increment the
corresponding accumulator cell:

A[(int)(α_i/Δα), (int)(ρ_ki/Δρ)] =
A[(int)(α_i/Δα), (int)(ρ_ki/Δρ)] + 1
Step 3. Filter out poorly supported lines:
In this step only the heavily supported lines in the
Mi-/ j] grid are retained. The test is as follows: For all i
and j, if A[i, j] > Threshold, then xk = i and yk = j, and
point Sk(xk, xk) is saved as the coordinates of a heavily
supported line in the parameter space. The index k acts as
counter for the heavily supported lines. Assume we obtain
'n' such lines.
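Steps 1 through 3 amount to the following sketch; the bin sizes mirror Δα and Δρ above, and the names are ours:

```python
import math

D_ALPHA = 1            # 1 degree bins
D_RHO = 0.05           # 5 cm bins
RHO_MAX = 5.0          # rho in [0, 5) meters

def hough_accumulate(points, support_threshold):
    """Steps 1-3: vote in the (alpha, rho) accumulator grid for every
    point, then keep only the heavily supported parameter cells."""
    n_rho = int(RHO_MAX / D_RHO)
    acc = [[0] * n_rho for _ in range(360 // D_ALPHA)]
    for x, y in points:
        for a in range(0, 360, D_ALPHA):
            rho = x * math.cos(math.radians(a)) + y * math.sin(math.radians(a))
            if 0 <= rho < RHO_MAX:
                acc[a // D_ALPHA][int(rho / D_RHO)] += 1
    # Step 3: retain the strongly supported cells S_k = (a_k, rho_k).
    supported = [(a, r) for a in range(len(acc))
                 for r in range(n_rho)
                 if acc[a][r] > support_threshold]
    return supported, acc
```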
Step 4. Cluster the points in the grid:
Here we follow the max-min clustering algorithm,
starting with no cluster centers.
1. Arbitrarily, let the first sample point S_1 = (x_1,
y_1) be the first cluster center C_1.
2. For all the sample points (k = 1 to n) find the
sample point furthest from C_1. That is, find S_k with the
maximum distance from C_1:

max{ d_k1 = sqrt[(x_k − x_1)² + (y_k − y_1)²] }

Let C_2 = S_k, and let the distance between C_1 and C_2 be d_12.
3. Compute the distance from each of the remaining
sample points to C_1 and C_2. That is, compute d_k1 and d_k2 for
k = 1 to n. For every pair of these computations, save the
minimum distance, i.e., save min{d_k1, d_k2} for all k.
4. Select the maximum of all the minimum distances
obtained in 3 above: dmaxmin(l) = max{min{d_k1, d_k2}} for all
k. The 'l' index indicates that the maximum distance
corresponds to the k = l sample point.
5. If dmaxmin > (1/2)·d_12 then C_3 = S_l, else terminate.
6. Repeat steps 3-5 with the additional cluster center.
In general, in step 5 if the maxmin distance is an
appreciable fraction of all the previous maxmin distances
(such as greater than the average of these distances) then
the corresponding sample becomes a cluster center. Otherwise
the algorithm is terminated, and all cluster centers are
determined.
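A sketch of this clustering follows. For simplicity it always compares the max-min distance against (1/2)·d_12 rather than the running average mentioned above; names are ours:

```python
import math

def maxmin_cluster(samples):
    """Max-min clustering over the supported parameter-space points.
    Start with two centers (an arbitrary point and the point farthest
    from it), then keep promoting the sample whose distance to its
    nearest center is largest, while that distance stays appreciable."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    centers = [samples[0]]                          # C1: arbitrary
    c2 = max(samples, key=lambda s: dist(s, centers[0]))
    d12 = dist(centers[0], c2)                      # distance C1-C2
    centers.append(c2)
    while True:
        # distance from every sample to its nearest center
        nearest = [min(dist(s, c) for c in centers) for s in samples]
        k = max(range(len(samples)), key=lambda i: nearest[i])
        if nearest[k] > 0.5 * d12:                  # appreciable: new center
            centers.append(samples[k])
        else:
            return centers
```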
Step 5. Find the lines corresponding to the clusters found
in step 4:
First, for every point of the total number of n data
points, find out what cluster the point belongs to. This is
accomplished by finding the distances from the point to each
cluster center. The point belongs to the closest cluster
center. Next, for every point (x_i, y_i) in cluster k we
calculate the average line parameters for that cluster:

α_k = [Σ(w_i·x_i)] / [Σw_i]

ρ_k = [Σ(w_i·y_i)] / [Σw_i]

where w_i = A[x_i, y_i].
Thus, we obtain a line for every cluster. The
parameters of the line (ρ, α) are weighted by the support of
each point in the original Hough grid.
Step 6. Group the raw data points as belonging to the lines:
For every raw data point find the perpendicular
distance to each of the lines obtained in step 5. This is
easy to accomplish since the lines are represented in the
normal form. Thus the perpendicular distance from a point
(x_i, y_i) to line k is given by:

d_ik = |x_i·cos(α_k) + y_i·sin(α_k) − ρ_k|

If d_ik < Threshold, then (x_i, y_i) belongs to line k and is
thus stored in group k of points. The threshold value used
represents the line uncertainty in ρ, i.e., the largest
distance from a point admitted to the line. This uncertainty
σ_ρ is used later in the consistency checking knowledge
sources.
Step 7. Sequentially order the points in the line groups
obtained above:
This is done by first projecting every point in the
line group onto the line, and then ordering the points
according to the x-coordinate of the projections so that the
group starts with points of minimum x-coordinates, followed
by points of incrementally increasing x-coordinate. If the
line is close to vertical, the points are grouped according
to the y-coordinate of the projections.

Step 8. Detect breaks in the line:
Once the points are sequentially grouped, we find the
distance d_ij between every pair of consecutive points P_i and
P_j in a group. If d_ij is larger than a certain threshold
(usually the diameter of the robot) then the line is split
into two line groups. The first starts with P_1 and ends with
P_i, and the second starts with P_j and ends with P_n. We
continue looking for splits in group P_j–P_n.
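Steps 7 and 8 can be sketched together. Instead of switching between x- and y-coordinates for near-vertical lines, this version sorts by the along-line coordinate of each projection, which is equivalent; names are ours:

```python
import math

def order_and_split(points, alpha_deg, robot_diameter):
    """Order the points of one line group along the line, then split
    the group wherever consecutive points are farther apart than the
    robot diameter."""
    a = math.radians(alpha_deg)
    # Step 7: sort by the along-line coordinate of each projection;
    # for the line x*cos(a) + y*sin(a) = rho the line direction is
    # (-sin(a), cos(a)).
    pts = sorted(points, key=lambda p: -p[0] * math.sin(a) + p[1] * math.cos(a))
    # Step 8: detect breaks between consecutive points.
    groups, current = [], [pts[0]]
    for prev, nxt in zip(pts, pts[1:]):
        if math.dist(prev, nxt) > robot_diameter:
            groups.append(current)        # break detected: close group
            current = []
        current.append(nxt)
    groups.append(current)
    return groups
```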
5.2.4 The Consistency Knowledge Sources.
The consistency knowledge sources maintain a consistent
world model (the 2-D lines model) as the robot visits new
places or revisits old ones. These KSs consist of the
Match/Merge KS and the Re-reference KS. The Match/Merge KS
is triggered once a new set of observed lines is generated
by the line-finder KS. These observed lines are matched to
model lines using the consistency checking techniques
described in chapter 2. If a model line is found to be
consistent with an observed line, then the two lines
represent the same physical line and will be merged into a
new line with reduced uncertainty using the fusion
techniques, also from chapter 2. The corresponding model line
is updated with the new merged line. If at the start of the
process no model lines exist yet, the model lines are
initialized to the observed lines. In addition, if an
observed line could not be matched to any model line, then
either this line belongs to an area of the environment not
modeled yet, or a sensing error has occurred. To resolve
such conflict, a new-area-visited hypothesis is posted on
the hypothesis panel to be confirmed or denied. If it is
found that the observed line comes from a newly visited area
not modeled yet, then it is merely added to the set of model
lines.
As the robot travels, odometric errors in its position
and orientation accumulate. These errors are kept in check
and actually reduced by the re-referencing KS. This KS uses
the difference (error) in rotation and translation between
two consistent lines, calculates the average difference in
rotation and translation of all the matched lines, and
finally applies this average difference to the robot
position and orientation to find the new corrected position
and orientation of the robot. Details of the consistency KSs
and the methods described above are explained in the
following two sections.
5.2.4.1 The Match/Merge Knowledge Source
This KS is activated when new observed lines are
available. Each observed line is received as two end points
P1(x1, y1) and P2(x2, y2). The first step is to transform
the line from the end points representation to a
representation suitable for matching, such as the one shown
in figure 5.9. This representation is based on the
following parametric representation of a line:

x·sin(θ) − y·cos(θ) = ρ

The parameters to be matched are ρ, θ, and d, with their
corresponding uncertainties σ_ρ, σ_θ, and σ_d. Here θ is the
orientation of the line measured relative to the x-axis; ρ is
the offset of the line, represented by the perpendicular
distance from the origin to the line; while d is the distance
from the midpoint of the line to the point of intersection of
the line with the perpendicular to the line from the origin.
To find ρ, θ, and d, given the end points P1 and P2, we
proceed as follows:

θ = tan⁻¹[(y2 − y1)/(x2 − x1)]

x_m = (x1 + x2)/2

y_m = (y1 + y2)/2

ρ = x_m·sin(θ) − y_m·cos(θ)

d = x_m·cos(θ) + y_m·sin(θ)

Next we find the uncertainties σ_ρ, σ_θ, and σ_d. The
uncertainty in the line offset (σ_ρ) is given from the line
finding KS as the largest perpendicular distance to the line
from a point (x_i, y_i) admitted to the line, that is:

σ_ρ = max{|ρ − x_i·sin(θ) + y_i·cos(θ)|} for all i.

The remaining uncertainties σ_θ and σ_d are represented
graphically in figure 5.9. To find σ_θ and σ_d we first find
L, the half length:

L = (1/2)·[(x2 − x1)² + (y2 − y1)²]^(1/2)

Then, from [Crowley 87]:

σ_θ = tan⁻¹(σ_ρ/L)

σ_d = L
Now, to match a model line to an observed line we apply
the techniques derived in section 2.4.1 and find the normal
distance (ND) between the model line parameters and the
observed line parameters. In this case, the ND calculations
are reduced to the 1-D case. Thus, we investigate if all the
following three types of matches are satisfied before
claiming a line match:
1. Orientation match: Accept if

ND[(θ_o, σ_θo), (θ_m, σ_θm)] = (θ_o − θ_m)² / [(σ_θo)² + (σ_θm)²] ≤ threshold

2. Collinearity match: Accept if

ND[(ρ_o, σ_ρo), (ρ_m, σ_ρm)] = (ρ_o − ρ_m)² / [(σ_ρo)² + (σ_ρm)²] ≤ threshold

3. Overlap match: Accept if

ND[(d_o, L_o), (d_m, L_m)] = (d_o − d_m)² / [(L_o)² + (L_m)²] ≤ threshold

The three thresholds in the above tests are chosen depending
on the degree of confidence required for each match.
Considering the 1-D case, note that for a normal
distribution, the 95% confidence interval (i.e., 95% of the
measurements will fall in that interval) is the mean plus or
minus twice the standard deviation. Thus, if for example we
take the mean to be 0m then we can claim, with a 95%
confidence, that the observation 0O is within plus or minus
two standard deviations of 0m, that is:

117
10o-emI 2 C(a0O) 2 +(^0m)2]1/2
Similarly, the remaining two tests become:
IPo-pml <2[(cpo)2 + ((Jpm)2]1/2
|d0-dm| <2[ (L0)2 + (Lm)2]I/2
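As a minimal sketch of these tests at the 95% confidence level (assuming the standard deviations of each parameter are available; all names here are hypothetical):

#include <math.h>

/* Return nonzero if the observed line (subscript o) and the model line
   (subscript m) pass all three consistency tests. The s_* arguments are
   the standard deviations (the sigmas above); L_o and L_m are the half
   lengths. */
int lines_match(double th_o, double s_th_o, double rho_o, double s_rho_o,
                double d_o, double L_o,
                double th_m, double s_th_m, double rho_m, double s_rho_m,
                double d_m, double L_m)
{
    /* each difference must lie within two combined standard deviations */
    int orientation = fabs(th_o - th_m)
        <= 2.0 * sqrt(s_th_o * s_th_o + s_th_m * s_th_m);
    int collinearity = fabs(rho_o - rho_m)
        <= 2.0 * sqrt(s_rho_o * s_rho_o + s_rho_m * s_rho_m);
    int overlap = fabs(d_o - d_m)
        <= 2.0 * sqrt(L_o * L_o + L_m * L_m);
    return orientation && collinearity && overlap;
}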
If all three of the above conditions are valid for a
particular pair of lines, then we say that the observed line
and the model line in the pair are consistent, i.e., they
represent the same physical entity. Next we merge the
parameters of each consistent pair of lines to provide a
better (reduced uncertainty) estimate. We use the estimation
techniques presented in chapter 2 to perform the merging,
including the updating of the uncertainty. Thus, for our 1-D
case we obtain the following estimates:
θ_new = θ_m + (θ_o - θ_m)(σ_θm)^2 / [(σ_θo)^2 + (σ_θm)^2]
(σ_θnew)^2 = (σ_θo)^2 (σ_θm)^2 / [(σ_θo)^2 + (σ_θm)^2]
ρ_new = ρ_m + (ρ_o - ρ_m)(σ_ρm)^2 / [(σ_ρo)^2 + (σ_ρm)^2]
(σ_ρnew)^2 = (σ_ρo)^2 (σ_ρm)^2 / [(σ_ρo)^2 + (σ_ρm)^2]
d_new = d_m + (d_o - d_m)(L_m)^2 / [(L_o)^2 + (L_m)^2]
(L_new)^2 = (L_o)^2 (L_m)^2 / [(L_o)^2 + (L_m)^2]
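Each of the three merges is the same one-dimensional variance-weighted update, so a single helper suffices. A minimal C sketch (hypothetical names; the gain k plays the role of a 1-D Kalman gain):

/* Fuse one observed parameter into the corresponding model parameter,
   weighting by the variances, and shrink the model variance. */
void merge_1d(double obs, double var_obs,
              double *model, double *var_model)
{
    double k = *var_model / (var_obs + *var_model);
    *model     += k * (obs - *model);
    *var_model  = var_obs * (*var_model) / (var_obs + *var_model);
}

Calling merge_1d once each with (θ, σ_θ^2), (ρ, σ_ρ^2), and (d, L^2) reproduces the six update equations above.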

5.2.4.2 The Re-reference Knowledge Source
This KS corrects the position and orientation of the
robot. The error in position and orientation is caused by
the odometric sensors (wheel and shaft encoders) due to
wheel slippage and uneven weight distribution on the robot.
This error accumulates as the robot travels, and if not kept
in check will cause degradation in the world model
consistency. The steps performed by this knowledge source
are as follows:
Step 1. For all merged line segments, find the average
orientation error between merged line segments and observed
segments:
Δθ = (1/n) Σ [θ_new(i) - θ_o(i)]    for i = 1 to n
Step 2. For all consistent pairs of line segments, find the
average error in translation:
Δp = (1/n) Σ [p_m(i) - R p_o(i)]    for i = 1 to n
where
Δp = [Δx  Δy]^T,
p_m = [x_rm  y_rm]^T is a selected model reference point,
p_o = [x_ro  y_ro]^T is the corresponding observed reference point, and
R = | cos(Δθ)  -sin(Δθ) |
    | sin(Δθ)   cos(Δθ) |
A good choice of a reference point to determine the
translation error Δp is a corner, i.e., the intersection of
two line segments in the model lines, and the corresponding
corner point in the observed lines. For example, if line L_m1
matches line L_o1 and line L_m2 matches line L_o2, and if p_m is
the intersection point between L_m1 and L_m2, while p_o is the
intersection point between L_o1 and L_o2, then p_m and p_o
correspond to the same physical point and hence make for a
good reference point.
Step 3. Correct the position and orientation of the robot
knowing Δθ, Δx, and Δy (P_new is the new corrected
position, and C_new is the new covariance matrix or
uncertainty). Note that P_old is the current uncorrected
position of the robot:
P_new = F P_old
C_new = F C_old F^T
where the matrix F is:
F = | cos(Δθ)  -sin(Δθ)  Δx |
    | sin(Δθ)   cos(Δθ)  Δy |
    |    0          0     1 |
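A minimal C sketch of Step 3 (hypothetical names; the covariance update C_new = F C_old F^T is omitted for brevity, and the robot heading is assumed to be corrected by the same Δθ):

#include <math.h>

/* Apply the homogeneous correction F, i.e., a rotation by dtheta
   followed by a translation (dx, dy), to the robot pose (x, y, heading). */
void correct_pose(double dtheta, double dx, double dy,
                  double *x, double *y, double *heading)
{
    double c = cos(dtheta), s = sin(dtheta);
    double xn = c * (*x) - s * (*y) + dx;
    double yn = s * (*x) + c * (*y) + dy;
    *x = xn;
    *y = yn;
    *heading += dtheta;
}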

CHAPTER 6
EXPERIMENTAL RESULTS
6.1 Results From An Experimental Run
The robot started its autonomous mission from an
initial location near the center of the lab with the
motivation of discovery and building maps of its
environment. The initial behavior enabled was the
"curiosity" behavior, which allowed the robot to spin 360
degrees in place while taking sonar scan data before
venturing into the unknown. The end of curiosity triggered
the "avoid", "wander", and "follow-boundary" behaviors
simultaneously. This allowed the robot to move around
without bumping into things, and to follow the boundaries
(walls) of its environment. Figure 6.1 shows the emergent
path of the robot under the control of these behaviors. Note
that the map builder module is working concurrently with
these behaviors building representations of the world by
assimilating sensor data from the various locations visited
by the robot. Thus, in the map builder, the EOU
representation is being continuously updated, while the 2-D
line finder knowledge source is accumulating enough data
points to support and warrant the generation of its 2-D line
representation. Figure 6.2 shows the EOU knowledge source in
action updating the EOU representation as sonar data become
available.

Figure 6.1 Robot behavior under "avoid", "wander", and "boundary-follow" behaviors.
Figure 6.2 EOU knowledge source in action: building the EOU representation. (a) After the 1st full scan (12 data points). (c) After the fifth scan.
Figure 6.3 EOU representation with about 500 data points.
Figure 6.4 First 2-D line representation of the robot's world.
Figure 6.5 First set of raw data points (500 points).
Figure 6.6 Filtered raw data points.
Figure 6.7 The second 2-D line representation.
Figure 6.8 First and second 2-D line representations.
Figure 6.9 Merged lines over 1st and 2nd line sets.

As described in section 5.2.1, the cones simulate
an actual sound burst of a sonar sensor with the inside of
the cone representing empty space, while the end of the cone
represents occupied space. The cones in the figure actually
extend all the way to the occupied region, but the graphics
display routine is set to only display relatively confirmed
empty space, and at the beginning the region of the cone
away from the sensor is not well confirmed yet, and hence
does not show. Figure 6.3 shows the accumulated EOU
representation after about 500 raw data points. In
this experiment, the first 2-D line representation, figure
6.4, emerged upon the completion of about 45 scans (about
500 data points). Figures 6.5 and 6.6 show respectively the
corresponding raw data points, and the filtered raw data
points from which the line representation was generated.
As the 2-D line representation is generated, the robot
continues to gather data in the same generalized wandering
behavior. Meanwhile, the target generator knowledge source
in the map builder hypothesizes the presence of "curious"
locations at the end points of the 2-D lines that might be
worth further investigation for a better map representation.
End points in close vicinity of each other were regarded as
one target. The planning module decides that these targets
are worth investigating and sets the motivation state of the
robot as a location attraction with the locations of the
generated targets. This triggers the target-nav behavior,
and the robot starts moving towards its first target (one of
the corners of the room) while still avoiding obstacles.
When a target is reached, the curiosity behavior is
triggered for a more detailed picture of the target
location. Another target generator knowledge source works on
the EOU representation and hypothesizes targets as the
center of a sizable boundary between empty and unknown space
in the EOU representation. In this experiment no targets
were generated by this knowledge source. When all the
targets have been visited and the location attraction
motivation has been satisfied, the target-nav behavior is
disabled and the robot resorts to the generalized wandering
behavior of the earlier stages. When a second set of 2-D
lines is produced, figure 6.7, the consistency knowledge
sources match and merge the two sets of lines (figure 6.8)
generating the improved lines of figure 6.9. Next, we
qualify and quantify the results obtained during the
experimental test run.
6.2 Discussion of Results
The implemented control architecture performed as
intended during the experimental test run, guiding the robot
safely through its unknown and unstructured environment
without operator intervention. While the behavior-based
system reacted reflexively to immediate sensor readings, the
map builder generated models of the world and navigation
targets, and the planning module determined further actions
in order to achieve the desired goal. The goal of our
experimental run was to build as complete a map of the
environment as possible. As intended, this was achieved with
minimum intervention and guidance from the planning module
to the behavior-based system. The behavior-based system
adapted to dynamic changes in the environment. This was
demonstrated by placing an obstacle (a chair) at a later
time between the robot and its target when the robot was in
the target-nav mode. Even though the chair had not been
modeled yet into the environmental representations, the
robot avoided the chair and still reached its target solely
due to the competence of the behavior-based system and
without intervention from the planning module. Had the
chair, for example, made the target unreachable, then after
some time had passed without any progress, the planning
module would have intervened by either cancelling the
target or providing an intermediate target to the robot
from which it could avoid the trap and advance to the original
target. Another validation of reaction to dynamic events
was demonstrated when people entered the lab space of the
robot and walked around. The robot avoided people, running
away in the opposite direction as they approached.
People were modeled temporarily in the EOU representation,
but since they did not stay in the environment for a long
time, their image in the EOU representation did not persist.
It is hard to quantify the real-time performance of the
robot and our proposed control architecture because the
numbers depend on the specific implementation and the
hardware used. In our implementation the behavior-based
system was implemented on an IRIS 2400 graphics workstation
which communicated to the robot controller (an IBM PC AT)
via a slow RS232 serial link. The robot controller, in turn,
relayed the commands to the robot via a radio modem. This
arrangement and communications protocol contributed to a
slow cycle time. The cycle time of the behavior-based system
generating a heading for the robot was much faster at about
1 second including a complete sonar scan (firing all
sonars). Relaying this heading to the robot and the time
delay of the robot before affecting this heading amounted to
about 15 seconds.
Concerning the results of the map builder, we discuss
next the three types of representations used and give
qualitative and quantitative measures of their performance.
We start with the raw data, followed by the EOU and the 2-D
line representations.
The raw data representation, consisting of raw sonar
data points, was quickly acquired from each sonar scan by
modeling the sonar cone as a straight line. Raw sonar data
is used by low-level behaviors such as "avoid obstacles",
and "boundary-following" that require immediate sensor data
for quick reflexive response. Such data is often inaccurate
due to the beam divergence and specular reflection of the
sonar signal. This representation does not make use of the
empty space between the sensor and the detected object.

The Empty, Occupied, Unknown (or EOU) representation is
an occupancy grid type representation that uses all
information available in a sonar scan including the empty
space between the sensor and the object. This representation
provides a basic spatially-indexed world model
representation suitable for determining regions of free and
occupied space for navigation tasks. In addition,
this representation continuously improves as more sensor
data is assimilated. For this reason it was used in our
implementation to filter out false sonar readings that were
not supported by this representation.
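As an illustration of this kind of occupancy grid update, the following C sketch processes a single sonar reading: cells short of the measured range accumulate "empty" evidence while the cell at the range accumulates "occupied" evidence. The grid size, cell resolution, and simple bounded-counter scheme are assumptions made for illustration, and the sweep is restricted to the beam axis rather than the full cone used by the EOU knowledge source:

#include <math.h>

#define GRID 128        /* cells per side (assumed)  */
#define CELL 0.1        /* meters per cell (assumed) */

/* negative = empty evidence, positive = occupied, zero = unknown */
static signed char grid[GRID][GRID];

void eou_update(double rx, double ry, double heading, double range)
{
    double r;
    int i, j;

    /* cells between the sensor and the echo become more "empty" */
    for (r = 0.0; r < range; r += CELL) {
        i = (int)((rx + r * cos(heading)) / CELL);
        j = (int)((ry + r * sin(heading)) / CELL);
        if (i >= 0 && i < GRID && j >= 0 && j < GRID && grid[i][j] > -100)
            grid[i][j]--;
    }
    /* the cell at the measured range becomes more "occupied" */
    i = (int)((rx + range * cos(heading)) / CELL);
    j = (int)((ry + range * sin(heading)) / CELL);
    if (i >= 0 && i < GRID && j >= 0 && j < GRID && grid[i][j] < 100)
        grid[i][j]++;
}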
The 2-D line representation provides the basic outlines
of the room and objects within it. It could easily be used
to find walls, doors, corners, hallways, etc. Due to the
restricted lab space (one small room with no doors or
hallways to other rooms) in our experiment, the major use of
the 2-D line representation was for maintaining a consistent
world model by correcting the position and orientation
errors of the robot.

CHAPTER 7
SUMMARY AND CONCLUSIONS
The experimental setup developed during the course of
this research provides an excellent testbed for future
research in autonomous systems design. Within this testbed,
the addition of new behavioral modules is easily achieved
due to the flexibility and modularity of the proposed hybrid
control architecture. Moreover, the modular distributed
approach followed by the proposed Sensory Knowledge
Integrator, allows for easy addition and integration of new
sensors. The testbed is general enough to accommodate
various experiments associated with a wide variety of
research problems in the relatively new field of autonomous
systems design. Such research areas include adaptive
behavior arbitration and learning, explicit goal-driven
reactive systems, and methodologies and representational
structures for the interface between cognitive and reactive
systems. The classical problems of sensor data fusion and
consistent world modeling are still important, but equally
important is how the knowledge accumulated in such world
models can be cleverly brought to bear on the behavior-based
subsystem to produce useful, efficient, and robust behavior.
Behavior-based systems follow a parallel decomposition
of robot control, with many parallel stimulus-response
behaviors linking perception to action. Such systems
provide desirable, immediate, "reflexive" responses
to events in a dynamic environment. In addition to behavior
arbitration, behavior-based systems are characterized by
their independence from world models and global reasoning.
This makes such systems prone to falling into local
minima traps with no efficient way of recovery. Another
characteristic of behavior-based systems is that complex
behavior emerges as the combination of the simple responses
of the various behaviors without the need for an explicit
statement of goals. Instead, goals are implicitly designed
into the system by the designer, who predetermines the
interactions of the various behaviors with the environment.
This constitutes yet another limitation of behavior-based
systems, since there is a limit to how far the designer can
foresee the various interactions in order to compile the
optimum arbitration strategy. Given the above stated
advantages and limitations of behavior-based systems, the
philosophy of our research is that the real-time performance
of such systems can still be enjoyed, while the limitation
can be avoided by putting to use a priori knowledge and
dynamically acquired models of the environment. For example,
one method of detecting a trap situation uses a priori
knowledge about different trap shapes and configurations,
and applies that knowledge to the dynamically acquired map
information. Once detected, the trap can then be avoided by
reconfiguring the behavior-based system, by 'motivating' it
away from the trap. Reconfiguring a behavior-based system
consists of changing the arbitration priority and setting
the "motivation state" if needed. The motivation state
triggers motivated behaviors in the behavior-based subsystem
and biases the robot towards attaining the desired goal.
To give a summary of the research presented in this
dissertation, we have described an architecture which allows
for the implementation of a wide range of planning and
control strategies including those requiring timely
information about the environment, and the ones requiring
processing and assimilation of sensor data over time for
general purpose map-based planning. The architecture
exploits the advantages of both behavior-based and
traditional control architectures of an autonomous mobile
robot. This claim was validated in our experiments where the
robot demonstrated the real-time performance of behavior-
based systems in addition to the ability to build
representations of the world and put these representations
to use in aiding reactive control out of difficulties
(cyclic behavior) and, thus, effecting efficient exploration
of the environment. The interface between the planning
module and the reactive behaviors was accomplished by our
"motivated behaviors" and the "motivation state" of the
robot. Depending on the goal at hand and the current
situation status provided by the map builder and the lower-
level behaviors, the planning module sets the "motivation
state" which in turn triggers the "motivated behaviors" to

135
join in effecting the desired behavior. An example of a
"motivated behavior" in our implementation is the "target-
nav" behavior whose motivation state is a specified target
location, and whose action is to guide the robot towards
that target. In our experimentation, targets such as the
boundaries of empty and unknown areas in the occupancy grid
map were detected in the map builder and forwarded by the
planning module as the motivation state for the target-nav
behavior in order to help the robot discover new areas. The
behavior arbitration strategy in our implementation resides
in a set of rules in the planning module. The fact that our
arbitration rules are not hardwired gives our system the
flexibility of a general purpose testbed that could be set
up for operation with a variety of user-specific tasks and
environments. We have demonstrated a specific implementation
of the arbitration strategy using CLIPS (a knowledge-based
systems shell) rules. The other major contribution of this
research is the Sensory Knowledge Integrator, the underlying
framework of the map builder module. This framework utilizes
a distributed knowledge-based approach for consistent world
modeling. We have presented the theoretical aspects of this
framework and supported the theoretical claims by
implementing a variety of knowledge sources as part of the
framework for consistent world modeling. Some of the
implemented knowledge sources include the EOU, the 2-D line
finder, and the consistency checking knowledge sources. The
theoretical basis and the details of implementation and
interactions among the knowledge sources were discussed in
chapter 5.
To close our discussion, let us address the question of
what some of the important research issues are in the
exciting field of intelligent autonomous systems. So far,
good progress has been made, but many research areas remain
to be explored. Valuable insights can be gained from animal
behavior, and hence, fruitful interactions can be obtained
between AI and the various fields of biology, ecology,
behavioral psychology, physiology, and neurobiology. While
animals possess a variety of reflexive behaviors that allow
the animal to quickly react to sudden environmental changes,
we note that the most striking characteristic of animal
behavior is that it is mainly adaptive. The behavior of an
animal is continuously adjusted to confront the dynamic
interactions with the environment. In artificial autonomous
systems, adaptive behavior-based control is equally crucial.
Moreover, animals adjust their behavior based not only on
external stimuli, but also on internal conditions or state.
In other words, animal behavior is goal oriented, triggered
by internal goals or motivations. Similarly for the case of
an artificial autonomous system, it is our belief that such
a system must have goals or drives that decide its actions.
So, another important research issue is how to make
behavior-based systems driven by explicit goals. Goals can
provide a measure of performance for the system, and are,
thus, particularly important for learning. The ability to
learn must be part of any intelligent system. Therefore,
learning paradigms must be a crucial research issue in the
field of intelligent autonomous systems. Learning is evident
in animals when they modify aspects of their future behavior
based on their past history of interactions with the
environment. One final analogy to natural systems that must
be duplicated in artificial autonomous systems is the fact
that animal behavior is robust. For example, when animals
lose certain parts of their body they can still survive and
cope with their environment. Similarly, an artificial
autonomous system must be robust and continue to operate
(probably with a lower level of competence) when some of
its parts fail.
Outside of the analogy to nature, other important
research issues exist. Given that an artificial autonomous
system does not have the luxury of evolutionary time to
develop, as animals did, one question is how such a
system can benefit from the knowledge of its designer. That is,
how can the a priori knowledge of the designer be brought to
bear and put to good use in making artificial autonomous
systems intelligent and useful for various user specific
tasks in a variety of domains? Additionally, what form or
representation should the a priori knowledge be compiled
into so that it could be directly used by the behavior-based
system to effect immediate actions?
All of the above mentioned issues are both challenging
and exciting. Live experiments in the real world greatly add
to the excitement of the field. Our view is to encourage
real world experimentation even for modestly simple tasks,
not only to avoid the assumptions inherent in simulations,
but also to speed up progress in the field, especially, when
there exists no agreement among researchers about the
fundamental principles involved in the construction of
intelligent autonomous systems.

APPENDIX
INTRODUCTION TO CLIPS
CLIPS stands for C Language Implementation Production
System. It is a rule-based expert system shell developed by
the Artificial Intelligence Section of the Johnson Space
Center, NASA. CLIPS is written in and fully integrated with
the C language providing high portability and ease of
integration with external systems. It is a forward chaining
rule-based system based on the Rete pattern matching
algorithm developed for the OPS5 system. The five main
topics that constitute CLIPS are: facts, rules, variables
and functions, input and output, and the user interface. In
the following three sections we will briefly introduce
facts, rules, and user-defined functions only, as these
topics are most needed for understanding our implementation.
For more detail on all the topics, the reader is referred to
the CLIPS User's Manual [Giarratano 89] as well as the
reference manual.
A1.1 CLIPS Facts
A program written in CLIPS consists of rules and facts.
A fact is a true statement about some characteristic of the
problem being examined. Facts are the data that cause
execution of the rules. Reasoning propagates from the given
facts through the rules to their implied premises. The CLIPS
user's manual defines a fact as consisting of "one or
more fields enclosed in matching left and right
parentheses." For example, to indicate that the robot has
completed a sonar scan at the tenth positional step, the
following fact is posted:
(position-index 10 sonar-scan complete)
Facts are added to the CLIPS facts list through the "assert"
command, while the "retract" command deletes facts from the
list. For example,
(assert (mode target-nav))
(retract ?fact5)
The first statement asserts the fact (mode target-nav)
indicating that the robot is now in the target-nav mode or
behavior. The second statement retracts from the facts list
the fact assigned to variable "fact5".
A1.2 CLIPS Rules
A rule is a collection of conditions and the actions to
be taken if the conditions are met. It is the primary method
for representing knowledge in CLIPS. The following rule from
our implementation serves as an example:
(defrule active-target
?mode <- (mode end-wander-scan)
(position-index ?num)
?f <- (position-index ?num frontier-center ?xt ?yt)
=>
(retract ?mode ?f)
(assert (target ?xt ?yt))
(behavior-select "target-nav" "CN" ?xt ?yt) )
This rule named "active-target" enables the target-nav
behavior whenever in the end-wander-scan mode, and a
frontier center at some position (xt, yt) has been asserted.
The frontier center is detected by the map builder module
and refers to the center of the frontier between empty and
unknown space in the spatially indexed map. This center now
acts as a target for the robot to go to and start
discovering new unmapped areas. In the last statement of our
sample rule above, the target-nav behavior is enabled by
calling the user-defined CLIPS external function called
"behavior-select" and passing it the name of the behavior to
be enabled (target-nav) and the associated parameters (xt,
yt). Next we discuss the powerful concept of user-defined
external functions used in CLIPS.
A1.3 CLIPS User-Defined External Functions
Functions that are not predefined in CLIPS and are required
to perform some special action can be defined by the user.
This greatly extends the capabilities of CLIPS and improves
its efficiency since not all types of operations are well-
suited to expert systems. An external function call allows a
temporary exit from CLIPS to perform a certain operation in
a procedural language (such as C), and then return to CLIPS.
The external functions must be explicitly described to CLIPS
through a CLIPS function called "usrfuncs". Each user-
defined function is defined within a call to "usrfuncs"
using the CLIPS "define-function" routine. As an example of
declaring a user-defined function, we show below how our
behavior-select function is defined. In CLIPS file main.c we
find the "usrfuncs" function and modify it as follows:
usrfuncs()
{
    /* Add the next two lines */
    extern float selector();
    define_function("behavior-select", 'f', selector, "selector");
}
In the define-function statement there are four arguments.
The first, "behavior-select" is the name of the function as
known by CLIPS. The second argument refers to the type of
value returned by the CLIPS function. The allowed types are:
'i' for integer, 'f' for float, 'c' for character, 's' for
pointer to a character string, 'w' for pointer to a
character word, 'u' for pointer to an unknown data type, 'm'
for pointer to a multifield variable, and 'v' for void. The
third argument is a C pointer to the name of the C function,
while the fourth one, "selector", is the actual name of the
external C function.
An external function can be called from either the
right or the left hand side of a rule. In our previous
example we call function "behavior-select" as (behavior-
select "target-nav" ?xt ?yt). The arguments "target-nav",
xt, and yt are to be passed from CLIPS to the external
function. Although these arguments are listed directly
following the function's name inside CLIPS rules, CLIPS
actually calls the function without any arguments and stores
the parameters internally. These parameters can be accessed
by the external function by calling any of the following
parameter access functions: "num-args", "rstring", and
"rfloat".
To pass data from the external function to the CLIPS
facts-list, the simplest method is to call the C function
"assert". For more detail on passing data between CLIPS and
external functions the reader is referred to the CLIPS
reference manual. Below we give a simple example of our
"selector" external function in order to illustrate the use
of the above mentioned functions for parameter passing:
#include <stdio.h>
#include <string.h>

float selector()
{
    char *fun_name, *state, buffer[50];
    float x_target, y_target, err_code;
    int num_passed;

    num_passed = num_args();    /* number of arguments passed from CLIPS */
    fun_name = rstring(1);      /* behavior name, e.g. "target-nav"      */
    state = rstring(2);         /* requested state, e.g. "on"            */
    if (strcmp(fun_name, "target-nav") == 0)
    {
        if (strcmp(state, "on") == 0)
        {
            x_target = rfloat(3);
            y_target = rfloat(4);
            err_code = 10;      /* say we encounter some error that should
                                   be reported to CLIPS */
            sprintf(buffer, "target-nav status %f", err_code);
            assert(buffer);     /* CLIPS assert: post the fact */
            return err_code;
        }
    }
    return 0;
}
The function name and the state parameters passed to the
"selector" function are acquired by calling rstring(1) and
rstring(2) respectively, while the xt and yt parameters are
acquired by calling rfloat(3) and rfloat(4) respectively.
Note also that status is passed back to CLIPS by calling the
function "assert" and asserting the fact (target-nav status
10).

REFERENCES
[Agre 90]
Agre, P. E., and Chapman, D. 1990. "What Are Plans For?" In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA.
[Albus 81]
Albus, J. S. 1981. Brains, Behaviors, and Robotics. Byte
Books, McGraw-Hill, New York.
[Allen 88]
Allen, P. K. 1988. "Integrating Vision and Touch for Object
Recognition Tasks." Int'l J. Robotics Res. 7(6):15-33.
[Anderson 90]
Anderson, T. L., and Donath, M. 1990. "Animal Behavior as a
Paradigm for Developing Robot Autonomy." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Anderson 88]
Anderson, T. L., and Donath, M. 1988. "Synthesis of
Reflexive Behavior for a Mobile Robot Based Upon a Stimulus-
Response Paradigm." In Mobile Robots III, SPIE Proceedings
Vol. 1007, pp. 198-211.
[Andress 87]
Andress, K. M., and Kak, A. C. 1987. "A Production System
Environment for Integrating Knowledge with Vision Data." In
Proc. of the 1987 Workshop on Spatial Reasoning and Multi-
sensor Fusion, A. Kak and S. Chen, eds. Morgan Kaufmann
publishers, pp. 1-12.
[Arkin 90]
Arkin, R. C. 1990. "Integrating Behavioral, Perceptual, and
World Knowledge in Reactive Navigation." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Arkin 87]
Arkin, R. C. 1987. "Motor Schema Based Navigation for a
Mobile Robot: An Approach for Programming by Behavior."
Proc. IEEE Int'l Conf. on Robotics and Automation, pp.
264-271.
[Ayache 88]
Ayache, N., and Faugeras, O. D. 1988. "Building,
Registering, and Fusing Noisy Visual Maps." Int'l J.
Robotics Res. 7(6):45-65.
[Ayache 87]
Ayache, N., and Faugeras, O. D. 1987. "Building a Consistent
3D Representation of a Mobile Robot's Environment by
Combining Multiple Stereo Views." Proc. IJCAI-87, pp.
808-810.
[Beer 90]
Beer, R. D., Chiel, H. J., and Leon, S. 1990. "A Biological
Perspective on Autonomous Agent Design." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Borenstein 87]
Borenstein, J., and Koren, Y. 1987. "Obstacle Avoidance with
Ultrasonic Sensors." IEEE Journal of Robotics and
Automation, 4(2):213-218.
[Bou-Ghannam 91]
Bou-Ghannam, A., and Doty, K. L. 1991. "A CLIPS
Implementation of a Knowledge-based Distributed Control of
an Autonomous Mobile Robot." Proc. SPIE conference on
Applications of Artificial Intelligence IX, Orlando,
Florida, vol. 1468, pp. 504-515.
[Bou-Ghannam 90a]
Bou-Ghannam, A., and Doty, K. L. 1990. "Multi-sensor Data
Fusion: an Overview and a Proposed General Model. Technical
Report MIL-TR-90-1, EE Dept., Univ. of Florida, Gainesville.
[Bou-Ghannam 90b]
Bou-Ghannam, A., and Doty, K. L. 1990. "A General Model for
Sensor Data Fusion." Proc. 3rd. Conf. on Recent Advances in
Robotics. Boca Raton, Florida, May 31 June 1.
[Braitemberg 84]
Braitemberg, V. 1984. Vehicles: Experiments in Synthetic
Psychology. MIT Press, Cambridge, MA.
[Brooks 90]
Brooks, R. A. 1990. "Elephants Don't Play Chess." In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA.
[Brooks 86a]
Brooks, R. A. 1986. "A Robust Layered Control System for a
Mobile Robot." IEEE Journal of Robotics and Automation, Vol.
RA-2(1):14-23.
[Brooks 86b]
Brooks, R. A., and Connell, J. H. 1986. "A Distributed
Control System for a Mobile Robot." In Mobile Robots, SPIE
Proceedings Vol. 727, pp. 77-84. Cambridge, MA.

[Buchannan 84]
Buchannan, B. G., and Shortliffe, E. H. (eds.) 1984. Rule-
Based Expert Systems: The MYCIN experiments of the Stanford
Heuristic Programming Project. Addison-Wesley, Reading, MA.
[Chatergy 85]
Chatergy, R. 1985. "Some Heuristics for the Navigation of a
Robot." Int'l J. Robotics Res. 4(1):59-66.
[Chatila 85]
Chatila, R., and Laumond, J. P. 1985. "Position Referencing
and Consistent World Modeling for Mobile Robots." Proc. IEEE
Int'l Conf. on Robotics and Automation, St. Louis, MO., pp.
138-145.
[Cheeseman 86]
Cheeseman, P. 1986. "Probabilistic vs. Fuzzy Reasoning." In
Uncertainty in AI, L. N. Kanal, and J. F. Lemmer, eds.,
Elsevier Science Publishers, New York, pp.85-102.
[Cohen 85]
Cohen, P. R. 1985. Heuristic Reasoning about Uncertainty: An
Artificial Intelligence Approach. Pitman/Morgan Kaufmann
publishers.
[Connell 89]
Connell, J. H. 1989. "A Behavior-Based Arm Controller," IEEE
Transactions on Robotics and Automation, 5(6):784-791.
[Crowley 87]
Crowley, J. L., and Ramparany, F. 1987. "Mathematical Tools
for Representing Uncertainty in Perception." In Proc. of the
1987 workshop on Spatial Reasoning and Multi-sensor Fusion,
A. Kak and S. Chen, eds. Morgan Kaufmann publishers, pp.
293-302.
[Crowley 85]
Crowley, J. L. 1985. "Dynamic World Modeling for an
Intelligent Mobile Robot Using a Rotating Ultra-Sonic
Ranging Device," Proc. IEEE Int'1 Conf. on Robotics and
Automation, St. Louis, MO.,pp. 128-135.
[Culbertson 63]
Culbertson, J. 1963. The Minds of Robots: Sense Data, Memory
Images, and Behavior in Conscious Automata. University of
Illinois Press, Urbana.
[DeKleer 86]
DeKleer, J. 1986. "An Assumption Based Truth Maintenance
System." Artificial Intelligence 28:127-162.
[Doyle 79]
Doyle, J. 1979. "AAAA Truth Maintenance System." Artificial
Intelligence 12:231-272.

[Draper 88]
Draper, B. A., Collins, R. T., Brolio, J., Hanson, A. R.,
and Riseman, E. M. 1988. "Issues in the Development of a
Blackboard-Based Schema System for Image Understanding." In
Blackboard Systems, R. Engelmore and T. Morgan, eds.
Addison-Wesley, Reading, MA, pp. 189-218.
[Durrant-Whyte 86a]
Durrant-Whyte, H. F. 1986. "Consistent Integration and
Propagation of Disparate Sensor Observations." Proc. IEEE
Int'1 Conf. on Robotics and Automation, pp. 1464- 1469.
[Durrant-Whyte 86b]
Durrant-Whyte, H. F., and Bajcsy R. 1986. "Using a
Blackboard Architecture to Integrate disparate Sensor
Observations." DARPA workshop on Blackboard Systems for
Robot Perception and Control, Pittsburgh, PA.
[Elfs 89]
Elfs, A. 1989. "Using occupancy grids for mobile robot
perception and navigation." Computer, 22(6):46-57.
[Elfs 87]
Elfs, A. 1987. "Sonar-Based Real-World Mapping and
Navigation." IEEE Journal of Robotics and Automation, RA-
3(3):249-265.
[Feigenbaum 77]
Feigenbaum, E. A. 1977. "The Art of Artificial Intelligence:
Themes and Case Studies of Knowledge Engineering."
Proceedings of the Fifth International Joint Conference on
Artificial Intelligence (IJCAI 77), pp. 1014-29.
[Flynn 88]
Flynn, A. M. 1988. "Combining Sonar and Infrared Sensors for
Mobile Robot Navigation." Int'1 J. Robotics Res. 7(6):5-14.
[Fodor 83]
Fodor, J. 1983. The Modularity of Mind. MIT Press,
Cambridge, MA.
[Garvey 82]
Garvey, T. D., Lowrance, J. D., and Fischler, M. A. 1982.
"An Inference Technique for Integrating Knowledge from
Disparate Sources." Proc. of the 7th Int'1 Joint Conf. on
Artificial Intelligence, pp. 319- 325.
[Giarratano 89]
Giarratano, J. C. 1989. CLIPS User's Guide. Artificial
Intelligence Center, Lyndon B. Johnson Space Center.
Distributed by COSMIC, the University of Georgia, Athens,
GA.

[Giralt 84a]
Giralt, G. 1984. "Research Trends in Decisional and
Multisensory Aspects of Third Generation Robots." 2nd
International Symposium on Robotics Research, Kyoto, Japan,
August 20-23, pp. 511-520.
[Giralt 84b]
Giralt, G., Chatila, R., and Vaisset, M. 1984. "An Integrated
Navigation and Motion Control System for Autonomous Multi-
Sensory Mobile Robots." In Robotics Research: The 1st
International Symposium, M. Brady and R. Paul, eds., MIT
Press, Cambridge, MA., pp. 191-214.
[Gonzalez 87]
Gonzalez, R. C., and Wintz, P. 1987. Digital Image
Processing. Addison-Wesley, Reading, MA.
[Gould 82]
Gould, J. L. 1982. Ethology: The Mechanics and Evolution of
Behavior. W. W. Norton & Company, New York.
[Hartley 91]
Hartley, R., and Pipitone, F. 1991. "Experiments with the
subsumption architecture." Proc. IEEE Int'l Conf. on
Robotics and Automation, Sacramento, CA., pp. 1652-1658.
[Henderson 88]
Henderson, T., Weitz, E., Hanson, C., and Mitiche, A. 1988.
"Multi-sensor Knowledge Systems: Interpreting 3D Structure."
Int'l J. Robotics Res. 7(6):114-137.
[Henderson 84]
Henderson, T., Fai, W. S., and Hanson, C. 1984. "MKS: A
Multi-sensor Kernel System." Proc. IEEE Int'l Conf. on
Robotics and Automation, pp. 784-791.
[Heywood 89]
Heywood, T. 1989. "HELIX: A Shared Memory Emulation System
for Heterogeneous Multicomputing." Technical Report
CESAR-89/31, Oak Ridge National Laboratory, Oak Ridge, TN.
[Hough 62]
Hough, P. V. C. 1962. "Methods and Means for Recognizing
Complex Patterns." U.S. Patent 3,069,654.
[Kadonoff 86]
Kadonoff, M. B., Benayad-Cherif, F., Franklin, A., Maddox,
J. F., Muller, L., and Moravec, H. 1986. "Arbitration of
Multiple Control Strategies for Mobile Robots." In Mobile
Robots, SPIE Proceedings Vol. 727, pp. 90-98.
[Kak 87]
Kak, A. C., Roberts, B. A., Andress, K. M., and Cromwell,
R. L. 1987. "Experiments in the Integration of World
Knowledge with Sensory Information for Mobile Robots." Proc.
IEEE Int'l Conf. on Robotics and Automation, pp. 734-740.
[Kent 87]
Kent, E. W., Shneier, M. O., and Hong, T. H. 1987. "Building
Representations from Fusions of Multiple Views." Proc. IEEE
Int'l Conf. on Robotics and Automation, pp. 1634-1639.
[Khatib 85]
Khatib, O. 1985. "Real-time Obstacle Avoidance for
Manipulators and Mobile Robots." Proc. IEEE Int'l Conf. on
Robotics and Automation, St. Louis, MO., pp. 500-505.
[Koren 91]
Koren, Y., and Borenstein, J. 1991. "Potential Field Methods
and their Inherent Limitations for Mobile Robot Navigation."
Proc. IEEE Int'l Conf. on Robotics and Automation,
Sacramento, CA., pp.1398-1404.
[Kreithen 83]
Kreithen, M. L. 1983. "Orientation Strategies in Birds: a
tribute to W. T. Keeton." In Behavioral Energetics: The Cost
of Survival in Vertebrates. Ohio State University, Columbus,
pp. 3-28.
[Kriegman 89]
Kriegman, D. J., Triendl, E., and Binford, T. O. 1989.
"Stereo Vision and Navigation in Buildings for Mobile
Robots." IEEE Transactions on Robotics and Automation,
5(6):792-803.
[Lettvin 70]
Lettvin, J. Y., et al. 1970. "What the frog's eye tells the
frog's brain." In W. McCulloch, Embodiments of Mind. MIT
Press, Cambridge, MA, pp. 230-255.
[Luo 88]
Luo, R. C., and Lin, M. H. 1988. "Robot Multi-Sensor Fusion
and Integration: Optimum Estimation of Fused Sensor Data."
Proc. IEEE Int'l Conf. on Robotics and Automation, pp.
1076-1081.
[Maes 90]
Maes, P. 1990. "Situated Agents Can Have Goals." In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA, pp.49-70.
[Manning 79]
Manning, A. 1979. An Introduction to Animal Behavior.
Addison-Wesley Publishers, Reading, MA.

[Mataric 89]
Mataric, M. J. 1989. "Qualitative Sonar Based Environment
Learning for Mobile Robots." SPIE Mobile Robots,
Philadelphia, PA.
[McFarland 87]
McFarland, D. 1987. The Oxford Companion to Animal Behavior.
Oxford University Press.
[Mitiche 86]
Mitiche, A., and Aggarwal, J. K. 1986. "Multiple Sensor
Integration/Fusion Through Image Processing: a Review."
Optical Engineering 25(3):380-386.
[Moravec 85]
Moravec, H. P., and Elfs, A. 1985. "High Resolution Maps
from Wide Angle Sonar," Proc. IEEE Int'l Conf. on Robotics
and Automation, St. Louis, MO, pp.116-121.
[Newell 75]
Newell, A. 1975. "A Tutorial on Speech Understanding
Systems." Speech Recognition: Invited Papers of the IEEE
Symposium, D. R. Reddy, ed. Academic Press, New York, pp.
3-54.
[Ng 90]
Ng, K-C. and Abramson, B. 1990, "Uncertainty Management in
Expert Systems." IEEE Expert 5(2):29-48.
[Nii 86a]
Nii, H. P. 1986 "The Blackboard Model of Problem Solving,"
AI Magazine 7(2):38-53.
[Nii 86b]
Nii, H. P. 1986. "Blackboard Systems Part Two: Blackboard
Application Systems," AI Magazine 7(3):82-106.
[Nitzan 81]
Nitzan, D. 1981. "Assessment of Robotic Sensors." Proc. 1st
Int'l Conf. on Robot Vision and Sensory Controls, Stratford-
Upon-Avon, UK, pp. 1-12.
[Payton 90]
Payton, D. W. 1990. "Internalized Plans: A Representation
for Action Resources." In Designing Autonomous Agents, P.
Maes, ed. MIT Press, Cambridge, MA, pp.89-103.
[Payton 86]
Payton, D. W. 1986. "An Architecture for Reflexive Vehicle
Control," Proc. IEEE Int'l Conf. on Robotics and Automation,
pp.1838-1845.

[Porrill 88]
Porrill, J. 1988. "Optimal Combination and Constraints for
Geometrical Sensor Data." Int'l J. Robotics Res. 7(6):66-77.
[Richardson 88]
Richardson, J. M., and Marsh, K. A. 1988. "Fusion of
Multisensor Data." Int'l J. Robotics Res. 7(6):78-96.
[Ruokagnas 86]
Ruokagnas, C. C., Black, M. S., Martin, J. F., and
Schoenwald, J. S. 1986. "Integration of Multiple Sensors to
Provide Flexible Control Strategies." Proc. IEEE Int'l
Conf. on Robotics and Automation, pp. 1947-1955.
[Shafer 86]
Shafer, S. A., Stentz, A., and Thorpe, C. E. 1986. "An
Architecture for Sensor Fusion in a Mobile Robot." Proc.
IEEE Int'l Conf. on Robotics and Automation, pp.2002-2011.
[Shafer 76]
Shafer, G. 1976. A Mathematical Theory of Evidence.
Princeton Univ. Press, Princeton, NJ.
[Tou 74]
Tou, J. T., and Gonzalez, R. C. 1974. Pattern Recognition
Principles. Addison-Wesley, Reading, MA.
[Willner 76]
Willner, D., Chang, C. B., and Dunn, K. P. 1976. "Kalman
Filter Algorithms for a Multi-Sensor System." Proc. IEEE
Conf. on Decision and Control, Clearwater, FL. pp.570-574.
[Winston 84]
Winston, P. H. 1984. Artificial Intelligence. Second
edition. Addison-Wesley, Reading, MA.
[Zadeh 83]
Zadeh, L. A. 1983. "Commonsense Knowledge Representation
Based on Fuzzy Logic." Computer 16:61-65.
[Zadeh 78]
Zadeh, L. A. 1978. "Fuzzy Sets as a basis for a Theory of
Possibility." Fuzzy Sets and Systems, 1(1):3-28.

BIOGRAPHICAL SKETCH
Akram Bou-Ghannam was born on December 5, 1956, in
Aramoun, a small Lebanese town in the district of Aley on
the foothills of Mount Lebanon. He earned his high school
diploma (Lebanese Baccalaureate I) in 1974 from Dar El Hikma
High School in Abey, Lebanon, and the Lebanese Baccalaureate
II in 1975 from St. Mary Orthodox College in Beirut,
Lebanon.
In December 1975 he came to the United States of
America where he attended the University of Florida in
Gainesville, Florida, and earned his B.S. and M.S. degrees
(with honors) in mechanical engineering in March 1979 and
December 1980 respectively. Even though his degrees were in
mechanical engineering, he was interested in and involved
with the field of microprocessor-based and digital systems
design. Therefore, upon graduation he worked in this field
in industry as a research and development engineer. His
first employer was Vital Industries (1980-83) in
Gainesville, Florida. His second and current employer is the
IBM Corporation (1983-present) in Boca Raton, Florida. With
support from IBM, under the IBM Resident Study Program
(since August 1987), he is currently pursuing his Ph.D.
degree in electrical engineering at the University of
Florida.
Akram is happily married to Nada Jurdy (since July
1988). On November 15, 1990, both Akram and Nada were
blessed with a beautiful baby girl who was named Stephanie
Zena Bou-Ghannam.

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.

Professor of Electrical Engineering

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.

Carl D. Crane
Assistant Professor of Mechanical
Engineering

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.

Sencer Yerlan
Associate Professor of Industrial
and Systems Engineering

This dissertation was submitted to the Graduate Faculty
of the College of Engineering and to the Graduate School and
was accepted as partial fulfillment of the requirements for
the degree of Doctor of Philosophy.
August 1991
Winfred M. Phillips
Dean, College of Engineering
Madelyn M. Lockhart
Dean, Graduate School




154
degree in electrical engineering at the University of
Florida.
Akram is happily married to Nada Jurdy (since July
1988) On November 15, 1990, both Akram and Nada were
blessed with a beautiful baby girl who was named Stephanie
Zena Bou-Ghannam.
V


57
3.3.2 World Model Representation
Various world model representations have been proposed.
The choice of an adequate representation depends on the
domain (indoor, outdoor, factory environment, etc..), the
task (navigation, manipulation, identification, etc..), and
the sensors used. Crowley [Crowley 87, 85] suggested a
representation in terms of line segments in a 2-D floor plan
world. Kak et al [Kak 87] also used a model represented as a
2-D line drawing of the expected scene. Chatila and Laumond
[Chatila 85] used three types of models: geometrical,
topological, and a semantic model. The geometrical model is
a 2-D model containing the position in absolute coordinates
of the ground projection of vertices of polyhedral objects.
The topological model is represented by a connectivity graph
of places where a place is defined as an area that is a
functional unit such as a workstation or a topological unit
such as a room or a corridor. The semantic model is a
symbolic model containing information about objects, space
properties, and relationships. Kent et al [Kent 87] proposed
a representation of the world that consists of both a
spatial and an object or feature-based representation. The
spatial representation classifies the world space as
occupied, empty, or unknown, and explicitly represents
spatial relationships between objects. The feature-based
representation associates each object with the set of
features that verifies its identity. Elfs [Elfs 89]
described the occupancy grid representation which employes a


87
4.4 Map Builder: A Distributed Knowledae-Based Approach
The map builder generates two linked representations of
the world: a spatially indexed and an object or feature
indexed representation. The spatially indexed representation
consists of a 2-D tessellation of space where each cell in
the grid contains information about its state whether
occupied, empty, or unknown, in addition to information such
as what object the cell (if occupied) belongs to and whether
it is a boundary point or an edge point, etc... This
representation is useful for navigation in computing free
path, and for determining the identity of objects or
features in a given location. The object-indexed
representation is linked to the spatial representation and
contains entries such as the object's name, vertices,
bounding edges, and other discriminating features. This
representation is suited to responding to inquiries about
objects or features by name or by description.


CHAPTER 5: EXPERIMENTAL SETUP AND IMPLEMENTATION 93
5.1 Implementation of the Planning Module 97
5.2 Implementation of the Map Builder ..100
5.2.1 The EOU Knowledge Source 102
5.2.2 The Filter-Raw Knowledge Source 107
5.2.3 The 2-D Line Finding Knowledge Source 107
5.2.4 The Consistency Knowledge Sources 112
5.2.4.1 The Match/Merge Knowledge Source 113
5.2.4.2 The Re-reference Knowledge Source 117
CHAPTER 6: EXPERIMENTAL RESULTS 120
6.1 Results From A Live Experimental Run 120
6.2 Discussion of Results 128
CHAPTER 7: SUMMARY AND CONCLUSIONS 132
APPENDIX: INTRODUCTION TO CLIPS 139
REFERENCES 145
BIOGRAPHICAL SKETCH 153
*
vi


133
of behaviors linking perception to action. Such systems
provide desirable, immediate, "reflexive" type of responses
to events in a dynamic environment. In Addition to behavior
arbitration, behavior-based systems are characterized by
their independence of world models and a global reasoning
activity. This make such systems prone to falling into local
minima traps with no efficient way of recovery. Another
characteristic of behavior-based systems is that complex
behavior emerges as the combination of the simple responses
of the various behaviors without the need for an explicit
statement of goals. Instead, goals are implicitly designed
into the system by the designer who predetermines the
interactions of the various behaviors with the environment.
This constitutes yet another limitation of behavior-based
systems since there is a limit to how far the designer can
forsee the various interactions in order to compile the
optimum arbitration strategy. Given the above stated
advantages and limitations of behavior-based systems, the
philosophy of our research is that the real-time performance
of such systems can still be enjoyed, while the limitation
can be avoided by putting to use a priori knowledge and
dynamically acquired models of the environment. For example,
one method of detecting a trap situation uses a priori
knowledge about different trap shapes and configurations,
and applies that knowledge to the dynamically acquired map
information. Once detected the trap can then be avoided by
reconfiguring the behavior-based system, by 'motivating' it


99
builder interface" external function), and from the lower-
level behaviors (through the "status asserter" external
function). The CLIPS-based planning module communicates with
the lower-level behaviors through a locally setup shared
memory, while it communicates with the map builder using a
shared memory structure called HELIX.
The "behavior selector" function is called from CLIPS
rules by a function call such as: (behavior-select "target-
nav" "on" ?xt ?yt) It simply posts in the local shared
memory the select state of the selected behavior (for
example shmptr->target_nav = 1), and waits for a select-
acknowledge from the selected behavior before returning to
CLIPS. If the select-acknowledge is not received within a
certain time interval the "behavior selector" asserts a
behavior communication error on the CLIPS facts list. This
function is simple and executes fairly quickly consuming on
the order of tenth of a second from the time it is called
until it returns.
The "status asserter" function is called periodically
from CLIPS, and it checks the status on the local shared
memory posted by the various lower-level behaviors. Such
status include variables representing the select state of
the various behaviors, error status variables, operating
conditions variables, and performance measure variables. The
select state variables indicate which behaviors are on and
which are not. The error status include variables indicating
errors such as robot communications error, or sonar data


116
distance (ND) between the model line parameters and the
observed line parameters. In this case, the ND calculations
are reduced to the 1-D case. Thus, we investigate if all the
following three types of matches are satisfied before
claiming a line match:
1. Orientation match: Accept if
ND [ (90, O0O) (9m, dQm) ] =
(0o-0m)2 /H0o>2 + (d0m) 2] 2. collinearity match: Accept if
ND [ (pQ/dpo)r (Pm'pm) 1 =
(PoPm)2 / [ (<7p0) 2 + (dpm) 2] 3. Overlap match: Accept if
ND [ (do, Lq) (dm, Lj^) ] =
(d0-dm)2 /[(L0)2 +(Lm)2] The three thresholds in the above tests are chosen depending
on the degree of confidence required for each match.
Considering the 1-D case, note that for a normal
distribution, the 95% confidence interval (i.e., 95% of the
measurements will fall in that interval) is the mean plus or
minus twice the standard deviation. Thus, if for example we
take the mean to be 0m then we can claim, with a 95%
confidence, that the observation 0O is within plus or minus
two standard deviations of 0m, that is:


86
the robot. In addition, within a single system many
arbitration networks may exist, each servicing a sub-group
of behaviors that determine a certain competence level as
shown in figure 4.1. In our view, each competence level is
developed separately, and the thresholds and gains
determined experimentally (by using a neural network, for
example). The behaviors shown in figure 4.7 give a simple
example of one competence level, the "explore" competence
level used by the robot to explore unknown environments for
map building tasks and local navigation. As shown in figure
4.7, this level includes an "avoid", a "random-wander", a
"boundary-following", and a "target-nav" behavior. The
latter three behaviors are mutually exclusive while either
of the three is complementary to the "avoid" behavior. One
arbitration scheme for this subset of behaviors could give
the "target-nav" the highest priority of the three behaviors
with the "boundary-following" behavior having a higher
priority than the "random-wander" behavior. Thus, for
example, once the "boundary-following" behavior is
triggered, it inhibits the "random-wander" behavior causing
the output of the "boundary-following" behavior to be
combined (superimposed) with the output of the "avoid"
behavior which has the highest priority.


46
complete absence of any external stimuli, and can greatly
outlast any external stimulus[Beer 90].
Given a diverse set of sensory information about the
environment, and a diverse behavioral repertoire, how does
an animal select which information to respond to, and
properly coordinate its many possible actions into a
coherent behavior needed for its long term survival? The
answer to the first part lies in the fact that many
different animals have sense organs specialized in detecting
certain environmental features. For example, [Anderson 90]
reports observations by [Lettvin 70] about the frog's visual
system as specialized in detecting movements of small, dark
circular objects at close range, while it is unable to
detect stationary food objects or large moving objects.
Other animals that posses a variety of specialized sensory
detectors can actively select a subset of these detectors to
initiate the response of the animal. For example,
experiments and observations by Lorenz reported by [Anderson
90], indicate that the herring gull can detect attributes of
shape, size, color, and pattern of speckles of its eggs.
Moreover, depending on the task performed by the gull,
certain attributes become important while others become
unimportant. For example, when herring gulls steal the eggs
of other herring gulls, the shape and size of the egg are
very important. But when retrieving its own eggs as they
roll from the nest during incubation, shape becomes
unimportant while attributes such as color and pattern of


101
models the space in terms of 2-D line segments. This is
useful for outlining the boundaries of rooms and objects,
and it is mainly used in our implementation for robot re
referencing, i.e., correcting the position and orientation
of the robot as it travels within its environment. A
composite local 2-D line model of the environment (the
observed lines in figure 5.4) is generated by the line
finder KS, while the match/merge KS (in the consistency KSs
group, figure 5.4) generates and updates the model lines
which are a composite global 2-D line model accumulated over
time from the various observed lines models. Some model-


i
Figure 6.1 Robot behavior under "avoid", "wander", and
"boundary-follow" behaviors.
121


129
experimental run was to build as complete a map of the
environment as possible. As intended, this was achieved with
minimum intervention and guidance from the planning module
to the behavior-based system. The behavior-based system
adapted to dynamic changes in the environment. This was
demonstrated by placing an obstacle (a chair) at a later
time between the robot and its target when the robot was in
the target-nav mode. Even though the chair had not yet been
modeled in the environmental representations, the robot
avoided the chair and still reached its target solely due to
the competence of the behavior-based system and without
intervention from the planning module. Had the chair, for
example, made the target unreachable, then, after some time
had passed without any progress, the planning module would
have intervened by either cancelling the target or providing
an intermediate target from which the robot could avoid the
trap and advance to the original target. Another validation
of the response to dynamic events was demonstrated when
people entered the robot's lab space and walked around. The
robot avoided the people, retreating in the opposite
direction as they approached it.
People were modeled temporarily in the EOU representation,
but since they did not stay in the environment for a long
time, their image in the EOU representation did not persist.
It is hard to quantify the real-time performance of the
robot and our proposed control architecture because the
numbers depend on the specific implementation and the


handle explicit high-level user specific goals. Given a set
of goals and constraints, the planning module advances the
overall mission by deciding the robot's next move based on
an analysis of the local model of the environment
(constructed from current sensor data) and the existing
global model. The global or world model is obtained either
directly from the user, if the robot is operating in a known
environment, or it is autonomously constructed over time
from the various local models when operating in an unknown
environment. The world model representation employed in the
traditional approach is general purpose and, thus, useful
for a variety of situations and planning tasks. Without such
a general purpose model, features critical to plan execution
may not be discovered. But, a general purpose world model
puts some unrealistic demands on the perception task and has
the disadvantage of an unavoidable delay in the sensor to
actuator loop. Such delay is due to the computational
bottleneck caused by cognition and the generation of a
symbolic model of the world. Lack of real-time operation
(speed) and inflexibility are the two major complaints from
the behavior-based camp about the cognitive approach. These
claims are supported by the fact that the few autonomous
mobile robot projects implemented with the traditional
approach suffer from slow response times and inflexibility
when operating in complex dynamic environments. The response
of the traditionalists is that while reflexive behavior can
keep a robot from crashing into a wall, a higher-level


complicated problem domains such as the problem of
understanding and interpreting a robot's environment based
mainly on data from its sensors, it becomes important to be
able to work on small pieces of the problem separately, and
then combine the partial solutions at the end into a
complete problem solution. This task of understanding the
environment is thus accomplished at various levels of
analysis or abstraction. Thus, sensor data exist at the
various levels of knowledge abstraction in the solution
space and appropriate fusion techniques are applied at the
various levels for an improved solution. Though the specific
abstraction levels are task dependent, they could be
generalized as follows:
Signal level. This level contains data that are close
to the signal or unprocessed physical data level. At this
level data are usually contaminated with random noise and
are generally probabilistic in nature. Therefore statistical
inferencing techniques are appropriate for data fusion on
this level.
Feature level. Data at this level consist of
environmental/object features derived from the signal. The
various features describe the object or solution. Often,
incomplete descriptions of the features must be used. This
calls for a type of reasoning that is subjective and based


Figure 5.5 Translation and rotation of a sonar template
cone.
θ = tan⁻¹[(y2 − y1)/(x2 − x1)]
xm = (x1 + x2)/2
ym = (y1 + y2)/2
p = xm·sin(θ) − ym·cos(θ)
d = xm·cos(θ) + ym·sin(θ)
Next we find the uncertainties σp, σθ, σd. The uncertainty in
the line offset (σp) is given from the line finding KS as
the largest perpendicular distance to the line from a point
(xi, yi) admitted to the line, that is:
σp = max{ |p − (xi·sin(θ) − yi·cos(θ))| } for all i.
The remaining uncertainties σθ and σd are represented
graphically in figure 5.9. To find σθ and σd we first find
L, the half length:
L = (1/2)[(x2 − x1)² + (y2 − y1)²]^(1/2)
Then, from [Crowley 87]:
σθ = tan⁻¹(σp/L)
σd = L
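A minimal sketch of this computation, assuming the segment endpoints and the raw admitted points are available (function and variable names are illustrative):

```python
import math

def line_params(p1, p2, points):
    """Sketch of the computation above: line orientation, offset, and
    uncertainties from the two endpoints p1, p2 of a segment and the
    raw (x, y) points admitted to it."""
    (x1, y1), (x2, y2) = p1, p2
    theta = math.atan2(y2 - y1, x2 - x1)
    xm, ym = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    p = xm * math.sin(theta) - ym * math.cos(theta)   # perpendicular offset
    d = xm * math.cos(theta) + ym * math.sin(theta)   # position along the line
    sigma_p = max(abs(p - (x * math.sin(theta) - y * math.cos(theta)))
                  for (x, y) in points)               # largest residual
    L = 0.5 * math.hypot(x2 - x1, y2 - y1)            # half length
    sigma_theta = math.atan2(sigma_p, L)              # = arctan(sigma_p / L)
    sigma_d = L
    return theta, p, d, sigma_p, sigma_theta, sigma_d
```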
Now, to match a model line to an observed line we apply
the techniques derived in section 2.4.1 and find the normal


mA(1) = 1, mA(2) = 1, mA(3) = 0.9, mA(4) = 0.7, ..., mA(30) = 0.001
where mA(i) is the membership function that measures the
degree to which i (a positive integer) belongs to A.
To illustrate some of the combination rules of fuzzy
set theory, assume that both A and B are propositions, with
C = A⊕B denoting the proposition that C is the combination of
A and B. Then, for
1. Conjunction: A AND B, is given by
m[A*B](a,b) = min[mA(a), mB(b)].
2. Disjunction: A OR B, is given by
m[A+B](a,b) = max[mA(a), mB(b)].
3. Implication: IF A THEN B, is given by
m[A→B](a,b) = min[1, (1 − mA(a) + mB(b))].
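These three combination rules reduce to simple min/max operations on membership values, as the following sketch illustrates (function names are illustrative):

```python
def fuzzy_and(ma, mb):
    """Conjunction: membership of A AND B."""
    return min(ma, mb)

def fuzzy_or(ma, mb):
    """Disjunction: membership of A OR B."""
    return max(ma, mb)

def fuzzy_implies(ma, mb):
    """Implication: membership of IF A THEN B."""
    return min(1.0, 1.0 - ma + mb)

# e.g., the degree to which 4 is both "small" and "large":
# fuzzy_and(m_small(4), m_large(4))
```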
For more details on possibility theory, including evidence
propagation and truth quantification rules, the reader is
referred to [Zadeh 78], while [Cheeseman 86] provides a
comparison between fuzzy and probabilistic reasoning.
2.5.3.4 Belief Theory
Belief theory, developed by Dempster and Shafer [Shafer
76] as an alternative to the theory of probability, makes a
fundamental distinction between uncertainty and ignorance.
As mentioned above, in probability theory the extent of


Abstract of Dissertation Presented to the Graduate School of
the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
INTELLIGENT AUTONOMOUS SYSTEMS:
CONTROLLING REACTIVE BEHAVIORS WITH
CONSISTENT WORLD MODELING AND REASONING
By
Akram Bou-Ghannam
August 1991
Chairman: Dr. Keith L. Doty
Major Department: Electrical Engineering
Based on the philosophical view of reflexive behaviors
and cognitive modules working in a complementary fashion,
this research proposes a hybrid decomposition of the control
architecture for an intelligent, fully autonomous mobile
robot. This architecture follows a parallel distributed
decomposition and supports a hierarchy of control with
lower-level reflexive type behaviors working in parallel
with higher-level planning and map building modules. The
behavior-based component of the system provides the basic
instinctive competences for the robot while the cognitive
part is characterized by knowledge representations and a
reasoning mechanism which performs higher machine
intelligence functions such as planning. The interface
between the two components utilizes motivated behaviors
implemented as part of the behavior-based system. A
motivated behavior is one whose response is dictated mainly
by the internal state (or the motivation state) of the


position referencing of the robot as described in section
5.2.4.
3.3.3 Managing World Model Inconsistencies
At any point in time the robot has a global and a local
model of its environment. The local model is robot-centered
and represents the environment as perceived at the moment by
the various sensors onboard. The robot is first faced with
the challenge of constructing the local model by integrating
information from the various sensor systems with a priori
knowledge about the environment. Next it must update the
global model using the local model and its position and
orientation information. The difficult problem is to
maintain a consistent model given inaccuracies in position
measurements and in sensor data and its processing elements.
Inconsistencies between the global and the local models must
be resolved in order to avoid model degradation. Resolving
these inconsistencies should improve the accuracy of the
position and orientation information.
A variety of methods have been proposed for model
consistency checks. Crowley [Crowley 85] defines a function
named CORRESPOND that is called for every line in the sensor
model (most recent sensor data described in terms of line
segments) to be matched with every line segment in the
composite local model (an integration of recent sensor
models from different viewing angles). This function tests
the correspondence of two line segments by checking: (1) If


= E[(e^i)(e^i)^T] − E[(e^i)(e^j)^T] − E[(e^j)(e^i)^T] + E[(e^j)(e^j)^T]
= C^i − C^ij − C^ji + C^j.
If the errors e^i and e^j are independent then C^ij = C^ji = 0,
and
Cδ/H0 = C^i + C^j.
For Gaussian estimation errors the test of H0 vs. H1 is
as follows: Accept H0 if
d = (δe^ij)^T (Cδ)⁻¹ (δe^ij) < θ,
where θ is a threshold such that P{d > θ / H0} = α, and α is
a small number such as 0.05 for example.
If H0 is accepted, then pe^i and pe^j are competitive and
can thus be fused to obtain pe^ij, the combined estimate of p.
Using the standard Kalman filter equations, and letting the
prior mean of p be pe^i, we obtain the following combined
estimate and corresponding error covariance:
pe^ij = pe^i + K (pe^j − pe^i)
Cov = C^i − K C^i,
where K is the Kalman filter gain, and for independent
errors e^i and e^j is given by
K = C^i (C^i + C^j)⁻¹
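A minimal sketch of this test-then-fuse procedure for two estimates with independent errors, using numpy (names are illustrative):

```python
import numpy as np

def fuse_if_competitive(p_i, C_i, p_j, C_j, theta):
    """Sketch of the consistency test and fusion above: p_i, p_j are
    column-vector estimates with covariances C_i, C_j; 'theta' is the
    acceptance threshold chosen so that P{d > theta | H0} = alpha."""
    delta = p_i - p_j
    C_delta = C_i + C_j                        # covariance of delta under H0
    d = float(delta.T @ np.linalg.inv(C_delta) @ delta)
    if d >= theta:
        return None                            # H0 rejected: do not fuse
    K = C_i @ np.linalg.inv(C_i + C_j)         # static Kalman gain
    p_fused = p_i + K @ (p_j - p_i)
    C_fused = C_i - K @ C_i
    return p_fused, C_fused
```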


CHAPTER 2
SENSOR DATA FUSION: MANAGING UNCERTAINTY
In this chapter we concentrate on sensor fusion, an
essential issue of the traditional approach (discussed in
chapter 1) concerned with generating meaningful and
consistent interpretations (in terms of symbolic models) of
the environment being observed. We discuss sensor data
fusion techniques and methods for managing uncertainty at
various levels of abstraction, and illustrate the advantages
and disadvantages of each. Some of the techniques presented
in this chapter are used in the various knowledge sources of
the map builder module, an essential part of our hybrid
architecture. The implementation of the map builder
(including the sensor fusion techniques used) is presented
in detail in chapter 5.
2.1 Motivation
The increasing demand for robot systems to perform
complex tasks such as autonomous operation and flexible
manufacturing has spurred research aimed at upgrading robot
intelligence and capabilities. These complex tasks are often
associated with unstructured environments where information
is often uncertain, incomplete, or in error. This problem
calls for the development of a robot system capable of using


We follow a knowledge-based approach as a framework for
the map builder since the goal of a multi-sensor system such
as the map builder, is not only to combine information from
multiple sensory sources, but also to combine this sensory
data with a priori domain-dependent and domain independent
knowledge. Thus, a powerful multi-sensor system must rely on
extensive amounts of knowledge about both the domain and the
problem solving strategy effective in that domain
[Feigenbaum 77]. This knowledge is needed to compensate for
the inadequacies of low-level processing, as well as to
generate reasonable assumptions for the interpretations of
features derived from lower level sensory data. Domain-
dependent knowledge consists of knowledge about domain
objects and ways of recognizing them. This involves semantic
descriptions of objects, semantic relationships between
objects, the use of interpretation context, experimentally
derived classification functions, and knowledge about the
task and the sensor. Domain-independent knowledge involves
general principles such as perspective distortion,
occlusion, varying points of view, etc.
The challenge of multi-sensory perception requires a
flexible inference strategy that supports both forward and
backward chaining. This flexibility allows the system to
dynamically alternate between data-driven and model-driven
strategies as the situation requires. The blackboard
framework allows for this flexibility [Nii 86a,b]. In a
blackboard framework forward chaining and backward chaining




speckles gain importance. Based on these observations Lorenz
formulated his idea of the "innate releasing mechanism" that
actively selects a subset of the available sensory
information to trigger the appropriate response. This leads
us to the second part of the question posed at the beginning
of this paragraph: How do animals handle behavior conflict
where certain stimuli and motivations cause the tendency to
simultaneously perform more than one activity? The
observation is that the various behaviors of an animal
exhibit a certain organization or hierarchy [Manning 79].
Some behaviors take precedence over others, while others are
mutually exclusive. Internal state of the animal and
environmental conditions determine the switches between
behaviors. Such switches may not be all or nothing switches,
and the relationship between behaviors may be non-
hierarchical. That is, behaviors can partially overlap
making it sometimes difficult to identify direct switches
between them [Beer 90]. Survival dictates that whatever
behavioral organization exists in animals, it must support
adaptive behavior, whereby, based on past interactions with
the environment, aspects of future behavior are modified.
Anderson and Donath [Anderson 90] summarize some of the
important observations in their research regarding animal
behavior as a model for autonomous robot control:
a) To some degree all animals possess a set of innate behaviors
which allow the animal to respond to different situations.
b) The type of behavior exhibited at any given time is the result
of some internal switching mechanism.


In previous work [Bou-Ghannam 90a,b] we introduced the
Sensory Knowledge Integrator (SKI), a knowledge-based
framework for sensor data fusion. What follows in this
section is a brief review of SKI, figure 4.8. SKI organizes
the domain knowledge and provides a strategy for applying
that knowledge. This knowledge, needed to describe the
environment being observed in a meaningful manner, is
embedded in data-driven and model-driven knowledge sources
at various levels of abstraction. These knowledge sources
are modular and independent, emphasizing parallelism in the
SKI model. The knowledge sources use algorithmic procedures
or heuristic rules to transform information (observations
and hypotheses) at one level of abstraction into information
at the same or other levels. Processed sensory data in the
observations database cause the execution of the data-driven
knowledge sources while model data in the hypothesis
database cause the execution of model-driven knowledge
sources. This execution produces new data (on the
observations or hypothesis database) which in turn cause the
execution of new knowledge and the production of new data
until a high level description of the environment under
observation is incrementally reached. These high level
descriptions comprise the robot's local world model that is
continually updated by new sensor observations.
Data in the observations database range from intensity
and depth arrays at lower levels of abstraction, to lines,
edges, regions, surfaces, at intermediate levels, to objects


[Ayache 87]
Ayache, N., and Faugeras, O. D. 1987. "Building a Consistent
3D Representation of a Mobile Robot's Environment by
Combining Multiple Stereo Views." Proc. IJCAI-87, pp. 808-810.
[Beer 90]
Beer, R. D., Chiel, H. J., and Sterling, L. 1990. "A Biological
Perspective on Autonomous Agent Design." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Borenstein 87]
Borenstein, J., and Koren, Y. 1987. "Obstacle Avoidance with
Ultrasonic Sensors." IEEE Journal of Robotics and
Automation, 4(2):213-218.
[Bou-Ghannam 91]
Bou-Ghannam, A., and Doty, K. L. 1991. "A CLIPS
Implementation of a Knowledge-based Distributed Control of
an Autonomous Mobile Robot." Proc. SPIE Conference on
Applications of Artificial Intelligence IX, Orlando,
Florida, vol. 1468, pp. 504-515.
[Bou-Ghannam 90a]
Bou-Ghannam, A., and Doty, K. L. 1990. "Multi-sensor Data
Fusion: an Overview and a Proposed General Model." Technical
Report MIL-TR-90-1, EE Dept., Univ. of Florida, Gainesville.
[Bou-Ghannam 90b]
Bou-Ghannam, A., and Doty, K. L. 1990. "A General Model for
Sensor Data Fusion." Proc. 3rd Conf. on Recent Advances in
Robotics, Boca Raton, Florida, May 31-June 1.
[Braitemberg 84]
Braitemberg, V. 1984. Vehicles: Experiments in Synthetic
Psychology. MIT Press, Cambridge, MA.
[Brooks 90]
Brooks, R. A. 1990. "Elephants Don't Play Chess." In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA.
[Brooks 86a]
Brooks, R. A. 1986. "A Robust Layered Control System for a
Mobile Robot." IEEE Journal of Robotics and Automation,
RA-2(1):14-23.
[Brooks 86b]
Brooks, R. A., and Connell, J. H. 1986. "A Distributed
Control System for a Mobile Robot." In Mobile Robots, SPIE
Proceedings Vol. 727, pp. 77-84. Cambridge, MA.


intensity and duration of the stimulus. Reflexive responses
allow the animal to quickly adjust to sudden environmental
changes, and thus provide the animal with protective
behavior, postural control, and gait adaptation to uneven
terrain. Such reflexive responses are believed to be
instinctive and not learned since they have been observed in
animals which have been isolated from birth. Other reactive
types of behaviors include orientation responses where an
animal is oriented towards or away from some environmental
agent, and fixed-action patterns which are extended, largely
stereotyped responses to sensory stimulus [Beer 90].
The behaviors mentioned above are by no means solely
dependent on external stimuli. The internal state of the
animal plays an important role in the initiation,
maintenance, and modulation of a given behavior. Motivated
behaviors are those governed primarily by the internal state
of the animal with no simple or rigid dependence on external
stimuli. For example, the behavior of feeding does not only
depend on the presence of food (external stimuli) but also
upon the state of hunger (the internal motivational
variable). Thus, the behavior exhibited by an animal at a
certain moment is the one enjoying the highest motivational
potential along with the proper combination of external stimuli
detected at that moment. The motivational potential of a
motivated behavior varies with the level of arousal and
satiation. In addition, such behaviors can occur in the


driven KSs transform the global line model to the expected
local scene (i.e., what the robot should be seeing at the
moment). The match/merge KS compares the expected scene to
the observed lines (what the robot is actually seeing at the
moment) in order to update the global world model. If a
match exists, the re-reference KS uses the difference in
position and orientation of the two models to correct the
position and orientation of the robot. All the implemented
KSs will be discussed in greater detail in the following
sections.
5.2.1 The EOU Knowledge Source
This knowledge source (KS) generates the spatially-
indexed EOU representation from sonar sensor data and robot
position and orientation data. It takes advantage of all the
information available for an individual sonar scan including
the signal geometry of the sensor and its probabilistic
characteristics. The process of generating the EOU evolves
over several steps, starting by tessellating the space into
a finite grid of cells, with each cell initialized to the
'unknown' value. Then the sonar readings are analyzed, each
corresponding to a cone (figure 5.5) similar to the sound
energy burst of the sonar, which is overlaid onto the
tessellated space. At this point, cells that fall within the
cone are decremented, increasing the confidence that those
cells are in the empty state, while cells that fall at the
end of the cone are incremented (increasing the occupied
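A minimal sketch of this update for a single sonar reading follows; the 15-degree half angle, unit increments, and grid conventions are illustrative assumptions rather than the parameters of the actual EOU knowledge source:

```python
import math

def apply_sonar_reading(grid, cell_size, x0, y0, bearing, rng,
                        half_angle=math.radians(15)):
    """Sketch of one EOU update: 'grid' holds per-cell counts that start
    at 0 ('unknown'); negative counts mean more likely empty, positive
    counts more likely occupied."""
    for r, row in enumerate(grid):
        for c, _ in enumerate(row):
            cx, cy = (c + 0.5) * cell_size, (r + 0.5) * cell_size
            dist = math.hypot(cx - x0, cy - y0)
            ang = math.atan2(cy - y0, cx - x0)
            off = (ang - bearing + math.pi) % (2.0 * math.pi) - math.pi
            if abs(off) > half_angle or dist > rng + cell_size:
                continue                  # cell not touched by this cone
            if dist < rng - cell_size:
                grid[r][c] -= 1           # inside the cone: more empty
            else:
                grid[r][c] += 1           # at the cone end: more occupied
```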


z = Hx + v
This is the measurement equation for the standard Kalman
filter. Given the above assumptions, the optimal estimate of
x, given z measurements, xopt(z) is determined by minimizing
a loss or risk function. This risk function is the
optimality criterion which provides a measure of the
"goodness" of the estimate. A typical loss function is the
mean squared error given by:
L(x, xopt) = (x − xopt)^T W (x − xopt)
where W is a symmetric positive definite weighting matrix.
Now let us consider two sensor systems producing two
sets of measurements z1 and z2 of the state x. We are
interested in estimating x based on z1 and z2, that is,
computing xopt(z1, z2). In the general case, one cannot
compute xopt(z1, z2) based on the separate estimates xopt(z1)
and xopt(z2) of sensors 1 and 2, respectively [Richardson 88].
However, this is possible in the special case of a linear
Gaussian measurement model. For more information on
estimation in multi-sensor systems the reader is referred to
Willner et al. [Willner 76] and Richardson and Marsh
[Richardson 88].
The major limitations of the above mentioned methods
stem from the assumption that f(x) is known a priori. This
requires a large number of experiments to be performed on


Nature provides us with a variety of examples of animal
behavior as they successfully interact with their
environment in order to survive. Animals survive by
possessing the ability to feed, avoid predators, reproduce,
etc.. It is believed that animals survive due to a
combination of inherited instinctive responses to certain
environmental situations, and the ability to adapt to new
situations. Ethologists who study animal behavior in their
natural habitat, view animal behavior as largely a result of
the innate responses to certain environmental stimuli.
Behavioral psychologists, on the other hand, study animal
behavior under controlled laboratory settings, and believe
that animal behavior is mainly learned and not an innate
response. Observations and experiments by [Manning 79]
support the existence of both learned and innate behaviors
in animals. Animals with a short life span and small body
size such as insects seem to depend mostly on innate
behaviors for interacting with the environment, while
animals with a longer life span and larger body size
(capable of supporting large amounts of brain tissue
required for learning capacity) seem to develop learned
behavior.
Reflexive behavior is perhaps the simplest form of
animal behavior. A reflexive behavior is defined as having a
stereotyped response triggered by a certain class of
environmental stimuli. The intensity and duration of the
response of a reflexive behavior depend only on the


and their relationships at higher levels of abstraction. The
partitioning of the data into application dependent
hierarchies or levels of abstraction is essential because it
makes it easy to modularize the knowledge base. Thus,
certain knowledge sources become associated with a certain
level of data abstraction and could only be triggered by
data on that level of abstraction. This eliminates the need
to match all the knowledge sources to all the data. The
hypothesis database contains hypothesized high level goals
with hypothesized sub-goals derived from these goals in a
backward (top-down) reasoning scheme. The sub-goals are
matched to facts at lower levels of abstraction to assert
the validity of the hypothesis.
The control module handles the conflict resolution
among knowledge sources and thus determines what knowledge
source or group of knowledge sources to apply next. The
control module monitors the changes in the observations/
hypothesis database along with the potential contributions
of the related knowledge sources in the knowledge base and
determines the next processing steps or actions to pursue.
In other words, the control module determines the focus of
attention of the system. It contains knowledge about the
"big picture" of the solution space and, hence, can resolve
conflict among knowledge sources triggered by the current
situation data. We implement the control module in CLIPS.
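Although the actual control module is a CLIPS rule base, its focus-of-attention cycle can be sketched procedurally as follows; the class, attribute, and function names are illustrative assumptions:

```python
class KnowledgeSource:
    """Minimal stand-in for a KS: 'triggered' and 'rating' encode its
    activation condition and potential contribution."""
    def __init__(self, name, triggered, rating, execute):
        self.name = name
        self.triggered = triggered    # callable: blackboard -> bool
        self.rating = rating          # callable: blackboard -> number
        self.execute = execute        # callable: posts new data

def control_cycle(blackboard, knowledge_sources):
    """One focus-of-attention cycle: fire the best-rated triggered KS,
    which posts new observations or hypotheses onto the blackboard."""
    triggered = [ks for ks in knowledge_sources if ks.triggered(blackboard)]
    if not triggered:
        return False                  # nothing to do this cycle
    best = max(triggered, key=lambda ks: ks.rating(blackboard))
    best.execute(blackboard)
    return True
```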
To clarify the theoretical concepts introduced above,
section 5.2 of the next chapter discusses the implementation


Figure 6.2c After the fifth scan.
Figure 6.2 EOU knowledge source in action: Building the
EOU representation.
Figure 6.3 EOU representation with about 500 data points.


builder module, exploits a priori knowledge about the
environment such as objects to be encountered or
manipulated. The a priori knowledge gives the robot an idea
about its relationship to the world and allows it to
efficiently use its resources.
The architecture for planning and control of a mobile
robot should be a synthesis of the traditional serial
approach and the parallel behavior-based approach. This
architecture consists of a hierarchy of control in which
lower level modules perform "reflexive" tasks while higher
level modules perform tasks requiring greater processing of
sensor data. We call our architecture a hybrid one because
it includes a cognitive component and a behavior-based
component, figure 3.2. A robot controlled by such a hybrid
architecture gains the real-time performance of a behavior-
based system while maintaining the effectiveness and goal
handling capabilities of a planner with a general purpose
world model. The basic instinctive competences for the robot
such as avoiding obstacles, maintaining balance, wandering,
moving forward, etc., are provided by the behavior-based
component of the system, while the cognitive part performs
higher mental functions such as planning. The higher the
competence level of the behavior-based system, the simpler
the planning activity. Motivated behaviors implemented as
part of the behavior-based system, and the associated
motivation state, form the interface between the two
components. A motivated behavior is triggered mainly by the


Figure 5.7 (ρ, α) representation of a 2-D line.
Figure 5.8 The parameter space grid.


4.1 The Planning Module: A Knowledge-Based Approach
The planning module performs reasoning and task
planning to accomplish user-specific tasks, such as locating
all green drums in a warehouse. In order to
accomplish such tasks, the planning module performs
various tasks including map-based planning (such as route
planning) and behavior reconfiguration. For example, in our
implementation of the planning module, discussed in detail
in section 5.1, the task is to efficiently map the
environment without crashing into things. Thus our planning
module performs some simple map-based planning functions,
but deals mainly with behavior arbitration. The arbitration
strategy that effects the behavior reconfiguration is
embedded into a set of production rules in the planning
module. In determining the current behavior arbitration, the
arbitration rules utilize knowledge about the goals, the
environment, the individual behaviors, the current situation
status, etc., and thus create a flexible, goal-driven
arbitration strategy that is useful for a variety of
missions in different types of domains. We will learn more
about arbitration and the arbitration network in section
4.3. It is important to note that behavior reconfiguration
is only used when the behavior-based system encounters
difficulties such as a trap situation as described in
section 3.4. Various trap situations analogous to the "fly-
at-the-window" situation can be detected in the map builder
module, and recovered from or avoided using heuristic rules




dedicated to the map builder implementation, and an IRIS
2400 in the Nuclear Engineering Robots Lab (where the robot
and its controller are physically located) performed the
lower-level behavior tasks. The latter workstation
communicated to the robot's dedicated PC controller via an
RS-232 serial link, and to the sonar sensors system via
another RS-232. The robot itself communicated to its PC
controller via a 2400 baud radio link. Later experiments
consolidated the functions of planning, map building, and
lower-level behaviors on one IRIS 4D workstation in the
Nuclear Engineering lab.
Figure 5.1 System setup.


Our approach for behavior arbitration implements simple
binary all-or-nothing switches in the arbitration network,
with the arbitration control strategy incorporated into a
set of production rules, figure 4.5. In our implementation,
the rules reside in the planning module running under CLIPS,
a knowledge-based systems shell. CLIPS implementation of the
arbitration strategy is discussed in section 5.1. We believe
that by encapsulating the arbitration strategy in a set of
rules under the control of the cognitive system, a robot
useful for a wide range of missions can be created. The
rules incorporate knowledge about the goals, the individual
behaviors, the environment, and the current situation
status. Note that the real-time performance of reactive
control is still maintained, since the behavior-based system
is always enabled. The arbitration rules in the cognitive
system reconfigure the arbitration switches of the behavior-
based system depending on the goal at hand, and the current
situation. For example, in our experiments, we initially
configure the behavior-based system into an "explore" mode
allowing the robot to wander around without bumping into
things. After the initial configuration, the cognitive
system leaves the behavior-based system alone. Later, when a
knowledge source in the map builder module discovers large
boundaries between the empty and the unknown areas of the
occupancy grid representation being constructed, and that
such boundaries are not being traversed by the robot, this
information is made available to the arbitration rules in
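The boundary check itself can be sketched as a scan for empty cells adjacent to unknown cells; the cell-label conventions below (negative counts empty, zero unknown, as in the EOU sketch earlier) are illustrative assumptions:

```python
def boundary_cells(grid):
    """Sketch of the boundary discovery described above: collect empty
    cells that touch unknown cells; large untraversed runs of such
    cells can prompt the arbitration rules to redirect the robot."""
    rows, cols = len(grid), len(grid[0])
    cells = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= 0:
                continue                      # only empty cells qualify
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0:
                    cells.append((r, c))      # empty cell touching unknown
                    break
    return cells
```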


obstacle avoidance, while in good lighting conditions both
vision and sonar sensing modalities could be employed.
Independent sensor data interaction occurs in the natural
world where redundant sensors are abundant. For example,
pigeons have more than four independent orientation sensing
systems that do not seem to be combined; rather, depending
on the environmental conditions, the data from one sensory
subsystem tends to dominate [Kriethen 83].
2.4 Levels of Abstraction of Sensor Data
Levels of abstraction of sensor data are application
and task dependent. These levels of abstraction vary from
the signal level (lowest level) where the raw response of
the sensor is present, to the symbolic level (highest level)
where symbolic descriptions of the environment exist for use
by the planning subsystem. This model is based upon
psychological theories of human perceptual system that
suggest a collection of processors that are hierarchically
structured and modular [Fodor 83]. These processors create a
series of successively more abstract representations of the
world ranging from the low-level transducer outputs to the
highly abstract representations available to the cognitive
system. Thus, in order to bridge the wide gap between raw
sensory data and understanding of what those data mean, a
variety of intermediate representations are used. These
representations make various kinds of knowledge explicit and
expose various kinds of constraints [Winston 84]. For


CHAPTER 7
SUMMARY AND CONCLUSIONS
The experimental setup developed during the course of
this research provides an excellent testbed for future
research in autonomous systems design. Within this testbed,
the addition of new behavioral modules is easily achieved
due to the flexibility and modularity of the proposed hybrid
control architecture. Moreover, the modular distributed
approach followed by the proposed Sensory Knowledge
Integrator, allows for easy addition and integration of new
sensors. The testbed is general enough to accommodate
various experiments associated with a wide variety of
research problems in the relatively new field of autonomous
systems design. Such research areas include adaptive
behavior arbitration and learning, explicit goal-driven
reactive systems, and methodologies and representational
structures for the interface between cognitive and reactive
systems. The classical problems of sensor data fusion and
consistent world modeling are still important, but, equally
important is how the knowledge accumulated in such world
models can be cleverly brought to bear on the behavior-based
subsystem to produce useful, efficient, and robust behavior.
Behavior-based systems follow a parallel decomposition
of robot control with many parallel stimulus-response type








through the environment. The next section discusses in
detail the advantages and limitations of the subsumption
architecture.
[Payton 86] also follows a behavior-based
decomposition, and describes a collection of reflexive
strategies in a hierarchy of control, all competing for the
control of the vehicle. The winning behavior is determined
by a winner-take-all arbitration mechanism. Later work by
[Payton 90] describes methods of compiling world knowledge
such as mission constraints, maps, landmarks, etc., into a
form of "internalized plans" that would have maximal utility
for guiding the action of the vehicle. He proposes a
gradient description to implicitly represent the
"internalized plans". Using this representational technique,
a priori knowledge such as a map can be treated by the
behavior-based system as if it were sensor data.
[Arkin 87] proposes a schema-based approach to the
navigation of a mobile robot. His motor schemas are
processes that run concurrently and independently, each
operating in conjunction with its associated perceptual
schemas. No arbitration mechanism is required, instead the
outputs of the various motor schemas are mapped into a
potential field and combined to produce the resultant
heading and velocity of the robot. Arkin demonstrates that
strategies for path following and obstacle avoidance can be
implemented with potential field methods by assigning


5.2.4.2 The Re-reference Knowledge Source
This KS corrects the position and orientation of the
robot. The error in position and orientation is caused by
the odometric sensors (wheel and shaft encoders) due to
wheel slippage and uneven weight distribution on the robot.
This error accumulates as the robot travels, and if not kept
in check will cause degradation in the world model
consistency. The steps performed by this knowledge source
are as follows:
Step 1. For all merged line segments, find the average
orientation error between merged line segments and observed
segments:
Δθ = (1/n) Σ [θnew(i) − θo(i)] for i = 1 to n
Step 2. For all consistent pairs of line segments, find the
average error in translation:
Δp = (1/n) Σ [pm(i) − R·po(i)] for i = 1 to n
where
Δp = [Δx Δy]^T
pm = [xrm yrm]^T
po = [xro yro]^T
R = [cos(Δθ) −sin(Δθ); sin(Δθ) cos(Δθ)]
A good choice of a reference point to determine the
translation error Δp is a corner, i.e., the intersection of
two line segments, where pm is a selected model reference
point and po is the corresponding observed reference point.
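A minimal sketch of this two-step correction, assuming the matched observed/model segments are supplied as (theta, x, y) triples of a segment's orientation and a reference point on it (names are illustrative):

```python
import math

def re_reference(pairs):
    """Sketch of the correction above: 'pairs' holds matched
    (observed, model) entries, each a (theta, x, y) triple."""
    n = len(pairs)
    # average orientation error Delta-theta
    d_theta = sum(m[0] - o[0] for o, m in pairs) / n
    c, s = math.cos(d_theta), math.sin(d_theta)
    # average translation error Delta-p = mean(p_m - R p_o)
    dx = sum(m[1] - (c * o[1] - s * o[2]) for o, m in pairs) / n
    dy = sum(m[2] - (s * o[1] + c * o[2]) for o, m in pairs) / n
    return d_theta, dx, dy    # apply to the robot's pose estimate
```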


BF(A) = Σ m(B), summed over all B ⊆ A
and satisfies the following:
1. BF(∅) = 0
2. BF(Θ) = 1
3. BF(A) + BF(~A) ≤ 1
The Dempster/Shafer theory is based on Shafer's
representation of belief and Dempster's rule of combination.
The Shafer representation expresses the belief in a
proposition A by the evidential interval [BF(A), P(A)],
where BF(A) denotes the support for a proposition and sets a
minimum value for its likelihood, while P(A) denotes the
plausibility of that proposition and establishes its maximum
likelihood. P(A) is equivalent to 1 − BF(~A), the degree with
which one fails to doubt 'A'. The uncertainty of 'A',
u(A) = P(A) − BF(A), is thus implicitly represented in the
interval [BF(A), P(A)]. Dempster's rule of combination
is a method for integrating distinct bodies of evidence. To
combine the beliefs of two knowledge sources, suppose for
example that knowledge source 1 (KS1) commits exactly m1(A)
as a portion of its belief for proposition 'A', while KS2
commits m2(B) to proposition 'B'. Note that both 'A' and 'B'
are subsets of Θ, the frame of discernment. If we are

level modules provide the cognitive function. Note that
action takes place only through the behavior-based
subsystem. When needed, the cognitive (planning) module
effects action by reconfiguring the behavior-based system.
Reconfiguration involves arbitration, and changing the
motivation state of the behavior-based subsystem, as will
become clear later in this chapter. We call this
decomposition a hybrid parallel/serial decomposition because
even though the whole system follows a parallel layered
decomposition, the higher level modules (namely the map
builder and planning modules) follow somewhat the
traditional serial approach of sensing, modelling, and
planning before task execution or actuation occurs. However,
unlike the traditional approach, the planning module does
not have to wait on the map builder for a highly processed
representation of the environment before it can effect
action. Instead, based on its current knowledge and the
status provided by the lower level behaviors, the planning
module can select from a variety of lower level behaviors or
actions. Thus, unlike the subsumption architecture proposed
by [Brooks 86a] where any behavior subsumes the function of
all the behaviors below it (by inhibiting their outputs or
suppressing their inputs), in our implementation the
arbitration strategy is incorporated in a set of rules in
the planning module. In what follows we discuss the
individual blocks of our proposed architecture starting with
the planning module.


steps can be arbitrarily interleaved. In addition, the many
knowledge sources have continual access to the current state
of the blackboard, and thus, can contribute
opportunistically by applying the right knowledge at the
right time.
Our proposed Sensory Knowledge Integrator [Bou-Ghannam
90b] follows a blackboard framework. The Sensory Knowledge
Integrator is described in detail in a technical report
[Bou-Ghannam 90a]. Its highlights will be discussed in the
remainder of this section.
An intelligent multi-sensor system maintains an
internal description of the world which represents its "best
guess" about the external world. This world model is built
using sensory input and a priori knowledge about the
environment, the sensors, and the task. Thus, the problem of
constructing a world model representation involves two types
of information fusion: 1) Fusion of information from
multiple sensory sources, and 2) Fusion of sensory data with
a priori knowledge and object models [Kent 87]. This
representation of the world consists of both a spatial and
an object or feature-based representation. The spatial
representation classifies the world space as occupied,
empty, or unknown, and explicitly represents spatial
relationships between objects. The feature-based
representation associates each object with the set of
features that verifies the identity of the object.
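A minimal sketch of such a dual representation as a data structure (field and method names are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Sketch of the dual representation: a spatial occupancy grid
    (occupied/empty/unknown cells) plus feature-based object entries."""
    grid: list                                    # spatial representation
    objects: dict = field(default_factory=dict)   # object id -> feature set

    def add_object(self, oid, features):
        self.objects[oid] = set(features)         # features verify identity
```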


ρk = [Σ(wi·ρi)] / [Σwi]
where wi = A[xi, yi]
Thus, we obtain a line for every cluster. The
parameters of the line (ρ, α) are weighted by the support of
each point in the original Hough grid.
Step 6. Group the raw data points as belonging to the lines:
For every raw data point find the perpendicular
distance to each of the lines obtained in step 5. This is
easy to accomplish since the lines are represented in the
normal form; the perpendicular distance dik from a point
(xi, yi) to line k is given by:
dik = |xi·cos(αk) + yi·sin(αk) − ρk|
If dik < threshold, then (xi, yi) belongs to line k and is
thus stored in group k of points. The threshold value used
represents the line uncertainty σp, defined as the largest
distance from a point admitted to the line. This uncertainty
σp is used later in the consistency checking knowledge
sources.
Step 7. Sequentially order the points in the line groups
obtained above:
This is done by first projecting every point in the
line group onto the line, and then ordering the points
according to the x-coordinate of the projections so that the
group starts with points of minimum x-coordinates, followed
by points of incrementally increasing x-coordinate. If the
line is close to vertical, the points are ordered according
to the y-coordinate of the projections.
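A minimal sketch of the grouping step, assuming the extracted lines are given in (ρk, αk) normal form (names are illustrative):

```python
import math

def group_points(points, lines, sigma_p):
    """Sketch of step 6: assign each raw point (x, y) to every line
    whose perpendicular distance is below the uncertainty threshold."""
    groups = [[] for _ in lines]
    for (x, y) in points:
        for k, (rho_k, alpha_k) in enumerate(lines):
            dist = abs(x * math.cos(alpha_k) + y * math.sin(alpha_k) - rho_k)
            if dist < sigma_p:
                groups[k].append((x, y))
    # step 7 would then order each group by the projections onto its line
    return groups
```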


avoiding obstacles and maintaining balance, while motivated
behaviors execute goal-driven tasks triggered by the
associated motivation state set by the planning module in an
effort to bias the response of the behavior-based system
towards achieving the overall mission. Such motivated tasks
include moving to a specified target location as in the
"target-nav" behavior.
In addition to providing the specific task-achieving
functionality, the behaviors also serve as abstraction
devices by providing status information to the planning
module. Such status information includes error status
variables and operating condition variables. The error
status variables indicate errors such as a robot
communications error or a sonar data error. The operating
condition variables represent a behavior's operating
conditions, such as "target-reached" or "break-detected"
for example.
Some of the behaviors we are interested in include
"avoid-obstacles", "wander", "target-nav", "boundary-
follower", and "path follower".
4.2.1 Avoid Obstacles Behavior
This behavior uses sonar information from a ring of 12
sonar sensors placed around the robot. Each sonar hit (range
data) is modeled as the site of a repulsive force whose
magnitude decays as the square of range reading (distance to
the obstacle) of that sonar [Khatib 85], figure 4.4. The
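A minimal sketch of this repulsive-force computation for the 12-sonar ring; the gain constant and the treatment of missing echoes are illustrative assumptions:

```python
import math

def avoid_heading(sonar_ranges, gain=1.0):
    """Sketch of the repulsive-force sum: each sonar hit (12 sensors,
    30 degrees apart) repels the robot with a magnitude that decays as
    the square of the range reading."""
    fx = fy = 0.0
    for i, rng in enumerate(sonar_ranges):
        if rng is None or rng <= 0:
            continue                            # no valid echo
        bearing = math.radians(30 * i)          # direction of this sensor
        mag = gain / (rng * rng)                # decays as range squared
        fx -= mag * math.cos(bearing)           # push away from the hit
        fy -= mag * math.sin(bearing)
    return math.atan2(fy, fx)                   # resultant repulsive heading
```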




reactions to external stimuli. These reactions are
apparently due to animal instinct and are not a consequence
of sophisticated processing or reasoning. Braitemberg
[Braitemberg 84] elaborates on the observation that animal
behavior could be an outcome of simple primitive behaviors
and such behavior could be reproduced in mobile robots whose
motors are driven directly by the output of the appropriate
sensors. Rodney Brooks' introduction of the subsumption
architecture [Brooks 86a] has given the behavior-based
approach a great push forward and has forced researchers in
the robotics community to reexamine their methods and basic
philosophy of robot control architecture. For example,
researchers have always taken for granted that a robot needs
to model its environment. Now, alerted by the main thesis of
the behavior-based approach of no global internal model and
no global planning activity, they ask questions of why and
for what specific tasks does one need to model the
environment. Brooks' subsumption architecture uses a menu of
primitive behaviors such as avoid obstacles, wander, track
prey, etc., each acting as an individual intelligence that
competes for control of the robot. There is no central brain
that chooses and combines these simple behaviors, instead,
the robot sensors and what they detect at that particular
moment determine the winning behavior. All other behaviors
at that point are temporarily subsumed. Surprisingly, the
conduct of Brooks' brainless "insect" robots often seems
clever. The simple responses end up working together in


The certainty factor approaches 1 as the evidence for a
hypothesis becomes stronger, with 1 indicating absolute
truth. As the evidence against the hypothesis gets stronger
the certainty factor approaches -1, with -1 indicating
absolute denial. A certainty factor around 0 indicates that
there is little evidence for or against the hypothesis. To
combine the certainty factors of different hypothesis, the
following rules apply:
CF(HI AND H2) = MIN[CF(H1), CF(H2)]
CF(HI OR H2) = MAX[CF(HI), CF(H2)]
Another problem is how to compute the certainty factor of a
conclusion based on uncertain premises. That is, if P
implies Q with a certainty factor of CF1, and CF(P) is
given, then CF(Q) = CF(P)·CF1. A further issue is how to
combine the evidence when two or more rules produce the same
result.
Assume that result Q produced by rule R1 has a certainty
factor CF(R1) and that rule R2 also produced Q with a
certainty factor CF(R2), then the resulting certainty factor
of Q, CF(Q), is calculated as follows:
1. When CF(R1) and CF(R2) are positive,
CF(Q) = CF(R1) + CF(R2) − CF(R1)·CF(R2)
2. When CF(R1) and CF(R2) are negative,
CF(Q) = CF(R1) + CF(R2) + CF(R1)·CF(R2)
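A sketch of these combination rules follows; note that the mixed-sign case is not given in the text above, and the MYCIN-style branch used here is an assumption:

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same conclusion, following
    the same-sign rules above; the mixed-sign branch is the common
    MYCIN-style variant and is an assumption."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 - cf1 * cf2
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 + cf1 * cf2
    return (cf1 + cf2) / (1.0 - min(abs(cf1), abs(cf2)))
```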


1.2 Contributions of the Research
We see the contributions of this work as:
1. The development and implementation of a hybrid control
architecture that combines both traditional and behavior-
based approaches.
2. The development and implementation of the Sensory
Knowledge Integrator framework which provides a parallel
distributed model for sensor data fusion and consistent
world modeling.
3. The development and implementation of a new approach for
consistent world modeling. This approach involves the
interactive use of the occupancy grid and the 2-D line
representations for filtering out unsupported raw input data
points to the line-finder knowledge source and thus
providing a better 2-D line representation.
4. A fast algorithm for position referencing of a mobile
platform using the 2-D line representation.
5. Addressing the important question of whether combining
the behavior-based and the traditional approaches provides
better performance.


c) Complex behavior can occur as the result of the sequential
application of different sets of primitive behaviors with the
consequence of a given behavior acting as a mechanism which
triggers the next one.
d) Simple reflex types of behavior occur independent of
environmental factors and provide the animal with a set of
protective behaviors.
e) Activation of more complex types of behavior typically depend
upon external and internal constraints.
f) Animals typically only respond to a small subset of the total
amount of sensory information available to them at any given time.
Animals have developed specialized types of detectors which allow
them to detect specific events.
g) Behavior is often organized hierarchically with complex
behavioral patterns resulting from the integration of simpler
behavioral patterns.
h) Conflicting behaviors can occur in animals. These will require
either a method of arbitration between such behaviors or the
activation of alternate behaviors. Pages 151-152.
3.1.2 Current behavior-based approaches to robot autonomy
[Brooks 86a] follows a behavior-based decomposition and
proposes the famous subsumption architecture for behavior
arbitration. The main idea is that higher-level layers or
behaviors override (subsume) lower-level ones by inhibiting
their outputs or suppressing their inputs. His subsumption
architecture has been used in a variety of robots [Brooks
90], and has proved robust in dynamic environments. Most of
his robots are designed to be "artificial insects" with a
deliberate avoidance of map or model building. Brooks' idea
is that "the world is its own best model", and intelligent
action is the outcome of many simple behaviors working
concurrently and coordinated through the context of the
world. There is no explicit representation of goals or
plans, rather, the goals are implicitly designed into the
system by the pre-determined interactions between behaviors


[Draper 88]
Draper, B. A., Collins, R. T., Brolio, J., Hanson, A. R.,
and Riseman, E. M. 1988. "Issues in the Development of a
Blackboard-Based Schema System for Image Understanding." In
Blackboard Systems, R. Engelmore and T. Morgan, eds.
Addison-Wesley, Reading, MA, pp. 189-218.
[Durrant-Whyte 86a]
Durrant-Whyte, H. F. 1986. "Consistent Integration and
Propagation of Disparate Sensor Observations." Proc. IEEE
Int'l Conf. on Robotics and Automation, pp. 1464-1469.
[Durrant-Whyte 86b]
Durrant-Whyte, H. F., and Bajcsy, R. 1986. "Using a
Blackboard Architecture to Integrate Disparate Sensor
Observations." DARPA Workshop on Blackboard Systems for
Robot Perception and Control, Pittsburgh, PA.
[Elfs 89]
Elfs, A. 1989. "Using Occupancy Grids for Mobile Robot
Perception and Navigation." Computer, 22(6):46-57.
[Elfs 87]
Elfs, A. 1987. "Sonar-Based Real-World Mapping and
Navigation." IEEE Journal of Robotics and Automation,
RA-3(3):249-265.
[Feigenbaum 77]
Feigenbaum, E. A. 1977. "The Art of Artificial Intelligence:
Themes and Case Studies of Knowledge Engineering."
Proceedings of the Fifth International Joint Conference on
Artificial Intelligence (IJCAI 77), pp. 1014-1029.
[Flynn 88]
Flynn, A. M. 1988. "Combining Sonar and Infrared Sensors for
Mobile Robot Navigation." Int'l J. Robotics Res. 7(6):5-14.
[Fodor 83]
Fodor, J. 1983. The Modularity of Mind. MIT Press,
Cambridge, MA.
[Garvey 82]
Garvey, T. D., Lowrance, J. D., and Fischler, M. A. 1982.
"An Inference Technique for Integrating Knowledge from
Disparate Sources." Proc. of the 7th Int'l Joint Conf. on
Artificial Intelligence, pp. 319-325.
[Giarratano 89]
Giarratano, J. C. 1989. CLIPS User's Guide. Artificial
Intelligence Center, Lyndon B. Johnson Space Center.
Distributed by COSMIC, the University of Georgia, Athens,
GA.


5.1 with the computational load distributed over three
Silicon Graphics workstations, a personal computer dedicated
to the control of the mobile robot, and a sonar sensor
system with 12 sonar sensors arranged in a circle 30 degrees
apart on top of the mobile robot. Figure 5.2a shows a photo
of the robot with the sonar sensor arrangement on top, while
figure 5.2b shows a graphical simulation of the robot as it
appears on the displays of the graphics workstations. The
graphically simulated robot is a dynamic entity that follows
in real-time and great detail the motion of the actual robot
allowing a user to remotely monitor the activity of the
robot from any of the workstations. The graphical simulation
of the robot is a graphical object that can be manipulated
in different ways such as enlarging or shrinking its size,
moving or rotating it, or changing the viewing angle.
Each of the three workstations was physically located
in a different building and networked using a shared memory
networking software called HELIX (Shared Memory Emulation
System for Heterogeneous Multicomputing). HELIX was
developed at the Center for Engineering Systems Advanced
Research (CESAR) at Oak Ridge National Laboratory. A
description of HELIX is beyond the scope of this work,
and the reader is referred to a technical report by [Heywood
89]. The computational tasks were divided as follows: a
Personal IRIS workstation at the Machine Intelligence Lab in
Weil hall was dedicated to the CLIPS-based planning module,
an IRIS 4D workstation in Mechanical Engineering was


emergent behavior is used in totally unknown environments
for random wandering. This is also useful for leading the
robot out of trap situations.
4.2.3 Boundary following behavior
This behavior uses sonar scan data to determine the
direction for safe wandering or the nav-vector as we call
it. In this behavior mode the robot keeps moving forward as
long as the range of the forward pointing sonar is larger
than a certain threshold, and the range values of the rest
of the sonar sensors are also larger than their respective
thresholds. The range data is read dynamically from the set
of 12 onboard sonar sensors that form a ring around the robot.
When the conditions for a clear (no obstacle in the way)
forward motion are no longer satisfied, this behavior
determines the next clear nav-vector from the latest sonar
scan. Usually, many nav-vectors are valid and the one
closest to a forward direction is chosen, thus minimizing
the degree of turns for the robot. This also has the effect
of following the boundary of the indoor environment such as
walls. The algorithm involved is straightforward and fast, and
utilizes sonar range data directly with minimal processing
of that data. This behavior could also be used as a
generalized wandering behavior for map building. Note that
while this behavior is active, the map builder is busy
(concurrently) assimilating sensor data from the various
locations visited by the robot.
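The selection logic just described can be summarized in a
short C sketch. The threshold values, the sensor-indexing
convention, and the function names below are illustrative
only and are not taken from the robot's actual software:

#include <stdlib.h>   /* abs() */

#define NSONAR       12      /* sonar ring: one sensor every 30 degrees */
#define FWD_THRESH  100.0    /* clearance (cm) needed straight ahead    */
#define SIDE_THRESH  40.0    /* clearance (cm) needed on the others     */

/* Sensor 0 points forward; sensor i points i*30 degrees
   counterclockwise from the heading.                        */
int forward_clear(const double range[NSONAR])
{
    int i;
    if (range[0] <= FWD_THRESH)
        return 0;
    for (i = 1; i < NSONAR; i++)
        if (range[i] <= SIDE_THRESH)
            return 0;
    return 1;
}

/* Signed minimal turn (degrees) implied by heading along sensor i. */
static int turn_deg(int i)
{
    int a = i * 30;
    return (a > 180) ? a - 360 : a;    /* e.g., sensor 11 -> -30 degrees */
}

/* Pick the clear nav-vector closest to straight ahead; returns the
   turn in degrees, or -999 when no direction is clear.             */
int pick_nav_vector(const double range[NSONAR])
{
    int i, best = -1;
    for (i = 0; i < NSONAR; i++)
        if (range[i] > FWD_THRESH &&
            (best < 0 || abs(turn_deg(i)) < abs(turn_deg(best))))
            best = i;
    return (best < 0) ? -999 : turn_deg(best);
}

Calling pick_nav_vector whenever forward_clear fails yields
the minimal-turn heading described above.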


uncertainty about the observed environment. However, in
addition to the fusion of information from multiple sensory
sources, the problem of generating an accurate world model
representation involves the fusion of sensory data with
object models and a priori knowledge about the environment.
2.2 Sensors and the Sensing Process
The field of robotic sensor design is rapidly growing
and undergoing a great deal of research activity. A variety
of sensors are available for robotic applications. These
include TV cameras, infrared cameras, ranging devices such
as acoustic, infrared, and laser range finders, touch
sensors, proximity sensors, force/torque sensors,
temperature sensors, etc. An assessment of robotic sensors is
presented in [Nitzan 81]. Nitzan defines sensing as "the
translation of relevant physical properties of surface and
volume elements into the information required for a given
application." [Nitzan 81, p. 2]. Thus, physical properties
such as optical, mechanical, electrical, magnetic, and
temperature properties are translated by the sensing process
into the information required for the specific application.
For example, a parts inspection application might require
information about dimensions, weights, defect labeling,
etc. The basic steps of sensing are shown in the block
diagram of figure 1.1 (from [Nitzan 81]).


around obstacles placed in its path. Keep in mind that some
topologies of the environment will trap the robot in
unproductive cyclic maneuvers when operating under the
target-nav behavior.
4.3 Arbitration Network
Behavior-based systems require some mechanism for
combining or arbitrating the outputs of the different
behaviors. In the case of a behavior-based mobile robot,
each behavior generates a heading for the robot, but the
robot can only accept one heading at a time. The resultant
or winning heading is determined by the arbitration
mechanism. One method of arbitration is the subsumption
architecture [Brooks 86a] where higher-level behaviors
subsume lower-level ones by suppressing their inputs or
inhibiting their outputs. Arbitration, however, can be
achieved in many ways. The simplest method uses a priority
list where the behavior with the highest priority on the
list gets control of the actuators. Another method
implements a strategy where once a certain behavior gets
control, it remains in control as long as it is active
regardless of other behaviors. Once the stimulus for
activating the controlling behavior disappears, the behavior
releases control of the actuators. Other strategies involve
combining the outputs of the various behaviors in some kind
of formulation, such as the potential field method [Arkin
87].
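For concreteness, the simplest of these strategies, the fixed
priority list, can be written in a few lines of C. The
structure and names here are a hypothetical sketch, not the
arbitration network used in our implementation:

typedef struct {
    int    active;     /* 1 if the behavior currently has a response */
    double heading;    /* heading (degrees) the behavior votes for   */
} response_t;

/* behaviors[] is ordered by priority: index 0 is the highest.
   The first active behavior wins control of the actuators.      */
int arbitrate(const response_t behaviors[], int n, double *heading_out)
{
    int i;
    for (i = 0; i < n; i++)
        if (behaviors[i].active) {
            *heading_out = behaviors[i].heading;
            return i;                 /* index of the winning behavior */
        }
    return -1;                        /* no behavior is active         */
}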


(defrule active-target
?mode <- (mode end-wander-scan)
(position-index ?num)
?f <- (position-index ?num frontier-center ?xt ?yt)
=>
(retract ?mode ?f)
(assert (target ?xt ?yt))
(behavior-select "target-nav" "CN" ?xt ?yt) )
This rule named "active-target" enables the target-nav
behavior whenever in the end-wander-scan mode, and a
frontier center at some position (xt, yt) has been asserted.
The frontier center is detected by the map builder module
and refers to the center of the frontier between empty and
unknown space in the spatially indexed map. This center now
acts as a target for the robot to go to and start
discovering new unmapped areas. In the last statement of our
sample rule above, the target-nav behavior is enabled by
calling the user-defined CLIPS external function called
"behavior-select" and passing it the name of the behavior to
be enabled (target-nav) and the associated parameters (xt,
yt). Next we discuss the powerful concept of user-defined
external functions used in CLIPS.
A1.3 CLIPS User-Defined External Functions
Functions that are not predefined in CLIPS but are required
to perform some special action can be defined by the user.
This greatly extends the capabilities of CLIPS and improves
its efficiency since not all types of operations are well-
suited to expert systems. An external function call allows a
temporary exit from CLIPS to perform a certain operation in


resulting grid were all initialized with a zero value. That
is, A[i, j] = 0 for all i and j.
Step 2. Perform the Hough transform:
For every sonar data point (xk, yk) where k = 0 to N,
the total number of data points, we find:
pki = xk·cos(ai) + yk·sin(ai)
for each of the allowed subdivisions in a; that is, ai = 0 to
359 degrees. Then, if pmin ≤ pki ≤ pmax, we increment the
corresponding accumulator cell:
A[(int)(ai/Δa), (int)(pki/Δp)] =
A[(int)(ai/Δa), (int)(pki/Δp)] + 1
Step 3. Filter out poorly supported lines:
In this step only the heavily supported lines in the
A[i, j] grid are retained. The test is as follows: For all i
and j, if A[i, j] > Threshold, then xk = i and yk = j, and
point Sk = (xk, yk) is saved as the coordinates of a heavily
supported line in the parameter space. The index k acts as a
counter for the heavily supported lines. Assume we obtain
'n' such lines.
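A minimal C sketch of the accumulation loop of Step 2
follows; the grid depth NP and cell width DP are assumed
values chosen for illustration (the text leaves them to the
implementation), and pmin is assumed non-negative so the
cell index stays in range:

#include <math.h>

#define PI  3.14159265358979
#define NA  360        /* 1-degree angular cells (Δa = 1 degree)     */
#define NP  200        /* number of rho cells -- an assumed value    */
#define DP  5.0        /* rho cell width Δp (cm) -- an assumed value */

static int A[NA][NP];  /* accumulator grid, zero-initialized (Step 1) */

/* Step 2: each data point (x[k], y[k]) votes for every (a, p) line
   it could lie on; pmin and pmax bound the usable rho range, and
   pmax/DP is assumed to stay below NP.                             */
void hough_accumulate(const double *x, const double *y, int npts,
                      double pmin, double pmax)
{
    int k, a;
    for (k = 0; k < npts; k++)
        for (a = 0; a < NA; a++) {
            double rad = a * PI / 180.0;
            double p   = x[k] * cos(rad) + y[k] * sin(rad);
            if (p >= pmin && p <= pmax)
                A[a][(int)(p / DP)]++;
        }
}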
Step 4. Cluster the points in the grid:
Here we follow the max-min clustering algorithm,
starting with no cluster centers.
1. Arbitrarily, let the first sample point S1 = (x1,
y1) be the first cluster center C1.
2. For all the sample points (k = 1 to n) find the
sample point furthest from C1. That is, find Sk with the
maximum distance from C1:


virtual sensor data. The "target-nav" behavior, discussed in
section 4.2.4 of our implementation, is an example of a
motivation-driven behavior. Its motivation state is
represented as a location that is set by the planning
module. In addition to its virtual sensor data input, it
also acquires data from the real position shaft encoder
sensors in order to generate a vector heading towards the
target. The arbitration of the various behaviors in the
behavior-based system competing for control of the robot
actuators is partly hardwired in the behavior-based system,
and partly encoded in a flexible arbitration strategy in the
cognitive system. The flexible strategy changes during the
operation of the robot depending on the current situation
and the task at hand. In section 4.3 of the next chapter, we
discuss various arbitration strategies including the one
used in our implementation.
As mentioned earlier, we view world models as essential
to intelligent interaction with the environment, providing a
"bigger picture" for the robot when reflexive behaviors
encounter difficulty. Thus, the research proposed in this
paper adopts a knowledge-based approach to the problem of
constructing an accurate and consistent world model of the
environment of an autonomous mobile robot. We propose to
construct this model within the framework of the Sensory
Knowledge Integrator proposed in [Bou-Ghannam 90a,b] and
described in chapter 4. A more accurate model is obtained
not only through combining information from multiple sensory


join in effecting the desired behavior. An example of a
"motivated behavior" in our implementation is the "target-
nav" behavior whose motivation state is a specified target
location, and whose action is to guide the robot towards
that target. In our experimentation, targets such as the
boundaries of empty and unknown areas in the occupancy grid
map were detected in the map builder and forwarded by the
planning module as the motivation state for the target-nav
behavior in order to help the robot discover new areas. The
behavior arbitration strategy in our implementation resides
in a set of rules in the planning module. The fact that our
arbitration rules are not hardwired gives our system the
flexibility of a general-purpose testbed that could be set up
for operation with a variety of user specific tasks and
environments. We have demonstrated a specific implementation
of the arbitration strategy using CLIPS (a knowledge-based
systems shell) rules. The other major contribution of this
research is the Sensory Knowledge Integrator, the underlying
framework of the map builder module. This framework utilizes
a distributed knowledge-based approach for consistent world
modeling. We have presented the theoretical aspects of this
framework and supported the theoretical claims by
implementing a variety of knowledge sources as part of the
framework for consistent world modeling. Some of the
implemented knowledge sources include the EOU, the 2-D line
finder, and the consistency checking knowledge sources. The
theoretical basis and the details of implementation and


REFERENCES
[Agre 90]
Agre, P. E., and Chapman, D. 1990. "What Are Plans For?" In
Designing Autonomous Agents, P. Maes, ed. MIT Press,
Cambridge, MA.
[Albus 81]
Albus, J. S. 1981. Brains, Behaviors, and Robotics. Byte
Books, McGraw-Hill, New York.
[Allen 88]
Allen, P. K. 1988. "Integrating Vision and Touch for Object
Recognition Tasks." Int'l J. Robotics Res. 7(6):15-33.
[Anderson 90]
Anderson, T. L., and Donath, M. 1990. "Animal Behavior as a
Paradigm for Developing Robot Autonomy." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Anderson 88]
Anderson, T. L., and Donath, M. 1988. "Synthesis of
Reflexive Behavior for a Mobile Robot Based Upon a Stimulus-
Response Paradigm." In Mobile Robots III, SPIE Proceedings
Vol. 1007, pp. 198-211.
[Andress 87]
Andress, K. M., and Kak, A. C. 1987. "A Production System
Environment for Integrating Knowledge with Vision Data." In
Proc. of the 1987 Workshop on Spatial Reasoning and Multi-
Sensor Fusion, A. Kak and S. Chen, eds. Morgan Kaufmann
Publishers, pp. 1-12.
[Arkin 90]
Arkin, R. C. 1990. "Integrating Behavioral, Perceptual, and
World Knowledge in Reactive Navigation." In Designing
Autonomous Agents, P. Maes, ed. MIT Press, Cambridge, MA.
[Arkin 87]
Arkin, R. C. 1987. "Motor Schema Based Navigation for a
Mobile Robot: An Approach for Programming by Behavior."
Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 264-
271.
[Ayache 88]
Ayache, N., and Faugeras, O. D. 1988. "Building,
Registering, and Fusing Noisy Visual Maps." Int'l J.
Robotics Res. 7(6):45-65.


are increased according to their likelihood of being
occupied. We use a simplified probabilistic model with
discrete steps, as shown in figure 5.6, to determine the
amount of increment or decrement.
Thus, this updating approach makes use of all the
information available for an individual sonar scan including
the sensor geometry and its probabilistic characteristics.
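Per cell, the update reduces to a few lines. The following C
sketch uses placeholder step sizes standing in for the
discrete profile of figure 5.6; the values actually used
were determined experimentally:

/* Update one cell of the occupancy grid that falls inside the
   projected sonar cone.  d is the cell's distance (cm) from the
   sensor and range is the sonar reading; the step sizes are
   illustrative stand-ins for the discrete profile of figure 5.6. */
void update_cell(double d, double range, int *cell)
{
    if (d < 0.9 * range)
        *cell -= 3;        /* well inside the cone: likely empty      */
    else if (d < range)
        *cell -= 1;        /* close to the return: weaker evidence    */
    else
        *cell += 3;        /* at the end of the cone: likely occupied */
}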


avoid behavior determines the resultant repulsive force
acting on the robot by summing the forces sensed by each
sonar sensor, as follows:
        12
Fres =  Σ  (1/ri)^2 · ei
       i=1
The magnitude of the repulsive force is then compared to an
experimentally determined threshold value. If the magnitude
exceeds the threshold value, then the repulsive force
represents the response of the avoid behavior as a vector
heading for the robot. This threshold is fixed
(experimentally determined and normalized to outputs of
other behaviors) in our implementation, but within an
adaptive behavior-based system it is not fixed and
constitutes a variable parameter which is adjusted by the
outputs of other behaviors as the situation demands. For
example, a "charge-battery" behavior may tune down the
effect of the "avoid-obstacles" behavior by raising its
threshold as the robot approaches the battery charging
station, allowing the robot to dock and connect to the
charging pole.
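The summation above, taking ei as the unit vector along
sonar i (so that the repulsion acts opposite to it), can be
sketched in C as follows; the names and units are
illustrative:

#include <math.h>

#define PI      3.14159265358979
#define NSONAR  12                 /* ring of sonars, 30 degrees apart */

/* Resultant repulsive force on the robot from the 12 sonar ranges.
   Sensor i points i*30 degrees from the heading; each reading repels
   the robot along the opposite of the sensor axis with magnitude
   (1/ri)^2, and the contributions are summed into (fx, fy).          */
void repulsive_force(const double r[NSONAR], double *fx, double *fy)
{
    int i;
    *fx = *fy = 0.0;
    for (i = 0; i < NSONAR; i++) {
        double a = i * 30.0 * PI / 180.0;
        double m = 1.0 / (r[i] * r[i]);   /* (1/ri)^2 */
        *fx -= m * cos(a);                /* push away from obstacle  */
        *fy -= m * sin(a);
    }
}

The magnitude sqrt(fx*fx + fy*fy) is what gets compared to
the threshold described above.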
4.2.2 Random wander behavior
This behavior generates a random heading for the robot.
This behavior, coupled with the avoid obstacles behavior,
forms an emergent behavior with the following functionality:
"wander at random without bumping into things".
Such an


the planning module and brought to bear on the behavior-
based system configuration by activating the target-nav
behavior with the center of the discovered boundaries as
target location. When the robot arrives at its target and
crosses the boundary, it is now reconfigured to discover the
new unknown areas.
Adhering to the biological analogy, the switches in an
arbitration network of a behavior-based system are not
purely binary, all-or-nothing switches, but are active
elements with adjustable gains and thresholds, as modeled in
figure 4.6 by the operational amplifier circuit. The
programmable gain of the switch varies from a minimum to a
maximum allowable value. When the gain is either 0 or 1,
then a simple binary switch is obtained. The threshold is
another adjustable value above which the response of the
input behavior will produce an output. The programmable gain
and threshold constitute the basic parameters for adaptive
behavior and learning. The best values for such parameters
are not readily apparent, but can be fine-tuned by some
adaptive learning algorithm or neural network. [Hartley 91]
proposes the use of "Genetic Algorithms" to accomplish such
tasks. Using the switch model of figure 4.6, the
implementation of an arbitration network involves the
knowledge of which behavior affects (adjusts the parameters
of) which other behavior, by how much, and in which
situations or context. The wiring of such an arbitration
network depends upon the goals or the desired competences of


facts through the rules to their implied premises. The CLIPS
user's manual defines a fact as "consisting of one or
more fields enclosed in matching left and right
parentheses." For example, to indicate that the robot has
completed a sonar scan at the tenth positional step, the
following fact is posted:
(position-index 10 sonar-scan complete)
Facts are added to the CLIPS facts list through the "assert"
command, while the "retract" command deletes facts from the
list. For example,
(assert (mode target-nav))
(retract ?fact5)
The first statement asserts the fact (mode target-nav)
indicating that the robot is now in the target-nav mode or
behavior. The second statement retracts from the facts list
the fact assigned to variable "fact5".
A1.2 CLIPS Rules
A rule is a collection of conditions and the actions to
be taken if the conditions are met. It is the primary method
for representing knowledge in CLIPS. The following rule from
our implementation serves as an example:


Should one design machines that mimic "insect intelligence"
with no central brain and symbolic models of the world? This
deep difference in philosophy currently divides the
artificial intelligence community into two camps: 1) The
"traditionalists," constituting the majority of researchers
who have long assumed that robots, just like humans, should
have models of their world and should reason about the next
action based on the models and current sensor data [Ayache
87] [Crowley 85] [Giralt 84b] [Kriegman 89] [Shafer 86]. 2)
The behavior-based camp of researchers [Brooks 86a] [Connell
89] [Payton 86] [Anderson 88] [Agre 90] who avoid symbolic
representations and reasoning, and advocate the endowment of
a robot with a set of low-level behaviors that react to the
immediacy of sensory information in a noncognitive manner.
The main idea is that "the world is its own best model", and
complex behaviors emerge from the interaction of the simple
low-level behaviors as they respond to the various stimuli
provided by the real world. The number of researchers in
this camp is small but growing rapidly. In the next
paragraphs we discuss the characteristics of each approach,
starting with the behavior-based one.
Ethological observations of animal behavior [Gould 82]
[Manning 79] [McFarland 87] provide the inspirational basis
for the behavior-based approach in robotics. The observation
is that animals use instinctive behaviors rather than
"reasoning" to survive in their ever-changing environment.
Apparently, their actions are the resultant of various


many different sources of sensory information in order to
overcome the limitations of single sensory robot systems.
Single sensory robot systems are limited in their ability to
resolve and interpret unknown environments, since they are
only capable of supplying partial information. The need for
multi-sensor robot systems is evident in the literature:
[Giralt 84a], [Durrant-Whyte 86a], [Henderson 84],
[Ruokangas 86], [Flynn 88], [Luo 88], [Mitiche 86], [Shafer
86]. The motivation is to obtain from a set of several
different and/or similar sensors, information that would be
impossible or impractical to obtain from any one of the
sensors alone. This is often possible since different
sensors are sensitive to different properties of the
environment. Thus, each sensor type offers unique attributes
and contextual information in interpreting the environment.
The goal of a multi-sensor system is to combine information
from the various sensors, with a priori knowledge about the
environment, the sensors, the task, etc., into a meaningful
and consistent interpretation of the environment. In this
manner, the system maintains an internal description of the
world which represents its "best guess" about the external
world.
Sensor data fusion combines information from various
sensors into one representative set of data that provides a
more accurate description of the observed environment (an
improved world model) than the description provided by any
of the sensors acting alone. The objective is to reduce


Figure 4.1 A general framework of the proposed hybrid
control architecture.


err_code = 10; /* say we encounter some error that should be
reported to CLIPS. */
sprintf(buffer, "target_nav status %f", err_code);
assert(buffer);
return;
The function name and the state parameters passed to the
"selector" function are acquired by calling rstring(1) and
rstring(2) respectively, while the xt and yt parameters are
acquired by calling rfloat(1) and rfloat(2) respectively.
Note also that status is passed back to CLIPS by calling the
function "assert" and asserting the fact (target_nav status
10).


places the robot could maneuver through, then a higher
resolution is needed.
Step 2. Sonar cone simulation:
Once the occupancy grid is initialized, we generate a
template 2-D cone with the same geometric dimensions of the
actual physical cone of sound waves emitted by the sonar
sensor. The template cone shown in figure 5.5, is 10m long
with a 12 degree beam angle (similar to the characteristics
of the Polaroid sonar sensor cone, see [Elfs 87] and
[Borenstein 88]), located at the origin in the x-direction,
and is tessellated into 105,000 1-cm² cells.
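A C sketch of this template generation follows. The array
layout is our own illustration; only the dimensions (10 m
length, 12-degree beam, 1 cm cells) come from the text:

#include <math.h>

#define PI        3.14159265358979
#define CONE_LEN  1000   /* 10 m at 1 cm per cell                   */
#define HALF_W    105    /* ~1000 * tan(6 degrees): cone half-width */

static char cone[CONE_LEN][2 * HALF_W + 1];

/* Build the template cone once: apex at the origin, axis along +x,
   12-degree full beam (6-degree half-angle).  cone[x][y + HALF_W]
   is 1 for the roughly 105,000 cells inside the beam.              */
void build_template_cone(void)
{
    int x, y;
    double tan_half = tan(6.0 * PI / 180.0);
    for (x = 0; x < CONE_LEN; x++)
        for (y = -HALF_W; y <= HALF_W; y++)
            cone[x][y + HALF_W] = (fabs((double)y) <= x * tan_half);
}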
Step 3. Analysis of sonar readings:
The template cone generated in the previous step is
translated and rotated from its initial position and
orientation to the position and orientation of the sonar
sensor at the time of the scan with its length reduced from
10m to the actual range returned by the sonar sensor. This
is illustrated in figure 5.5. Next, taking into
consideration the probabilistic characteristics of the sonar
sensor (i.e., the probability of space being empty is high
near the sensor and decreases as the target is approached,
while the probability of space being occupied increases near
the target; see figure 5.6), the value of each cell within the
resultant cone is decremented according to its likelihood of
being empty, while values of cells at the end of the cone


Figure 6.6 Filtered raw data points.
Figure 6.7 The second 2-D line representation.


APPENDIX
INTRODUCTION TO CLIPS
CLIPS stands for C Language Implementation Production
System. It is a rule-based expert system shell developed by
the Artificial Intelligence Section of the Johnson Space
Center, NASA. CLIPS is written in and fully integrated with
the C language providing high portability and ease of
integration with external systems. It is a forward chaining
rule-based system based on the Rete pattern matching
algorithm developed for the OPS5 system. The five main
topics that constitute CLIPS are: facts, rules, variables
and functions, input and output, and the user interface. In
the following three sections we will briefly introduce
facts, rules, and user-defined functions only, as these
topics are most needed for understanding our implementation.
For more detail on all the topics, the reader is referred to
the CLIPS User's Manual [Giarratano 89] as well as the
reference manual.
A1.1 CLIPS Facts
A program written in CLIPS consists of rules and facts.
A fact is a true statement about some characteristic of the
problem being examined. Facts are the data that cause
execution of the rules. Reasoning propagates from the given


Figure 4.5 Arbitration by production rules and
superposition.
Figure 4.6 Model of an arbitration network switch.
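In code, the switch of figure 4.6 reduces to a gain and a
threshold applied to a behavior's response. The following C
fragment is the obvious discrete analogue of the
operational-amplifier model, offered only as a sketch:

typedef struct {
    double gain;        /* programmable gain, 0.0 up to some maximum */
    double threshold;   /* response must exceed this to pass through */
} bswitch_t;

/* Pass a behavior's response magnitude through an arbitration
   switch: zero below the threshold, scaled by the gain above it.
   A gain of 0 or 1 with a fixed threshold recovers the simple
   binary switch.                                                  */
double switch_out(const bswitch_t *s, double response)
{
    return (response > s->threshold) ? s->gain * response : 0.0;
}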


In this chapter we are concerned with the sensing
process where information from a variety of sensors is
combined and analyzed to form a consistent interpretation of
the observed environment. As we will discuss later, the
interpretation process is complex and involves processing of
sensor data at various levels of abstraction using domain
specific knowledge.
2.3 Classification of Sensor Data
The fusion technique of choice depends on the level of
abstraction and on the classification of the sensor data. In
multi-sensor systems, data from the various sensors are
dynamically exchanged. The use of these data in the fusion
or integration process falls under one of the following
classes:
Competitive. In this case the sensors' information is
redundant. This occurs when the observations of the
sensor(s) intersect; that is, they supply the same type of


Figure 3.1 Control architectures, from [Brooks 86]:
a. Serial vs. b. Parallel decomposition. (The parallel
decomposition layers task-achieving behaviors between
sensors and actuators: avoid objects, wander, explore,
build maps, monitor changes, identify objects, plan changes
to the world, and reason about behavior of objects.)


3.3.1 Position Referencing for a Mobile Robot
A mobile robot can achieve position referencing by any
of the following methods:
Trajectory Integration Referencing. Uses odometric
devices such as shaft encoders without external reference (a
minimal sketch follows this list).
These methods are prone to errors (due to wheel slippage)
that are cumulative and cause position drift.
Absolute position referencing. Uses fixed known
external beacons throughout the environment. The more
external beacons we place at known absolute positions in the
environment, the more structured this environment becomes.
In this case the errors in the robot's position and
orientation are related to the beacon system measurement
accuracy.
Relative position referencing. Performed with
respect to objects with characteristic features whose
positions in the environment are known with good accuracy.
This method is very desirable yet it introduces considerable
complexity. A challenging task in this case is for the robot
to define its own references.
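As promised above, here is a minimal C sketch of trajectory
integration referencing from differential wheel encoders;
the wheelbase constant and all names are hypothetical:

#include <math.h>

#define WHEELBASE 40.0   /* wheel separation (cm) -- assumed value */

typedef struct { double x, y, theta; } pose_t;

/* Integrate one encoder update into the robot pose.  dl and dr are
   the distances (cm) rolled by the left and right wheels since the
   last update.  Errors from wheel slippage accumulate here, hence
   the position drift noted above.                                  */
void dead_reckon(pose_t *p, double dl, double dr)
{
    double d  = 0.5 * (dl + dr);           /* distance traveled       */
    double dt = (dr - dl) / WHEELBASE;     /* change in heading (rad) */
    p->theta += dt;
    p->x += d * cos(p->theta);
    p->y += d * sin(p->theta);
}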


knowledge about a belief B is expressed in a single
probability number P(B). In cases where the prior
probabilities are not known, the choice of P(B) may not be
justified. Belief theory proposes belief functions where
each function distributes a unit of belief across a set of
propositions (called the "frame of discernment") for which
it has direct evidence, in proportion to the weight of that
evidence as it bears on each proposition. The frame of
discernment (Θ) is defined as an exhaustive set of mutually
exclusive propositions about the domain. The role of Θ in
belief theory resembles that of the sample space (Ω) in
probability theory, except that in belief theory the number
of possible hypotheses is |2^Θ| while in probability theory it
is |Ω|. The basic probability assignment is a function m that
maps the power set of Θ into numbers between 0 and 1, that
is:
m: 2^Θ → [0, 1]
If A is a subset of Θ, then m satisfies:
1. m(∅) = 0, where ∅ is the null hypothesis.
2. Σ m(A) = 1, where the sum is taken over all subsets A of Θ.
A belief function of a proposition A, BF(A), measures
the total amount of belief in A, and is defined as:


81
4.2.4 Target-Nav Behavior
This behavior is a location attraction behavior that
generates as its response a vector proportional to the
vector between the current location of the robot and the
specified target location. Robot position is provided by
wheel encoder sensors, while the target position is provided
by the planning module as part of the motivation state of
the robot. So, this is a motivation-driven behavior, the
motivation being to reach the target location. A robot R at
(xr, yr) is attracted to a target T at (xt, yt) by the
following heading vector:
V = RT = (xt - xr)i + (yt - yr)j
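A C sketch of this response, including the normalization and
target-reached test described next, follows; the constants
are illustrative, not the experimentally determined values:

#include <math.h>

#define MAX_AMP  1.0   /* amplitude ceiling (illustrative)             */
#define REACHED  5.0   /* magnitude below which the target is reached  */

/* Response of the target-nav behavior: a vector from the robot at
   (xr, yr) toward the target at (xt, yt), capped at MAX_AMP.
   Returns 1 when the target-reached flag should be set.            */
int target_nav(double xr, double yr, double xt, double yt,
               double *vx, double *vy)
{
    double dx = xt - xr, dy = yt - yr;
    double mag = sqrt(dx * dx + dy * dy);
    if (mag < REACHED)
        return 1;                          /* target reached           */
    if (mag > MAX_AMP) {                   /* normalize: cap amplitude */
        *vx = dx * MAX_AMP / mag;
        *vy = dy * MAX_AMP / mag;
    } else {
        *vx = dx;
        *vy = dy;
    }
    return 0;
}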
The magnitude of this vector decreases as the robot
approaches the target, and the target-reached flag is set
when the magnitude drops below a certain threshold. In
addition, our implementation normalizes the magnitude and
sets an experimentally determined threshold value as its
maximum amplitude. As a result, the avoid-obstacles behavior
gains higher priority as the robot encounters obstacles on
its path to the target. This behavior, coupled with the avoid-
obstacles behavior, attempts to navigate the robot towards the
target without bumping into obstacles. The robot actually
tries to go around obstacles towards the target. Results of
our experimentation show the robot successfully navigating


67
sources, but also by combining this sensory data with a
priori knowledge about the domain and the problem solving
strategy effective in that domain. Thus, multiple sensory
sources provide the added advantages of redundancy and
compensation (where the advantages of one sensor compensates
for the disadvantages or limitations of the other) while
domain knowledge is needed to compensate for the
inadequacies of low-level processing, as well as to generate
reasonable assumptions for the interpretations of features
derived from lower-level sensory data. In addition, we
propose to achieve a consistent model by making explicit the
knowledge about the inconsistencies or conflict, and using
this knowledge to reason about the conflict. This is similar
to the theory of endorsement proposed by Cohen [Cohen 85]
where resolution tasks are generated, and positive and
negative endorsements are accumulated in order to resolve
the conflict.


00096.pro
R98 e6e852312caea035ad72709413ad9a73 41764
00097.pro
R99 560f4aa64903aa7a76fada569e7b871c 37974
00098.pro
R100 e5ce74dd6c19614d41fb50bfc4810ed8 41699
00099.pro
R101 414c70bb89a41f47e8818e0718200ef9 41362
00100.pro
R102 fee5d520f0b9bb06020b792c2a10416d 8272
00101.pro
R103 855c6093f5f415c27a4bff172f61308c 34919
00102.pro
R104 6875b4ff2975fb2347497fc2ca4bdad4 42322
00103.pro
R105 9022319151b44c7957f87248921bc2b0 15927
00104.pro
R106 3cb7d9e5737013570531234ada744811 3793
00105.pro
R107 db0c6b4b104fa13216d957c8e171c5e2 41918
00106.pro
R108 e6e8f3a0dc373d17dc8545f614d85f6a 1913
00107.pro
R109 ac5786e580f29d807e70c13b59dc8fca 40425
00108.pro
R110 39cff7a2385b8dee8cda617bb8ef4b22 39095
00109.pro
R111 f5589de060a8ef3c6eb625577da8c279 17520
00110.pro
R112 744122e0bb5f0575ddf0790a94279728 38893
00111.pro
R113 f8b61bab4251fd516c83005c905c8696 5027
00112.pro
R114 baffb6c715814f7c56c14b2c74ca8a8a 36980
00113.pro
R115 b3a404a6dacc48d30f632a2e988a34b9 36482
00114.pro
R116 ed1399600ce778f89ada9fde0b014d95 14170
00115.pro
R117 e4b126a32731140dea94a586c3b9eba0 34823
00116.pro
R118 9b263a10403a4133f603292b60d17c3e 5611
00117.pro
R119 dcbe9bdc549908d0f53ef3625e4aa3d2 33318
00118.pro
R120 baf12289187b5b84b72513062fd3b13f 37147
00119.pro
R121 db93bc66beb9c2c8cadc9c2e6b96d4f7 35100
00120.pro
R122 dd87873dca955560e7d880e32a2745e3 37288
00121.pro
R123 287180cf64c2a27f0fd4c48fb11ddbd7 36311
00122.pro
R124 f9f4900bdf27457dbe86bb43d6f10436 22802
00123.pro
R125 55eabe808719984b8ed6e94ebe02adff 19842
00124.pro
R126 056b1918393f9300212acf1858b85774 28119
00125.pro
R127 72a95b6c75034cedcd927c0511de9f69 23482
00126.pro
R128 6e2f4d2b3284feeb5c78fe1e1ff24a5d 27930
00127.pro
R129 05b6f4a57f06e407d25bd99648c53f92 20048
00128.pro
R130 187961ed6fbd7d338aa352bb2a29468a 34560
00129.pro
R131 0652fd49b1add14d6d2b154362a33e6b 2695
00130.pro
R132 015639da025967b067d9eeda9fb18ea1 2842
00131.pro
R133 4596439b5e4661d79729b7ab866b3309 4953
00132.pro
R134 d8f7243d63de86528390c0403d34698c 3489
00133.pro
R135 b8a78bf634ea64e2ad582a7df595bd35 2602
00134.pro
R136 c92f0f8d0664dd9c5a07ff435c851e18 3240
00135.pro
R137 5033245b5cc78bb085b5fb4dfcc35100 42850
00136.pro
R138 f6411831bb3ecbe9a454cbbfd05ef1ad 38388
00137.pro
R139 81564cec12134c6e2aede5dbc827726c 41861
00138.pro
R140 7d87af4bbde24a997fe8e3489d3d97f5 38558
00139.pro
R141 ef7e1b8a729fe916796b0560b5490cfe 27791
00140.pro
R142 f6bdce316912a4bac3aab4e8f0591910 35304
00141.pro
R143 75b49f6b7d66858af3ec4d224d2a6bd3 42391
00142.pro
R144 5276ed3e6aeebdb5b89a56ae739d84bd 41583
00143.pro
R145 63762a2369ad150defbd35fb88691999 42702
00144.pro
R146 f2fe087b230bcf696772b8493492fde8 41294
00145.pro
R147 3c1e3efd1f2f7374168b643d25ddad36 40696
00146.pro
R148 18e4a0ef5b902dbdbaff48e183808218 10358
00147.pro
R149 1278b66b5347ede771be1ba4cbb1f59c 31198
00148.pro
R150 37d50196cc408513c4e522a909042554 26855
00149.pro
R151 77e29fe3888d5b2d992a3bd3da0cced7 37975
00150.pro
R152 e7f74e8ffaa51d50fa8f74a69cbf1567 36404
00151.pro
R153 ee0a68d8fc7e7b15fa39dac39aba83dd 32035
00152.pro
R154 ca324c34e7cfefdd4f271b7866fcbef9 13982
00153.pro
R155 4af96a89d46546d0aee68334545ad40e 42289
00154.pro
R156 9162466b0affb593c83118787b3ced89 47661
00155.pro
R157 ced575f8236aafd9654277355dc33b3c 48843
00156.pro
R158 5172611563bdbb02fb2c2377d9bc8521 49669
00157.pro
R159 48cb3f1e246b21a58d66994cf2db90da 50928
00158.pro
R160 416b28b494994ee92680aa9350ca6112 45924
00159.pro
R161 25d7d123b1af40dbceb8bfe2bf3177df 42160
00160.pro
R162 e1e39837bb2e10b9fb5c767d1e0397fc 37352
00161.pro
R163 c9b43609b29575b01a46b9ceff9bca28 31630
00162.pro
R164 babc5fd170a0df070743c7d8490a5b5e 6931
00163.pro
R165 d2d747f372d813c83d9238414d60bfb7 38701
00164.pro
R166 8ef6af84e23d031482838ae02fc089e4 8580
00165.pro
T1 textplain 8fd87afe6d41f04b2963eb244cfccc29 1772
Copyright.txt
T2 579a63da8bd35bfd1f65815e4919399c 477
00001.txt
T3 28015d7ce2083a833bbe23f4c2b8e738 164
00002.txt
T4 fd4fc0d617a37d8de07cc40506b30552 176
00003.txt
T5 ee3c6e1e18f0e5ce13bd1e8728d67897 1449
00004.txt
T6 b08f8c9d11fd1884c87a0539feb5f931 2803
00005.txt
T7 74a764d39c995072b0ab16273bc730a1 1040
00006.txt
T8 8b188ee87d96537eb6b38a1809f014d7 1584
00007.txt
T9 e39307ca2776d1a0c650234fe6656994 1590
00008.txt
T10 65f4f6c85bde45579e869df6a51c92f2 107
00009.txt
T11 3136c4278d67855d4b4c9a76b39a4b35 1435
00010.txt
T12 e1881f8a997af32ca13d01c674558c59 1688
00011.txt
T13 e650b1b81ed2ece8f8790a3193d538d5 1698
00012.txt
T14 2b61c2152852a48aa47776cfe574a774 1677
00013.txt
T15 7b56f56ded90d9eef10858edd328103f 1642
00014.txt
T16 20d6ac36a12597d82c3928856b56e4fa 1686
00015.txt
T17 f2a78aa520d7e24fd97cef7f71fe65c5 1611
00016.txt
T18 309c823f88faf69f103008ef25c56073 1684
00017.txt
T19 f443ced6fff2b2763d59dfacf263c8da 1627
00018.txt
T20 309b3c53e76233e3fed7345e3d13c794 1517
00019.txt
T21 26d64e38f3e829102d57ee0144a63513 1099
00020.txt
T22 260ff9e01276b32af8b84231b8effc8e 1369
00021.txt
T23 0eeb213ac9cac32aedbe7f3b0c6d0142
00022.txt
T24 d9596c8448d0917bd40154f127667bf1 1472
00023.txt
T25 f5ad5dc7c386060a1519d52f8922cbc5 1296
00024.txt
T26 a88cbf4a58984a0b4014f65c8d40d90c 1523
00025.txt
T27 0764ff0b5b9c169dce631fe6c46d9514 1568
00026.txt
T28 1eba199cb309733e32615fc8a4d9e600 1621
00027.txt
T29 5cd2aeb08a7548eea95df53db1ee8852 1427
00028.txt
T30 a7a00f3dc3adac49f0fc956cc54e1420 1555
00029.txt
T31 28265d295661e53356e764c31eaa2eef 1456
00030.txt
T32 78cbe0cd0aa56b693d6cf9823140589e 1413
00031.txt
T33 56b190278bb04bcbf992562bff0cdc82 999
00032.txt
T34 875d3b35fce8e196f8d01d483c155f04 1458
00033.txt
T35 9af7f4cdfc5a0d93815f9f4d04c8e20e 1062
00034.txt
T36 fdaddc11ecd6f097d09d5b643a1e021e 1388
00035.txt
T37 692ff7fc35a6181bf355a37874d56140
00036.txt
T38 048c88bbc87e9cf231a04d9c9e67c229 1171
00037.txt
T39 05284f27ddfa7359df5777edc00aeb19 1376
00038.txt
T40 ec65714c5fa6a3cbe64da05783d0042c 1273
00039.txt
T41 aeacb8484bd4024c4ae3cb0da42ba8c2 1365
00040.txt
T42 c257f44807cd54d5f3d150caf42499f4 1217
00041.txt
T43 01ee98424e60c243a621156a7ba8f354 1237
00042.txt
T44 e7b0faeb6793f731cc6feefa4cfb6160 1216
00043.txt
T45 10c76e9d5faae8ab3e8940661b32253d 1223
00044.txt
T46 a1310abbd6d6df10b1fd777db72d22c6 1608
00045.txt
T47 19aca54036240f8f1e4fcf3a0c057c89 1387
00046.txt
T48 178d233b850bf998fca439a818c05071 1661
00047.txt
T49 41494c7d10b7a82575fc9f804f80b6c5 1569
00048.txt
T50 4d737a39d59ed5c5c94f0b6323fef083 1461
00049.txt
T51 c136b56fa5fd846585cdbdbe21934f25 1560
00050.txt
T52 680ee6cd96d42ca8f0bc1b7cee50b940 597
00051.txt
T53 aaa802c8ad5f4012c839f039ca3602d5 1453
00052.txt
T54 dd035937bf3895f4b11dbfdbef7299b6
00053.txt
T55 d6315083159ae79d19cecd5c86828e25
00054.txt
T56 215a7836ef53381beba60ff66952b4f1 1655
00055.txt
T57 f7cd6810b6186f53557e80d1d5ff3cbd 1768
00056.txt
T58 a200f0cf92a8c3401a8e402db2bc35ce 2120
00057.txt
T59 d8d3df432b013ef1c29d936f17c6c511
00058.txt
T60 1dc19e0536a03677c1a70449a8e14889
00059.txt
T61 6cc5f4096e145aafa01c58d21a38d967 1366
00060.txt
T62 e69f9d5d2da7ca99851c531493b6f7ba 1706
00061.txt
T63 0ed4ccbac8f2e3ebf8f77b95861d4764 1676
00062.txt
T64 d48425d1e620cb957bc4cfce648bef12 1663
00063.txt
T65 64cedfeab08b321f93c701395f5cbea5 1582
00064.txt
T66 a58e2829da68a0f9ae3a5b97afa5eb1c 1155
00065.txt
T67 bf4e4eb998d2bd2dfae12e9c76d81496
00066.txt
T68 ac12f1f538368914fe382124abac67b5 1605
00067.txt
T69 d50f00a3a41b5de5a3aa3a0cfea56caf 1622
00068.txt
T70 5441ac343e857b699db4c70f66fc7c1e
00069.txt
T71 278ec1829c0a306a7c6c722fef3cdb97 1683
00070.txt
T72 df27b6f8f1cacd81ebb22ba52d9daaf0 1532
00071.txt
T73 582122e936241266dbb68d7c817e2cfc
00072.txt
T74 e9fec512b6eb33a38c87daa04827d966 1637
00073.txt
T75 93aad5a58e4ed895eb5f68eacffa2dac 714
00074.txt
T76 dd7028e50bb7c6aaa01ff1f39fef97be 1641
00075.txt
T77 83facbd3b00531481eaa9a734a71e604 1019
00076.txt
T78 32a8852a04cdd45d1b2ba77943c99794 1521
00077.txt
T79 68c6854f4d3c7dad0d7ba1fef56d34ae 1665
00078.txt
T80 f755a724745d43266878efd2755d201e 182
00079.txt
T81 56f5fc0f36cf3c7b0786663d107fe6dd 238
00080.txt
T82 6e0325e253c2b43192dcec21f7bc3007
00081.txt
T83 c955cbb834435f582b52a1156cda3a2d 1668
00082.txt
T84 98068f0b048d636625dddda179a523a5 1632
00083.txt
T85 c6306fc11be5a83de20d6facf0b15b85 1552
00084.txt
T86 571d822a5023c9143c342646651c0cee 1580
00085.txt
T87 f17fae822851d83cdbaaf1a0d314b182 177
00086.txt
T88 16d09b5c047fc2da7f7ba4a5a660fb1d 1494
00087.txt
T89 ba93b3f2eda208c1d032324f4b03f65b 1393
00088.txt
T90 8b4a14f18a3d0fc9ce686d51d0f7b28a 1545
00089.txt
T91 aa2b9fcd00b6b7f23391485791561f63 1414
00090.txt
T92 28d8fea18d47c8866a04d52d13c9f1fe 1527
00091.txt
T93 4b00e53919918d297dfc81e39da8cf84 1713
00092.txt
T94 413f36559681357e74e58144e1d91af8 1638
00093.txt
T95 18e62239d3b0c91091349a1a582cffa1 385
00094.txt
T96 bd2234cc0b4a74b9443a7cdc730eeafb 1430
00095.txt
T97 17c8d12c463984ccbc95c4548747e8c6 1144
00096.txt
T98 cc0efc5cb14afb5c82b3770d5b7d428e 1687
00097.txt
T99 f2dd3f8ebbdaec28658dcd2d2a26c180 1575
00098.txt
T100 a85b41c6453f22955987b0f6dec755f6
00099.txt
T101 9e85871a471690f841de066e4b4194cf 1727
00100.txt
T102 3428b00632437a7cdd9bedc014cb8a9d 341
00101.txt
T103 486c758c1ecfd9b6a1234974492d668f
00102.txt
T104 2b6f16995b31744c124177eff146cbf8 1695
00103.txt
T105 a07c38264a0d3697ef3938591df80fec 651
00104.txt
T106 c92a817eff7c1edd0721e06b8baa36ba 213
00105.txt
T107 dabb99139cad19c3f52451a3552fbb94 1691
00106.txt
T108 ac3e5f0da7612499229e1c53cd7a80b5 201
00107.txt
T109 d47c152d25cf7e384d060fe2ceab5b79 1618
00108.txt
T110 2d3dbf9537cadcfea4fd2d33f44697aa 1606
00109.txt
T111 bad3a182599aab1487e77cc95ab6e9ff 685
00110.txt
T112 bcbb3f068328af6ad145ef3bfdde6e12 1546
00111.txt
T113 fab548eff86925604200ef13b9355b32 272
00112.txt
T114 4c0eb1584f5867601d29829d8451d93b 1483
00113.txt
T115 dc1aa99cd6ad7846aa938d035cea7940 1477
00114.txt
T116 20af931d3676032a2cc71d537d6b65d2 572
00115.txt
T117 a61c81c0dd69b5a0a959a01df222b39e 1415
00116.txt
T118 80ff3a1c1e034e27f2f36a56a1cbf2d3 267
00117.txt
T119 d2ddf7390596c81dca5cceeeca612f79
00118.txt
T120 bbcb463d8fadf65df15b2cfbd5874c4e 1502
00119.txt
T121 2ae2deb81bcb42f8df7c6cb7f190a3a6 1429
00120.txt
T122 0410da1519d1a23d93378db17f04451c 1497
00121.txt
T123 757eeed70274f71d98b85a161531dad2 1467
00122.txt
T124 ed9ed32d4e7f68b9df24f4a1322515c3 1112
00123.txt
T125 e157ab2825221f25c3521ddebbb664a9 865
00124.txt
T126 cfe3d585d3d237e9bb095d106e43b6cb 1179
00125.txt
T127 a383e08337fde60f4a93d1da11ebe363 977
00126.txt
T128 51b340e9fc77b0e7f5b98d98a00a2b20 1172
00127.txt
T129 18b3ed93d18763b734f48b3bbb410369 852
00128.txt
T130 b402fc9c2802aa5aebf886dc684d4145 1476
00129.txt
T131 57ebbfef3ffc4f2efd965fda1dd54550 175
00130.txt
T132 9beaf46013e1feed62962636cf6c025c 167
00131.txt
T133 767a306792f1951830c625d68af4a1f0 256
00132.txt
T134 371e101f885b8dd9695ba1482af7e28a 198
00133.txt
T135 060fd1b8004481a94ec1501fffa13777 147
00134.txt
T136 9d8397c8dc9c1ade1ca12ef871ea7844 173
00135.txt
T137 5b0465370383ff50f094d50c022c9b12 1701
00136.txt
T138 7f1e6252012b1bf55c8a3952913ea49a 1562
00137.txt
T139 fdefebb0ad841d5d5523065daa43e00f 1666
00138.txt
T140 d2fbcf3e7015f0fbb44a65039915bf82 1550
00139.txt
T141 a074cb9f4fa884ffd1feaac5d83cf153 1139
00140.txt
T142 0496f6471f40a9d575dd865ea456682b 1479
00141.txt
T143 8352aaa140fa518aa9b962c9a7aec7f4 1681
00142.txt
T144 30233af50c38ddfd7f410e18adcdb327 1674
00143.txt
T145 5b60323f7774ba07bdc93c15b4f375b6 1697
00144.txt
T146 53c9371a364e237481b3535e64991adc 1640
00145.txt
T147 2b433b373eab9f51152a1708fbfbd54c
00146.txt
T148 8bb07c406a0f4dfbc3500b478f6519c8 486
00147.txt
T149 d8aa9cfbf0c64cd33a0a1ceea4151c1f 1343
00148.txt
T150 0ea0e81c0b2bbb1eea2639a924819db6 1142
00149.txt
T151 4cc90548b57439db2e6214cd34eb133d 1577
00150.txt
T152 b9225bcbb07168cf6b5cc2b4c46fdb93 1583
00151.txt
T153 ce5ccc008f3d71031ca17f8638c8c0d6 1401
00152.txt
T154 c2cc56d6efd122a431e89875568eaf13 593
00153.txt
T155 9573e6d4d8f80d076d5a3bd6c671a107
00154.txt
T156 5b6ff1e7d07f9a2fcc45c2355896347d 1879
00155.txt
T157 5172b34bef1bd37bde233dab245dfd7a 1905
00156.txt
T158 1fd39df27160f219d21f8eb3594f8242 2038
00157.txt
T159 bea3b445ba858f24c0a9e451bd5f96de 1991
00158.txt
T160 e632f3c74e6fe167b4044ef9d56a7159 1812
00159.txt
T161 014dd548da470b3562381adec6d626e6 1654
00160.txt
T162 0ca609e6812d6312247a9a454915f8fc 1482
00161.txt
T163 99eb56fa28e12ce057eab9fdd23583cf 1287
00162.txt
T164 d10a48d8c40a3bf10c9a8d42a352ae86 290
00163.txt
T165 cc791a04007f7bd098547cb9d42163ad 1914
00164.txt
T166 da41042621cf1047e6494b5c85a96508 461
00165.txt
UR1 ec13b97470acffe4654223f1837e66d8 2890
00001thm.jpg
AR1 15e84e33f65469e29953583e1f492a9e 9731
00001.QC.jpg
AR2 44faa05ec5cb2c25d257a8eeae5c943e 6723
00002.QC.jpg
AR3 de67862f11770e45ca1d9c5cd1c6cfad 2070
00002thm.jpg
AR4 d0c9fd162e852a89d3f6eba0c35e39f4 6926
00003.QC.jpg
AR5 d7c899fcae7ccad7ba367ef0bf3734cd 2060
00003thm.jpg
AR6 ece4f852df831852dca0ef01f87620ae 23293
00004.QC.jpg
AR7 d7c24a46488ecfc444ba8ced1720bcea 5866
00004thm.jpg
AR8 15917d6404bc84bb950863d6a04f7b12 28413
00005.QC.jpg
AR9 680cee7c994844f6c59fc149ad2122d9 6403
00005thm.jpg
AR10 8ec5e41beff3e4c06da614ebe9215e60 15249
00006.QC.jpg
AR11 27a78f73cf4f1e077f526fb075a61035 3836
00006thm.jpg
AR12 904e7935ccd6a70186ec525cbf5d4643 22876
00007.QC.jpg
AR13 ad70a09109fbdeada7d1e722d4b98e9a 5949
00007thm.jpg
AR14 9c81ff3883cd38c78fd6c62dcfb4cb9c 26142
00008.QC.jpg
AR15 e4f9afa26f088e2c24d5dc507ae2c56c 6671
00008thm.jpg
AR16 2a990440b6fb5129a82036e67f13c7a0 7258
00009.QC.jpg
AR17 e9863f99fbdcd6b5ecffae5b433606f9 2088
00009thm.jpg
AR18 7e9c5856c84873896c85c565f6d5cd06 22897
00010.QC.jpg
AR19 3d2256cfdde286944ac36dbb438ede05 5836
00010thm.jpg
AR20 2d26f82b2082639633a208684ad66785 26528
00011.QC.jpg
AR21 13a256dd3e6009be289b3bf98529b3bd 6500
00011thm.jpg
AR22 022845f8f99d5be50918586f41d6a567 27478
00012.QC.jpg
AR23 38775f9671a4e56484a6595991edfd9f 7053
00012thm.jpg
AR24 b288c9ef9efe215e5cee9fdbf89a513c 27761
00013.QC.jpg
AR25 b6a4369aa7abcae4f4bb09c120aa9c78 6790
00013thm.jpg
AR26 a9f5b1f6c391022a10962e6e039f9650 26878
00014.QC.jpg
AR27 91296646bbc4f5e9b281e0ed2bbdafe5 6455
00014thm.jpg
AR28 962c7d9083c034b00457cb4974e781c1 27546
00015.QC.jpg
AR29 dbc604350e41ad4b748155cb687a6b49 6713
00015thm.jpg
AR30 1a8190f6f34facd8e064c961ff31af67 25885
00016.QC.jpg
AR31 80a50fb1f6615eac6173d60e907999ca 6419
00016thm.jpg
AR32 91b964168114f0012863f7e42b0b5c76 27261
00017.QC.jpg
AR33 0213967d9ce17be7eca7659ce4e45d73 6933
00017thm.jpg
AR34 059ee906238d1a15dd3922133ff2fff8 26584
00018.QC.jpg
AR35 ec071d81933816682e4eefdec65b52d7 6496
00018thm.jpg
AR36 aa24e8ea9409e7bcd804cd842c065dd1 25287
00019.QC.jpg
AR37 b56368c3882bebd58e7db012e0f9a18b 6126
00019thm.jpg
AR38 7c65a63b3d1aae44edf70b52722c6abb 19094
00020.QC.jpg
AR39 64f65c2616510b33b4c657320af678a5 4762
00020thm.jpg
AR40 b7d280a58f6f21ff1f50fe1c06deb32b 22111
00021.QC.jpg
AR41 558acca0aa9f6fe4b6b4027edde662eb 5615
00021thm.jpg
AR42 37e56da7f22ce0f863c21801026d5ec9 26508
00022.QC.jpg
AR43 14526b03034c8021ff99f6d77d15b0b8 6654
00022thm.jpg
AR44 75fa52bab1fe56dee984129f0a327fa1 25154
00023.QC.jpg
AR45 a3d0994e06fadf5b60f2a45628815d60 6282
00023thm.jpg
AR46 83c8758b5c9188536b3ef0adf9f16d41 21988
00024.QC.jpg
AR47 9db679da4e1db51df32fe1640c20e4b0 6148
00024thm.jpg
AR48 429fc9562a17c4b8782442f7a7b584fa 24726
00025.QC.jpg
AR49 bc8bd3d6a2af7d82d2b207c4af4565ff 6101
00025thm.jpg
AR50 c2f2f1f60d2443511d5a2afd7d3ef90d 24583
00026.QC.jpg
AR51 a7891dfd76953a2dd4093829d320f621 6431
00026thm.jpg
AR52 944119583d1667dc56883d963948b186 25984
00027.QC.jpg
AR53 c8a3f431a7553a4ffa3b9f517e64a676 6642
00027thm.jpg
AR54 48508fc6092b903ca417cbf42b1257cb 23171
00028.QC.jpg
AR55 785b3e595b4c8018f7dfb1d962df36ff 5989
00028thm.jpg
AR56 e7a16e18793597b0e4d38b432a9b1782 24416
00029.QC.jpg
AR57 656d6efb9f84c7e10a1a02c9a623865e 6207
00029thm.jpg
AR58 1aef84ac5c4e85cd7b47df45111dea15 24704
00030.QC.jpg
AR59 2a024ca926be34f7251f403782d53930 6037
00030thm.jpg
AR60 4be269bde79bce2a99358c75a1056b08 21625
00031.QC.jpg
AR61 465d85ae2836ef95cde652f11a95750e 5568
00031thm.jpg
AR62 3a2258ea374a5fce7bc18f9b39282bd8 16033
00032.QC.jpg
AR63 e9581c393fae62fb137c6a8eefeb7343 4549
00032thm.jpg
AR64 f71e3580fa57744f31db11d94937f7bf 23552
00033.QC.jpg
AR65 7538b8b4bd085312497638c390238e31 5978
00033thm.jpg
AR66 5600edf39d7d4b71aa04105631786a8c 17978
00034.QC.jpg
AR67 9eb9981f8c781ec3cf9a42f4a280d7f5 4875
00034thm.jpg
AR68 843db6f7f1a12596e0c47f663e8e1cb1 22858
00035.QC.jpg
AR69 079ca0ed57302100e057165d032c3782 5567
00035thm.jpg
AR70 6d98da858c75935bc55aae93dd2f538a 24849
00036.QC.jpg
AR71 da6a0bde012b8460d8cf0bb130fe46cb
00036thm.jpg
AR72 4b6fe71f2ab82d7cb884db2dfaa2ba27 18998
00037.QC.jpg
AR73 f7ba20137c60fd05ce41337ba6dd3ebe 4925
00037thm.jpg
AR74 3d6a33bc63557e290899a2223baa59e6 23637
00038.QC.jpg
AR75 90d1de0ec97c4234b7741372cd5b037d 6215
00038thm.jpg
AR76 e0dbc01face2a4c6c4af3c59610ac334 21355
00039.QC.jpg
AR77 97d0fdfa5edab0b9fceb02f1a025d800 5517
00039thm.jpg
AR78 8d51ed07b27c8e3c7d59d8b194a412f1 21838
00040.QC.jpg
AR79 7aa74f287be9065820a4eed3208f8aed 5677
00040thm.jpg
AR80 f385e0b0576fd1413231b16300c26b08 19173
00041.QC.jpg
AR81 7d745d35aa31ed096f39ae50825484a3 5099
00041thm.jpg
AR82 6dba6abc1249345c9a6e9a729025c48b 21321
00042.QC.jpg
AR83 cb89122472c974e926435b0b66855ae8 5476
00042thm.jpg
AR84 02846cb166442b70d2fdbbddc7af7793 20759
00043.QC.jpg
AR85 68fbeedf4ff391526377a0e117acc488 5431
00043thm.jpg
AR86 41cfd7361f9ffb66ad57c7ed3838a9b1 20442
00044.QC.jpg
AR87 261c5a95f76ededd1211f5fe2cd68c2c 5468
00044thm.jpg
AR88 74376682b771665382fb53ba873dabda 25973
00045.QC.jpg
AR89 a52eaa836756a15fea0ffb7eb47998a6 6650
00045thm.jpg
AR90 fc32275781e7f80daab38ea651e4ff51 21960
00046.QC.jpg
AR91 1e03045d3966f3e2baa173b15624ddf4 5690
00046thm.jpg
AR92 c3e04ba1c6cb88068d272678c5ae6d88 26752
00047.QC.jpg
AR93 a8b41d9ce8a1637fea9cef345f3ce21a 6680
00047thm.jpg
AR94 434725e99fee54e0801e175dc908a969 26182
00048.QC.jpg
AR95 b49e117e24e17b99d2a25db9a5638978 6502
00048thm.jpg
AR96 766b19f4cb12ef70a003f043d3a8bfa6 24037
00049.QC.jpg
AR97 4a881507ed156dc1aa4be54b04127b34 6121
00049thm.jpg
AR98 ae52a9623d69ea21d858fdab57e72353 26079
00050.QC.jpg
AR99 c5d79100ce39d95a82c8356411fb64f1 6679
00050thm.jpg
AR100 c03e0dd196f5f30b688a11355703840c 13519
00051.QC.jpg
AR101 3593f404bdbe8f1b7c7b7f4a6dc1e9b9 3916
00051thm.jpg
AR102 ce1371aa244a13b42e31e6a2ddd7d813 23823
00052.QC.jpg
AR103 b061869bc1076c7e13d3abfa73db5c24 6255
00052thm.jpg
AR104 cfde0babd4b251dd8466e66bb28d581e 26028
00053.QC.jpg
AR105 2d4ffd357285595828219795dd5e9f2c 6399
00053thm.jpg
AR106 ce6290a258d49acaecdf1424ddbf16b1 26298
00054.QC.jpg
AR107 9a6f87d6d3ddf6d0bb741bf7fcb12457 6653
00054thm.jpg
AR108 12f8bd308d904eb9595720e5aaadfb13 26991
00055.QC.jpg
AR109 3990c8445a2abf51cbfee67d5b3cf853 6683
00055thm.jpg
AR110 a26ff497fd45e2591119b09a63882643 26059
00056.QC.jpg
AR111 6c7b32128394ae830d2f4d2d313d6ef4 6514
00056thm.jpg
AR112 9265508e648392c7b901f77fc23e8528 26770
00057.QC.jpg
AR113 da14ffe3b240f384be36f458531ffaef 6636
00057thm.jpg
AR114 7d8b90a531d361c9912bdc4b9960d92e 25169
00058.QC.jpg
AR115 55c2b81de87c30c8c0637eb222d993a4 6429
00058thm.jpg
AR116 bab2e77c540b5ba36a8e918c5864c5be 26265
00059.QC.jpg
AR117 74e18c38a60c458137897e23422ed52c 6532
00059thm.jpg
AR118 81e865cb7ef29e08bff46265e2738172 23194
00060.QC.jpg
AR119 d09e5a5e0391425ab9e6b0d24eeb6d10 5892
00060thm.jpg
AR120 573b9313f278e724f1457bdff9777b2c 26708
00061.QC.jpg
AR121 fb415d1c1a7b3031edd493a1b08e823b 6881
00061thm.jpg
AR122 026edabf2ed97220438154f1d25ae37e 26604
00062.QC.jpg
AR123 ef392887702a49d502c8bf9ce55fe140 6967
00062thm.jpg
AR124 6880771fd00a70d2e388adf4678d44c0 26259
00063.QC.jpg
AR125 03276e4a5da0137ae08735762fa32f1e 6643
00063thm.jpg
AR126 a696c839683a3d5194c05485b696f18b 24972
00064.QC.jpg
AR127 74c8b3e1a88de2397d358ea8456877e9 6476
00064thm.jpg
AR128 39d33316dc8e22ed23cabe6de1787e33 19522
00065.QC.jpg
AR129 a483868c89dfe0b0aeb145dd762a79e2 5153
00065thm.jpg
AR130 01947e1bdf9cbc28f2618f20c5f55a74 26248
00066.QC.jpg
AR131 7b6b1ddd415f8445b029d717d34a7971 6588
00066thm.jpg
AR132 4162cfb4520db29cbef35df83e25e684 26384
00067.QC.jpg
AR133 824f3aa14a080d1b89f6323178d05c1f 6559
00067thm.jpg
AR134 42896ed8f9b6b9318c36c4866115a13a 25321
00068.QC.jpg
AR135 b82fc958bc3b8d22d36f8a3d80d645f3 6561
00068thm.jpg
AR136 59b463b89d7f5476572a46520759442e 26473
00069.QC.jpg
AR137 f6d97f9a8cd4cf990cb43c8e4dafa842 6711
00069thm.jpg
AR138 0857dbf81efffdf7c7dc691ef48b4e32 27545
00070.QC.jpg
AR139 8ad17fe9e0f4094ead48ed5f456dff77 6783
00070thm.jpg
AR140 a7bcd4fea5e83eee5210ff2c1609da2e 24722
00071.QC.jpg
AR141 91bc551cde5aee216713d4902953c39f 6468
00071thm.jpg
AR142 b500923ba9f6f45cdef330aad677c8c1 27668
00072.QC.jpg
AR143 69dfb03234ddbc38b6ea7420581ef6a3 6808
00072thm.jpg
AR144 787a1890e4a0300d1b089557693b844e 27344
00073.QC.jpg
AR145 a347814dc8eeaf3027c9fa308d5c7395 6562
00073thm.jpg
AR146 f408913a7b2edaf40ea9c3ce8ad0cd47 22339
00074.QC.jpg
AR147 d1370a705d959e2070a6922347c83bac 6199
00074thm.jpg
AR148 8b0494fa36041c1064827b2cf7598b26 26830
00075.QC.jpg
AR149 2126b4e719486b35fe5546fe27ed7941 6826
00075thm.jpg
AR150 a880d53810cf535fbbf711e6146ef773 18676
00076.QC.jpg
AR151 ec576af6af2084700a6d930dca6a576d 5011
00076thm.jpg
AR152 8d27b2178297d269dbb73ae1aaa40e4a 23704
00077.QC.jpg
AR153 d3e7f9296587ab9703597856dbf15b4c 6111
00077thm.jpg
AR154 16d3d4a3ac3f51be0cc2d44e9183a873 26027
00078.QC.jpg
AR155 c2ae8274f8a6b3f66944b928da6c7e72 6617
00078thm.jpg
AR156 1f038610f4c98e2e62e98343dafc39d9 15933
00079.QC.jpg
AR157 4116196afc9809a28b44a83377a16ba3 4746
00079thm.jpg
AR158 8c0f367d25f6e1cd60b2c98ad3cc5f51 15308
00080.QC.jpg
AR159 83b6e51f9c4337373d02a5e8f24e5c19 4588
00080thm.jpg
AR160 893ab8e050390cb2c7e9cf9e00515326 27127
00081.QC.jpg
AR161 368ffe00b749522e55c08e63d53879d6 6789
00081thm.jpg
AR162 4528736967064f01be7f834f19bab99f 27987
00082.QC.jpg
AR163 9e1aaac78e1fa84d7596598580829af7 6816
00082thm.jpg
AR164 adcf7deedff9d403beaea37fca99859d
00083.QC.jpg
AR165 a5e2b3ed68702d6ce7d7bd52cdf159c8 6687
00083thm.jpg
AR166 fdb477b326132de07b775d49dfb1ba27 25805
00084.QC.jpg
AR167 4c90d066a22d13e06d602afd2b3e69f9 6327
00084thm.jpg
AR168 79e31668e582dbc77e89b2b35857e06a 25347
00085.QC.jpg
AR169 1c37845e6cfa14d68e9c51c31e2c4b0d 6280
00085thm.jpg
AR170 7b28d0895f238df94b214298042a4398 13536
00086.QC.jpg
AR171 7b7e92301380b2c456f8533d52c45001 4108
00086thm.jpg
AR172 923bd7d79c06ba435e59c01d62416a7e 24385
00087.QC.jpg
AR173 6600d8a57981e8a19ad21a32a7b4c518 6379
00087thm.jpg
AR174 5b1c248b43c5d0f882b5cbb13221393e 23510
00088.QC.jpg
AR175 dd2e3286927a8322e4b0e8dfc4f360cb 5879
00088thm.jpg
AR176 42d7b32d5789c48ee16fbcfaef79c70b 25773
00089.QC.jpg
AR177 1d91452c1e2ea62b51aaf65922c9ede6
00089thm.jpg
AR178 6dc0473033429f775fb2ea3fce72213b 23775
00090.QC.jpg
AR179 9b38e4d315f9a76452d5172664004aa8 6202
00090thm.jpg
AR180 ae738dd544e5715c9671a77218753cc9 24639
00091.QC.jpg
AR181 52a24a0b65ca1fb95d04f4793738a35e 6210
00091thm.jpg
AR182 fd297c6f46d976bffac9e7a918053b5e 27476
00092.QC.jpg
AR183 27d20764939225ac5bee1460cfb2b1ee 7077
00092thm.jpg
AR184 ec4f01a5f80b9772ae641f284ee9d42a 26404
00093.QC.jpg
AR185 d3a4ce3e7e069047756f6277a642c39c 6763
00093thm.jpg
AR186 2c9a44660e14241e8eda6f26fd891c47
00094.QC.jpg
AR187 df3c89128c96e230bd5a763a5a7f08ff 4348
00094thm.jpg
AR188 5f6a1385cad579d9a0040b62543bf3b7 23927
00095.QC.jpg
AR189 243f4ebe3d603b81bf5c618b6d695950 6019
00095thm.jpg
AR190 120a28d99da125372b57e8875190d532 25089
00096.QC.jpg
AR191 c4750c16ee2fd11df75cb8bc02b5924b 6657
00096thm.jpg
AR192 4d0eb4d180541434620915a59d1cd792 28741
00097.QC.jpg
AR193 b8d2d7597543419ef40c52cf0d99446c 7166
00097thm.jpg
AR194 e1f99d71fd574eeba802595854b919af 25078
00098.QC.jpg
AR195 feaf19b1580df34806624faed42de97b 6491
00098thm.jpg
AR196 e022c238871ed1b53195cd537a52fca5 27821
00099.QC.jpg
AR197 28069d464bea6b5af0621c5c67626df6 7033
00099thm.jpg
AR198 b4fda7d361415337e900ee38e8eace6d 27897
00100.QC.jpg
AR199 0c622c9742064da37b8c4cfeee547fe1 6675
00100thm.jpg
AR200 bdc68cc914b847eb0bbe2c5d9210ebfc 20375
00101.QC.jpg
AR201 d55a65ae718140a36d548a622cdac102 5982
00101thm.jpg
AR202 302a45b976aab45ed93707e3ee021569 23438
00102.QC.jpg
AR203 0f082159a1694dccba85942967e670f6 6072
00102thm.jpg
AR204 87bb1944f680b175362272841bb053e3 28455
00103.QC.jpg
AR205 65c17f57b4d3323ff4df102a1498d549 7046
00103thm.jpg
AR206 b83f542b6c92298e034e9eeff48cd9fa 21314
00104.QC.jpg
AR207 90f35ee3343028de9f16f224530295e8 5785
00104thm.jpg
AR208 ecdd934a7a4807ef1cdb858bf2d3332a 17961
00105.QC.jpg
AR209 b9da655c5754e05fd7244f09fcdff5bc 5016
00105thm.jpg
AR210 679fda47ed605b70088425f5aae280c7 27528
00106.QC.jpg
AR211 471bf92bebd53a162c3e3887d8d3c45d 6986
00106thm.jpg
AR212 38243d58a72d547b986f7d8befe4621f 19401
00107.QC.jpg
AR213 3d0955f00cf33746fb06637b595ee70f 5566
00107thm.jpg
AR214 e3b1552dc7e2674bebc831595424a113 26063
00108.QC.jpg
AR215 6d503270cf33cd81e5c335b334e94d0d 6608
00108thm.jpg
AR216 d4281b5dd1b319dee781c274fac5a6af 24814
00109.QC.jpg
AR217 b30be226b610533ad2c4d88b3e895cee 6565
00109thm.jpg
AR218 f11b1f571370a237e997d57a9a14b59d 25987
00110.QC.jpg
AR219 194d9a337d48bf3ac81e2fe0e664c8d8 6883
00110thm.jpg
AR220 6c53d6425597ee50f80df997fb2b31ab 25105
00111.QC.jpg
AR221 610e7f678a2e499821c781e70c1d524b 6450
00111thm.jpg
AR222 2a815d8dfe7c8dade7c472cee963a118 13325
00112.QC.jpg
AR223 1c3a20ede913cddc1b095e643c9bfc07 4270
00112thm.jpg
AR224 ebc4c0eaa5727cf22ad83d917793cefc 24801
00113.QC.jpg
AR225 fa8fb528966b1ce0e4372e508b6a1c75 6191
00113thm.jpg
AR226 9bf95235e36ea9cddf6dcabb68ed25a9 24247
00114.QC.jpg
AR227 8093005ab0c7b999fece16de4b6e23ec 6260
00114thm.jpg
AR228 99a726be64284e5e6f27789efa10acae 17295
00115.QC.jpg
AR229 deada3ef44c3d0501f745041ae83a777 5190
00115thm.jpg
AR230 353006a57996daaa7e76423fe9ddfa1d 22756
00116.QC.jpg
AR231 000daa604744dc1ae2a767083a71427d 6023
00116thm.jpg
AR232 d56d230dc12892556a3879e6a238bc4f 11921
00117.QC.jpg
AR233 458cbefd7d25543f66fc7b773e8253af 3561
00117thm.jpg
AR234 0d923d0b53a77f7b72610d55a573d548 22151
00118.QC.jpg
AR235 b1da1f2d1ebeb67d728283dba1f9366e 5934
00118thm.jpg
AR236 59ad32b6e5224f622ee91e99400f5313 23998
00119.QC.jpg
AR237 e46bd8492e757d1c126515c31cef18dc 5946
00119thm.jpg
AR238 305e32bb5dc81e3dff7672a212214ad5 23298
00120.QC.jpg
AR239 c18b52af733272e85a0738255fd38aba 6022
00120thm.jpg
AR240 b45ed9e46c77840433731308e40064a9 24073
00121.QC.jpg
AR241 6019261dd0961eab7133479b32f5389e 6339
00121thm.jpg
AR242 9b6c4024241ccfd125081d7d07be19ed 23947
00122.QC.jpg
AR243 953103fd06959d1591988462d65687cc 6099
00122thm.jpg
AR244 780ee87dcf5a7d53c9e72f75a466d7ad 20206
00123.QC.jpg
AR245 63374c568c89fbf8f168a7361c4a01a7 5612
00123thm.jpg
AR246 adff55a94ac2bfd3fcd67cc7b4a1bc4c 16491
00124.QC.jpg
AR247 d602fab554b5d9905bee38e42c352d3d 4658
00124thm.jpg
AR248 ffc251e500bc6dc6ae8c280f9f2cbb00 19758
00125.QC.jpg
AR249 3a2e0f70ddf21f11535a10dd70d1b273 5422
00125thm.jpg
AR250 7478fca3c683fbed3d802c8e19304c8a 17536
00126.QC.jpg
AR251 9cb38ddc0648ade6e283abb1af497d5c 4755
00126thm.jpg
AR252 ef6d3eb70ccdd89a4839756a88ceb4e1 19480
00127.QC.jpg
AR253 063fb0c464e3d4fa65878a135ffd027d 5426
00127thm.jpg
AR254 0ba6918a16d499b6a7255a7a23188161 16596
00128.QC.jpg
AR255 c44a380095512cea3b5c8a9ba9d6f153 4356
00128thm.jpg
AR256 ed18d4ff323ff45fa5b429cb893c9482 22964
00129.QC.jpg
AR257 d36fcf041e4d1bebc98308483aaa0813 5859
00129thm.jpg
AR258 2845f1ea7940451f91e60bdf880a3722 14221
00130.QC.jpg
AR259 cfac36abc2b987ab4ed1edd2f1213b58 4197
00130thm.jpg
AR260 e38fd90c64db39431e412167f2823169 28512
00131.QC.jpg
AR261 d1af2b44b63a0872df7c9d7384260269 7344
00131thm.jpg
AR262 02e0c907c5fd042d7e475ee51bbb8162 24616
00132.QC.jpg
AR263 0cb5caa39011954e97bcebe7313bd71e 6745
00132thm.jpg
AR264 ee22431337475f7febdf5ff422c0a803 24089
00133.QC.jpg
AR265 746758fef9c8e9b5c826fb8e7f9c7fd2 5884
00133thm.jpg
AR266 6c85a382b3119a3b66a443f0490d72e2 23669
00134.QC.jpg
AR267 7e561e560242f84d13ab5e4ec7dc6b3e 5975
00134thm.jpg
AR268 60d23126d5c356a76da1b809a78e5606 21933
00135.QC.jpg
AR269 28038676c24dee261235480310012c39
00135thm.jpg
AR270 e3d5f2b33fb8ecad238133c79e490e3d 26871
00136.QC.jpg
AR271 95e6a2545811770afc9446da425658b3 6705
00136thm.jpg
AR272 e1a3b98ad03611c73173176482151f13 25211
00137.QC.jpg
AR273 6c52f56f0a633853019cf2ed5aab5b42 6555
00137thm.jpg
AR274 79ada2e2cdc4d099eb01331a0b0916df 27241
00138.QC.jpg
AR275 550386d88fdd4a8319bbc11a232a217f 6859
00138thm.jpg
AR276 b37eff2f4f84c500a54873271168f343 25332
00139.QC.jpg
AR277 4f01881acda733f95c158a1311341bf3 6360
00139thm.jpg
AR278 adbfa99c0e2a6e10a8ab702f05ad55d9 20273
00140.QC.jpg
AR279 406a3c413c0917c645b1ee5c873ef7a5 5182
00140thm.jpg
AR280 a83c758b9713d7db0bd3e79159ed466c 23806
00141.QC.jpg
AR281 101d352389fe6274ecd0f2c3e972d9e8 5964
00141thm.jpg
AR282 4da9c588090006440f0ebb8877e729c0 27998
00142.QC.jpg
AR283 1829c6bd7d5b3c5d15c824e1c4d9a9b4 7013
00142thm.jpg
AR284 06aa041d7d96c32147378abe420c34bf 27152
00143.QC.jpg
AR285 87a5c177a53953d74851136b6fdf3414 6752
00143thm.jpg
AR286 66bc734a8b7523047acca65633c2bdcd 27801
00144.QC.jpg
AR287 7e5e7804afa5511d16587940c4a249ea 6957
00144thm.jpg
AR288 2832cc4d4e2de865e1a69110eabe12c1 27595
00145.QC.jpg
AR289 00152097b7531e376ccf33436e039fa3 6765
00145thm.jpg
AR290 7dde1bd2466aa708809a512d1abda484 26339
00146.QC.jpg
AR291 6eb671aff646ae216fc7b324b644ad63 6595
00146thm.jpg
AR292 06cf5e6d134a65cb89072519eddddd1b 11304
00147.QC.jpg
AR293 87eb91435fcb344b3bf32ee83ebf4e76 3202
00147thm.jpg
AR294 5eba65960988d0befd546ed0ed3be18c 22616
00148.QC.jpg
AR295 326ac793ff990a6cd877b38631ae2378 5527
00148thm.jpg
AR296 37a8a2bd3175fd04e2f04bc9cf67cab5 19413
00149.QC.jpg
AR297 48d8612fb7903340442df7e31c30c6b7 4964
00149thm.jpg
AR298 fdbba3347f05cfc4575aa3d00fcb7ee0 24923
00150.QC.jpg
AR299 82aa0d7565597ebe27c8de3f4dd1caf3 6209
00150thm.jpg
AR300 1a17b7ef717f47794dfe4bc1b512678b 23352
00151.QC.jpg
AR301 f88e765004ca6a43ab843f9ef6d9d7f9 6183
00151thm.jpg
AR302 0aea25a5fb5a0e240a6b26cfe7801583 21380
00152.QC.jpg
AR303 bdd5e1b3ac02fe63fa3e77721e4986dc 5510
00152thm.jpg
AR304 8af2c294e365ed6cbc87127b58fcc793 13308
00153.QC.jpg
AR305 0310504585c7c91b72620dd6ecc4d367 3404
00153thm.jpg
AR306 6a83ebebc2880ff43806150147cbfd14 23449
00154.QC.jpg
AR307 12ee060b28d46c4d9c45739d21eebb92
00154thm.jpg
AR308 31f56cf3dadad2aaba4fd84a1c445300 25340
00155.QC.jpg
AR309 ff462a7db15a08c02170588aeec6d883 6525
00155thm.jpg
AR310 c559c761e046bc3f3ed50e68e71b1483 25930
00156.QC.jpg
AR311 b2bd1166ff69f0adc0d11896f95728c2
00156thm.jpg
AR312 6f275a32aeca990527e48318d1ca9051 26243
00157.QC.jpg
AR313 a530a29804031c68fc64882363bb8ead 6659
00157thm.jpg
AR314 fbe02389cd20124062abe138b06128ac 27057
00158.QC.jpg
AR315 09128b791be04b0b315bb5ae5563d2f7 6841
00158thm.jpg
AR316 ecf3fa151f2a89630e00d88ba5fe9c3b 25349
00159.QC.jpg
AR317 cae1f30da1e95acbebe912de49884880 6534
00159thm.jpg
AR318 39308b3df44d44da07f3ce1ee5940841 24681
00160.QC.jpg
AR319 a193db93217c5ef6e0ea3f5a0beb991b 6411
00160thm.jpg
AR320 c2642640ab0b2fce643bd8363d93ae65 21813
00161.QC.jpg
AR321 cd768d8b8ab3eaea328a120d1db9d2f6 5865
00161thm.jpg
AR322 1c1765c897c84695c10cedf05b5d6351 23000
00162.QC.jpg
AR323 370f04650d038ce619f37ab86337b737 5558
00162thm.jpg
AR324 0d519891343fcc79a8f527aa18d3aa8f 10088
00163.QC.jpg
AR325 21bd02b2dcca57bb72c2fa47cb68dcc7 2744
00163thm.jpg
AR326 dba980b7a80a22aa84de8714eef26a52 22812
00164.QC.jpg
AR327 e210f63d5f59fe9bb4fe3dc5e5db1974 5696
00164thm.jpg
AR328 7e6b788ce9660246876d1417d9bd04b9 9581
00165.QC.jpg
AR329 ddaa7ed041815b51679dd13f715ddea2 2657
00165thm.jpg
AR330 4637f07bac76f0ba3ae6f29abbcad5e5 19583
Copyright.QC.jpg
AR331 ef4fe8286c3890fb4acb006aedfcbf5b 5158
Copyrightthm.jpg
AR332 07967dbba861911cd2008a6b55408ca8 41752
Copyright_Archive.pro
AR333 1267d36b07938a8683cd5570ee42899c
Copyright_Archive.tif
AR334 d375b4112a451c37f0459dbb42740892 1945
Copyright_Archive.txt
AR335 27e9f19b6f42d32ffbbe2bcffdabafe3 245272
UF00082166_00001.mets
METS:structMap STRUCT1 TYPE mixed
METS:div DMDID Intelligent autonomous systems ORDER 0 main
D1 1 Title Page
P2 i
METS:fptr FILEID
D2 2 Copyright
P3 ii
D3 3 Dedication
P4 iii
D4 4 Acknowledgement
P5 iv
D5 5 Table of Contents
P6 v
P7 vi
D6 6 Abstract
P8 vii
P9 viii
P10 ix
D7 Introduction 7 Chapter
P11
P12
P13
P14
P15
P16
P17
P18 8
P19 9
P20 10
P21 11
D8 Sensor data fusion: managing uncertainty
P22 12
P23 13
P24 14
P25 15
P26 16
P27 17
P28 18
P29 19
P30 20
P31 21
P32 22
P33 23
P34 24
P35 25
P36 26
P37 27
P38 28
P39 29
P40 30
P41 31
P42 32
P43 33
P44 34
P45 35
P46 36
P47 37
P48 38
P49 39
D9 fully mobile robots
P50 40
P51 41
P52 42
P53 43
P54 44
P55 45
P56 46
P57 47
P58 48
P59 49
P60 50
P61 51
P62 52
P63 53
P64 54
P65 55
P66 56
P67 57
P68 58
P69 59
P70 60
P71 61
P72 62
P73 63
P74 64
P75 65
P76 66
P77 67
D10 The proposed hybrid control architecture
P78 68
P79 69
P80 70
P81 71
P82 72
P83 73
P84 74
P85 75
P86 76
P87 77
P88 78
P89 79
P90 80
P91 81
P92 82
P93 83
P94 84
P95 85
P96 86
P97 87
P98 88
P99 89
P100 90
P101 91
P102 92
D11 Experimental setup and implementation
P103 93
P104 94
P105 95
P106 96
P107 97
P108 98
P109 99
P110 100
P111 101
P112 102
P113 103
P114 104
P115 105
P116 106
P117
P118 108
P119 109
P120 110
P121 111
P122 112
P123 113
P124 114
P125 115
P126 116
P127 117
P128 118
P129 119
D12 results
P130 120
P131 121
P132 122
P133 123
P134 124
P135 125
P136 126
P137 127
P138 128
P139 129
P140 130
P141 131
D13 Summary conclusions
P142 132
P143 133
P144 134
P145 135
P146 136
P147 137
P148 138
D14 to clips Appendix
P149 139
P150 140
P151 141
P152 142
P153 143
P154 144
D15 Reference
P155 145
P156 146
P157
P158 148
P159 149
P160 150
P161 151
P162 152
D16 Biographical sketch
P163 153
P164 154
P165 155
P166 156
D17
P1


10
unstructured environment in our lab for controlling a K2A
Cybermotion mobile robot. Chapter 5 covers the experimental
setup and implementation issues, while chapter 6 presents
and discusses the results obtained. We share the belief that
a world representation and sensory confirmation of that
representation are essential to the intelligence of an
autonomous mobile robot. Thus, the map builder is an
important part of the hybrid control architecture. We
propose, also in chapter 4, a distributed knowledge-based
framework called the Sensory Knowledge Integrator (SKI) as
the underlying model for the map builder. The SKI framework
organizes the domain knowledge needed to describe the
environment being observed into data-driven and model-driven
knowledge sources, and provides a strategy for applying that
knowledge. The theoretical concepts of the SKI model are
presented in section 4.3, while the implementation of the
map builder is discussed in chapter 5. The results of
implementing the various knowledge sources of the map
builder are also presented in chapter 5. These results show
two types of representations: an occupancy grid
representation and a 2-D line representation generated from
sonar sensor data. Results of position correction or re-referencing of the robot are also presented. Chapter 7
concludes this dissertation and discusses limitations and
future research trends.


in the planning module. Reconfiguration is accomplished by
changing the motivation state of the motivation-driven
behaviors, and by performing some behavior arbitration.
Thus, the planning module selects (enables and disables)
from a variety of behaviors (such as avoid-obstacles, target-nav, follow-wall, etc.) the appropriate set of
behaviors at the appropriate time for the task at hand. For
example, in our implementation given the task of mapping the
environment, when the 2-D line representation becomes
available, the corner detector knowledge source in the map
builder examines this representation and posts a corner
hypothesis on the hypothesis panel, which in turn causes a
specific rule of the arbitration strategy in the planning module to fire, selecting the target-nav behavior with the location of the corner to be investigated as its target. Thus, behavior selection is based upon
status inputs from the various behaviors and from the map
builder module which, in time, will also provide a high-
level spatially-indexed and object-indexed representation of
the environment. In addition, the planning module uses a
priori knowledge about the task, the environment, and the
problem solving strategy effective in that domain. For
example, if the robot's task is to locate all green drums of
a certain size, a priori knowledge (such as "drums of this
size are usually located on the floor in corners") can
greatly help the task of searching for the drums. This
knowledge about the task and objects in the environment,


CHAPTER 1
INTRODUCTION
Organisms live in a dynamic environment and tailor
their actions based on their internal state and on the
perceived state of the external environment. This
interaction with the environment becomes more complex as one
ascends the hierarchical ladder of organisms, starting with
the simplest ones that follow a stimulus-response type of
interaction where actions are a direct response to the
sensory information, and ending with humans that are endowed
with intelligence. Intelligence enables humans to reason
with symbols, to make models of the world and to make plans
to favorably alter the relationship between themselves and
the environment. Having the ability to reason does not mean that people are devoid of primitive, instinctive behaviors. As a matter of fact, reflexive responses account
for most of what people do when walking, eating, talking,
etc. Less than half the brain is dedicated to higher-
level thinking [Albus 81].
Relative to the development of a machine (robot) which
exhibits various degrees of autonomous behavior, which of
the following two approaches is most appropriate: 1) Should
one design machines to mimic human intelligence with
symbolic reasoning and symbolic models of the world? 2)


of the map-builder under the Sensory Knowledge Integrator
framework. Specific knowledge sources for building
representations of the world are discussed. Figure 5.4 shows
a specific implementation of the Sensory Knowledge
Integrator.
Figure 4.8 Sensory Knowledge Integrator framework.
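To make this concrete, the sketch below (in C) illustrates the kind of blackboard-style structure the framework implies: knowledge sources inspect a shared hypothesis panel and post new hypotheses onto it. The type names, fields, and control loop here are illustrative assumptions, not the actual implementation.

/*
 * Minimal SKI-style sketch; all identifiers are assumptions
 * made for illustration, not the dissertation's data structures.
 */
#define MAX_HYPOTHESES 64

typedef struct {
    const char *label;    /* e.g. "line", "corner", "wall" */
    double x, y;          /* hypothesized feature location */
    double confidence;    /* accumulated support */
} Hypothesis;

typedef struct {
    Hypothesis items[MAX_HYPOTHESES];
    int count;
} HypothesisPanel;

typedef struct {
    const char *name;
    /* fires when the KS's triggering data or hypotheses are present */
    int  (*precondition)(const HypothesisPanel *p);
    void (*action)(HypothesisPanel *p);
} KnowledgeSource;

void post_hypothesis(HypothesisPanel *p, Hypothesis h)
{
    if (p->count < MAX_HYPOTHESES)
        p->items[p->count++] = h;
}

/* One control cycle: apply every KS whose precondition holds. */
void ski_cycle(KnowledgeSource ks[], int n, HypothesisPanel *p)
{
    for (int i = 0; i < n; i++)
        if (ks[i].precondition(p))
            ks[i].action(p);
}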


to the excitement of the field. Our view is to encourage
real-world experimentation even for modestly simple tasks, not only to avoid the assumptions inherent in simulations, but also to speed up progress in the field, especially when there is no agreement among researchers about the
fundamental principles involved in the construction of
intelligent autonomous systems.


different information about the same or different feature.
The measurements are added (set union) to the total
environment description without concern for conflict. A good
example of complementary sensor data fusion is Flynn's
combining of sonar and IR sensor data [Flynn 88]. The sonar
can measure the distance to an object but has poor angular
resolution, while the IR sensor has good angular resolution
but is not able to measure the distance accurately. By using
both sensors to scan a room, and combining their information
in a complementary manner where the advantages of one sensor
compensate for the disadvantages of the other, the robot
was able to build a better map of the room.
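A toy sketch of this complementary combination, under the assumption that the sonar supplies the range and the IR sensor supplies the bearing to the same object, might look as follows:

/* Complementary fusion in the spirit of the sonar/IR example:
 * take the distance from the sonar and the angle from the IR
 * sensor. Units and field names are illustrative assumptions. */
#include <math.h>

typedef struct { double x, y; } Point;

/* range: sonar range reading (accurate distance, wide beam)
 * bearing: IR-derived bearing in radians (accurate angle)   */
Point fuse_sonar_ir(double range, double bearing)
{
    Point p = { range * cos(bearing), range * sin(bearing) };
    return p;
}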
Cooperative. This occurs when one sensor's information
is used to guide the search for another's new observations.
In other words, one sensor relies on another for information
prior to observations. For example, the guiding of a tactile
sensor by initial visual inspection [Allen 88].
Independent. In this case one sensor or another is used
independently for a particular task. Here fusion is not
performed, but the system as a whole employs more than one
sensor for a particular task and uses one particular sensor
type at a time while the others are completely ignored, even
though they may be functional. For example, in an
environment where the lighting conditions are very poor, a
mobile robot may depend solely on a sonar sensor for


with the robot oriented towards the yet unknown area, the
planning module might enable the wander behavior again to
allow discovery of unknown areas. The actual implementation
of the planning module is presented in chapter 5 and follows
a knowledge-based approach. We use CLIPS, a knowledge-based
system shell, as an implementation tool to represent and
reason with the knowledge used to accomplish the tasks of
the planning module. A review of CLIPS is given in the
appendix.
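For illustration, one such arbitration-strategy rule can be rendered in C as in the sketch below (the actual rules are written in CLIPS; the identifiers and the enable/disable hooks are assumptions made for the sketch):

#include <stdio.h>

typedef struct { double x, y; } Location;

enum Behavior { WANDER, TARGET_NAV, AVOID_OBSTACLES };

/* Stand-ins for the interface to the behavior-based subsystem. */
void enable_behavior(enum Behavior b, Location target)
{
    printf("enable behavior %d, target (%.2f, %.2f)\n",
           b, target.x, target.y);
}

void disable_behavior(enum Behavior b)
{
    printf("disable behavior %d\n", b);
}

/* Rule: a corner hypothesis posted by the map builder selects
 * target-nav with the corner location as its target. */
void rule_investigate_corner(int corner_posted, Location corner)
{
    if (corner_posted) {
        disable_behavior(WANDER);
        enable_behavior(TARGET_NAV, corner);
    }
}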
4.2 Lower-Level Behaviors
As mentioned earlier, the behavior-based subsystem
provides the robot with the basic instinctive competences
which are taken for granted by the planning module. The
subsystem consists of a collection of modular, task-
achieving units called behaviors that react directly to
sensory data, each producing a specific response to detected
stimuli. These behaviors are independently running modules
that perform specific tasks based on the latest associated
sensor data. Each of these modules is associated with a set
of sensors needed to perform the specialized behavior of the
module. Sensor data is channelled directly to the individual
behavior allowing for "immediate" reaction. Thus each module
constructs from its sensor data a specialized local
representation necessary to effect the behavior of the
module. The responses from multiple behaviors compete for
the control of the robot actuators, and the winner is


hardware used. In our implementation the behavior-based
system was implemented on an IRIS 2400 graphics workstation
which communicated to the robot controller (an IBM PC AT)
via a slow RS232 serial link. The robot controller, in turn,
relayed the commands to the robot via a radio modem. This
arrangement and communications protocol contributed to a
slow overall cycle time. The behavior-based system itself generated a heading much faster, at about 1 second per cycle including a complete sonar scan (firing all sonars). Relaying this heading to the robot, plus the robot's delay before effecting the heading, amounted to about 15 seconds.
Concerning the results of the map builder we discuss
next the three types of representations used and give
qualitative and quantitative measures of their performance.
We start with the raw data, followed by the EOU and the 2-D
line representations.
The raw data representation, consisting of raw sonar data points, was quickly acquired from each sonar scan by
modeling the sonar cone as a straight line. Raw sonar data
is used by low-level behaviors such as "avoid-obstacles" and "boundary-following" that require immediate sensor data
for quick reflexive response. Such data is often inaccurate
due to the beam divergence and specular reflection of the
sonar signal. This representation does not make use of the
empty space between the sensor and the detected object.
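A minimal sketch of this reduction, assuming a robot pose and a fixed sonar mounting angle, is:

/* Reduce a raw sonar return to a single hit point by modeling
 * the sonar cone as a straight line along the beam axis.
 * The geometry and units are assumptions for illustration. */
#include <math.h>

typedef struct { double x, y, theta; } Pose;   /* robot pose */
typedef struct { double x, y; } Point;

/* mount_angle: sonar axis relative to robot heading (rad)
 * range: measured distance to the echo (same units as pose) */
Point sonar_hit_point(Pose robot, double mount_angle, double range)
{
    double beam = robot.theta + mount_angle;   /* beam direction */
    Point hit = { robot.x + range * cos(beam),
                  robot.y + range * sin(beam) };
    return hit;
}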


following parametric representation of a line:

x·sin(θ) - y·cos(θ) = ρ

and the parameters to be matched are ρ, θ, and d, with their corresponding uncertainties σρ, σθ, and σd. Here θ is the orientation of the line measured relative to the x-axis; ρ is the offset of the line, represented by the perpendicular distance from the origin to the line; while d is the distance from the midpoint of the line to the point of intersection of the line with the perpendicular to the line from the origin. To find ρ, θ, and d, given the end points P1 and P2, we proceed as follows:
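A hedged sketch of this computation, following the definitions above (the sign conventions are our assumptions), is:

/* Compute theta, rho, d for the line through endpoints P1, P2,
 * using the form x*sin(theta) - y*cos(theta) = rho. */
#include <math.h>

typedef struct { double x, y; } Point;

void line_params(Point p1, Point p2,
                 double *theta, double *rho, double *d)
{
    /* orientation of the line relative to the x-axis */
    *theta = atan2(p2.y - p1.y, p2.x - p1.x);

    /* perpendicular offset of the line from the origin */
    *rho = p1.x * sin(*theta) - p1.y * cos(*theta);

    /* signed distance along the line from the foot of the
     * perpendicular dropped from the origin to the midpoint */
    double mx = 0.5 * (p1.x + p2.x);
    double my = 0.5 * (p1.y + p2.y);
    *d = mx * cos(*theta) + my * sin(*theta);
}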


accumulating endorsements from a priori expectations about
the object, and from sub-part and co-occurrence support for
the object.
3.4 Direction of Proposed Research
As mentioned in chapter 1, this research follows a
hybrid (behavior-based and cognitive) approach to the
problem of controlling an autonomous mobile robot. The goal
is to enable autonomous operation in a dynamic, unknown, and
unstructured indoor environment. The robot knows about
certain objects to expect in the environment, but does not
have a map of it. The robot has considerable general
information about the structure of the environment, but
cannot assume that such information is complete. It
constructs a model of the environment using onboard
sensors. We aim to develop a general-purpose robot useful for a variety of explicitly stated user tasks. The tasks could be either general or specific, including such tasks as "do not crash or fall down", "build a map", and "locate all green drums".
Unlike most behavior-based approaches which avoid
modelling of the world and the use of world knowledge, our
view is that while world models are unnecessary for low-
level actions such as wandering around while avoiding
obstacles, they are essential to intelligent interaction
with the world. General, efficient, and flexible navigation
of a mobile robot requires world models. The world models




[Newell 75] Several methods for managing uncertainty have
been proposed. These include the use of Bayesian probability
theory, certainty theory (developed at Stanford and employed
in the MYCIN system [Buchannan 84]), fuzzy set theory [Zadeh
83], the Dempster/Shafer theory of evidence, nonmonotonic
reasoning, and theory of endorsements. Ng and Abramson [Ng
90] provide a good reference that introduces and compares
these methods.
2.5.3.1 Bayesian Probability Theory
Bayes' theorem, a very important result of probability theory, allows the computation of the probability of a hypothesis based on some evidence, given only the probabilities with which the evidence follows from the hypothesis. Let

P(Hi/E) = the probability that Hi is true given evidence E
P(E/Hi) = the probability of observing evidence E when Hi is true
P(Hi) = the probability that Hi is true
n = the number of possible hypotheses

Then, the theorem states that:

P(Hi/E) = [P(E/Hi) P(Hi)] / [Σ(k=1 to n) P(E/Hk) P(Hk)]
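As a small illustration (a sketch, not code from this work), the theorem translates directly into a normalization loop over the competing hypotheses:

/* Given the priors P(Hk) and likelihoods P(E/Hk) for n
 * hypotheses, compute the posterior P(Hi/E). */
double bayes_posterior(const double prior[],      /* P(Hk)   */
                       const double likelihood[], /* P(E/Hk) */
                       int n, int i)
{
    double evidence = 0.0;   /* denominator: total P(E) */
    for (int k = 0; k < n; k++)
        evidence += likelihood[k] * prior[k];
    return (likelihood[i] * prior[i]) / evidence;
}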


coupled with knowledge about the environment (such as in an
indoor environment, corners are the intersection of two
walls, etc.) can be brought to bear to improve the
efficiency of the search. Other types of a priori knowledge
include self knowledge such as the diameter and height of
the robot, the physical arrangement of the sensors onboard
the robot, and types and characteristics of sensors used.
For example, knowing the diameter of the robot helps in deciding whether to let the robot venture into a narrow pathway. Note that having a priori knowledge about
the environment does not mean that the environment is known,
instead it means that the robot knows about certain objects
to expect in the environment (for example, walls, doors and
corners in an indoor environment) but does not have a map
of it.
One type of reasoning performed by the planning module
involves the detection and prevention of cyclic behavior
patterns exhibited by the robot when driven by the lower-
level reflexive behaviors. For example, if, in the wander-
while-avoiding-obstacles behavior, the robot gets stuck in a
cyclic pattern (similar to the fly-at-the-window situation)
giving no new information to the map builder, the map
builder forwards its status to the planning module including
the location of the centers of various frontiers that form
the boundary between empty and unknown space. The planning
module enables the target-nav behavior to choose one of
these centers as its target. When this target is reached


Figure 6.8 First and second 2-D line representations.
Figure 6.9 Merged lines over 1st. and 2nd. line sets.


learn must be part of any intelligent system. Therefore,
learning paradigms must be a crucial research issue in the
field of intelligent autonomous systems. Learning is evident
in animals when they modify aspects of their future behavior
based on their past history of interactions with the
environment. One final analogy to natural systems that must
be duplicated in artificial autonomous systems is the fact
that animal behavior is robust. For example, when animals
lose certain parts of their body they can still survive and
cope with their environment. Similarly, an artificial
autonomous system must be robust and continue to operate
(probably with a lower level of competence) when some of
their parts fail.
Outside of the analogy to nature, other important
research issues exist. Given that an artificial autonomous system does not have the luxury of evolutionary time to develop, as animals did, one question is how such a system can benefit from the knowledge of its designer. That is,
how can the a priori knowledge of the designer be brought to
bear and put to good use in making artificial autonomous
systems intelligent and useful for various user specific
tasks in a variety of domains? Additionally, what form or
representation should the a priori knowledge be compiled
into so that it could be directly used by the behavior-based
system to effect immediate actions?
All of the above-mentioned issues are both challenging and exciting. Live experiments in the real world greatly add


available. As described in section 5.2.1, the cones simulate
an actual sound burst of a sonar sensor with the inside of
the cone representing empty space, while the end of the cone
represents occupied space. The cones in the figure actually
extend all the way to the occupied region, but the graphics
display routine is set to only display relatively confirmed
empty space, and at the beginning the region of the cone
away from the sensor is not well confirmed yet, and hence
does not show. Figure 6.3 shows the accumulated EOU
representation after about 500 scans of raw data points. In
this experiment, the first 2-D line representation, figure
6.4, emerged upon the completion of about 45 scans (about
500 data points). Figures 6.5 and 6.6 show respectively the
corresponding raw data points, and the filtered raw data
points from which the line representation was generated.
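A hedged sketch of this cone-based update of the EOU grid, with assumed grid geometry and evidence weights, is given below:

/* One sonar reading updates the EOU grid: cells swept by the
 * cone up to the measured range accumulate "empty" evidence,
 * and the cell at the end of the cone accumulates "occupied"
 * evidence. Grid size, cell size, weights, and the world
 * origin at the grid corner are assumptions for illustration. */
#include <math.h>

#define GRID 256
#define CELL 0.1   /* meters per cell (assumed) */

typedef struct { double x, y, theta; } Pose;

/* grid[i][j] < 0 leans empty, > 0 leans occupied, 0 unknown */
double eou[GRID][GRID];

static void mark(double x, double y, double delta)
{
    int i = (int)(x / CELL), j = (int)(y / CELL);
    if (i >= 0 && i < GRID && j >= 0 && j < GRID)
        eou[i][j] += delta;
}

void eou_update(Pose robot, double beam_angle,
                double half_width, double range)
{
    /* sweep rays across the cone's angular width */
    for (double a = -half_width; a <= half_width; a += 0.01) {
        double dir = robot.theta + beam_angle + a;
        for (double r = 0.0; r < range; r += CELL)   /* empty */
            mark(robot.x + r * cos(dir), robot.y + r * sin(dir), -0.05);
        mark(robot.x + range * cos(dir),             /* occupied */
             robot.y + range * sin(dir), 0.5);
    }
}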
As the 2-D line representation is generated, the robot
continues to gather data in the same generalized wandering
behavior. Meanwhile, the target generator knowledge source
in the map builder hypothesizes the presence of "curious"
locations at the end points of the 2-D lines that might be
worth further investigation for a better map representation.
End points in close vicinity of each other were regarded as
one target. The planning module decides that these targets
are worth investigating and sets the motivation state of the
robot as a location attraction with the locations of the
generated targets. This triggers the target-nav behavior,
and the robot starts moving towards its first target (one of


In using Bayes' theorem, two major assumptions are required: first, that all the probabilities P(E/Hk) and the priors P(Hk) are known; second, that all P(E/Hk) are independent.
These assumptions are difficult or impossible to meet in
many practical domains. In such situations, more heuristic
approaches are used. Another problem with statistical
methods in general is that they cannot distinguish between lack of belief and disbelief. This stems from
the observation that in traditional probability theory the
sum of confidence for a certain hypothesis and confidence
against the same hypothesis must add to 1. However, often
one might have a certain degree of confidence that a certain
hypothesis is true, yet have no knowledge about it being not
true. Certainty theory attempts to overcome this limitation.
2.5.3.2 Certainty Theory
Certainty theory splits the confidence for and the
confidence against a certain hypothesis by defining the
following two measures:
MB(H/E) is the measure of belief of a hypothesis H given evidence E, with 0 ≤ MB(H/E) ≤ 1.
MD(H/E) is the measure of disbelief of a hypothesis H given evidence E, with 0 ≤ MD(H/E) ≤ 1.
These two measures are tied together with the certainty factor:

CF(H/E) = MB(H/E) - MD(H/E)
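Since MB and MD each lie in [0, 1], CF ranges over [-1, 1]; a belief with no recorded disbelief (MB > 0, MD = 0) is thus representable, which is exactly the distinction traditional probability cannot make. A trivial sketch:

/* Standard certainty-factor combination, not code from this
 * work: CF in [-1, 1], with MB and MD kept separate so that
 * lack of belief is distinguished from disbelief. */
double certainty_factor(double mb, double md)
{
    return mb - md;   /* CF(H/E) = MB(H/E) - MD(H/E) */
}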


implementation in our work of some of the sensor fusion
techniques described in this chapter.
The first example uses the consistency checking
techniques of section 2.5.1 to match observed and model line
parameters. A line parameter vector consists of orientation,
collinearity, and overlap variables with associated
uncertainties. The normal distance between the two parameter
vectors is calculated as described in section 2.5.1 and
compared to a threshold for a consistency check. Section
5.2.4.1 illustrates the details of the matching operation.
If the match is successful, the line parameter vectors are
now merged, also using the estimation techniques described in section 2.5.1. These techniques use the standard Kalman
filter equations. The merged estimate with reduced
uncertainty is then compared to the observed lines to
determine the error in robot position and orientation.
Section 5.2.4.2 details such operations.
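A hedged sketch of these two operations, reduced to a single scalar line parameter for clarity (the threshold and names are assumptions), is:

/* Consistency test: normalized squared distance between the
 * observed and model estimates against a threshold. */
int consistent(double x_obs, double var_obs,
               double x_mod, double var_mod, double threshold)
{
    double diff = x_obs - x_mod;
    return diff * diff / (var_obs + var_mod) < threshold * threshold;
}

/* Static Kalman-filter merge of two consistent estimates; the
 * merged variance is smaller than either input variance. */
void merge_estimates(double x_obs, double var_obs,
                     double x_mod, double var_mod,
                     double *x_merged, double *var_merged)
{
    double k = var_mod / (var_mod + var_obs);   /* Kalman gain */
    *x_merged   = x_mod + k * (x_obs - x_mod);
    *var_merged = var_mod * (1.0 - k);
}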
Another example illustrates a symbolic uncertainty
management technique similar to the theory of endorsements
presented in section 2.5.3.6, and to uncertainty management
within the schema system proposed in [Draper 88]. Such a
technique is used for conflict resolution when no match
exists between observed features (what the sensors are
actually seeing) and model features (what the sensors should
be seeing). To resolve the inconsistencies, we propose a
knowledge-based approach within the framework of the Sensory
Knowledge Integrator. Using a priori domain-dependent and


error. The operating conditions variables represent a
behavior's operating conditions such as "target-reached" or
"break-detected", for example. The performance measure
variables include such variables as a behavior's shared
memory access frequency. The "status asserter" function only
asserts the status variables that have changed since the last
inspection.
5.2 Implementation of the Map Builder
The map-builder with the Sensory Knowledge Integrator
(SKI) as its underlying framework, runs on an IRIS 4D
workstation and implements a variety of knowledge sources
and representations. Figure 5.4 shows our specific
implementation of the map builder under the SKI framework
with a variety of knowledge sources and representations. The
two representations used include an occupancy grid empty,
occupied, or unknown (EOU) representation, and a 2-D line
representation. The EOU representation is a spatially-
indexed occupancy grid type of representation where space is
tessellated into cells each containing a value indicating
its state whether occupied, empty, or unknown. This
representation is described in section 3.3.2 and is useful
for map-based navigation strategies in computing free paths.
It is generated in our implementation by the EOU knowledge
source (KS) from sonar data and robot position and
orientation data, figure 5.4. Details of the EOU KS will be
discussed in the next section. The 2-D line representation


different map representations of the environment are presented and discussed.


in parallel with higher-level planning and map building
modules. Thus, our architecture includes a cognitive
component and a behavior-based component. The cognitive
component is characterized by knowledge representations and
a reasoning mechanism which performs higher mental functions
such as planning. The behavior-based component hosts
the cognitive component and provides the basic instinctive
competences for the robot. The level of competence of the
behavior-based component determines the degree of complexity
of the planner in the cognitive component. Thus, the higher
the competence level of the behavior-based system, the
simpler the planning activity. Once the behavior-based
system is built to the desired level of competence, it can
then host the cognitive part. The interface between the two
components utilizes motivated behaviors implemented as part
of the behavior-based system. We define a motivated behavior
as one whose response is driven mainly by the associated
'motivation' or internal state of the robot. This is
analogous to motivated behavior exhibited by animals. For
example, the motivated behavior of feeding depends on the
internal motivation state of hunger in addition to the
presence of the external stimulus of food. Utilizing
motivated behaviors, the cognitive planning activity can
thus execute its plans by merely setting the motivation
state of the robot and letting the behavior-based subsystem
worry about the details of plan execution. In our approach,
the arbitration of the responses of lower-level behaviors is


a procedural language (such as C), and then return to CLIPS.
The external functions must be explicitly described to CLIPS
through a CLIPS function called "usrfuncs". Each user-
defined function is defined within a call to "usrfuncs"
using the CLIPS "define-function" routine. As an example of
declaring a user-defined function, we show below how our
behavior-select function is defined. In CLIPS file main.c we
find the "usrfuncs" function and modify it as follows:
usrfuncs()
{
    /* Add the next two lines */
    extern float selector();
    define_function("behavior-select", 'f', selector, "selector");
}
In the define-function statement there are four arguments.
The first, "behavior-select" is the name of the function as
known by CLIPS. The second argument refers to the type of
value returned by the CLIPS function. The allowed types are:
'i' for integer, 'f' for float, 'c' for character, 's' for
pointer to a character string, 'w' for pointer to a
character word, 'u' for pointer to an unknown data type, 'm'
for pointer to a multifield variable, and 'v' for void. The
third argument is a pointer to the C function itself,
while the fourth one, "selector", is the actual name of the
external C function.
An external function can be called from either the
right or the left hand side of a rule. In our previous


partly hardwired in the behavior-based system, and partly
incorporated into a set of production rules as part of the
planning module of the cognitive system. These rules are
driven by the goals of the robot, and the current situation
facts provided by the world model and the status of the
behavior-based system. In addition, in the behavior-based
system, we use superposition in a potential force field
formulation (similar to [Arkin 87] and [Khatib 85]) to
combine the responses of the various complementary behaviors
that are active at any one time. The goal for the hybrid
architecture is to gain the real-time performance of a
behavior-based system without losing the general goal
handling capability of a general purpose world model and
planner. We view world models as essential to intelligent
interaction with the environment, providing a "bigger
picture" for the robot when reflexive behaviors encounter
difficulty.
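To make this combination concrete, the following sketch (ours, with hypothetical names, not the exact implementation) superposes the force vectors produced by the active behaviors:

typedef struct { float fx, fy; } Force;

/* Superposition of potential-field responses: only the behaviors that
 * are currently active contribute, and a per-behavior gain adjusts
 * the strength of its field.                                         */
Force combine_responses(const Force *resp, const int *active,
                        const float *gain, int n)
{
    Force total = { 0.0f, 0.0f };
    int i;
    for (i = 0; i < n; i++) {
        if (!active[i]) continue;
        total.fx += gain[i] * resp[i].fx;
        total.fy += gain[i] * resp[i].fy;
    }
    return total;
}

The resultant vector is what drives the robot at each control cycle.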
In our framework we tackle the behavior fusion problem
with the lower-level behaviors, while higher-level modules
such as the map builder tackle the sensor fusion problem in
attempting to build a general purpose representation.
Theoretical concepts and mathematical tools for sensor data
fusion are presented in chapter 2; issues in designing
intelligent fully autonomous mobile robots are presented in
chapter 3, while details of the workings of our proposed
architecture are explained in chapter 4. This architecture
is implemented and tested in a dynamic, unknown, and


Figure 3.2 Model of hybrid control.
associated motivation state. By merely setting the
motivation state of the robot, the cognitive module
activates selected motivated behaviors in order to bias the
response of the behavior-based system towards achieving the
desired goals. The details of plan execution are left to the
behavior-based subsystem. The motivation state consists of a
variety of representations, each associated with the
corresponding motivation-driven behavior. It is the means of
communication between the cognitive and the behavior-based
subsystems, and could be thought of as a collection of


Figure 4.3 Model of a reactive behavior.
Figure 4.4 Sonar sensor repulsive force model.


on the body of evidence associated with each feature. This
type of reasoning is called evidential reasoning.
Symbol level. At this level symbolic descriptions of
the environment exist and propositions about the environment
are either true or false. Therefore, logical (Boolean)
reasoning about these descriptions is most appropriate.
2.5 Sensor Data Fusion Techniques
A variety of techniques for combining sensor data have
been proposed. Most approaches concentrate on Bayesian or
statistical combination techniques [Richardson 88], [Luo
88], [Porrill 88], [Ayache 88], [Durrant-Whyte 86a]. Some
researchers followed a heuristic approach [Flynn 88], [Allen
88]. Garvey et al, [Garvey 82] proposed evidential reasoning
as a combination technique and claims it is more general
than either Bayesian or Boolean approaches. The fusion
technique of choice depends on the classification of the
sensor data involved, and on the level of abstraction of the
data. For example, at the signal level, data is generally
probabilistic in nature and hence a probabilistic approach
is most appropriate. At higher levels such as the symbolic
feature level, a Boolean approach is generally appropriate.
Sensor data classification and levels of abstraction have
been discussed in the previous sections. For complementary
data the sensors' data are not overlapping and there is no
concern for conflict. In this case, at any level of


domain-independent knowledge, the system reasons about the
conflict and generates resolution tasks to resolve it. These
tasks utilize symbolic endorsements which constitute a
symbolic record of the object-specific evidence supporting
or denying the presence of an object instance. By
accumulating endorsements from a priori expectations about
the object, and from sub-part and co-occurrence support for
the object, the system deals with reasons for believing or
disbelieving a certain hypothesis. To illustrate this
approach, suppose that the system's model of a coffee mug includes a handle.
So, the absence of a handle in a particular view of the mug
reduces the confidence rating of the mug hypothesis. Rather
than just lower the numeric confidence value, the "mug
detector" knowledge source also records the absence of the
handle. This is a source of negative support weakening the
confidence in that hypothesis. The system then takes steps
to remove this negative evidence, invoking another behavior,
for example the "curiosity" behavior, to scan the object
from a variety of view points to account for the missing
piece of evidence. If a hypothesis is subsequently posted
for the handle, the mug hypothesis regains its higher
confidence. The system thus arrives at more reliable
conclusions by reasoning about the sources of uncertainty.
The symbolic representation of uncertainty facilitates this.


tackling the important issues. On the issue of autonomous
robot control architecture, researchers are split between
the behavior-based decomposition and the traditional
decomposition. A brief survey of current research in the
behavior-based approach is presented in section 3.1.2, while
a survey of research in the traditional approach is embedded
in section 3.3 on world model construction issues since such
research involves the classical problems of sensor data
fusion, consistent world modeling, and robot position
referencing. In section 3.2 we discuss and compare the two
approaches, listing the advantages and limitations of each.
Fundamental to the traditional approach is the issue of
consistent world modeling which is presented in section 3.3.
Finally, section 3.4 discusses the directions of this
research.
3.1 Behavior-Based Approaches to Robot Autonomy
In this section we begin by tracing the basis of the
behavior-based approach to concepts in animal behavior, then
we provide a survey of current behavior-based approaches to
robot autonomy, and finally discuss the limitations of the
subsumption architecture [Brooks 86a], and reactive systems
in general.
3.1.1 Lessons from animal behavior
In designing autonomous mobile robots, valuable
insights can be obtained and lessons learned from nature.


example we call function "behavior-select" as (behavior-
select "target-nav" ?xt ?yt). The arguments "target-nav",
xt, and yt are to be passed from CLIPS to the external
function. Although these arguments are listed directly
following the function's name inside CLIPS rules, CLIPS
actually calls the function without any arguments and stores
the parameters internally. These parameters can be accessed
by the external function by calling any of the following
parameter access functions: "num-args", "rstring", and
"rfloat".
To pass data from the external function to the CLIPS
facts-list, the simplest method is to call the C function
"assert". For more detail on passing data between CLIPS and
external functions the reader is referred to the CLIPS
reference manual. Below we give a simple example of our
"selector" external function in order to illustrate the use
of the above mentioned functions for parameter passing:
float selector()                      /* requires <string.h> for strcmp */
{
    char *fun_name, *state, buffer[50];
    float x_target, y_target;
    int num_passed;

    num_passed = num_args();          /* number of arguments passed from CLIPS */
    fun_name = rstring(1);            /* behavior name, e.g. "target-nav" */
    state = rstring(2);               /* requested state, e.g. "ON" */
    if (strcmp(fun_name, "target-nav") == 0)
    {
        if (strcmp(state, "ON") == 0)
        {
            x_target = rfloat(3);
            y_target = rfloat(4);


CHAPTER 6
EXPERIMENTAL RESULTS
6.1 Results From An Experimental Run
The robot started its autonomous mission from an initial location near the center of the lab with the motivation of discovery and building maps of its environment. The initial behavior enabled was the "curiosity" behavior, which allowed the robot to spin 360 degrees in place while taking sonar scan data before
venturing into the unknown. The end of curiosity triggered
the "avoid", "wander", and "follow-boundary" behaviors
simultaneously. This allowed the robot to move around
without bumping into things, and to follow the boundaries
(walls) of its environment. Figure 6.1 shows the emergent
path of the robot under the control of these behaviors. Note
that the map builder module is working concurrently with
these behaviors building representations of the world by
assimilating sensor data from the various locations visited
by the robot. Thus, in the map builder, the EOU
representation is being continuously updated, while the 2-D
line finder knowledge source is accumulating enough data
points to support and warrant the generation of its 2-D line
representation. Figure 6.2 shows the EOU knowledge source in
action updating the EOU representation as sonar data become


interested in computing the evidence for proposition C = A ∩ B, then
BF(C) = [1/(1-k)] · Σ{A∩B=C} m1(A)·m2(B)
u(C) = [1/(1-k)] · m1(Θ)·m2(Θ)
where
k = Σ{A∩B=∅} m1(A)·m2(B)
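To illustrate with our own numbers: let m1({A}) = 0.6 and m1(Θ) = 0.4 for one body of evidence, and m2({A}) = 0.5, m2({B}) = 0.3, m2(Θ) = 0.2 for another, where A and B are disjoint. Then k = m1({A})·m2({B}) = 0.18, the combined support is BF({A}) = [0.6·0.5 + 0.6·0.2 + 0.4·0.5]/(1 - 0.18) = 0.62/0.82 ≈ 0.76, and the residual uncertainty is u = (0.4·0.2)/0.82 ≈ 0.10.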
The added complexity of the Dempster/Shafer theory
increases the computational cost. In addition, the
assumptions of independence required in the Bayesian
approach still apply here. Another criticism of this theory
is that it produces weaker conclusions due to the fact that
it avoids the assignment of stronger probability values, and
hence stronger conclusions may not be justified.
2.5.3.5 Nonmonotonic Reasoning
While all of the methods mentioned above use a numeric model of uncertainty, nonmonotonic reasoning uses a non-numeric approach. In this case the system starts by making
reasonable assumptions using the current uncertain
information, and proceeds with its reasoning as if the
assumptions were true. If at a later time these assumptions
were found to be false (by leading to an impossible
conclusion, for example), then the system must change these


CHAPTER 4
THE PROPOSED HYBRID CONTROL ARCHITECTURE
We propose an architecture that is a synthesis of the
parallel decomposition and the traditional serial
decomposition of a mobile robot control system. Both of
these architectures were discussed and compared earlier.
Figure 4.1 depicts the general framework of our hybrid
control architecture while figure 4.2 shows a block diagram
of a specific implementation of the proposed architecture.
The functions of the various blocks in the figures will
become clear as we proceed through the chapter. The hybrid
architecture supports a hierarchy of control in which the
various lower level modules (such as "avoid-obstacles",
"target-nav", "follow-wall", etc..) perform reflexive
actions or behaviors providing a direct perception to action
link, while the higher level modules (such as the map
builder and the planning modules) perform tasks requiring
greater processing of sensor data such as modelling. It is
important to emphasize parallelism or concurrency in the
hybrid architecture. For example, in figure 4.2, the
planning module, the map builder, and the lower-level
behaviors are all running concurrently. The lower-level
behaviors constitute the behavior-based subsystem and
provide the basic instinctive competences, while the higher-


5.1 Implementation of the Planning Module
As mentioned earlier the planning module follows a
knowledge-based approach to accomplish user specific tasks
by reasoning about task planning and behavior fusion using
current data and status provided by the map builder and the
lower-level behaviors, in addition to knowledge about the
environment and the task. We use CLIPS (C Language Integrated Production System), a knowledge-based system
shell, as an implementation tool to represent and reason
with this knowledge. One of the main reasons for selecting
CLIPS is that it is written in and fully integrated with the
C language providing high portability and ease of
integration with external systems. We have described a CLIPS
implementation of a knowledge-based distributed control of
an autonomous mobile robot in [Bou-Ghannam 91], while a
brief review of CLIPS is given in appendix A. The general
structure of the CLIPS-based planning module is shown in
figure 5.3. Note that the planning module, the map builder,
and the lower-level behaviors (behaviors 0-n) are all
running concurrently. Only one of the lower-level behaviors
has control of the robot at any one time. The selection of
which of these behaviors to turn on is accomplished by the
planning module through the "behavior selector" user-defined
CLIPS external function. The planning module reasons about
behavior selection (or fusion) using the knowledge embedded
in CLIPS rules and the current situation facts provided by
status from the map builder (asserted through the "map


interactions among the knowledge sources were discussed in
chapter 5.
To close our discussion, let us address the question of what some of the important research issues are in the exciting field of intelligent autonomous systems. So far,
good progress has been made, but many research areas remain
to be explored. Valuable insights can be gained from animal
behavior, and hence, fruitful interactions can be obtained
between AI and the various fields of biology, ecology,
behavioral psychology, physiology, and neurobiology. While
animals possess a variety of reflexive behaviors that allow
the animal to quickly react to sudden environmental changes,
we note that the most striking characteristic of animal
behavior is that it is mainly adaptive. The behavior of an
animal is continuously adjusted to confront the dynamic
interactions with the environment. In artificial autonomous
systems, adaptive behavior-based control is equally crucial.
Moreover, animals adjust their behavior based not only on
external stimuli, but also on internal conditions or state.
In other words, animal behavior is goal oriented, triggered
by internal goals or motivations. Similarly for the case of
an artificial autonomous system, it is our belief that such
a system must have goals or drives that decide its actions.
So, another important research issue is how to make
behavior-based systems driven by explicit goals. Goals can
provide a measure of performance for the system, and are,
thus, particularly important for learning. The ability to


control algorithm, our robot exhibited cyclical behavior.
Anderson and Donath [Anderson 88] presented an approach
based upon the use of multiple primitive reflexive
behaviors, and came to the conclusion that "... cyclical
behavior may indicate that an essential characteristic of a
truly autonomous robot is the possession of memory and
reactive behavior (i.e., the ability to react to events
which occur over a number of intervals of time and the
ability to alter the behavior depending upon the previous
behavior)." [Anderson 88, p. 205]. Brooks and Connell
[Brooks 86b] have also observed cyclical behavior in their
wandering and wall following behaviors.
The majority of mobile robot projects follow somewhat
the traditional approach. In this approach, world
representation and sensory confirmation of that
representation are essential to the intelligence of an
autonomous mobile robot. A composite world representation is
generated through the integration of the various local
representations which are themselves formed by the fusion of
data from the multiple sensors onboard. Fusion at various
levels of abstraction is performed in order to produce the
representation useful to the planning subsystem. Unlike the
behavior-based approach where each behavior task employs
its own specialized representation, the representation
employed in the traditional approach is general purpose and
thus is useful for a variety of situations and planning
tasks. In contrast, the behavior-based approach employs a


5.2.2 The Filter-Raw Knowledge Source
This KS overlays the raw sonar data over the EOU
representation and eliminates all raw points that are not
supported by the EOU representation. That is, if a sonar hit
(x, y) corresponds to a grid cell covering the same (x, y)
location of that hit, and if that cell is considered empty
in the EOU representation, then that raw data hit is not
supported by the EOU representation and is therefore not
passed as a valid hit to the line finding KS.
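A sketch of this test in C (the grid name, cell size, and "empty" cutoff below are illustrative assumptions, not the exact code of our implementation):

#define GRID_DIM  1000          /* cells per side (assumed)                   */
#define CELL_M    0.01f         /* cell edge length in meters (assumed)       */
#define EMPTY_MAX 100           /* assumed cutoff below which a cell is empty */

extern unsigned char grid[GRID_DIM][GRID_DIM];  /* 0=empty .. 255=occupied, 128=unknown */

/* A raw sonar hit at (x, y) survives only if the EOU cell covering
 * it is not already believed empty.                                 */
int hit_supported(float x, float y)
{
    int i = (int)(x / CELL_M);
    int j = (int)(y / CELL_M);
    if (i < 0 || i >= GRID_DIM || j < 0 || j >= GRID_DIM)
        return 0;                      /* outside the mapped area   */
    return grid[i][j] > EMPTY_MAX;     /* unknown or occupied: keep */
}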
5.2.3 The 2-D Line Finding Knowledge Source
This KS generates a 2-D line representation of the
basic outlines of the environment. It is based mainly on the
Hough transform [Hough 62] [Gonzalez 87] and on a max-min
clustering algorithm [Tou 74]. A line is represented in its
normal (ρ, α) form (figure 5.7) as x·cos(α) + y·sin(α) = ρ. The following steps show how these algorithms are implemented to find the observed lines from sonar and position and orientation data.
Step 1. Initialize the Hough transform grid:
The (ρ, α) parameter space is subdivided into a grid of accumulator cells as shown in figure 5.8. In our implementation, α varies between αmin = 0 degrees and αmax = 359 degrees with Δα = 1 degree, and ρ varies between 0 and 5 meters with Δρ = 5 cm. The accumulator cells in the


abstraction, the technique of choice is to simply add the
sensor descriptions to the total environment description.
Similarly, for cooperative sensor data there is no concern
for conflict since the data of one sensor guides the
observations of the other. In the case of competitive sensor
data when two or more sensors provide information about the
same property value of the same object, a fusion technique
is called for. But, before choosing a fusion technique to
combine the data, how does one determine if the data is
truly competitive? That is, how does one determine if the
data represent the same physical entity? This correlation or
consistency check is discussed in the next section. The
choice of fusion technique in the competitive case depends
on the level of abstraction of the data. For example, at the
raw data level the problem becomes that of estimating a
parameter x from the observations of the sensors involved.
This could be resolved by either using a deterministic or
nonrandom approach (like the least-squares method for
example), or by using a probabilistic or random approach
(like the minimum mean squared error method). Dealing with
uncertain information is still a problem at higher levels of
abstraction and a variety of methods have been proposed. In
the following sections we will discuss fusion techniques in
more detail.


assumptions and all the conclusions derived from them. Thus,
in contrast to the inference strategies discussed above
where knowledge can only be added (monotonic) and axioms do
not change, in nonmonotonic reasoning systems knowledge can
also be retracted. Truth Maintenance Systems [Doyle 79],
[deKleer 86], implement nonmonotonic reasoning. The argument
for nonmonotonic reasoning is that nonmonotonicity is an
important feature of human problem solving and reasoning. In
addition, numeric approaches to uncertainty do not consider
the problem of changing data, that is, what to do if a piece
of uncertain information is later found to be true or false.
2.5.3.6 Theory of endorsements
Cohen's theory of endorsements [Cohen 85] is yet
another qualitative approach to managing uncertainty. The
basic philosophy of this theory is to make explicit the
knowledge about uncertainty and evidence. The motivation for
this theory stems from the limitation of the numerical
approaches which summarize all supporting and opposing
evidence into a single number. The semantics of this number
that represent knowledge about uncertain information is
often unclear. Thus, the basic idea is that knowledge about
uncertain situations should influence system behavior.
Hence, if a required piece of evidence is lacking, an
endorsement-based system allocates resources to the
resolution task whose execution will provide the most
information for reducing the uncertainty. The system


variety of specialized representations (each derived from a
small portion of the sensors data) for use by a number of
concurrent planning tasks resulting in many distinct,
possibly conflicting, behaviors. Thus, the need to perform
"behavior fusion" arises in this case as opposed to sensor
fusion in the traditional approach. Behavior fusion
sacrifices the generality obtained by sensor fusion in order
to achieve immediate vehicle response, while sensor fusion
sacrifices immediacy for generality. The immediacy versus
assimilation tradeoff issue is adequately presented by
[Payton 86].
3.3 Issues in World Model Construction
In this section we examine the issues of constructing a
world model of an autonomous mobile robot. The environment
of a mobile robot is often unstructured and contains objects
either as obstacles to avoid or as items to be examined or
manipulated. In the traditional approach, a mobile robot
must build and use models of its environment. This model
must be accurate and must remain consistent as the robot
explores new areas or revisits old ones [Chatila 85]. Handling inconsistencies in world model construction of a multi-sensor system is one of the main problems tackled by
the proposed research. In order to construct an accurate and
consistent model of the environment, the robot must be able
to correctly determine its position and orientation. This is
a difficult task given that sensors are imprecise.


3. Otherwise,
CF(Q) = [CF(R1) + CF(R2)] / [1 - min(|CF(R1)|, |CF(R2)|)]
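For example (with our own illustrative numbers), if one rule supports Q with CF(R1) = 0.6 while another opposes it with CF(R2) = -0.3, the combined certainty is (0.6 - 0.3)/(1 - 0.3) = 0.3/0.7 ≈ 0.43, a weakened but still positive belief in Q.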
Although certainty theory solves many of the problems presented by an uncertain world (as in MYCIN), the meaning of the certainty measures and how they are generated is not well defined. The assignment of numeric certainty measures based on human terms such as "it is very likely that" is considered by some as ad hoc.
2.5.3.3 Fuzzy Set Theory
Fuzzy set theory is yet another approach for dealing
with uncertainty. The main idea is that often information is
vague rather than random, and hence a possibility theory is proposed as a measure of vagueness just as probability
theory measures randomness. The lack of precision is
expressed quantitatively by the notion of a fuzzy set. This
notion introduces a set membership function that takes on
real values between 0 and 1 and measures the degree to which
a set member belongs to the fuzzy set. To illustrate, let I
be the set of positive integers, and A be the fuzzy subset
of I that represents the fuzzy set of small integers. A
possibility distribution that defines the fuzzy membership
of various integer values in the set of small integers could
be characterized by:


represents all reasons for believing or disbelieving a
hypothesis in structures called endorsements. These
endorsements are associated with propositions and inference
rules. The system uses endorsements to decide whether a
proposition at hand is certain enough by back chaining and
determining if its sub-goals are well endorsed in order to
assert it. Cohen describes five classes of endorsements:
Rule endorsements.
Data endorsements.
Task endorsements.
Conclusion endorsements.
Resolution endorsements.
The main problem with this recently developed theory is
the exponential growth of the body of endorsements when
asserting a proposition based on endorsements of its sub
goals and their associated sub-goals and so on. Thus, a
simple rule could lead to large bodies of endorsements after
a few inferences.
2.6 Implementation Examples
In this section we show how some of the techniques
described in this chapter are used in our research. As
mentioned in the introduction at the beginning of this
chapter, some of these techniques are used in the various
knowledge sources of the map builder module. The
implementation of the map builder is discussed in detail in
section 5.2. Here, we highlight with some examples the


repulsive fields around observed obstacles, and by
appropriately adjusting the strength of the fields.
3.1.3 Limitations of the subsumption architecture
At first glance the subsumption architecture appears
modular. Theoretically, a new layer could simply be added on
top of the previous layers to achieve a new level of
competence. In reality, upper layers or behaviors interfere
with the internal states of lower-level layers, and thus can
not be designed independently. In fact, the whole controller
must be redesigned when even small changes to lower-level
behaviors are implemented [Hartley 91]. The reason for the
lack of modularity is that in the subsumption architecture,
the behavior arbitration mechanism is not separated from the
actual stimulus/response behavioral function. Moreover, the
arbitration strategy is further complicated by the use of
timeouts, or temporal ordering of behaviors [Anderson 90].
Another limitation of the subsumption architecture stems
from its rigid hierarchy of behaviors. In such a hierarchy,
a behavior is either higher or lower than another, with the
higher behavior inhibiting or suppressing the one below it.
For many non-trivial real life applications, such hierarchy
can not be established. Many behaviors are mutually
exclusive and not hierarchically organized. For example, in
our implementation the "target-nav" behavior which guides
the robot towards a specified location, is not higher or
lower than the "boundary-following" behavior. Rather, the


intelligence is needed to decide whether to turn left or
right when the robot comes to an intersection.
1.1 Philosophical Underpinnings and Overview
Our goal is to develop a general purpose robot that is
useful for a variety of tasks (explicitly stated by a user)
in various types of dynamically changing environments. The
philosophical view of our research is that such a goal could
only be accomplished by combining the two approaches
mentioned above, and that these two approaches complement
each other just as reflexive responses and higher-level
thought complement each other in human beings. For example,
while one does not think about how to articulate the joints
in one's legs when walking down a sidewalk (the reflexive
behaviors take care of the walking function) higher-level
thinking and planning is needed when one, for example,
remembers that the sidewalk is not passable further down due
to construction noticed earlier. At this moment one has to
plan a suitable alternative route to the destination.
Adhering to this philosophical view of reflexive behaviors
and cognitive modules working in a complementary fashion
where the advantages of one approach compensate for the
limitations of the other, this research proposes a hybrid
decomposition of the control architecture for an intelligent
fully autonomous mobile robot. This architecture follows a
parallel distributed decomposition and supports a hierarchy
of control with lower-level reflexive type behaviors working


the sensor system in order to establish a model for f(x) and
f(v). Unfortunately, this is usually not done because it is
impractical or impossible. In such a case, the initial value
of Ex is usually set to zero, and the initial value of M is
set to a large multiple of the identity matrix indicating
our lack of knowledge of prior observations.
2.5.3 Fusion at Middle and High Levels of Abstraction
At intermediate and high levels of abstraction,
features derived from lower level sensory data are present.
These features are normally associated with some degree of
uncertainty. It is the task of the multi-sensor system to
apply domain knowledge to these features in order to produce
valid interpretations about the environment. Thus, the basic
methodology involves the application of symbolic reasoning
and artificial intelligence techniques to aid the
interpretation task. Moreover, because "knowledge is power",
a powerful multi-sensor perception system must rely on
extensive amounts of knowledge about both the domain and the
problem solving strategy effective in that domain
[Feigenbaum 77].
Uncertainty results from the use of inadequate
knowledge as well as from attempts to reason with missing or
unreliable data. For example, in a speech understanding
system, the two sources of uncertainty are: 1) noise in the
speech waveform (sensor noise and variability), and 2) the
application of incomplete and imprecise theories of speech


CHAPTER 3
INTELLIGENT FULLY AUTONOMOUS MOBILE ROBOTS
One of the main goals of robotics research is the
development of autonomous robots. Autonomous robots are
desirable in many applications especially those where human
intervention is difficult. This chapter gives an overview
and analysis of the research issues associated with the
field of intelligent autonomous systems, and based on this
analysis presents the directions of the proposed research.
Traditional autonomous mobile robot projects [Crowley 85]
[Shafer 86] [Ayache 88] use some of the sensor fusion
techniques presented in chapter 2 in an attempt to build a
complete and accurate model of the environment. However,
despite the positive attributes of completeness and detail
of a global world model, some researchers [Brooks 86a] [Connell 89] question the need for its existence. The task of
constructing this world model may conflict with the need to
provide timely information about the environment. Hence, a
tradeoff exists between immediacy and assimilation [Payton
86]. For control purposes, immediacy considerations give a
higher value to sensor data that can be used to effect
action more quickly. This is because in many real time
situations the time between receiving sensor data and acting
on it is very critical. The disadvantage of immediacy is the


Figure 6.4 First 2-D line representation of the robot's world.
Figure 6.5 First set of raw data points (500 points).


The Empty, Occupied, Unknown (or EOU) representation is
an occupancy grid type representation that uses all
information available in a sonar scan including the empty
space between the sensor and the object. This representation provides a basic spatially-indexed world model suitable for determining regions of free and occupied space needed for navigation tasks. In addition,
this representation continuously improves as more sensor
data is assimilated. For this reason it was used in our
implementation to filter out false sonar readings that were
not supported by this representation.
The 2-D line representation provides the basic outlines
of the room and objects within it. It could easily be used
to find walls, doors, corners, hallways, etc. Due to the
restricted lab space (one small room with no doors or
hallways to other rooms) in our experiment, the major use of
the 2-D line representation was for maintaining a consistent
world model by correcting the position and orientation
errors of the robot.


difficulty of obtaining information or features critical to
plan execution from sensor data that has not undergone
sufficient assimilation. In addition, the extracted data may
be inconsistent or in error. To effect immediacy, Brooks
[Brooks 86a] proposed a parallel behavior-based
decomposition of the control system of a mobile robot. His
approach deviates from the traditional serial decomposition
approach which allows for greater sensor data assimilation.
The traditional approach decomposes the sensor to actuator
path into a few large processing modules in series, figure
3.1a. In the behavior-based approach, figure 3.1b, the path
is divided into many small parallel modules each with its
own specialized task and complete path from the sensor to
the actuator. A general characteristic difference between
the two approaches is that the parallel approach requires
behavior fusion, while the traditional approach requires
sensor fusion. Section 3.2 compares these two approaches and
illustrates the advantages and disadvantages of each.
In this chapter we discuss the important issues
concerning the design of intelligent autonomous agents that
are capable of interacting with a dynamic environment. These
issues include consistent world modeling and control
architectures. We present a brief survey of the current
research in the field and highlight the approaches and
methodologies used by the various researchers in


Step 8. Detect breaks in the line:
Once the points are sequentially grouped, we find the
distance dj_j between every pair of consecutive points Pj_ and
Pj in a group. If dj is larger than a certain threshold
(usually the diameter of the robot) then the line is split
into two line groups. The first starts with P]_ and ends with
P-j_, and the second starts with Pj and ends Pn. We continue
looking for splits in group Pj-Pn.
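A sketch of this split test in C (with hypothetical names):

#include <math.h>

/* Return the index at which a sequentially ordered group of points
 * splits, or -1 if no gap between consecutive points exceeds the
 * threshold (about one robot diameter).                            */
int find_break(const float *x, const float *y, int n, float gap)
{
    int i;
    for (i = 0; i + 1 < n; i++) {
        float dx = x[i + 1] - x[i];
        float dy = y[i + 1] - y[i];
        if (sqrt(dx * dx + dy * dy) > gap)
            return i + 1;   /* group splits into 0..i and i+1..n-1 */
    }
    return -1;
}

The caller repeats the test on the second group until no further breaks are found.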
5.2.4 The Consistency Knowledge Sources.
The consistency knowledge sources maintain a consistent
world model (the 2-D lines model) as the robot visits new
places or revisits old ones. These KSs consist of the
Match/Merge KS and the Re-reference KS. The Match/Merge KS
is triggered once a new set of observed lines is generated
by the line-finder KS. These observed lines are matched to
model lines using the consistency checking techniques
described in chapter 2. If a model line is found to be
consistent with an observed line, then the two lines
represent the same physical line and will be merged into a
new line with reduced uncertainty using the fusion
techniques from chapter 2 also. The corresponding model line
is updated with the new merged line. If at the start of the
process no model lines exist yet, the model lines are
initialized to the observed lines. In addition, if an
observed line could not be matched to any model line, then


robot does not build or have a map of its environment. In
effect, the robot does not remember what it has seen or
where it has been. Anderson and Donath [Anderson 88]
describe some cyclical behavior exhibited by a reflexive
behavior-based robot and attribute such behavior to the lack
of internal state within each behavior. They also report
that this cyclic behavior was observed by Culberston
[Culberston 63]. Brooks and Connell [Brooks 86b] have also
observed cyclical behavior in their wandering and wall
following behaviors. To avoid such problems, later work by
Mataric [Mataric 89] a member of Brooks' group experimented
with map building and use under the subsumption
architecture.
The traditional approach to robot control architecture
is derived from the standard AI model of human cognition
proposed by Newell and Simon in the mid-fifties. It follows
the Deliberative Thinking paradigm where intelligent tasks
can be implemented by a reasoning process operating on a
symbolic internal model. Thus, it emphasizes cognition or a
central planner with a model or map of the environment as
essential to robot intelligence. Sensory confirmation of
that model is equally important. Such symbolic systems
demand from the sensor systems complete descriptions of the
world in symbolic form. Action, in this case, is not a
direct result of sensor data but rather is the outcome of a
series of stages of sensing, modelling, and then planning. A
desirable feature of such systems is the general ability to


state of those cells). These steps will now be discussed in
greater detail:
Step 1. Grid initialization:
An initial 2-D occupancy grid array of 1000x1000 cells, each declared as a single-byte integer, is set up with every cell initialized to a neutral value of 128 representing the unknown state of the cell. One byte per cell was chosen for reasons of memory conservation. Thus, the value of each cell ranges from 0 (definitely empty) to 255 (definitely occupied), with the mid value of 128 indicating the unknown state. This grid of one million cells represented an actual floor area of 10m by 10m, with each cell having an area of 1 cm². Thus, the resolution of an individual grid cell length was 0.1% of the total grid length, or 0.0001% of the total grid area. This resolution was much finer than needed for navigation tasks, so a later version used a 5 cm by 5 cm cell, increasing the speed of generating the EOU representation while using less memory and without any degradation in task performance.
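Step 1 can be sketched in C as follows (our own identifiers, shown for the original 1 cm cell size):

#include <string.h>

#define GRID_DIM 1000
#define UNKNOWN  128

static unsigned char grid[GRID_DIM][GRID_DIM];  /* one byte per cell */

void init_eou_grid(void)
{
    /* every cell starts at the neutral "unknown" value */
    memset(grid, UNKNOWN, sizeof grid);
}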
The resolution, or the area of a grid cell, is determined by many factors such as memory capacity, size of the environment, and the intended use of the EOU representation. If the
intention is to use the EOU to determine large empty or
occupied space, then a rough (low) resolution will do. On
the other hand, if we want to identify doorways and tight


BIOGRAPHICAL SKETCH
Akram Bou-Ghannam was born on December 5, 1956, in
Aramoun, a small Lebanese town in the district of Aley on
the foothills of Mount Lebanon. He earned his high school
diploma (Lebanese Baccalaureate I) in 1974 from Dar El Hikma
High School in Abey, Lebanon, and the Lebanese Baccalaureate
II in 1975 from St. Mary Orthodox College in Beirut,
Lebanon.
In December 1975 he came to the United States of
America where he attended the University of Florida in
Gainesville, Florida, and earned his B.S. and M.S. degrees
(with honors) in mechanical engineering in March 1979 and
December 1980 respectively. Even though his degrees were in
mechanical engineering, he was interested in and involved
with the field of microprocessor-based and digital systems
design. Therefore, upon graduation he worked in this field
in industry as a research and development engineer. His
first employer was Vital Industries (1980-83) in
Gainesville, Florida. His second and current employer is the
IBM Corporation (1983-present) in Boca Raton, Florida. With
support from IBM, under the IBM Resident Study Program
(since August 1987), he is currently pursuing his Ph.D.


Figure 6.2a. After the 1st full scan (12 data points).


2.5.2 Fusion at Lower Levels of Abstraction
At the lower levels of abstraction where the sensor
data is close to the signal level, the data contains signal
components random in nature, and hence probabilistic
reasoning or inference is most appropriate. Most of the
literature on sensor data fusion involves problems at this
level with the majority of the researchers adopting
probabilistic (Bayesian) inference. Next we will briefly
discuss the commonly used assumptions, advantages, and
disadvantages of statistical sensor fusion techniques.
Most researchers treat the sensor fusion problem, at
this level of abstraction, as an optimal estimation problem.
This problem is briefly stated as follows: Given a system of
interest (e.g. the environment of a robot) represented by an
n-dimensional state vector x, and the measured quantities
(output of sensor systems) represented by the m-dimensional
measurement vector z, what is the best way to estimate the
value of x given z according to some specified optimality
criterion. A general measurement model is given by
z = h(x, v)
where h(x, v) is an m-dimensional vector which represents
the ideal operation of the sensor system, and v represents
the m-dimensional random noise or error vector.
Unfortunately, most problems with this general measurement


Figure 5.3 CLIPS implementation of the planning module.


the corners of the room) while still avoiding obstacles.
When a target is reached, the curiosity behavior is
triggered for a more detailed picture of the target
location. Another target generator knowledge source works on
the EOU representation and hypothesizes targets as the
center of a sizable boundary between empty and unknown space
in the EOU representation. In this experiment no targets
were generated by this knowledge source. When all the
targets have been visited and the location attraction
motivation have been satisfied, the target-nav behavior is
disabled and the robot resorts to the generalized wandering
behavior of the earlier stages. When a second set of 2-D
lines is produced, figure 6.7, the consistency knowledge
sources match and merge the two sets of lines (figure 6.8)
generating the improved lines of figure 6.9. Next, we
qualify and quantify the results obtained during the
experimental test run.
6.2 Discussion of Results
The implemented control architecture performed as
intended during the experimental test run, guiding the robot
safely through its unknown and unstructured environment
without operator intervention. While the behavior-based
system reacted reflexively to immediate sensor readings, the
map builder generated models of the world and navigation
targets, and the planning module determined further actions
in order to achieve the desired goal. The goal of our


two are mutually exclusive. In addition, it is possible that
lower-level behaviors need to inhibit higher-level ones.
Examples of such situations are numerous in biological
systems. In the subsumption architecture, only higher-level
behaviors inhibit lower ones.
In general, the subsumption architecture suffers the
same limitations of behavior-based systems. Such systems
require some method of arbitration such as the subsumption
architecture. The subsumption architecture is specifically
implemented as part of behavior-based systems motivated by
the desire of their designers to produce artificial insects.
Hence, such systems avoid world modeling and the explicit
representation of goals. Instead, as mentioned earlier, the
driving philosophy is that "the world is its own best
model", and goals are implicitly designed within the system
by the designer establishing, a priori, the interactions
between behaviors through the environment. This is a serious
limitation of such behavior-based systems since the designer
is required to predict the best action for the system to
take under all situations. Obviously, there is a limit to
how far one can foresee the various interactions and
constraints in order to precompile the optimum arbitration
strategy [Maes 90].


[Giralt 84a]
Giralt, G. 1984. "Research Trends in Decisional and Multisensory Aspects of Third Generation Robots." 2nd International Symposium on Robotics Research, Kyoto, Japan, August 20-23, pp. 511-520.
[Giralt 84b]
Giralt, G., Chatila, R., and Vaisset, M. 1984. "An Integrated Navigation and Motion Control System for Autonomous Multi-Sensory Mobile Robots." In Robotics Research: The 1st International Symposium, M. Brady and R. Paul, editors, MIT Press, Cambridge, MA, pp. 191-214.
[Gonzalez 87]
Gonzalez, R. C., and Wintz, P. 1987. Digital Image Processing. Addison-Wesley, Reading, MA.
[Gould 82]
Gould, J. L. 1982. Ethology: The Mechanics and Evolution of Behavior. W. W. Norton & Company, New York.
[Hartley 91]
Hartley, R., and Pipitone, F. 1991. "Experiments with the Subsumption Architecture." Proc. IEEE Int'l Conf. on Robotics and Automation, Sacramento, CA, pp. 1652-1658.
[Henderson 88]
Henderson, T., Weitz, E., Hanson, C., and Mitiche, A. 1988. "Multi-sensor Knowledge Systems: Interpreting 3D Structure." Int'l J. Robotics Res. 7(6):114-137.
[Henderson 84]
Henderson, T., Fai, W. S., and Hanson, C. 1984. "MKS: A Multi-sensor Kernel System." Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 784-791.
[Heywood 89]
Heywood, T. 1989. "HELIX: A Shared Memory Emulation System for Heterogeneous Multicomputing." Technical report CESAR-89/31, Oak Ridge National Laboratory, Oak Ridge, TN.
[Hough 62]
Hough, P. V. C. 1962. "Methods and Means for Recognizing Complex Patterns." U.S. Patent 3,069,654.
[Kadonoff 86]
Kadonoff, M. B., Benayad-Cherif, F., Franklin, A., Maddox, J. F., Muller, L., and Moravec, H. 1986. "Arbitration of Multiple Control Strategies for Mobile Robots." In Mobile Robots, SPIE Proceedings vol. 727, pp. 90-98.
[Kak 87]
Kak, A. C., Roberts, B. A., Andress, K. M., and Cromwell, R. L. 1987. "Experiments in the Integration of World


a. Photo with arrangement of sonar sensors on top.
b. Graphical simulation of the robot.
Figure 5.2 The K2A Cybermotion robot.


the angle between the 2 segments is less than a certain
threshold, (2) If the perpendicular distance from the
midpoint of one segment to the next is less than a
determined threshold, and (3) If one segment passes through
a bounding box (tolerance) around the other. The outcome of
this test gives five types of correspondence. In a later
paper Crowley [Crowley 87] uses the normal distribution to
represent spatial uncertainty, and the normal distance as a
measure for matching the model parametric primitives (ex.
lines, edge segments) to the observed ones. He defines the
function SIMILAR which returns a true if the normal distance
between the two primitives is less than a certain threshold,
else it returns a false. The function CORRESPOND now
consists of a simple set of attribute tests using the
function SIMILAR.
Andress and Kak [Andress 87] define a COLLINEARITY and
a NONCOLLINEARITY function as a measure of compatibility and
incompatibility respectively between a model edge segment
and an observed one. These compatibility measures form the
initial "basic probability assignment" for the observed
segments. The Dempster-Shafer formalism is used for belief
update in the face of new evidence.
Our approach to consistent world modeling is similar to
that of [Crowley 87]. It is implemented as part of the
Sensory Knowledge Integrator framework. Section 5.2.4 of
chapter five details the implementation of our method. The
parameters of observed and model lines are matched using the


max{ dk1 = sqrt[(xk - x1)² + (yk - y1)²] }
Let C2 = Sk, and the distance between C1 and C2 be d12.
3. Compute the distance from each of the remaining sample points to C1 and C2. That is, compute dk1 and dk2 for k = 1 to n. For every pair of these computations, save the minimum distance, i.e., save min{dk1, dk2} for all k.
4. Select the maximum of all the minimum distances obtained in 3 above: dmaxmin(l) = max{min{dk1, dk2}} over all k. The 'l' index indicates that the maximum distance corresponds to the k = l sample point.
5. If dmaxmin > (1/2)·d12, then C3 = Sl; else terminate.
6. Repeat steps 3-5 with the additional cluster center.
In general, in step 5 if the maxmin distance is an
appreciable fraction of all the previous maxmin distances
(such as greater than the average of these distances) then
the corresponding sample becomes a cluster center. Otherwise
the algorithm is terminated, and all cluster centers are
determined.
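The selection step can be sketched in C as follows (our own names; avg_d stands for the "appreciable fraction" threshold, e.g. the average of the previous max-min distances):

#include <math.h>

/* Among all samples, find the one whose distance to its nearest
 * existing cluster center is largest; it becomes a new center only
 * if that distance exceeds the threshold.  Returns the sample index
 * or -1 to terminate the algorithm.                                 */
int maxmin_candidate(const float *px, const float *py, int n,
                     const float *cx, const float *cy, int nc,
                     float avg_d)
{
    int k, c, best = -1;
    float best_d = 0.0f;
    for (k = 0; k < n; k++) {
        float dmin = 1e30f;
        for (c = 0; c < nc; c++) {
            float dx = px[k] - cx[c], dy = py[k] - cy[c];
            float d = (float)sqrt(dx * dx + dy * dy);
            if (d < dmin) dmin = d;
        }
        if (dmin > best_d) { best_d = dmin; best = k; }
    }
    return (best_d > avg_d) ? best : -1;
}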
Step 5. Find the lines corresponding to the clusters found
in step 4:
First, for every point of the total number of n data
points, find out what cluster the point belongs to. This is
accomplished by finding the distances from the point to each
cluster center. The point belongs to the closest cluster
center. Next, for every point (xi, yi) in cluster k we calculate the average line parameters for that cluster:
αk = [Σ(wi·αi)] / [Σwi]


3.2 Control Architecture; Behavior-Based vs. Traditional
Control
As mentioned before, the control architecture of an
autonomous intelligent mobile robot can be modeled as either
a serial or a parallel decomposition of the perception-
action control path, figure 3.1. The more traditional
approaches [Crowley 85] [Kriegman 89] [Moravec 85] are
serial in nature where the control path is decomposed into a
few modules in series such as: 1) Sensing, 2) Modelling, 3)
Planning, 4) Actuation. In the parallel decomposition
approach [Brooks 86a] [Connell 89] [Payton 86] multiple
parallel control paths or layers exist such that the
perceptual load is distributed. Each layer performs a
specialized goal or behavior and processes data and issues
control commands in a manner specific to its own goals. In
[Brooks 86a] these layers of control correspond to levels of
competence or behaviors with the lower layers achieving
simple tasks such as avoiding obstacles and the higher
layers incrementally achieving more complex behaviors such
as identifying objects and planning changes to the world. In
this hierarchical set of layers, each layer is independent
from the others in performing its own tasks even though
higher level layers may influence lower level layers by
inhibiting their output or suppressing their input. Unlike
the traditional approach which has the disadvantage of
imposing an unavoidable delay in the sensor to actuator
loop, the layered approach enjoys direct perception to
action through concurrency where individual layers can be


working on individual goals concurrently. In this case
perception is distributed and customized to the sensor-set/task-set pair of each layer. This eliminates the need for the
robot to make an early decision on which goals to pursue.
Another advantage of the parallel approach is flexibility.
Since modules or layers are independent, each having its
own specialized behavior and goals, it is possible that each
may have its own specialized interface to the sensors and
actuators. Flexibility stems from the fact that the
interface specification of a module is part of that module's
design and does not affect the other modules' interfaces.
This contradicts the traditional serial approach where a
modification to a module's interface might require
modification of at least the previous and the following
modules if not the whole system. Moreover, all of the
modules in the serial approach must be complete and working
before the system is operational, while on the other hand, a
behavior-based system can still produce useful behavior
before all the modules are complete. The main disadvantage
of a behavior-based approach is the possibility of
exhibiting cyclical behavior patterns by the robot due to
the lack of memory within some behaviors, that is, such
behaviors do not remember previous events and base their
decisions solely on the latest sensor stimuli. This prevents
the robot from responding to events which happen over
several time periods and could cause cyclical behavior. In
our initial simulation of a robot wandering with a fixed


[Porrill 88]
Porrill, J. 1988. "Optimal Combination and Constraints for Geometrical Sensor Data." Int'l J. Robotics Res. 7(6):66-77.
[Richardson 88]
Richardson, J. M., and Marsh, K. A. 1988. "Fusion of Multisensor Data." Int'l J. Robotics Res. 7(6):78-96.
[Ruokagnas 86]
Ruokagnas, C. C., Black, M. S., Martin, J. F., and Schoenwald, J. S. 1986. "Integration of Multiple Sensors to Provide Flexible Control Strategies." Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 1947-1955.
[Shafer 86]
Shafer, S. A., Stentz, A., and Thorpe, C. E. 1986. "An Architecture for Sensor Fusion in a Mobile Robot." Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 2002-2011.
[Shafer 76]
Shafer, G. 1976. A Mathematical Theory of Evidence. Princeton Univ. Press, Princeton, NJ.
[Tou 74]
Tou, J. T., and Gonzalez, R. C. 1974. Pattern Recognition Principles. Addison-Wesley, Reading, MA.
[Willner 76]
Willner, D., Chang, C. B., and Dunn, K. P. 1976. "Kalman Filter Algorithms for a Multi-Sensor System." Proc. IEEE Conf. on Decision and Control, Clearwater, FL, pp. 570-574.
[Winston 84]
Winston, P. H. 1984. Artificial Intelligence. Second edition. Addison-Wesley, Reading, MA.
[Zadeh 83]
Zadeh, L. A. 1983. "Commonsense Knowledge Representation Based on Fuzzy Logic." Computer 16:61-65.
[Zadeh 78]
Zadeh, L. A. 1978. "Fuzzy Sets as a Basis for a Theory of Possibility." Fuzzy Sets and Systems 1(1):3-28.


provide a "bigger picture" for the robot when
reflexive/reactive behaviors encounter difficulty. Such
difficulties include trap situations due to local minima
causing cyclic behavior, oscillations in narrow passages or
in the presence of obstacles, and inability to pass between
closely spaced obstacles [Koren 91][Brooks 86b][Anderson
88]. Trap situations are caused by various topologies of the
environment and by various obstacle configurations such as a
U-shaped configuration. [Arkin 90] gives a vivid example of
the difficulties encountered by reactive control as
resembling the "fly-at-the-window" problem. This situation
arises when the insect expends all of its energy trying to
go towards the sunlight outside through the glass of the
window. Our work dynamically builds a map of the environment
within the Sensory Knowledge Integrator framework. As we
will see in section 4.3.1, this framework contains a variety
of knowledge sources including different types of "trap
detector" knowledge sources that use map data to look for
traps. This information is used by the planning module to
avoid or recover from traps. Map data is used to reconfigure
the lower-level behaviors and not to replace the function of
these behaviors. In this manner the environment is more
efficiently explored while the robot still enjoys the
real-time operation of the reflexive behaviors. Additionally, the
construction of a general purpose world model makes use of
the available world knowledge. For example, the Sensory
Knowledge Integrator, the underlying framework of our map
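As an illustration of such a trap-detector knowledge source,
the sketch below flags cyclic behavior by checking whether the
robot's latest position revisits its recorded history. It is a
minimal sketch in Python, assuming the map builder posts poses
to a shared list; the names and thresholds (detect_cycle,
TRAP_RADIUS, MIN_LOOP_STEPS) are hypothetical, not the
dissertation's actual knowledge-source interface.

import math

TRAP_RADIUS = 0.5      # meters: how close a revisit must be to count (assumed)
MIN_LOOP_STEPS = 20    # ignore trivially short loops (assumed)

def detect_cycle(history):
    # history: list of (x, y) robot positions posted by the map builder.
    # Returns True when the newest pose comes back near an old one,
    # suggesting the reflexive behaviors are caught in a cyclic trap.
    if len(history) < MIN_LOOP_STEPS:
        return False
    x, y = history[-1]
    for px, py in history[:-MIN_LOOP_STEPS]:
        if math.hypot(x - px, y - py) < TRAP_RADIUS:
            return True
    return False

When such a knowledge source fires, the planning module uses
the report to reconfigure the lower-level behaviors, as
described above, rather than to replace them.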


Figure 4.2 A specific implementation of the hybrid control
architecture.


|θo − θm| < 2[(σθo)² + (σθm)²]^(1/2)
Similarly, the remaining two tests become:
|ρo − ρm| < 2[(σρo)² + (σρm)²]^(1/2)
|do − dm| < 2[(σdo)² + (σdm)²]^(1/2)
If all three conditions above are valid for a
particular pair of lines, then we say that the observed line
and the model line in the pair are consistent, or that they
represent the same physical entity. Next we merge the
parameters of each consistent pair of lines to provide a
better (reduced uncertainty) estimate. We use the estimation
techniques presented in chapter 2 to perform the merging,
including the updating of the uncertainty. Thus, for our 1-D
case we obtain the following estimates:
θnew = θm + (θo − θm)(σθm)²/[(σθo)² + (σθm)²]
(σθnew)² = (σθm)²(σθo)²/[(σθo)² + (σθm)²]
ρnew = ρm + (ρo − ρm)(σρm)²/[(σρo)² + (σρm)²]
(σρnew)² = (σρm)²(σρo)²/[(σρo)² + (σρm)²]
dnew = dm + (do − dm)(σdm)²/[(σdo)² + (σdm)²]
(σdnew)² = (σdm)²(σdo)²/[(σdo)² + (σdm)²]
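Since the same one-dimensional update applies to each of the
three parameters, the consistency test and merge above reduce
to a few lines. A minimal sketch in Python, assuming each line
parameter carries a scalar variance; the function names are
illustrative:

def consistent_1d(xo, var_o, xm, var_m):
    # The 2-sigma consistency test used for matching above.
    return abs(xo - xm) < 2.0 * (var_o + var_m) ** 0.5

def merge_1d(xo, var_o, xm, var_m):
    # Fuse observed (xo, var_o) with model (xm, var_m); the merged
    # variance is always smaller than either input variance.
    gain = var_m / (var_o + var_m)
    x_new = xm + gain * (xo - xm)
    var_new = (var_m * var_o) / (var_o + var_m)
    return x_new, var_new

Applying merge_1d in turn to θ, ρ, and d of a consistent pair
reproduces the six update equations above.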


surprisingly complex ways. These "insects" never consult a
map or make plans; instead, their action is a direct
response to sensory information. The payoff for eliminating
symbolic models of the environment and the central planner
is speed. Real time operation becomes possible since the
computational burden is distributed and greatly reduced.
Another advantage of the subsumption architecture is its
modularity and flexibility. In principle, more behaviors may
easily be added until the desired level of competence is
reached. A drawback of the behavior-based approach is that
one cannot simply tell the various behaviors how to achieve
a goal. Instead, in an environment which has the expected
properties, one must find an interaction loop between the
system and that environment which will converge towards the
desired goal [Maes 90]. Thus, the designer of a behavior-
based system has to "pre-wire" the arbitration strategy or
the priorities of the various behaviors. This inflexibility,
coupled with the inability to handle explicitly specified
goals, makes it hard for such behavior-based systems to be
useful for different types of missions over a wide range of
domains. Additionally, an efficient behavior to assure
reaching a specified goal cannot always be guaranteed. So,
it is possible for a robot using the behavior-based approach
to take a certain pathway many times over, even though
traversing this pathway might not be desirable for many
reasons. For example, the path might lead the robot away
from the target or into danger. This is possible because the


away from the trap. Reconfiguring a behavior-based system
consists of changing the arbitration priority and setting
the "motivation state" if needed. The motivation state
triggers motivated behaviors in the behavior-based subsystem
and biases the robot towards attaining the desired goal.
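To make the reconfiguration step concrete, the following
minimal sketch assumes the planner and the behavior-based
subsystem share a motivation-state record; the field names,
behavior names, and priority scheme are hypothetical stand-ins
rather than our actual interface:

class MotivationState:
    # Shared record written by the planner, read by motivated behaviors.
    def __init__(self):
        self.goal_heading = None   # heading that biases the robot toward the goal
        self.avoid_region = None   # e.g., a detected U-shaped trap to stay out of

def reconfigure_for_trap_escape(motivation, priorities, trap_region, heading):
    # Planner-side reconfiguration: set the motivation state and raise
    # the motivated goal-seek behavior above plain wandering.
    motivation.avoid_region = trap_region
    motivation.goal_heading = heading
    priorities["goal_seek"] = 2
    priorities["wander"] = 1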
To give a summary of the research presented in this
dissertation, we have described an architecture which allows
for the implementation of a wide range of planning and
control strategies including those requiring timely
information about the environment, and the ones requiring
processing and assimilation of sensor data over time for
general purpose map-based planning. The architecture
exploits the advantages of both behavior-based and
traditional control architectures of an autonomous mobile
robot. This claim was validated in our experiments where the
robot demonstrated the real-time performance of behavior-
based systems in addition to the ability to build
representations of the world and put these representations
to use in aiding reactive control out of difficulties
(cyclic behavior) and, thus, effecting efficient exploration
of the environment. The interface between the planning
module and the reactive behaviors was accomplished by our
"motivated behaviors" and the "motivation state" of the
robot. Depending on the goal at hand and the current
situation status provided by the map builder and the lower-
level behaviors, the planning module sets the "motivation
state" which in turn triggers the "motivated behaviors" to


determined by an arbitration network. The behaviors
comprising our behavior-based subsystem are reactive,
characterized by a rigid stimulus-response relationship with
the environment. The response of a reactive behavior is
deterministic and strictly depends on the sensory stimulus
(external environment and internal state). It is not
generated by a cognitive process with representational
structures. The model of our reactive behavior is given in
figure 4.3. The function F(Si) represents the deterministic
stimulus-response function, while the threshold τ is
compared to F(Si) before an output response is triggered. In
our implementation, the various thresholds are
experimentally determined. In general, the threshold
represents one of the important parameters for behavior
adaptation and learning. Thus, adhering to the biological
analogy, the threshold can be set by other behaviors
depending on the environmental context and the internal
state. We further divide our reactive behaviors into
reflexive and motivation-driven behaviors. The motivation-
driven behaviors are partly triggered by the motivation
state of the robot (set by the planning module in our
implementation, but could theoretically be set by other
behaviors), while the response of a reflexive behavior is
driven only by external stimuli. Reflexive behaviors
constitute the protective instincts for the robot such as
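The stimulus-response model just described admits a compact
sketch: a deterministic activation F(S) gated by the
threshold. The following minimal Python illustration uses a
made-up avoid-obstacles function and threshold value purely as
an example:

def reactive_behavior(stimulus, F, threshold):
    # Emit a response only when F(stimulus) exceeds the experimentally
    # determined threshold; otherwise the behavior stays silent.
    activation = F(stimulus)
    return activation if activation > threshold else None

# Hypothetical reflex: the closer the sonar depth reading, the
# stronger the repulsion.
avoid = lambda depth_m: 1.0 / max(depth_m, 0.1)
response = reactive_behavior(0.4, avoid, threshold=2.0)  # fires: 2.5 > 2.0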


[Mataric 89]
Mataric, M. J. 1989. "Qualitative Sonar Based Environment
Learning for Mobile Robots." SPIE Mobile Robots,
Philadelphia, PA.
[McFarland 87]
McFarland, D. 1987. The Oxford Companion to Animal Behavior.
Oxford University Press.
[Mitiche 86]
Mitiche, A., and Aggarwal, J. K. 1986. "Multiple Sensor
Integration/Fusion Through Image Processing: a Review."
Optical Engineering 25(3):380-386.
[Moravec 85]
Moravec, H. P., and Elfes, A. 1985. "High Resolution Maps
from Wide Angle Sonar," Proc. IEEE Int'l Conf. on Robotics
and Automation, St. Louis, MO, pp.116-121.
[Newell 75]
Newell, A. 1975. "A Tutorial on Speech Understanding
Systems." Speech Recognition: Invited Papers of the IEEE
Symposium, D. R. Reddy, ed. Academic Press, New York, pp.3-
54.
[Ng 90]
Ng, K-C. and Abramson, B. 1990. "Uncertainty Management in
Expert Systems." IEEE Expert 5(2):29-48.
[Nii 86a]
Nii, H. P. 1986. "The Blackboard Model of Problem Solving,"
AI Magazine 7(2):38-53.
[Nii 86b]
Nii, H. P. 1986. "Blackboard Systems Part Two: Blackboard
Application Systems," AI Magazine 7(3):82-106.
[Nitzan 81]
Nitzan, D. 1981. "Assessment of Robotic Sensors." Proc. 1st
Int'l Conf. on Robot Vision and Sensory Controls, Stratford-
upon-Avon, UK, pp.1-12.
[Payton 90]
Payton, D. W. 1990. "Internalized Plans: A Representation
for Action Resources." In Designing Autonomous Agents, P.
Maes, ed. MIT Press, Cambridge, MA, pp.89-103.
[Payton 86]
Payton, D. W. 1986. "An Architecture for Reflexive Vehicle
Control," Proc. IEEE Int'l Conf. on Robotics and Automation,
pp.1838-1845.


CHAPTER 5
EXPERIMENTAL SETUP AND IMPLEMENTATION
The ideal implementation of the hybrid architecture
presented in chapter four would include various specialized
hardware on-board the robot each dedicated to its special
task or behavior and directly controlling the actuators on
the robot. For example, the avoid-obstacles behavior should
ideally be implemented with some digital hardware such as
logic gates or a single-chip micro-controller generating an
output vector that will be fused with vectors from other
behaviors in the arbitration/superposition hardware
producing a resultant vector that directly controls the
actuators. Our implementation simulated these behaviors and
modules in software on general-purpose computer workstations
(off-board the robot) that controlled a general-purpose
mobile robot through the robot's dedicated PC controller
rather than directly through its actuators.
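The arbitration/superposition stage described above can be
sketched briefly: each active behavior emits a weighted
two-dimensional vector, and the resultant of the superposition
becomes the drive command. The weights and example vectors
below are illustrative assumptions:

def superpose(weighted_vectors):
    # Sum weighted (vx, vy) contributions from the active behaviors.
    rx = sum(w * vx for w, (vx, vy) in weighted_vectors)
    ry = sum(w * vy for w, (vx, vy) in weighted_vectors)
    return rx, ry

# e.g., avoid-obstacles pushes away while wander pulls forward:
resultant = superpose([(1.0, (-0.3, 0.1)),   # avoid-obstacles
                       (0.5, (0.8, 0.0))])   # wander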
The robot system was set up in an indoor lab environment
(the Nuclear Engineering Robots Lab) with open space and
irregularly shaped boundaries consisting of walls, cabinets,
and computer stations. The dimensions of the lab were
approximately 8 meters by 5 meters. The floor consisted of
smooth tiles necessary for the operation of the wheeled
Cybermotion K2A robot. The system setup is shown in figure


either this line belongs to an area of the environment not
modeled yet, or a sensing error has occurred. To resolve
such conflict, a new-area-visited hypothesis is posted on
the hypothesis panel to be confirmed or denied. If it is
found that the observed line comes from a newly visited area
not modeled yet, then it is merely added to the set of model
lines.
As the robot travels, odometric errors in its position
and orientation accumulate. These errors are kept in check,
and actually reduced, by the re-referencing KS. This KS
computes the difference (error) in rotation and translation
between each pair of consistent lines, calculates the average
difference in rotation and translation over all the matched
lines, and finally applies this average difference to the
robot position and orientation to find the new corrected
position and orientation of the robot. Details of the consistency KSs
and the methods described above are explained in the
following two sections.
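A minimal sketch of that averaging step, assuming each matched
pair reports its discrepancy as a (Δθ, Δx, Δy) tuple; the
function name is illustrative:

def correction_from_matches(pair_errors):
    # pair_errors: non-empty list of (d_theta, dx, dy) tuples, one per
    # matched observed/model line pair. The averages form the pose
    # correction.
    n = len(pair_errors)
    d_theta = sum(e[0] for e in pair_errors) / n
    dx = sum(e[1] for e in pair_errors) / n
    dy = sum(e[2] for e in pair_errors) / n
    return d_theta, dx, dy

The averaged correction is then applied to the robot pose
using the homogeneous transform detailed in section 5.2.4.2.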
5.2.4.1 The Match/Merge Knowledge Source
This KS is activated when new observed lines are
available. Each observed line is received as two end points
P1(x1, y1) and P2(x2, y2). The first step is to transform
the line from the end-points representation to a
representation suitable for matching, such as the one shown
in figure 5.9. This representation is similar to the one


robot. Thus, the cognitive planning activity can execute its
plans by merely setting the motivation state of the robot
and letting the behavior-based subsystem worry about the
details of plan execution. The goal of such a hybrid
architecture is to gain the real-time performance of a
behavior-based system without losing the effectiveness of a
general purpose world model and planner. We view world
models as essential to intelligent interaction with the
environment, providing a "bigger picture" for the robot when
reactive behaviors encounter difficulty.
Another contribution of this research is the Sensory
Knowledge Integrator proposed as the underlying model for
the map builder. This proposed framework follows a
distributed knowledge-based approach to the fusion of sensor
data from the various sensors of a multi-sensor system in
order to provide a consistent interpretation of the
environment being observed. Within the various distributed
knowledge sources of the Sensory Knowledge Integrator, we
tackle the problems of building and maintaining a consistent
model of the world and robot position referencing.
We describe a live experimental run of our robot under
hybrid control in an unknown and unstructured lab
environment. This experiment demonstrated the validity of
the proposed hybrid control architecture and the Sensory
Knowledge Integrator for the task of mapping the
environment. Results of the emergent robot behavior and




two line segments in the model lines, and the corresponding
corner point in the observed lines. For example, if line Lm1
matches line Lo1 and line Lm2 matches line Lo2, and if pm is
the intersection point between Lm1 and Lm2, while po is the
intersection point between Lo1 and Lo2, then pm and po
correspond to the same physical point and hence make for a
good reference point.
Step 3. Correct the position (Pnew is the new corrected
position, and Cnew is the new covariance matrix or
uncertainty) and orientation of the robot knowing Δθ, Δx,
and Δy. Note that Pold is the current uncorrected position
of the robot:
Pnew = F Pold
Cnew = F Cold Fᵀ
where the matrix F is represented as:
F = | cos(Δθ)  −sin(Δθ)  Δx |
    | sin(Δθ)   cos(Δθ)  Δy |
    |    0         0      1 |
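A minimal Python sketch of this correction step using numpy;
it assumes the pose is kept in homogeneous form [x, y, 1] and
the uncertainty as a 3x3 covariance matrix, and the function
name is illustrative:

import numpy as np

def correct_pose(p_old, c_old, d_theta, dx, dy):
    # Build F from the averaged correction and return the corrected
    # pose F p_old and covariance F C_old F^T.
    c, s = np.cos(d_theta), np.sin(d_theta)
    F = np.array([[c, -s, dx],
                  [s,  c, dy],
                  [0.0, 0.0, 1.0]])
    p_new = F @ p_old
    c_new = F @ c_old @ F.T
    return p_new, c_new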


This dissertation was submitted to the Graduate Faculty
of the College of Engineering and to the Graduate School and
was accepted as partial fulfillment of the requirements for
the degree of Doctor of Philosophy.
August 1991
Winfred M. Phillips
Dean, College of Engineering
Madelyn M. Lockhart
Dean, Graduate School


correlation and consistency checking techniques described in
section 2.5.1 of chapter 2. A successful match indicates
that all orientation, collinearity, and overlap tests have
been satisfied. Next, we merge the parameters of the two
matched lines using estimation techniques also described in
section 2.5.1. These techniques use the standard Kalman
filter equations. The merged estimate with reduced
uncertainty is then compared to the observed lines to
determine the error in robot position and orientation.
Section 5.2.4.2 details such operations.
In case no match exists between a model feature (what
the sensors should be seeing) and an observed feature (what
the sensors are actually seeing), then a conflict exists and
must be resolved. In our implementation we did not encounter
such conflicts and we did not set up our experiments in order
to obtain a conflict. Instead, in what follows, we propose
how inconsistencies could be approached under the Sensory
Knowledge Integrator framework. We propose a knowledge-based
approach for resolving the inconsistencies where the system
generates resolution tasks to resolve the conflict, and
reasons about the conflict using a priori domain-dependent
and domain-independent knowledge. At higher levels of
abstraction, the conflict resolution task utilizes symbolic
endorsements, which constitute a symbolic record of the
object-specific evidence supporting or denying the presence
of an object instance. Thus, we deal with reasons for
believing or disbelieving a certain hypothesis by


2.5.1 Correlation or Consistency Checking
To determine whether sensor data, or features derived
from that data could be classified as competitive, a
consistency check is performed on the data. This is a common
difficulty in robotic perception where often the correlation
between perceived or observed data and model data has to be
determined. The well known problem of correlating between
what the robot actually sees and what it expects to see is
an appropriate example. Following is an example that
illustrates one form of consistency checking:
Let p̂i and p̂j be parametric primitive vectors
estimated by sensors i and j respectively. We desire to
test the competitive hypothesis H0 that these estimates are
for the same primitive p of an object. Let δ̂ij = p̂i − p̂j be
the estimate of δij = pi − pj, where pi and pj are the
corresponding true parameter vectors; then H0: δij = 0. We
want to test H0 vs. H1: δij ≠ 0. Let the corresponding
estimation errors be represented by ei = pi − p̂i and
ej = pj − p̂j. Define eij = δij − δ̂ij; it follows that
eij = pi − pj − p̂i + p̂j. Then, under H0 we have
eij|H0 = (p − p̂i) − (p − p̂j) = ei − ej,
and the covariance of the error under H0 is given by
Cij|H0 = E[(eij|H0)(eij|H0)ᵀ]
       = E[(ei − ej)(ei − ej)ᵀ]
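One standard way to realize the test, sketched below, is to
threshold the Mahalanobis distance of the estimate difference;
the chi-square-style threshold value and the function name are
illustrative assumptions, not prescribed by the text:

import numpy as np

def same_primitive(p_i, p_j, C_ij, threshold=9.0):
    # Accept H0 (the two estimates describe the same primitive) when the
    # Mahalanobis distance of their difference, under covariance C_ij,
    # falls below the threshold.
    d = p_i - p_j
    m2 = float(d @ np.linalg.solve(C_ij, d))
    return m2 < threshold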


model have not been solved in a practical sense [Richardson
88]. Instead, the following measurement model with additive
noise is usually considered:
z = h(x) + v
Moreover, it is commonly assumed that x and v are
statistically independent, and that the probability density
functions f(x) and f(v) are known a priori. In addition,
most methods often assume f(x) and f(v) to be Gaussian with
the following statistical properties:
E[v] = 0,  E[vvᵀ] = R
E[x] = Ex,  E[(x − Ex)(x − Ex)ᵀ] = M
where M and R are the state and noise covariance matrices
respectively. The above assumptions and equations are
sometimes written as:
x ~ N(Ex, M),  v ~ N(0, R),  and  Cov(x, v) = 0
The measurement model is further simplified when the
function h(x) is linear. In this case, when the measurement
model is both linear and Gaussian, a closed form solution to
the estimation problem is obtained. The linear measurement
model is represented as z = Hx + v, where H is the known
measurement matrix.
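A minimal sketch of that closed-form solution, assuming the
Gaussian statistics given above; this is the standard
minimum-variance (Kalman-style) update, with illustrative
names:

import numpy as np

def linear_gaussian_estimate(z, H, Ex, M, R):
    # Posterior mean and covariance of x given the measurement z = Hx + v.
    S = H @ M @ H.T + R                # innovation covariance
    K = M @ H.T @ np.linalg.inv(S)     # gain
    x_hat = Ex + K @ (z - H @ Ex)
    P = M - K @ H @ M
    return x_hat, P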




information about the same feature or property of the same
object. The following sensor configurations and scenarios
produce competitive information interaction:
a) Two or more sensors of the same type measuring the
value of the same feature of an object. For example, two
sonar sensors measuring the depth of an object from a fixed
frame of reference.
b) Different sensors measuring the value of a specific
feature. For example, depth information could also be
provided using stereo vision as well as a sonar range
finder. Another example of how different sensing modalities
produce the same feature is the generation of edge features
of an object from either intensity images or range images.
c) A single sensor measuring the same feature at
different times. For example, a sonar sensor continuously
acquiring depth measurements from the same position and
viewpoint.
d) A single sensor measuring the same feature from a
different viewpoint or operating configuration or state.
e) Sensors measuring different features but, when
transformed to a common description, the information becomes
competitive. For example, the speed of a mobile robot could
be measured by using a shaft encoder, or it could be deduced
from dead-reckoning information from fixed external beacons.
Complementary. In this case the observations of the
sensors are not overlapping, i.e., the sensors supply