USE OF EXPERIMENTAL DATA IN TESTING METHODS FOR DESIGN AGAINST
UNCERTAINTY



By

RALUCA I. ROSCA


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF UNIVERSITY
OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2001

Copyright 2001

by

Raluca I. Rosca

To my parents, Ioan and Marinela-Cornelia, for giving me both wings and roots; to my
brother, Mihai, waiting for his counter-dedication on a Ph.D. dissertation; and to my aunt
Mimi for being, without always knowing, an example.

ACKNOWLEDGMENTS

I am deeply indebted to my advisor, Dr. Raphael (Rafi) T. Haftka, for scientific

guidance, real-life advice, material support, and endless patience during the five years it took

to complete this dissertation. I extend my gratitude to Dr. Efstratios (Stratos) Nikolaidis,

because much of this work is the result of long discussions with him. I kindly remember

Drs. Hager, Kurdila and Kurzweg for agreeing to be members of the committee and for

taking the time to read this dissertation and to make comments on it.

I am also grateful for the long-distance love and trust of my family and thankful

for the encouragement of my supportive circles of friends: the AeMES people, the Folk

dancers, and the Gainesville Romanians.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTERS

1. INTRODUCTION

   Need for Comparing Probabilistic and Nonprobabilistic Methods in Design and
   Objectives of This Dissertation
   Organization

2. LITERATURE REVIEW

   Probabilistic Methods for Quantifying Uncertainty and Difficulties in their Usage
   Alternative Methods
   Previous Comparisons of Probabilistic and Nonprobabilistic Methods
   Comparison of Methods Using Experimental Data

3. PROBABILITY THEORY AND FUZZY SETS METHODS: A THEORETICAL
   COMPARISON

   Possibility Theory
   Comparison of the Axioms of Possibility and Probability Measures

4. CASE STUDY: CONTAINER DESIGN PROBLEM - A DESIGN PROBLEM WITH
   MULTIPLE FAILURE CASES

   Container Problem with Uncertainty in the Dimensions
   Container Problem with Uncertainty in the Budget and Area Requirements

5. DOMINO CONSTRUCTION PROBLEM

   Experiments and Toppling Criterion
      Geometry Errors
      Construction Errors
      Effect on Toppling Heights
      Toppling Criterion
   Numerical Simulation of the Experiments
   Analytical Form of Probability Density Function

6. USE OF EXISTING EXPERIMENTAL DATA TO EVALUATE METHODS FOR
   DESIGN AGAINST UNCERTAINTY

   Motivation
   Example: Bidder-Challenger Problem
      Description of microchip speed target setting problem
      Bidder-Challenger problem: mathematical model and domino simulation
      Possibilistic and probabilistic formulations of the Bidder-Challenger problem
         Possibilistic formulation
         Probabilistic formulation
   Methodology for Using Existing Data to Conduct Simple and Efficient Experiments
   that Mimic Real-Life Design Decision Problems
      Splitting the data into fitting and testing sets
      Definition and evaluation of the relative frequency (likelihood) of success
      Description of the fitting process (fit of possibility/probability distribution
      functions)
   Results
      All data known: various handicaps
      All data known: inflation factor
      Scarce data: small sample size
      Scarce data: small sample size; influence of inflation factor at different
      handicap values
   Concluding Remarks

7. CONCLUSIONS

APPENDICES

A. COMPUTATION OF TILT AND SWAY ANGLE OF DOMINOES FROM DOMINO
   MEASUREMENTS

B. COMPUTATION OF THE CENTER OF MASS OF A DOMINO BLOCK

C. IDEALIZED MODEL OF STACKING PROCESS USED IN NUMERICAL
   SIMULATION

D. DEFINITION OF INFLATION FACTOR

E. EFFECT OF INFLATION ON THE PROBABILISTIC OPTIMA AND THE
   POSSIBILISTIC OPTIMA, FOR VARIOUS VALUES OF HANDICAP AND
   INFLATION FACTOR

F. DIFFERENCE BETWEEN THE SHIFTED GAMMA AND NORMAL CUMULATIVE
   DISTRIBUTION FUNCTIONS FITTED TO EXPERIMENTAL DATA, WITH AND
   WITHOUT INFLATION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

USE OF EXPERIMENTAL DATA IN TESTING METHODS FOR DESIGN AGAINST
UNCERTAINTY

By

Raluca I. Rosca

December 2001


Chairman: Raphael T. Haftka, Distinguished Professor
Major Department: Aerospace Engineering, Mechanics and Engineering Science



Modern methods of design take into consideration the fact that uncertainty is

present in everyday life, whether in the form of variable loads (the strongest wind that

would affect a building), material properties of an alloy, or future demand for the product

or cost of labor. Moreover, the Japanese example showed that it may be more cost-

effective to design taking into account the existence of the uncertainty rather than to plan

to eliminate or greatly reduce it.

The dissertation starts by comparing the theoretical basis of two methods for

design against uncertainty, namely probability theory and possibility theory. A two-

variable design problem is then used to show the differences. It is concluded that for

design problems with two or more cases of failure of very different magnitude (such as a

car stopping due to lack of gas or due to motor failure), probability theory divides existing

resources in a more intuitive way than possibility theory.

The dissertation continues with the description of simple experiments (building

towers of dominoes) and then it presents the methodology to increase the amount of

information that can be drawn from a given data set. The methodology is shown on the

Bidder-Challenger problem, a simulation of a problem of a company that makes

microchips to set a target speed for its next microchip. The simulations use the domino

experimental data. It is demonstrated that important insights into methods of probability

and possibility based design can be gained from experiments.

CHAPTER 1
INTRODUCTION

We started this research motivated by the interest in comparing methods for

design against uncertainty. More precisely, we hoped to develop clear guidelines for the

use of probability and possibility-based methods in design against uncertainty. We

compared the theoretical foundation of the two methods, and found examples where the

use of possibilistic methods was not appropriate; in parallel research conducted at

Virginia Tech, Dr. Nikolaidis and his team found cases where the probabilistic methods

were dangerously sensitive to uncertainty about statistical data. Both teams needed

inexpensive physical experiments to test methods for design against uncertainty. Toward

this goal, we developed experiments with dominoes and a methodology to use them for

comparing the effectiveness of probabilistic and possibility-based methods. Moreover,

inspired by the work of Gigerenzer and Richter (1990) in social sciences, we realized that

the method we developed for the domino experiments could be easily applied to many

readily available data sets.

In the following, we motivate the need for comparison of probabilistic and

nonprobabilistic methods in design against uncertainty and describe the objectives and

organization of this thesis.

Need for Comparing Probabilistic and Nonprobabilistic Methods in Design and
Objectives of This Dissertation

Modern methods of design take into consideration the fact that uncertainty is

present in everyday life, whether in the form of variable loads (the strongest wind that

would affect a building), material properties of an alloy, future demand for the product,

or cost of labor. Moreover, the Japanese example showed that it might be more cost-

effective to design taking into account the existence of the uncertainty rather than to plan

to eliminate or greatly reduce the uncertainty.

A number of methods were developed to model uncertainty. These include

probability theory and its variants (Bayesian theory, reliability theory), fuzzy sets theory

and related possibility theory, and worst-case design or anti-optimization. However,

engineers in need of modeling uncertainty have to know more than how to apply one

method or another; they also need to know when to use a specific method, or when one of

the methods is a cheaper approximation of the other. This 'choice of tools' problem

initially motivated our work.

This dissertation has two objectives: first, we aim to provide a comparison of two

of the most popular methods used in design against uncertainty (probability and

possibility theory) based on their axioms and simple analytical examples; second, we aim

to show how existing experimental data can be used to perform efficient experimental

comparisons between methods. The first objective is accomplished by comparing the

theoretical foundation of the two methods and by solving several problems involving

uncertainties using both methods and then comparing the results. We work with

problems where failure is catastrophic, not gradual, and we consider a crisp definition of

failure (that is, a design either fails or survives). Because probabilistic and possibilistic

methods use different metrics of safety that are not directly comparable, the problems

used to compare these methods should involve design rather than analysis. The objective

of the design is to minimize the chances of failure. Specifically, with a given amount of

resources and a given amount of information about uncertainty, we should use

probabilistic and possibilistic methods to obtain two alternative designs. Then we should

find which design is safer. Moreover, the results should be validated experimentally,

because the ultimate test of a method is how well the designs it produces fare in the field.

The second objective, later in chronological order but eventually the main one, is to

show how existing data can be used to perform an efficient experimental comparison.

The approach can be used to compare not only possibility and probability theory, but also

any other methods that model uncertainty, as well as variants of the same method.

It is desirable to test the effect of modeling assumptions by subjecting designs to

experimental validation, but this is impractical in structural design problems. In some

fields, such as quality control, one may have enough experiments to validate models. In

areas such as structural design for safety, it is expensive and time-consuming to perform

the large number of experiments needed for validating assumptions. The difficulty of the

experimental task increases with the complexity of the system. Consequently, in the

design of complex structural systems there may not be enough validation of the

soundness of the models of uncertainty used in the design.

To assess the impact of assumptions, or to discriminate among different

approaches for design against uncertainty, we propose a methodology analogous to that

used in medicine where new drugs are first tested on laboratory animals, cultured cells

and bacterial cultures (Gad and Weil 1986, Arnold et al., 1990). This testing procedure

helps identify the most promising compounds and screen out those that are clearly

ineffective or dangerous. Using this paradigm, we propose testing approaches for design

under uncertainty first for systems that are simple and inexpensive to test. Design of

these systems should emulate the design of real-life systems. A method or a set of

assumptions that proves to be unsuitable for the design of a simple system could be ruled

out for designing systems that are more complex. With simple experiments available to

the research community, developers of methods for design against uncertainty could test

their methods, and identify and understand the strengths and weaknesses of the methods

they develop.


Organization

Using the Graduate School format, the introduction to this thesis is designated

Chapter 1. You have almost finished reading this chapter, which is dedicated to the

motivation of our research work and to its organization.

Chapter 2 presents a review of the scientific literature on comparison of

possibilistic and probabilistic methods. Papers dedicated to use of experiments in testing

design methods are also covered.

Chapter 3 presents the axioms of probability and fuzzy-sets possibility theory and

a theoretical comparison of the two methods. We introduce the concept of the least

conservative possibility distribution compatible with a probability distribution and show

its expression for the symmetrical and nonsymmetrical case.

Chapter 4 presents a comparison of the two methods for a simple design problem

with multiple failure modes (the container design problem).

Chapter 5 develops domino-stack building experiments, which are later used to

compare possibilistic and probabilistic methods in terms of their treatment of reducible

and irreducible uncertainty. We describe the physics of the problem and the error








sources, together with a numerical simulation of the performance of a builder based on a

series of building experiments.

Chapter 6 is dedicated to the use of existing experimental data in evaluating

methods for design against uncertainty. It presents methodology to increase the amount

of information that can be drawn from a given data set. The methodology is illustrated by

the Bidder-Challenger problem, a simulation of a problem of a company that makes

microchips to set a target speed for its next microchip. The simulations use the domino

experimental data described in Chapter 5. Finally, Chapter 7 offers some concluding

remarks and suggestions for future research.

CHAPTER 2
LITERATURE REVIEW

This chapter reviews literature on probabilistic methods for quantifying

uncertainty, on methods alternative to probability, on theoretical comparisons of

probabilistic and nonprobabilistic methods, and finally on comparison of methods using

experimental data.


Probabilistic Methods for Quantifying Uncertainty and Difficulties in their Usage

French (1986) discussed different types of uncertainty and imprecision (including

physical randomness of data, choice of a model, numerical accuracy of calculations or

lack of clarity in the objectives) and their consequences in the process of modeling and

analysis. Numerous methods are used to deal with uncertainty in natural sciences and in

engineering; Rouvray (1997) presented an accessible history of those methods, from

probability theory, to multi-valued logic, to development of fuzzy sets and possibility

theory. However, for a long time probability theory was the only theory used to quantify

uncertainty. Even now, probabilistic methods are almost exclusively used in industry,

and an entire journal, Probabilistic Engineering Mechanics, is dedicated to the

engineering applications of probability. The applications vary from geotechnical

applications (Zhou et al. 1999) to biomechanics (Sadananda 1991), and from designing

integrated circuits (Seifi et al. 1999), to disinfection systems (Tchobanoglous et al. 1996),

to estimating the output of a drainage system into the ocean (Mukhtasor et al. 1999).

However, care must be taken for the proper use of probabilistic methods. Ferson (1996)

discussed the application of Monte-Carlo methods in risk assessment and examples for

which they are not appropriate, specifically problems where partial ignorance needs to be

reckoned with. He also concluded that, unless much is known about the independence of

variables, Monte Carlo methods cannot be used to conclude that exceedance levels are

smaller than a particular level.

Indeed, even if the use of probabilistic models is well established, they often

require assumptions about distributions, correlations, and parameters such as standard

deviations. Statistical distributions for parameters may be available, but good

information on correlations is usually absent (Ang and Tang 1984). Sometimes the

marginal probability distributions of the random variables are not sufficient to completely

model the uncertainties; we need the joint probability distribution of the variables. This

is rarely known in real life design problems (unless the variables are statistically

independent). In practice, designers study both extreme cases, where the random

variables are independent and perfectly correlated, respectively, and compare the

optimum decisions that are based on these assumptions. If the marginal probability

distributions and the covariance matrix of the random variables are known, one can use

Nataf's approximate model (Nataf 1962) or the Winterstein approximation (Winterstein

1988). Examples of application of the first model are presented in Nikolaidis et al.

(1995) and in Kiureghian and Liu (1985). A different approach is needed when the form

of the distribution is known, but errors are present in the parameters defining the

distribution. It has been shown that even small errors in statistical parameters may have

large effects on computed probabilities of failure, especially when these probabilities are

very small (e.g., 10^-6; Ben-Haim and Elishakoff, 1990). Optimum designs based on these

computed failure probabilities could be very sensitive to these errors (Chen et al. 1999,

Nikolaidis et al. 1999).
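
As a concrete illustration of the joint-distribution issue discussed above, the sketch below samples two correlated random variables with prescribed marginals by passing correlated standard normals through the normal cumulative distribution function and then through the inverse marginal distributions. This is only a minimal sketch of the Gaussian-copula construction underlying the Nataf model; the marginals, the correlation value, and the sample size are illustrative assumptions, and the full Nataf model additionally corrects the normal-space correlation so that the physical variables attain the target correlation.

# Minimal sketch of a Nataf-type (Gaussian copula) sample generator.
# Assumptions: illustrative marginals (uniform and lognormal) and an
# illustrative correlation of 0.6 imposed in the underlying normal space.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6                                  # normal-space correlation (assumed)
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)

u = stats.norm.cdf(z)                      # each normal becomes a uniform in (0, 1)
x = stats.uniform(loc=2.0, scale=1.0).ppf(u[:, 0])   # marginal 1: U(2, 3)
y = stats.lognorm(s=0.25, scale=1.0).ppf(u[:, 1])    # marginal 2: lognormal

print("sample correlation of (x, y):", np.corrcoef(x, y)[0, 1])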

In design under uncertainty, the design variables may affect significantly the

probability distributions of the random variables. For example, in manufacturing of

composite panels, the orientation of the fibers affects significantly the amount of

uncertainty in the elastic properties of the panel. Thus, for every fiber orientation, we

have a different probability distribution for each variable describing the panel geometry.

Although we need this information in order to make good design decisions, this

information is rarely available (Elseifi et al. 1999).


Alternative Methods

There are situations when probability theory does not seem appropriate for

quantifying uncertainty. For such situations a number of other theories were developed.

Bhatnagar and Kanal (1986) reviewed five theories for handling uncertainty and

incompleteness of information, specifically: probability theory, evidence theory,

possibility theory, theory of endorsements, and non-monotonic logic. Although the

authors claim to have considered the strong and the weak points of each theory, no

recommendations were made about when to use any of those theories in practice.

In engineering, two methods have gained popularity as alternatives to

probabilistic design. One of them is worst-case design; the other one is based on fuzzy

sets (more specifically on the interpretation of possibility theory using fuzzy sets).

Ben-Haim and Elishakoff (1990) and Ben-Haim (1996) proposed a version of

worst-case design based on convex models for design problems where there is scarce

information about the uncertain variables. A design should survive when the values of

the uncertain variables vary in a convex set, which is specified by the designer based on

experience (Elseifi et al. 1999; Pantelidis, 1995). A key concept is that a good design

should survive large deviations of the uncertain variables from their nominal values

(measured by an uncertainty parameter). If we think of all combinations of the uncertain

variables as being points filling a balloon, the uncertainty parameter will be the degree to

which this balloon must be inflated (full of air) for the design to survive. The design

has to withstand the worst-case scenario, when the combination of uncertain variables

gives the least favorable response. Elishakoff dubbed the search for the worst-case

scenario "anti-optimization."

Elishakoff et al. (1994) contrast the model of uncertainty based on probability

theory with one based on convex analysis, where bounds on the magnitude of uncertainty

are required. The anti-optimization approach was illustrated in Qiu and Elishakoff

(1998) for the case of large uncertain parameters, using interval analysis on the example of a six-bar

truss. Lombardi (1998) presented an application of the anti-optimization method to the

optimization of a ten-bar truss where the loads are considered to vary within a polyhedral

box. This work was continued in Lombardi and Haftka (1998) with application to

optimization of a simply supported laminate composite, a simple beam problem with

non-linear objective function, and a composite sandwich structure.

Another method used to model uncertainty is based on fuzzy sets. If data are

scarce or vague, we can use fuzzy set models for uncertainties. Zadeh (1965) introduced

the notion of fuzzy sets, based on the idea of degree of membership (from 0 to 1) to an

imprecisely defined set. Klir and Yuan (1995) reviewed the foundation of fuzzy sets

theory, as well as its development and application in fuzzy measures, fuzzy logic, and

fuzzy decision-making. Pedrycz and Gomide (1998) treated the theoretical background

of fuzzy logic and fuzzy sets, with an emphasis on fuzzy modeling and computational

methods and a short presentation of the fuzzy optimization. Dubois et al. (1997)

collected papers on the application of fuzzy set techniques in engineering applications,

varying from clarifying information in medical imagery to retrieving information on the

Internet, to risk management with imprecise information. Slowinski (1998) presented a

more specialized collection of texts on application of fuzzy sets in decision analysis,

operations research, and statistics. Wood and Antonsson (1990) developed the method of

imprecision that uses fuzzy sets for modeling uncertainty because of errors in predictive

models (e.g., errors in finite element analysis). Allen et al. (1992), Thurston and

Carnahan (1992), and others used fuzzy sets to model vagueness in a designer's

preferences.


Previous Comparisons of Probabilistic and Nonprobabilistic Methods

As nonprobabilistic methods to model uncertainty have developed, so has the

need to compare them with probabilistic methods and to define the territory where each is

most appropriate. Theoretical debates on the advantages of one method over the other

are numerous; examples include Vol. 2, Issue 1 of Statistical Science (1987), Vol. 2

Number 1 of IEEE Transactions on Fuzzy Systems (1994) and Vol. 37, No. 3 of

Technometrics (1995).

Many practitioners of possibilistic approaches claim that probabilistic and

possibilistic methods have no common domain of application: probabilistic models are

for random uncertainties, whereas possibilistic models are for uncertainties due to

vagueness or linguistic imprecision. Another extreme view, held by many practitioners

of probabilistic approaches, is that everything done with possibilistic methods can be

done better with probabilistic techniques.

Laviolette and Seaman (1994) rebutted, from a subjective probabilistic point of

view, the five arguments of the advocates of the theory of fuzzy sets as a system for

representing uncertainty. First, they rejected the reality hypothesis, which holds that

imprecision is an inherent property of the world external to an observer. Second, they

attributed the subjectivity hypothesis (which holds that probability is an exclusively

objective measure of uncertainty) to the (commonly used, but incomplete) frequentist

interpretation of probability, and they reaffirmed that subjective uncertainty can be

represented using probability, namely subjective probability. Third, they rejected the

behaviorist hypothesis, which claims that uncertainty systems should emulate rather than

prescribe human behavior in face of uncertainty, and affirmed that 'the goal is to

prescribe conditions for coherent behavior and not to describe human behavior.' Fourth,

Laviolette and Seaman rebutted as unfounded the "probability as fiction" hypothesis,

which claims that probability theory does not comprise a field of study in its own right.

Fifth, they rejected the superset hypothesis, which holds that fuzzy set theory includes

probability as a special case and thus provides a richer uncertainty modeling

environment, considering it analogous to the claim that we have to renounce

Newtonian mechanics in favor of relativistic mechanics because the first is a subset of the

second. They further criticized the argument that fuzziness represents a type of

uncertainty distinct from probability. The paper is concluded by presenting a method for

assessing the efficacy of fuzzy representations of uncertainty and applying this method in

three examples (all unfavorable to the fuzzy methods): a fuzzy ordering scheme, a fuzzy

method of quality control and a method of linear regression based on fuzzy sets.

Dong et al. (1987) discussed the propagation of uncertainties in deterministic

systems and contrasted three models of uncertainty (interval, fuzzy and random), using an

average cost example problem. Chiang and Dong (1987) presented another example

problem: the response of a structure with uncertain mass, stiffness and damping

properties, in free vibration, forced vibration with deterministic excitation, and forced

vibration with Gaussian white noise excitation. Probabilistic and fuzzy set models were

compared with regard to their impacts on the analyses and on the uncertain structural

responses obtained. For this example problem, they concluded that fuzzy models are

much easier to implement, and the associated analysis easier to perform than their

probabilistic counterparts. They suggested that when available data on structural

parameters are crude and do not support a rigorous probabilistic model, the fuzzy set

approach should be considered in view of its simplicity.

Maglaras et al. (1996) used a truss structure to compare probabilistic optimization

and deterministic optimization for low vibration frequency. They selected a design

problem so as to maximize the contrast in reliability between the two optimum designs

and demonstrated substantial advantage for the probabilistic approach. Maglaras et al.

(1997) continued this work and compared probabilistic and fuzzy-set based approaches in

designing the same damped truss structure, seeking circumstances that maximize the

difference between the two designs. They concluded that when only random uncertainties

are involved, probabilistic optimization leads to a more reliable design.

Comparison of Methods Using Experimental Data

In decision theory it is common to compare methods using experimental data.

Gigerenzer and Richter (1990) compared three algorithms for predicting which member

of a pair is better (in the sense of a specified criterion), using 20 different problems and

corresponding experimental data. Wilson and Schooler (1991) studied when people

make better judgments by relying on their intuition rather than on reason. Davis et al. (1994)

presented the case of someone's forecasts of stock earnings decreasing in accuracy as new

information is added. Ambady and Rosenthal (1992) and McKenzie (1994) also

compared simple intuitive strategies and Bayesian inferences using experiments.

The papers described above compare methods in terms of the result of a binary

decision (choose A or B). In Chapter 6 we generalize this comparison to a design

problem involving one variable. In such a problem, the design space is not reduced to the

two elements of the binary decision, but is a larger (even if finite in our example) set.

CHAPTER 3
PROBABILITY THEORY AND FUZZY SETS METHODS: A THEORETICAL
COMPARISON

In this chapter we present the axioms of probability and fuzzy-sets theory (with an

emphasis on possibility theory) and a theoretical comparison of the two methods. We

introduce the concept of the least conservative possibility distribution compatible with a

probability distribution and show its expression for the symmetrical and non-symmetrical

case.


Possibility Theory

Possibility measures a) the degree to which a person considers that an event can

occur, or b) the degree to which the available evidence supports a claim that an event can

occur. A possibility of one means that there is no reason to believe that an event cannot

occur. On the other hand, if we believe that there are constraints preventing an event

from occurring, then we should assign a low or zero possibility to that event.

Zadeh (1978) used fuzzy sets as a basis for possibility. According to Zadeh, a

proposition that associates an uncertain real variable to a fuzzy set induces a possibility

distribution for this quantity, which provides information about the values that this

quantity can assume. For example, based on the statement 'X is about 10', the fuzzy

number X can have the membership function shown in Fig. 1, denoting a subjective

interpretation of the statement that limits possible values to the interval (8, 12). The

membership function determines the possibility that X takes any given value. For

example, from Fig. 1 we see that the selected fuzzy number has a possibility of 0.5 of

assuming the value 9 or 11 and a possibility of 0.25 of assuming the value 8.5. For

comparison, Fig. 1 also shows a probabilistic interpretation of the same statement: a

uniform probability density on the same interval.



Figure 1: Probability density and possibility distribution of X, for the statement 'X is
about 10'


Possibility is also viewed as an upper bound of probability. Giles (1982)

proposed a definition of possibility according to which the possibility of an event is the

smallest amount we would have to pay a decision-maker upfront to overcome his/her

resistance to bet against the event (i.e., agree to pay one dollar if the event occurs). A

rational decision-maker would agree to bet against the event as long as the expected gain

is nonnegative. Therefore, the smallest amount for which the decision-maker would bet is

an upper bound of the estimated probability of this event. This definition is an extension

of the definition of subjective probability.

Another interpretation, which is based on evidence theory (Shafer 1976), is that

possibility is the limit of plausibility when the body of evidence is nested. Shafer's

definition of possibility leads to a generic procedure for estimating the possibility of an

event from the available evidence: this possibility equals the sum of the degrees of

evidence of all the sets that intersect the event.

There are many interpretations of probability. Probability can be viewed as a

relative frequency of an event (objective probability) or one's degree of belief that an

event is likely (subjective probability). In the first case, probability is estimated from

numerical data, whereas in the second case it can be estimated by asking decision-makers

questions about their willingness to bet for or against this event.

When a possibility and a probability are assigned to the same event, then these

should satisfy some consistency conditions. One condition can be that the possibility of

an event should be greater than or equal to its probability (Klir and Yuan 1995, Zimmermann

1996). This is reasonable, since any event that is probable must also be possible, but the

converse is not true. A more restrictive condition is that the possibility of any event that

has nonzero probability must be one. In most design situations, this condition would lead

to overly conservative designs. In this thesis, we have adopted the first consistency

condition. The possibility distribution and probability density shown in Fig. 1 are

minimally consistent in the sense that the possibility distribution in Fig. 1 is the least

conservative one that satisfies the consistency condition given the uniform probability

density. Specifically, it can be shown that the possibility of any event associated with X

is greater than or equal to its probability. Moreover, the triangular distribution yields the

smallest possibility for any event associated with X out of all symmetric possibility

distributions that have their apex at 10 and are consistent with the uniform probability

density in Fig. 1. For example, the probability and the possibility that X lies outside

the interval (9, 11) are both 0.5.
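
This consistency can also be checked numerically. The sketch below encodes the uniform density on (8, 12) and the triangular possibility distribution with apex at 10 from Fig. 1, and verifies that the possibility of an interval event is never below its probability; the random sampling of interval endpoints is an implementation choice, not part of the theory.

# Numerical check of the consistency condition Pos(A) >= Prob(A) for
# interval events A, using the distributions of Fig. 1.
import numpy as np

def pi(x):
    """Triangular possibility distribution: apex 1 at x = 10, support (8, 12)."""
    return max(1.0 - abs(x - 10.0) / 2.0, 0.0)

def prob_in(c, d):
    """Probability of X in [c, d] under the uniform density of height 0.25 on (8, 12)."""
    lo, hi = max(c, 8.0), min(d, 12.0)
    return max(hi - lo, 0.0) / 4.0

def poss_in(c, d):
    """Possibility of X in [c, d]: the supremum of pi over the interval."""
    if c <= 10.0 <= d:
        return 1.0
    return pi(c) if c > 10.0 else pi(d)   # endpoint closest to the apex

rng = np.random.default_rng(1)
for _ in range(100_000):
    c, d = np.sort(rng.uniform(7.0, 13.0, size=2))
    assert poss_in(c, d) >= prob_in(c, d) - 1e-12

# The event 'X lies outside (9, 11)': probability 1 - 0.5 = 0.5 and
# possibility max(pi(9), pi(11)) = 0.5, so the bound is tight here.
print(1.0 - prob_in(9.0, 11.0), max(pi(9.0), pi(11.0)))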

Comparison of the Axioms of Possibility and Probability Measures

Sugeno (1977) introduced fuzzy measures as a generalization of real measures.

On a finite universal set, possibility and probability are fuzzy measures. Table 1

compares these measures in terms of their axioms.



Table 1: Axioms of Probability Measure and Possibility Measure

Probability measure, P(.)                    Possibility measure, Π(.)

1) Boundary requirement:                     1) Boundary requirements:
   P(Ω) = 1                                     Π(∅) = 0, Π(Ω) = 1

2) P(A) ≥ 0 for all A ∈ S                    2) Monotonicity:
                                                for all A, B ∈ S, if A ⊆ B,
                                                then Π(A) ≤ Π(B)

3) Probability of union of events:           3) Possibility of union of a finite
   for disjoint Ai, i ∈ I,                      number of events:
   P(∪i Ai) = Σi P(Ai)                          for disjoint Ai, i ∈ I,
                                                Π(∪i Ai) = maxi Π(Ai)

Let Ω be the universal set and S a set of crisp subsets of Ω. It can be shown that,

if the universal set is finite, the probability and possibility measures are special cases of

the fuzzy measure.

The difference between probability and possibility measures for Axiom 2 is

historical rather than substantial. Indeed, for probability theory we can prove the

monotonicity property as a simple consequence of Axioms 2 and 3 applied to the sets A

and B-A (the set of elements of B that do not belong to A). For possibility theory, as a

consequence of Axiom 2 applied to a set A and to the null set included in it, we obtain

Π(A) ≥ Π(∅), and from Axiom 1 this latter possibility equals 0.

The main difference between the axioms of possibility and probability measures

is that probability is additive whereas possibility is subadditive. Specifically, the

probability of the union of a set of disjoint events is equal to the sum of the probabilities

of these events. On the other hand, the possibility of the union of a finite number of

events (disjoint or not) is equal to the maximum of the possibilities of these events.

As a result, if {A1, ..., An} is a partition of the universal event, Ω, the probabilities

of Ai must add up to one, whereas there is no such constraint for the possibilities of Ai. In

fact, because the possibility of Ω is equal to the maximum of the possibilities of the events

Ai, the possibility of at least one of these events should be one. Therefore:

Σᵢ₌₁ⁿ Π(Ai) ≥ 1
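
The contrast between the two union rules is easy to exhibit numerically; in the sketch below, the three events and their measure values are illustrative assumptions.

# Union rules on a three-event partition {A1, A2, A3} of the universal event.
# The probabilities must add up to one; the possibilities only need a
# maximum of one, so their sum may exceed one.
probabilities = {"A1": 0.5, "A2": 0.3, "A3": 0.2}   # additive: sums to 1
possibilities = {"A1": 1.0, "A2": 0.7, "A3": 0.4}   # maxitive: max is 1

p_union = sum(probabilities.values())    # P(A1 u A2 u A3) = 1.0
pi_union = max(possibilities.values())   # Pi(A1 u A2 u A3) = 1.0

print(p_union, pi_union)                 # both equal 1 for the whole partition
print(sum(possibilities.values()))       # 2.1 >= 1, as the inequality above states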

An important difference between the axiomatic foundations of probability and

possibility is that we can only assign a probability measure to a σ-algebra¹, whereas we

can assign possibilities to any universe, since possibility is both a measure and a function.

In general, a probability measure cannot be assigned to the class of all subsets of the real

line; it can be assigned to the smallest σ-algebra that contains all intervals (−∞, x], where x is a real

number (Papoulis, 1965), whereas we can assign a possibility to any class of subsets of

the real line.

The consequences of these axiomatic differences are studied in Chapter 4 using as

case study a problem of design with multiple failure modes.

¹ A σ-algebra is a class of events that is closed with respect to complementation and
countable union.

CHAPTER 4
CASE STUDY: CONTAINER DESIGN PROBLEM - A DESIGN PROBLEM WITH
MULTIPLE FAILURE CASES

In this chapter, we present a comparison of the two methods based on a design

problem with multiple failure modes (the container design problem).


Container Problem with Uncertainty in the Dimensions

Before we start evaluating designs based on incomplete information, we first use

a simple problem to illustrate that given the same information a probabilistic designer and

possibilistic designer can lead to diametrically opposite design philosophies.

The following design problem involves only the sum and product of two

variables. We design a rectangular container of specified height and minimum required

volume, by selecting the width X and the depth Y. The volume requirement translates to

the condition

XY ≥ a    (1)

The cost is proportional to the surface area of the vertical sides, so that the cost

limit translates to

X + Y ≤ b    (2)

Due to manufacturing errors and limitations of available plate sizes, X and Y may

differ from the nominal values X̄, Ȳ that we specify. In fact, the manufacturer

guarantees maximum fractional errors, ex and ey, respectively, in the two dimensions.

That is, X ∈ Ix and Y ∈ Iy, where:

Ix = [X̄(1−ex), X̄(1+ex)], Iy = [Ȳ(1−ey), Ȳ(1+ey)]    (3)

If we assume that exceeding our budget is as bad as not meeting our

volume requirement, we need to minimize the chance of failure, defined as cost overrun

or performance shortfall, by changing X̄ and Ȳ. The optimum is a compromise between

the two modes of failure. A probabilistic designer minimizes the probability of failure,

whereas a designer who uses a possibilistic approach minimizes the possibility of failure.

We assume that X and Y have uniform probability distributions, with unknown

correlation. The problem parameters were defined such that the probability of a design

violating both the performance and cost constraints (Eqs. 1 and 2, respectively) is

practically zero. That is, the probability of cost overrun or performance shortfall is equal

to the sum of the probabilities of these events (in general it is equal to the sum of the

probabilities of these events minus their intersection).

We also assume for X and Y symmetric triangular possibility distribution

functions centered at X̄ and Ȳ, respectively, with support in Ix and Iy, respectively. As

mentioned before, this possibility distribution function is the least conservative one that is

consistent with the probability distribution.

The solution is a compromise between the budget margin, mb,

mb = (b − X̄ − Ȳ) / b    (4)

and the nominal performance margin, mp,

mp = (X̄Ȳ − a) / a    (5)

A 'naive' design may be obtained using a 'safety factor' approach, setting the two

variables to be equal and the two margins to be equal, that is

mp = mb and X̄ = Ȳ    (6)

In the following, we compare the possibilistic, probabilistic and naive designs

using a numerical example of ex = 0.14, ey = 0.05, b = 6, and a = 8, and assuming that X

and Y are independent. For the naive design we get X̄ = Ȳ = 2.883, with mp = mb = 3.9%. The

probabilities of cost overrun and performance shortfall are 0.21 and 0.37, respectively,

and the corresponding possibilities are 0.57 and 0.80. Thus, although we set identical

margins, or safety factors, the chance of cost overrun is lower than that of performance

shortfall.
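
These failure measures can be reproduced directly from the stated models: Monte Carlo sampling of the uniform X and Y estimates the two failure probabilities, and a bisection over the α-cuts of the triangular possibility distributions yields the two failure possibilities. The sketch below is one possible implementation; the sample size and the bisection depth are arbitrary choices.

# Failure measures of the naive container design (ex = 0.14, ey = 0.05,
# b = 6, a = 8, nominal X = Y = 2.883) under the stated uncertainty models.
import numpy as np

ex, ey, b, a = 0.14, 0.05, 6.0, 8.0
Xn = Yn = 2.883

# Probabilities: X and Y independent and uniform on the intervals of Eq. (3).
rng = np.random.default_rng(2)
n = 1_000_000
X = rng.uniform(Xn * (1 - ex), Xn * (1 + ex), n)
Y = rng.uniform(Yn * (1 - ey), Yn * (1 + ey), n)
print("Pro(cost overrun)   ~", np.mean(X + Y > b))   # about 0.21
print("Pro(area shortfall) ~", np.mean(X * Y < a))   # about 0.37

# Possibilities: the alpha-cut of a symmetric triangular distribution is the
# nominal interval shrunk by the factor (1 - alpha).
def cut(nominal, e, alpha):
    half = nominal * e * (1 - alpha)
    return nominal - half, nominal + half

def can_overrun(alpha):                 # can X + Y exceed b at this level?
    return cut(Xn, ex, alpha)[1] + cut(Yn, ey, alpha)[1] > b

def can_fall_short(alpha):              # can X * Y fall below a at this level?
    return cut(Xn, ex, alpha)[0] * cut(Yn, ey, alpha)[0] < a

def possibility(event):
    """Largest alpha at which the event is still attainable (bisection;
    assumes the possibility lies strictly between 0 and 1, as it does here)."""
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if event(mid) else (lo, mid)
    return lo

print("Pos(cost overrun)   =", possibility(can_overrun))     # about 0.57
print("Pos(area shortfall) =", possibility(can_fall_short))  # about 0.80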

Figure 2 shows the probability and possibility distribution functions of sum and

product of variables X and Y for the naive design. This figure shows that the product has

wider possibility and probability distribution functions than the sum.



Figure 2: Possibility distribution (solid line) and probability density function for cost
(sum of variables X and Y) and area (product of X and Y) for the naive design

The two designers react in an opposite manner, as shown in Table 2. The

possibilistic designer tries to equate the two possibilities of failure, since the overall

possibility is the maximum of the two. This reduces the cost margin to 0.0284 and

increases the performance margin to 0.0617, reducing the overall possibility of failure

from 0.8 to 0.69.

On the other hand, the probabilistic designer finds a very different combination of

design variables that allows the designer to reduce the probability of failure by about one

percent (from 0.58 to 0.57) relative to the naive design. This design point corresponds to

a cost margin larger than the area margin (0.0445 vs. 0.0283), thus to the opposite

approach from that taken by the possibilistic designer.



Table 2: Comparison of possibilistic, probabilistic and naive designs, obtained for the
numerical example where ex = 0.14, ey = 0.05, b = 6, a = 8

                                                       Possibilistic   Probabilistic   Naive
X̄                                                         2.8753          2.7486       2.8830
Ȳ                                                         2.9538          2.9842       2.8830
Possibility of failure, max(Pos(X+Y>6), Pos(XY<8))        0.6902          0.8694       0.8010
Probability of failure, Pro(X+Y>6) + Pro(XY<8)¹           0.5833          0.5698       0.5791
Cost margin, mb                                           0.0284          0.0445       0.0390
Area margin, mp                                           0.0617          0.0253       0.0390
Possibility of cost overrun, Pos(X+Y>6)                   0.6902          0.4998       0.5730
Possibility of area shortfall, Pos(XY<8)                  0.6902          0.8693       0.8010
Probability of cost overrun, Pro(X+Y>6)                   0.2883          0.1551       0.2102
Probability of area shortfall, Pro(XY<8)                  0.2950          0.4147       0.3689

For this example, the possibilistic design appears to be the more reasonable

choice. Its probability of failure is only slightly higher than that of the probabilistic


¹ The joint probability of failure is practically zero in this problem.

design (0.58 compared to 0.57), while the possibility of failure of the probabilistic design

(0.87) is much higher than that of the possibilistic design (0.69).

The example demonstrates that, given two modes of failure, the possibilistic and

probabilistic designers may opt for totally different balancing of risks, even for the

simplest of problems.


Container Problem with Uncertainty in the Budget and Area Requirements

In this variation of the container problem, both the budget and the volume

requirements are uncertain, with a nominal value b̄ for the budget and a relative budget

uncertainty of at most Δb, and a nominal value ā for the required performance (area)

and a relative uncertainty of at most Δa. That is, b ∈ Ib and a ∈ Ia, where

Ib = [b̄(1−Δb), b̄(1+Δb)]; Ia = [ā(1−Δa), ā(1+Δa)].

We assume that 0 ≤ Δa, Δb ≤ 1 (i.e., the uncertainty in area and budget can be zero,

but it is no larger than 100%).

If exceeding our budget is as bad as not meeting our volume requirement, we

need to minimize the chance of failure defined as cost overrun or performance shortfall,

by changing X and Y. Once again the optimum is a compromise between the two modes

of failure.

The uncertainty in the budget and area is modeled using uniform probability

distribution functions for the probabilistic design and symmetric triangular membership

functions for the possibilistic design, having their support on Ib and Ia for the budget b

and area a, respectively. The value of r = ā/b̄² measures how easy it is to satisfy the area

requirement with the resources (budget) available. In the absence of uncertainty, r = 0.25

guarantees the existence of a totally satisfactory design (X = Y = b̄/2), and r < 0.25 will

allow more than one design which satisfies both the area and the cost requirements. For

r > 0.25 it will be impossible to satisfy both requirements (every design either will be too

expensive or will have too small an area).

The problem is formulated as follows:

Find (X, Y) which minimize the measure of failure
("cost overrun," X + Y > b, or "area shortfall," XY < a),
where b̄ and ā are the specified values of the budget and the acceptable area.

The problem parameters were selected so that the probability of violating both the

performance and cost constraints is not zero. Then the probability of "cost overrun or

performance shortfall" is equal to the sum of the probabilities of these events minus the

probability of their intersection.

It is possible to obtain analytical expressions for the coordinates of the optimum

probabilistic and fuzzy set designs. For this simple problem we find that both designs will

set X = Y, so that the problem has only a single variable. As a consequence of the

properties of the different calculi, the probabilistic design will tend to minimize the

chances of failure due to the mode easier to satisfy. On the other hand, the possibilistic

design will be obtained for equal possibilities of failure in the two modes. We illustrate

this difference by numerical results. In Table 3, we maintain constant the degree of

uncertainty in the budget (Δb = 18%), as well as the nominal values for the budget and area

(b̄ = 6, ā = 8.64). We also select a degree of uncertainty in area much smaller than the

one in budget (Δa << Δb).

The probabilistic design follows the common sense approach of concentrating on

the easier/cheaper mode of failure. As the uncertainty in the area is smaller, the

probabilistic design selects a design that eliminates or minimizes the chance of area shortfall

(by choosing a larger container), paying for it a small price in increased chance of cost

overrun. The possibilistic design, on the other hand, is locked into equal possibilities of

failure. The absurdity of that approach is evident for the smallest Δa. For that case, the

probabilistic design can eliminate the probability of area shortfall by a minuscule (0.002)

change compared to the fuzzy set design, reducing the probability of failure to almost

half of that of the fuzzy set design.



Table 3: Possibilistic and probabilistic designs when the uncertainty in area is much
smaller than the one in budget (Δa << Δb), with b̄ = 6 and ā = 8.64. The degree
of uncertainty in the budget is Δb = 18%.

Δa (%)   Possibilistic design   Probabilistic design   Probability of failure        Possibility of failure
         Xpos = Ypos            Xpro = Ypro            Poss. design   Prob. design   Poss. design   Prob. design
10.00    0.4385                 0.4450                 0.2918         0.1943         0.3168         0.3886
 5.00    0.4322                 0.4347                 0.2313         0.1375         0.2465         0.2749
 2.00    0.4277                 0.4285                 0.1866         0.1027         0.1962         0.2054
 1.00    0.4260                 0.4264                 0.1699         0.0910         0.1778         0.1820
 0.50    0.4251                 0.4253                 0.1612         0.0851         0.1683         0.1703
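
The structure of this trade-off can be sketched in a few lines for the square design X = Y: under the stated uniform and triangular models, both failure measures have closed forms, and a grid search locates the two optima. The sketch below is meant to reproduce the qualitative behavior discussed above (the probabilistic optimum suppresses the low-uncertainty area mode, while the possibilistic optimum equalizes the two possibilities); it works directly in the physical variables rather than in the normalization used for the tabulated values.

# Single-variable container problem with uncertain budget b and area
# requirement a: failure is 'cost overrun' (2X > b) or 'area shortfall'
# (X**2 < a). Uniform probability and triangular possibility models.
import numpy as np

bn, an = 6.0, 8.64          # nominal budget and area requirement
db, da = 0.18, 0.01         # relative uncertainties (da << db, as in Table 3)

def clip01(t):
    return min(max(t, 0.0), 1.0)

def p_fail(X):
    """Probability of failure: union of the two modes, b and a independent."""
    p_cost = clip01((2 * X - bn * (1 - db)) / (2 * bn * db))   # P(b < 2X)
    p_area = clip01((an * (1 + da) - X ** 2) / (2 * an * da))  # P(a > X**2)
    return p_cost + p_area - p_cost * p_area

def pos_fail(X):
    """Possibility of failure: maximum of the two modes (triangular models)."""
    pos_cost = 1.0 if 2 * X >= bn else clip01((2 * X - bn * (1 - db)) / (bn * db))
    pos_area = 1.0 if X ** 2 <= an else clip01((an * (1 + da) - X ** 2) / (an * da))
    return max(pos_cost, pos_area)

grid = np.linspace(2.0, 3.5, 15_001)
X_pro = grid[np.argmin([p_fail(x) for x in grid])]
X_pos = grid[np.argmin([pos_fail(x) for x in grid])]
print("probabilistic optimum X =", X_pro)   # sits where the area mode vanishes
print("possibilistic optimum X =", X_pos)   # sits where the two possibilities cross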


This problem illustrates that, for problems with two (or more) modes of failure,

one of which is much easier to satisfy, probability methods are better than possibilistic

methods under conditions of full knowledge of the uncertainty.

A more complex problem is used in Chapter 6 to compare the designs given by

possibilistic and probabilistic methods. We devote Chapter 5 to the description of the

experiments that are further used in Chapter 6.


CHAPTER 5
DOMINO CONSTRUCTION PROBLEM

We devote Chapter 5 to the introduction of the experimental system employed to

obtain the data used in Chapter 6. We decided to obtain our experimental data from

building towers of blocks, with failure of the system defined as the toppling of the tower.

This approach has three advantages, being:

* Relatively inexpensive, as the failure of the system (tower) does not imply destruction
of the components (blocks);

* Not time consuming, as set-up time before each experiment is minimal;

* Easy to repeat.

The first section describes the development of a model of toppling for a stack of

blocks. The second section covers the experimental set-up for domino and penny stacks,

preliminary findings and a description of the toppling mechanism. The third section

describes the implementation of the toppling criterion in Monte Carlo simulation, which

provides a histogram of the number of blocks in the stack when the stack topples. The

numerically generated histogram is then compared to the one obtained from experimental

data. The fourth and last section introduces two simple analytical expressions for the

probability density function of the number of blocks at failure.


Experiments and Toppling Criterion

In order to gain insight into the toppling problem, we performed a small number

of building experiments, using blocks of dominoes¹ and pennies. For each experiment,

we built the tallest possible tower using only blocks from one category (dominoes or

pennies). We used the same portion of the same work surface, and at each step we added

a new block to the tower, waited until observable small vibrations of the tower stopped

and then added a new block. We recorded the height of the tower when it toppled,

together with the type of collapse.

Searching the literature for descriptions of similar experiments, we found the work

of an Italian architect, namely Sinopoli (1989), describing the equilibrium of ancient

Greek and Roman stone columns under their own weight and an external impulse (such as the

one caused by an earthquake). Her model was not directly applicable to our case, as she

was mainly interested in tall blocks, that is, blocks for which the height-to-width ratio of the

rectangular blocks was greater than √2. In addition, she considered the whole

column moving as a rigid body and toppling to occur when the center of mass of the top

block projects outside the column's base.

Because in the Sinopoli model the ratio width/height of the blocks was an

important parameter, we wanted to check its influence in our experiments. We repeated

the building experiments with pennies, using, instead of single pennies, blocks of two and

then three pennies glued together. We always glued the tails face of one penny to the

heads face of another one, so that the resulting blocks would have the same faces, and thus




¹ The dominoes used here were unusual in that they had 9 dots (instead of the usual 6) per
each half face. Therefore the number of dominoes in a set was 54.


the same inter-block friction coefficients as the initial single penny blocks. The results of

the experiments are summarized in Table 4.

From Table 4, we can see that the total height increases when we glue blocks

together. This indicates that the number of units in a stack is as important for toppling as

the geometrical irregularities of its component units. This is because each block added to

the stack comes not only with its own defects, but also with a translation error the

misalignment of blocks' edges produced by the human builder when adding a block to

the stack. When we glue the blocks, this translation error is much smaller.
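
The role of the translation error can be illustrated with a toy Monte Carlo stacking model, shown below. It is a deliberate simplification of the numerical simulation developed later in this chapter: tilt and sway are ignored, each added block is offset from the previous one by a normal random translation error of an assumed magnitude, and the stack is declared toppled when the center of mass of the blocks above some level overhangs the edge of the supporting block. Gluing k units is modeled as one translation error per block of k units.

# Toy Monte Carlo model of stack toppling driven by translation errors only.
# Assumption: an assumed error scale of 8% of the block width; the real
# toppling criterion and error statistics are developed later in this chapter.
import numpy as np

def blocks_at_toppling(width=1.0, error_std=0.08, max_blocks=500, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x = [0.0]                              # center positions of the stacked blocks
    for n in range(2, max_blocks + 1):
        x.append(x[-1] + rng.normal(0.0, error_std * width))
        centers = np.array(x)
        for k in range(len(centers) - 1):  # check every level of the stack
            overhang = abs(centers[k + 1:].mean() - centers[k])
            if overhang > width / 2:       # center of mass beyond the support edge
                return n                   # toppled while placing block n
    return max_blocks

rng = np.random.default_rng(3)
for k in (1, 2, 3):                        # 'gluing' k units into one block
    units = [k * blocks_at_toppling(rng=rng) for _ in range(200)]
    print(f"{k} units per block: mean number of units at toppling ~ {np.mean(units):.0f}")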



Table 4: Statistical properties of domino and penny experiments. Numbers in parentheses
indicate the total number of pennies in the stack. The average height of a penny
stack grows when the pennies are glued in blocks of two or three. The
coefficient of variation of the number of blocks at failure is the ratio of the
standard deviation to the mean value of this number.

Type of block    Number of     Average ratio of   Range of number of blocks   Mean      Coefficient
                 experiments   height-to-width    (units) at toppling         value     of variation
Dominoes         10            0.33               27-41                       35        0.18
Single pennies   34            0.77               27-82 (27-82)               56 (56)   0.23
Double pennies   45            1.55               20-59 (40-118)              35 (70)   0.28
Triple pennies   15            2.32               22-38 (66-114)              32 (96)   0.14


The domino blocks have rectangular faces with an aspect ratio of about 2:1.

Since domino stacks always topple in the narrow direction, we limit our measurement

and modeling to capture variation in this direction, which is denoted as width here.

For the dominoes, we measured the dimensions and dimensional errors, and this

information is summarized in the next subsection. Based on the measurements and the

preliminary experiments, we identified three types of errors that appear in the stacking

process. We can classify them in two categories, which are easily recognizable in most

real-life design problems as well: a) geometry errors, b) construction errors.



Geometry Errors

To a first approximation, the dominoes are rectangular in the width-height cross-

section. Under closer examination, the faces of the dominoes are not parallel and have

some curvature, as shown in an idealized form in Fig. 3. We assume that in a batch of

dominoes, the width b and height h of the rectangular part are constant. However, the tilt

angle γ (between the upper face and the normal to the lateral edge) and the angle ε (between

the normal to the lateral edge and the tangent to the lower face at the corner point) vary

from block to block. We refer to ε as the sway angle because it causes the stack to sway.

In an actual block, both faces have curvature, so we would have to consider both an upper

face and a lower face sway angle.






Figure 3: An idealized cross section of the narrow side of a domino block. This side is
modeled as a rectangle of width b and height h, with b and h constant; shown here
is the upper surface of the domino inclined with a tilt angle γ; the lower
face is also curved, with a sway angle ε measured from the tangent to the
lower face of the domino to the normal at the lateral edge of the rectangle; the tilt
and sway angles vary from domino to domino and can be present on any of the
upper or lower faces.








An important difference between the tilt and the sway angles is that the builder

can compensate for the former, but not for the latter. Indeed, we see in Fig. 4a that, for a

pair of dominoes of identical tilt angle, by adding the second domino in a mirrored

position with respect to the first one we obtain a perfectly horizontal upper surface. In

practice, the builder usually notices the result of an accumulation of tilt of several

consecutive blocks and then tries to take compensatory action. How this action is taken

varies among builders. On the other hand, under the action of an external force F, the same pair of dominoes sways, as shown in Fig. 4b, producing a rotation of the top surface equal to the sum of the two sway angles.







Figure 4: Tilt angles of opposite signs compensate for each other, but the sway angles add
up: a) we can compensate for tilt using dominoes with tilt angles of opposite
signs; b) under the action of an external force, a stack rotates with a sway angle
equal to the sum of the sway angles of the component blocks.


The tilt and sway angles vary from domino to domino and their magnitude can be

computed from direct measurements of the blocks, as shown in Appendix A. In Table 2

we present the values computed for these two angles using the measurements done on

106 dominoes (two complete sets). The measurements show that the dominoes are usually slightly irregular, with an average tilt angle of 4″ and a standard deviation of 16′ 22″. The average sway angle is −3′ 57″ and the standard deviation of the sway angle is 43′ 11″. However, we found one very irregular block with a tilt angle of −3° 18′ 35″ and a








sway angle of −7° 10′ 11″. Without this block, the average tilt angle becomes 39″ with a standard deviation of 12′ 1″, and the average sway angle becomes −4′ 42″ with a standard deviation of 27′ 55″.



Table 2: Statistical information about measured dimensions and computed tilt and sway
angles for 106 dominoes

Domino dimension                                     Range              Mean value    Standard deviation
Height (measured at eight points/domino) (inches)    0.225-0.302        0.277         0.0075
Tilt angle (computed)                                −3°18′ to 1°22′    4″            16′22″
Sway angle (computed)                                −7°12′ to 3°14′    −3′57″        43′11″


Construction Errors

When stacking blocks, the builder does not have perfect control, thus producing misalignments of the edges of the blocks. We account for these misalignments using the translation error, s (Fig. 3a).




Figure 3: Translation errors, defined as the misalignment of the edges of dominoes, are
due to the builder: a) translation error for a two-block stack; b) photograph of a
stack of dominoes; the wavy aspect of the column is due to the translation
errors (including compensatory errors expressly induced by the builder) as well
as to geometrical errors.








Translation errors vary from experiment to experiment, and from builder to

builder. We measured indirectly the translation errors that appear in the building process,

by videotaping the experiment and then analyzing individual picture frames. We could

then determine the maximum translation error committed by a builder when building a

specific stack.

Effect on Toppling Heights

The maximum translation error varies widely from builder to builder and from

one stack to another. In addition, the choice of blocks and their orientations, which can

also be random, determines the effect of the geometrical errors. Therefore, the height of

the stack at toppling varies widely. In order to isolate the part of the variability that comes from construction errors from the part that comes from geometry errors, we performed three sets of experiments. In the first set, performed by the dissertation author,

the sequence of blocks and their orientation were fixed for all the experiments. That is,

the geometrical errors did not vary from one experiment to the next. In the second set,

also performed by the dissertation author, the sequence was random. Finally, in the third

set, different builders participated. In order to collect data about the variability from one

operator to another, we organized a competition of building towers of dominoes. We

started with 16 competitors, in a single elimination tournament. The best of three scores

decided a game between each pair of competitors. This procedure required a total of 90

stacks to be built. The results of the three sets of experiments are summarized in Fig. 4.

In the first case, the number of blocks at toppling ranged from 21 to 35, with a

mean of 26.4 and a standard deviation of 3.33. In the second case, where the blocks used

were randomly chosen from the same set, the number of blocks at toppling ranged from

19 to 45, with a mean of 32.2 and a standard deviation of 6.21.











Figure 4: Variation of maximum translation error produces variation in the height of
stacks at toppling. Plots of relative frequency of toppling vs. number of
blocks at toppling for a) one builder, fixed sequence of dominoes (20
experiments); b) one builder, random sequence of dominoes (50 experiments);
c) multiple builders, random sequence (90 experiments).








The difference in standard deviation for the two experiments is due to the

addition, in the second set of experiments, of geometrical errors to the translation errors

present in the first set of experiments. If we assume that the geometrical errors and construction errors are independent, then we can compute the standard deviation due to geometrical errors as $\sqrt{6.21^2 - 3.33^2} = 5.24$. Thus the effect of geometrical errors is larger than that of building errors. The difference in means may also indicate that the

sequence and orientation of blocks chosen for the first experiment were not favorable for

a tall stack, and a random sequence tends to produce a taller stack. In the third case of 16 different builders, the number of blocks at toppling ranged from 22 to 55, with a mean of 35.1 and a standard deviation of 6.30. Comparing the plots in Figs. 4b and 4c, we observe

that the standard deviation is approximately the same, thus confirming the predominant

effect of geometrical errors. We also see that the distribution obtained for one builder is

more uniformly spread than the one obtained for multiple builders.



Toppling Criterion

By videotaping the building experiments and analyzing them frame-by-frame, we

understood the main toppling mechanism for a stack of dominoes. Initially, we thought

that toppling happened when the center of mass of the top block was outside the base.

However, simple analysis of equilibrium, as well as frames like Fig. 5a show that it is

possible to have a stable stack with the center of mass of the top block outside the base of

the stack.
























Figure 5: Toppling mechanism for a stack of dominoes: a) A stack can be stable even if
the center of mass of the top block is outside the base of the stack; b) when the
center of mass of a sub-column is outside its base, the sub-column is unstable;
c) the upper sub-column rotates as a rigid body about the contact edge.


When the center of mass of a sub-column (consisting of two or more blocks) is

outside its base, as shown in Fig. 5b, the sub-column moves, and triggers the motion of

the whole column, which eventually topples. The motion is usually a rotation about the

contact edge, with the upper sub-column rotating as a rigid body, as shown in Fig 5c.


Numerical Simulation of the Experiments

The histograms obtained from the experiments do not identify clearly the

probability distribution associated with toppling. In order to identify that distribution, we

developed an idealized model of the building process and implemented it in a Matlab

procedure that simulates a random stacking process. The procedure returns the number

of blocks when the stack topples. Repeating this procedure in a Monte-Carlo simulation,

we obtain a histogram of relative frequency of toppling versus the number of blocks in








stack at toppling. The histogram approximates the probability density function for

toppling.

The idealized model includes as input b and h, the nominal width and height of a

domino block. For the numerical simulation presented below, we used for b and h the

mean values measured on the dominoes used in the experiments. At step K, the construction error sK, tilt angle γK and curvature (sway) angle εK are generated as random variables uniformly distributed over [−smax, smax], [−γmax, γmax] and [−εmax, εmax], respectively. We consider γmax and εmax as fixed block properties, with fixed values throughout the Monte Carlo simulation. To account for the difference in the skills of different builders, the maximum construction (translation) error varies from one stack to another, with smax uniformly distributed over [0, s], where s is fixed for each Monte Carlo simulation.

Once the errors sK, γK, and εK are generated, we can compute the position of the center of mass of the K-th block (see Appendix B). We determine if the column is stable by computing the center of mass of each sub-column (J:K), with 1 ≤ J ≤ K−1, and checking if its projection is inside the base of the J-th block. We also compute the position of the stack when swaying takes place and check the toppling criterion in the swayed position as well. If this criterion is not satisfied, then the column topples and the procedure returns K as the number of blocks at toppling. The number of available blocks is finite, nmax, thus 1 ≤ K ≤ nmax. If the column is stable even when all the blocks are used, the procedure returns nmax+1 as the number of blocks at toppling. A flow chart of the algorithm for simulating the stacking process is presented in Appendix C.








We repeat the procedure N times, where N is large, record the number of blocks at failure in each replication, and build a histogram of this number. This histogram, which is an approximation of the probability density function (PDF) of the number of blocks at failure, depends on the maximum errors in tilt and sway, γmax and εmax, as well as on the maximum translation error, s.
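To make the procedure concrete, the following is a minimal sketch of the simulation (the original was implemented as a Matlab procedure; this sketch is in Python and, for simplicity, includes only the translation error, as in the perfect-dominoes case of Fig. 6a, omitting the tilt and sway computations of Appendix B; the function name and default values are illustrative, not taken from the original code):

import numpy as np

def stack_until_topple(b=1.0, s=1.0, n_max=200, rng=None):
    # Simplified stacking model with translation errors only.
    rng = np.random.default_rng() if rng is None else rng
    s_max = rng.uniform(0.0, s) * b      # builder skill varies from stack to stack
    x = [0.0]                            # horizontal center of each placed block
    for k in range(2, n_max + 1):
        x.append(x[-1] + rng.uniform(-s_max, s_max))
        # Toppling criterion: the center of mass of every sub-column
        # (blocks j+1..k) must project inside the base of block j below it.
        for j in range(1, k):
            if abs(np.mean(x[j:]) - x[j - 1]) > b / 2.0:
                return k                 # the stack toppled while placing block k
    return n_max + 1                     # all blocks used and the stack still stands

# Monte Carlo simulation of N = 5,000 stacks; a histogram of 'counts'
# approximates the PDF of the number of blocks at toppling.
counts = [stack_until_topple() for _ in range(5000)]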

In Fig. 6, we present examples of histograms generated using the simulation

procedure. We have selected histograms having comparable means (close to the

experimental values of 31-35), but different errors. The histogram in Fig. 6a is generated

for N=5,000 runs in the case of translation errors s = b and no tilt or sway errors, γmax = εmax = 0 (perfect dominoes). The value of the translation error is exaggerated in order to get the right mean. Therefore we find a large probability of toppling for towers with n=2 or n=3 blocks, but even so the histogram shows that 418, or 8.4%, of the towers do not topple even when all nmax = 200 blocks are used, and 15.1% of the stacks have more than 60 blocks.

We classified the errors present in the stacking process as geometry and human

errors. The geometry errors (sway and tilt) depend on the blocks used, and can be

measured directly. However, the human (translation) error varies with the builder. In

order to collect data about the variability from one operator to another, we used the data

from the tournament described earlier.













[Fig. 6b annotations: variable maximum translation error s = 0.35b; maximum tilt angle γmax = 2°; no sway angle; min = 5, max = 201 (1 untoppled stack), mean = 35.98, standard deviation = 23.4]


[Fig. 6c annotations: variable maximum translation error s = 0.2b; maximum tilt angle γmax = 1°; maximum sway angle εmax = 1°; min = 24, max = 60, mean = 37.2179, standard deviation = 5.7943]


Figure 6: Considering more than one type of error reduces the standard deviation of the
generated histogram for the number of blocks at toppling. Examples of
histograms having comparable means: a) no sway or tilt errors (perfect
dominoes), maximum translation error s = b; b) no sway errors, translation
errors s = 0.35b, maximum tilt angle γmax = 2°; c) translation errors s = 0.2b,
maximum tilt angle γmax = 1°, maximum sway angle εmax = 1°.




In Fig. 7a, identical to Fig. 4c, we present the histogram of the results obtained from the domino tournament. In Fig. 7b, we have a smoothed approximation of these results, obtained by using a moving average (over five stacks).


[Fig. 6a annotations: variable maximum translation error s = b; no tilt or sway angle; min = 3, max = 201 (untoppled stacks), mean = 33.08, standard deviation = 57.20]

























Figure 7: Comparison of the histograms obtained from the tournament and the numerical
simulation: a) domino tournament data; the minimum number of blocks at
toppling is 22, the maximum is 55, with a mean of 35.1 and a standard
deviation of 6.3; b) smoothing of the histogram in a) using a moving average
over 5 stacks.



The histogram in Fig. 7a is much rougher than the one in Fig. 6c because of the

difference in the number of data used (90 compared to 10,000). In order to obtain a

smoother histogram from the raw data in Fig. 7a we used a moving average method. The

histogram in Fig. 7b is obtained by assigning to 22 the average of relative frequency

obtained for the interval [20, 24] in a), to 23 the average of relative frequencies obtained

for the interval [21, 25] and so on. The histogram in Fig. 7b is much closer to the one in

Fig. 6c than the initial one was, but we can still see differences. The model used to

generate Fig. 6c does not capture exactly the stack building process (for example, it does

not account for the translation compensation that a human builder will exercise when a

stack seems to tilt in one direction). Also, Fig. 7b is still generated using one hundredth

of the number of data used for Fig. 6c (in a sense we can refer to the data of 7b as being a

sample of the data in Fig. 6c). However, comparing Figs. 7b and 6c, we can conclude

that our model provides a fair approximation to the physical reality. Thus, Fig. 6c may

help us identify the type of distribution that governs toppling
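A one-function sketch of this smoothing, assuming the relative frequencies are stored in a numpy array freq indexed by the number of blocks (the handling of the endpoints, where the five-point window is truncated, is an implementation choice not specified in the text):

import numpy as np

def moving_average(freq, window=5):
    # Centered moving average: entry i becomes the mean of freq[i-2 .. i+2].
    half = window // 2
    return np.array([freq[max(0, i - half): i + half + 1].mean()
                     for i in range(len(freq))])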










Analytical Form of Probability Density Function

If enough experiments are available, then the probability density function may be

obtained directly from a figure, such as Fig. 6b, in a table format. However, we are

interested in cases where there is a limited number of data. In such cases, it is customary

to try to fit the experiments with one of a small number of common distributions, such as the normal or Weibull. These distributions are usually characterized by two or three parameters that can be selected to fit the data.

In our case, the three parameters could be the translation error, tilt error and sway

error. However, if we use these parameters, every change in a parameter will require a

lengthy Monte Carlo simulation in order to produce the corresponding PDF.

Consequently, there is merit in having an analytical expression that fits closely the

numerical simulation.

The probability density function (PDF) for tower toppling, as approximated by the

histogram from numerical simulation, is asymmetric and does not start at zero. An

example of a PDF with these two characteristics is a shifted gamma distribution, which

has the functional form

$$ f_\Gamma(x) = \frac{\lambda^m}{\Gamma(m)}\,(x-a)^{m-1}\exp\bigl(-\lambda\,(x-a)\bigr), $$

where $\Gamma(m)$ is the gamma function,

$$ \Gamma(m) = \int_0^\infty x^{m-1}\exp(-x)\,dx. $$

We also tried the beta and Weibull PDFs, but the fit for the gamma function was the best among the asymmetrical PDFs, so we decided to use it in the following.








In Fig. 8a, we present one approximation of the histogram in Fig. 6c as a gamma

distribution with the same mean and standard deviation. We need an extra condition

because the formula for the shifted gamma distribution has three parameters: the shift

parameter a, the shape parameter m and the scale parameter λ. For Fig. 8a, we chose the approximate PDF to be also non-zero at the first integer for which the histogram is non-zero (i.e., a+1 = 24). This last condition for a gave poor results. The approximation

obtained is indeed non-zero for n=24, but has a value significantly smaller than the one from the histogram (in fact, for all numbers between 24 and 34, the value predicted by the

first approximation is smaller than the one given in the histogram). In Fig. 8b we have

another shifted gamma distribution, with the shift parameter a obtained from a least

squares fit. The fit was obtained by optimization, considering only the points where the

histogram is non-zero and minimizing the sum of the squares of the differences between

the histogram and its approximation. The least square error for the first approximation is

0.0144, while the error for the second approximation is only 0.0118. The confidence in

fit, computed from a χ²-test with 8 intervals, is 90.70% for the first approximation and 92.01% for the second approximation, where 100% corresponds to a perfect, zero-error approximation.
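The mean and standard deviation conditions amount to simple moment matching. A sketch, under the assumption that the sample mean mu and standard deviation sigma are given, using the scipy parameterization of the gamma distribution (shape m, location a, scale 1/λ); the search range for the shift and the helper names are illustrative:

import numpy as np
from scipy.stats import gamma

def shifted_gamma_params(mu, sigma, a):
    # Match mean and variance of the shifted gamma to the data:
    #   mean = a + m/lam,  variance = m/lam**2
    lam = (mu - a) / sigma**2
    m = (mu - a) ** 2 / sigma**2
    return m, lam

def best_shift(x, hist, mu, sigma):
    # Choose the integer shift a minimizing the squared error between the
    # histogram values hist at abscissas x and the fitted shifted gamma PDF.
    def sq_err(a):
        m, lam = shifted_gamma_params(mu, sigma, a)
        return np.sum((gamma.pdf(x, m, loc=a, scale=1.0 / lam) - hist) ** 2)
    return min(range(int(mu)), key=sq_err)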

As an alternative to the approximation with the shifted gamma PDF we can use a normal PDF. As the normal distribution is defined by only two parameters (mean and standard deviation), we will equate them to the mean and standard deviation of the histogram in the formula

$$ f_n(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right). $$











Figure 8: Histograms of numerical simulations and their approximations with the shifted
gamma PDF; the histograms and their approximations have the same mean and
standard deviation; a) assuming the first integer for which the histogram and
the approximation are non-zero is the same, n=24, a=23; b) the shift parameter
is obtained from a least-squares fit, a=20.













CHAPTER 6
USE OF EXISTING EXPERIMENTAL DATA TO EVALUATE METHODS FOR
DESIGN AGAINST UNCERTAINTY




The best is yet to come, this time grouped in the five sections of the present

chapter. The first section provides the motivation of testing methods for design against

uncertainty by using experimental data. The second section describes the chip speed

target problem, its reduction to a Bidder/ Challenger problem, and the probabilistic and

possibilistic formulation. The third section develops the methodology for using existing

data by repeatedly dividing it into fitting and evaluation sets and defines the relative

frequency of success. The fourth section presents the results obtained for the

Bidder/Challenger problem, when all or only part of the data was known, with or without inflation of the standard deviation of the known data. Finally, the fifth section presents the concluding remarks of the chapter.


Motivation

When methods of design against uncertainty are applied, they usually require

many assumptions on distributions because it is rare that complete data is available. In

addition, the designer often has to choose between methods for design against uncertainty

(e.g., probabilistic design versus possibilistic design), or between variants of a single

methodology (e.g., Bayesian probability versus standard probability). This situation is

common with many methods used by engineers to analyze and design systems.








Therefore, it is customary to perform physical experiments to validate and compare

analysis and design methods. Design optimization methods often add urgency to the need

for physical tests, because the optimization process tends to take advantage of

deficiencies in the analysis models (e.g., Haftka et al., 1998).

Consequently, the impact of the lack of data and the assumptions used with

methods of design against uncertainty should be investigated by applying these methods

to data obtained from physical experiments. However, unlike deterministic design

optimization, for which tests of a small number of designs will suffice for validation, the

situation appears to be much more difficult for methods of design against uncertainty.

For example, if the safety of a design is measured by its reliability (probability of survival), then testing the safety of the design may require testing hundreds or thousands of realizations of that design.

Gigerenzer and Todd (1999) showed a way out of this difficulty, by

demonstrating that methods for making decisions based on incomplete data could be

compared by using readily available physical data. They tested four methods for making

decisions by using data available from 20 unrelated fields from social sciences, biological

sciences, and transportation. They demonstrated the approach for the simplest of

decisions--that of selecting one of two items based on a data set containing information

about some attributes of the items. The decision process has two stages. In the training

stage, a sample of half of the data is used to determine how the attributes (cues) could

affect the decision. In the decision stage, the information on pairs of items from the rest

of the data set is given and one item of each pair is to be selected. In order to test a








decision method, the process is repeated (1000 times) each time using a randomly chosen

half of the data set for 'training.'

One example in Gigerenzer's study is to infer which of two high schools has a

higher dropout rate using a data set of schools and their dropout rates. First, a sample of

half the high schools from the data set is presented to the decision maker along with

information on their dropout rates, the percentage of low-income students, the average

SAT score, and the degree of parental involvement in their children's schooling. The

simplest of the four methods (called pick the best) selects the single cue of the three that

has the highest correlation coefficient with the dropout rate in the sample. The most

complex method uses regression to correlate all three cues to the outcome. Next, all

possible pairs of schools are presented, and each method is scored in terms of predicting

correctly which of the two schools has the higher dropout rate. The procedure is then repeated 1000 times for different training samples. This allows the decision maker to eliminate the chance factor in the selection of the sample.

For each of the 20 data sets, the study was also carried out in a 'fitting' mode,

where all the data was available to the decision maker. The case where only half of the

data was available was labeled the 'generalization' case. The efficacy of a method

should be judged in terms of the ability of the method to generalize, that is, to solve problems that it has not seen during training. The four methods were compared for each of the 20 problems in both the fitting and generalization modes. With 20 problems having different numbers of objects in the data set and different ratios of the number of properties known per number of objects in the set, the conclusions can be generalized to other problems.








Gigerenzer et al. found that in the fitting mode the regression method made the

best decisions. However, the simplest method (pick the best) was the best in

generalizing, that is, in solving decision-making problems that have not been seen during

the training phase. Its advantage over regression was particularly pronounced for

problems where the number of data per cue in the data set was less than five. This meant

that the sample available to the decision maker during the fitting phase had two or fewer data per cue, which can lead to overfitting problems for regression. However, pick-the-best had a slight advantage on average even for the eight problems where the number of data per cue in the set was eight or more (p. 115).

Our goal is to develop a similar approach to test methods for design against

uncertainty. Most design problems can be reduced to a scenario of choosing (from a

given space) a design that best satisfies a set of requirements. A number of methods for

design might be available, each quantifying differently the set of requirements or using

alternative representations of the uncertainty.

Our work generalizes Gigerenzer's test procedure from a binary decision to

the selection of the best design among a large number of possible designs under

uncertainty. A methodology for experimental testing and comparison of design methods

is demonstrated on an example simulating the choice of a performance target for a

company designing a product in a competitive market.

As an example to demonstrate the approach we use records of domino stacks that

were built as described in Chapter 5. One set recorded the toppling heights of 50

attempts by the author (Rosca). This set was used to represent the performance of the

company. The other data set included 90 attempts by 16 other students during a








competition. This second data set was used to represent the performance of possible

competitors. The two data sets are used to compare probabilistic and possibilistic design

procedures, as well as to investigate the efficacy of inflating the uncertainty when

distributions are fitted to scarce data.

In our example, we compare two versions of the probabilistic approach and a

possibilistic one. The normal and the shifted gamma distributions fit the domino data

best (Chapter 5) and the fitting error was found to be very small (see figures of Appendix

F). First we fit normal probability density functions to all data available for the company

and its competition and, based on these density functions, we solve the problem of setting

a performance target. We also repeat the procedure fitting the data with shifted gamma

density functions. Finally, we fit triangular membership functions to the domino data and

solve the same problem using a possibilistic formulation. The optimum designs given by

each method are then compared in terms of their relative frequency of success. For the

sake of brevity we will call this measure the likelihood of success.

The two versions of the probabilistic formulation and the possibilistic formulation

are further tested by repeatedly taking samples from the company and competition data

and repeating the 'fitting and solving' procedure on the sample data. Once again, the

designs are compared in terms of their likelihood of success when all data are known.

Modifying the size of the sample allows us to study the performance of the methods

when data is scarce. An extra step is then taken by inflating the dispersion of the

distribution fitted to the small sample.

This chapter shows that we can use existing data to construct simple design

problems and mimic real life design decision problems. Small changes in the simple








design problems will allow us to simulate many real life design problems with the same

experimental data.


Example: Bidder-Challenger Problem

Description of microchip speed target setting problem

A microprocessor manufacturing company, referred to from here on as Chiptel, tries

to consolidate its share of the computer chip market. It plans to announce publicly the

delivery of a new product at the beginning of the new calendar year and guarantee its

performance (speed of the chip) three months in advance. The move is supported by a

newly released consumer psychology report suggesting that corporate consumers are

willing to plan and budget around guaranteed performance rather than, first, allocate

funds and then buy 'the most bang for the buck'. The public relations department of the

company warns against promises that cannot be delivered. The marketing department

knows that other companies in the market (Advanced Microprocessors being one of

them) are working on a similar product, without knowing how advanced they are and

how powerful their new microchips will be. However, there are data available for the past performance of both Chiptel and its competitors, and no new competitor has appeared in the market since the collection of these data.

Using the available information, the management of the company has to decide

what speed for the new microprocessor, nbid, the company should announce. For this decision, nbid would be chosen so as to maximize a measure of success. The outcome

of the decision is considered successful if Chiptel produces a chip of at least the

announced performance and Chiptel's competitors do not produce a chip much better

than the announced one.









Bidder-Challenger problem: mathematical model and domino simulation

A mathematical model of the speed target-setting problem is given by the

following silent auction-type problem. Chiptel acts as Bidder, while the other companies

in the market act as Challenger. Bidder is successful if:

* The delivered performance ndel is at least the promised one (ndel ≥ nbid), AND

* Challenger's product has a performance nchal smaller than nbid + nhand (nchal < nbid + nhand), where the handicap nhand accounts for the preference of the consumer for known performance over uncertain performance.

With the same notation, Bidder fails if:

* Bidder is not able to deliver the bid (ndel < nbid), OR

* Challenger delivers more than the bid plus the handicap (nchal ≥ nbid + nhand).

The actual problem will require fitting statistical distributions to data of past

performance properly shifted in time. In order to simulate this type of problem we use

domino experiments. In each experiment, a builder stacks domino tiles one on top of the

other until the entire stack collapses. The height of the stack, just before collapse, is a

performance measure that stands in for chip performance. We have shown in Chapter 5

that the distribution of stack heights for a single builder or for a group of builders follows

approximately a shifted gamma distribution. We also use a normal distribution in order

to show the influence of the choice of a probability distribution on the resulting performance.

We have two sets of experimental data available for simulating the Bidder-

Challenger problem. The author (Rosca) generated the data for Bidder (50 experiments)

and multiple competitors in a match generated the data for Challenger (90 experiments).








Note that we can simulate a broad class of design problems by slightly changing

the above domino problem. For example, we can consider the handicap as an additional

design variable. The Bidder may increase the handicap by spending money to advertise

and promote their chip.

Possibilistic and probabilistic formulations of the Bidder-Challenger problem

In this example, we compare the performance of the optimum decision reached by

possibilistic and probabilistic formulations. However, the methodology of comparing

alternative methods is more important than the particular methods compared.

Possibilistic formulation

In possibility theory, the possibility of an event and that of its complement do not necessarily add up to one (as is the case in probability theory). Therefore we

have two possibilistic formulations.

In the first formulation, we want to find nbid that maximizes the possibility of success. We assume that the heights of Bidder's and Challenger's towers are independent. Then the possibility of success is:

Pos(success(nbid)) = min[Pos(ndel ≥ nbid), Pos(nchal < nbid + nhand)].

In the second formulation, we minimize the possibility of failure:

Pos(failure(nbid)) = max[Pos(ndel < nbid), Pos(nchal ≥ nbid + nhand)].

Both formulations can provide multiple optima (corresponding to a flat region in the plot of the possibility of failure or success as a function of the bid height). For the same input data, the sets of optima given by the two possibilistic approaches are not disjoint. We call the intersection of these two sets 'the possibilistic optimum'. However, there are cases where the intersection contains more than one element.
there are cases where the intersection contains more than one element.








Probabilistic formulation

The Bidder's and Challenger's microprocessor speeds are statistically independent.

Therefore, the probability of success is calculated by the following equation:

Pro(success(nbid)) = Pro(ndel ≥ nbid) × Pro(nchal < nbid + nhand).

In the probabilistic formulation we seek nbid that maximizes the probability of

success.
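A sketch of this search for the normal fit, scanning integer bids (the bid range and names are illustrative, not taken from the original code):

import numpy as np
from scipy.stats import norm

def probabilistic_optimum(mu_bid, sd_bid, mu_chal, sd_chal, n_hand, bids=range(1, 101)):
    # Maximize Pro(n_del >= bid) * Pro(n_chal < bid + n_hand) over integer bids.
    def p_success(bid):
        return (1.0 - norm.cdf(bid, mu_bid, sd_bid)) * \
               norm.cdf(bid + n_hand, mu_chal, sd_chal)
    return max(bids, key=p_success)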

We will compare the optima obtained by the two formulations with data from the

domino experiments. We analyze two different cases:

* all data is known and used to find the optimum;

* only part of the data is known.

In both cases, we compare the performance of the possibilistic and probabilistic

approaches when there is fitting error (the true type of the probability distribution of the

random variables is unknown) but a decision must still be made. In Case 1, we have

sufficient information to estimate accurately the parameters of the assumed probability

distribution of the tower height, whereas in Case 2 we do not have enough information.

Methodology for Using Existing Data to Conduct Simple and Efficient Experiments
that Mimic Real Life Design Decision Problems

It is important to conduct efficiently a sufficiently large number of experiments to

obtain statistically significant results. The following two concepts allow us to conduct

many experiments using the same data set:

* We split the data set into two subsets (one for making a decision -- the other for

evaluating it) in multiple ways.








* We compare all possible combinations of toppling heights of the Bidder's and Challenger's towers to estimate the relative frequency of success. In our case, since the samples are only a small fraction of the total data, we use the entire data set as the test set.


Splitting the data into fitting and testing sets

If we use all the data for selecting the optimal nbid, we can compare different

methods, but we will have a comparison based on a single example, where chance may

play a large role in the outcome. However, the relatively large number of data allows us

to use subsets of the data for making and evaluating the decision and then repeat the

process for different subsets. This reduces the element of chance in the comparison

between the methods. Here, we perform the comparison for 80 randomly chosen subsets.

In addition, the use of subsets allows us to examine not only different methods,

such as probability and possibility, but also different variants within the same method. In

particular, when the number of data is small (scarce data situation), it is common practice

to inflate variability (Fox and Safie, 1992). In the following we examine the effectiveness of this practice for both probabilistic and possibilistic decisions.

In the scarce data case, we take samples of size nsample from both the Bidder and Challenger distributions. Based on this sample, we fit a shifted gamma or a normal

probability density. The fitting process is described in the following section. Based on

the fitted functions and using a probabilistic or a possibilistic formulation, we solve the

Bidder-Challenger problem, obtaining one (or more) optimum designs. We compare the

designs obtained, in terms of their likelihood of success.

The fact that we use the likelihood as a metric of the quality of a decision means

that with an infinite amount of data and no fitting errors the probabilistic formulation should








be superior. The possibilistic approach can prevail only if the fitting errors and the errors

due to incomplete data overcome the natural advantage of the probabilistic approach.


Definition and evaluation of the relative frequency (likelihood) of success

Generally, for a given sample, the possibilistic and probabilistic formulations

yield different optima because they maximize different objective functions. We compare

the two optima in terms of their relative frequency of success considering all possible

Bidder-Challenger competitions obtained by combining all the data for the collapse

heights of the towers built by the Bidder and Challenger. With 50 experiments available

for Bidder and 90 experiments available for Challenger, the likelihood is calculated by

counting the number of successes as a fraction of the universe of possible pairs of Bidder

and Challenger data, that is 4500 pairs.

Consider a competition in which the Bidder's tower collapsed at a height of N1 blocks and the Challenger's tower collapsed at a height of N2 blocks. Bidder succeeded if N1 ≥ nbid and N2 < nbid + nhand. We compute the relative frequency of success when betting nbid by counting the total number of pairs (N1, N2) for which the bidder won, normalized by the total number of possible competitions (4500) obtained by combining the data from the Bidder and Challenger experiments.
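A sketch of this computation; with the 50 Bidder heights and 90 Challenger heights the double loop runs over the 4500 possible pairs:

def likelihood_of_success(bid, n_hand, bidder_heights, challenger_heights):
    # Fraction of (N1, N2) pairs for which the Bidder wins.
    wins = sum(1 for n1 in bidder_heights
                 for n2 in challenger_heights
                 if n1 >= bid and n2 < bid + n_hand)
    return wins / (len(bidder_heights) * len(challenger_heights))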

Description of the fitting process (fit of possibility/probability distribution
functions)

In the possibilistic formulation, to each sample we fit an asymmetric triangular

membership function, such that the mean of the sample corresponds to the peak of the

membership function. The minimum and the maximum values in the sample are the








minimum and the maximum values of the support of the triangular membership function.

An example is shown in Fig. 5.


Figure 5. Triangular membership function (solid line) fitted to the sample of five from
the Challenger experiments [27 37 37 27 31], with the sample cumulative
histogram for comparison.
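A sketch of this fit, returning a membership function that can be used with the possibilistic formulation sketched earlier; for the sample [27 37 37 27 31] it yields a peak at the mean, 31.8, with support [27, 37] (the sample is assumed non-degenerate, i.e., its mean lies strictly between its minimum and maximum):

def fit_triangular(sample):
    lo, hi = min(sample), max(sample)
    peak = sum(sample) / len(sample)   # the mean of the sample is the peak
    def membership(x):
        if x <= lo or x >= hi:
            return 0.0
        if x <= peak:
            return (x - lo) / (peak - lo)
        return (hi - x) / (hi - peak)
    return membership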


In the probabilistic formulation, we fit a probability density function (PDF) to

each sample. When all data are available, the best PDF fit is given by normal and shifted

gamma density functions. Therefore, even for small data samples (size of three or five)

we use the normal PDF and the shifted gamma PDF to fit the data.

To find the shifted gamma function, we choose the scale and shape parameters so

the mean and standard deviation are the same for the sample and the fitted PDF. We

choose the third parameter (shift) as the integer that minimizes the sum of the squares of

the differences between sample points and fit at the points of the sample, that is we fit the








PDF rather than the cumulative distribution function (CDF). We choose the two

parameters (mean and standard deviation) of the fitted normal PDF to be the mean and

standard deviation for the sample.

Figure 6 shows the CDF of experimental data and of the fit for the same

Challenger sample as in Fig. 5, as for scarce data the comparison of CDFs is more

meaningful than the comparison of PDFs. Afterwards, knowing the PDFs for the Bidder

and Challenger, we compute nbid that maximizes the probability of success.


ChalUger expeimendatal daa CF(-). tod gamma COF(o) a8Wted Itnormal CDFC)
G ~


Figure 6. Experimental data CDF (bars), fitted shifted gamma CDF (circles) and fitted
normal CDF (asterisks) for Challenger example [27 37 37 27 31] of Figure 5



Results

All data known - various handicaps

First we solve the problem when all data is known and we vary the handicap

through the set of values {2, 5, 8, 11, 15}. Figure 7 shows the likelihood of success for








different handicaps as a function of the bid. As can be expected, the likelihood of success

increases with the handicap. As can be seen in Figure 7 and Table 3, the optimum bid

decreases with increasing handicap value. This is because the increased handicap

protects against the failure due to superior performance by Challenger, and a smaller bid

will reduce the risks of failure of Bidder to deliver the promised bid.



Table 3: Variation of the height (bid) that gives maximum likelihood of success with
variation of the handicap nhand

Handicap              2        5        8        11       15
Optimum bid           33       32       29       28       28
Maximum likelihood    0.3180   0.4333   0.5633   0.6533   0.7373


Table 4: Variation of bids obtained by the probabilistic and possibilistic decision makers
and their likelihood of success with handicap values nhand when all data is known;
cases where the optimum bid was found are marked with asterisks

                    Probabilistic           Probabilistic           Possibilistic
                    (shifted gamma fit)     (normal fit)            (triangular fit)
nhand   Ideal bid   Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
                              of success              of success              of success
2       33          32        0.3067        33*       0.3180        33*       0.3180
5       32          31        0.4107        32*       0.4333        32*       0.4333
8       29          29*       0.5633        30        0.5360        31        0.5133
11      28          27        0.6402        29        0.6153        29        0.6153
15      28          26        0.7236        27        0.7262        28*       0.7373

Table 4 shows the optimum bids selected by the two probabilistic models and the

possibilistic approach. The possibilistic approach found the best bid for three of the five

handicap values, while one probabilistic approach found it for one of the five handicaps

and the other probabilistic approach found it for two of the five.

The optimum bid was found by at least one method for four of the five handicaps,

that is, the optimum was missed only for a handicap of 11. These results indicate that









with the full data, the errors incurred by fitting the data to a probabilistic distribution

offset the advantage of the probabilistic approach over the possibilistic one (that it

maximizes the same objective as the one used to score the results).



Figure 7. Likelihood of success for different values of the handicap and bid




All data known - inflation factor

When only few experimental data are available to fit a probability distribution, a standard practice (Fox and Safie, 1992) is to keep the mean of the data as the mean of the distribution, but to inflate the variance by adding to it an inflation factor multiplied by the standard deviation of the variance (see Appendix D). When all the data is known, the effect of inflation is small (see Figs. 8-9). Therefore, in order to understand the effect of inflation we also consider the extreme case of an inflation factor of 15.
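A sketch of one such inflation scheme; the estimate used here for the standard deviation of the sample variance (the normal-theory value, s² sqrt(2/(n−1))) is an assumption on our part, since the exact estimator is given in Appendix D and is not reproduced in this chapter:

import numpy as np

def inflated_std(sample, k):
    # Keep the mean; add k times the estimated standard deviation of the
    # sample variance to the variance, then take the square root.
    s2 = np.var(sample, ddof=1)
    sd_of_var = s2 * np.sqrt(2.0 / (len(sample) - 1))  # normal-theory estimate (assumption)
    return np.sqrt(s2 + k * sd_of_var)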

For a membership function, there is no standard way to inflate the uncertainty.

Here we use the simple approach of keeping the mean fixed and inflating the support by








the inflation factor. That is, if the mean is 32, and the un-inflated membership function is

nonzero in the interval (30,35), then an inflation factor of 1 will inflate the interval to

(28,38), and an inflation factor of 2 to (26,41). The choice of inflation factor must then

reflect the number of data. Here we use an inflation factor of 2, which corresponds to

extreme inflation, similar in magnitude to an inflation factor of 15 for the probabilistic

data.
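A sketch of this support inflation, reproducing the worked example above (mean 32, support (30, 35)):

def inflated_support(lo, hi, mean, k):
    # Keep the peak fixed; scale each half-width of the support by (1 + k).
    return (mean - (1 + k) * (mean - lo), mean + (1 + k) * (hi - mean))

# inflated_support(30, 35, 32, 1) -> (28, 38); inflated_support(30, 35, 32, 2) -> (26, 41)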

Table 5 presents the standard deviation of the data to be fitted, before and after we

inflate the standard deviation. In Table 5, the increase in inflated standard deviation does

not vary linearly with the inflation factor, but the increase in inflated variance does.




Table 5: Inflated standard deviation for Bidder and Challenger data; the mean of the Bidder
data is 33.10 while the mean for Challenger is 35.08. An inflation factor of zero
corresponds to no inflation.

Inflation factor    Inflated standard deviation
                    Bidder data    Challenger data
0                   6.21           6.30
1                   6.76           6.75
2                   7.26           7.17
15                  12.02          11.30

Table 6 presents the probabilistic and possibilistic optima obtained for a handicap

of five and various inflation factors, when all data is used to fit the distribution. We

study the all-data case to get a better sense of the effect that the inflation factor has on the

optima. Indeed, the behavior of the optima in this case would reflect the effect of

inflation factor, rather than poor fit of the distribution to data.

For the handicap value nhand=5, we see from Table 6 that the probabilistic bid

decreases with increased Challenger inflation factor and increases with increased Bidder

inflation factor.










Table 6: Variation of the optimum bid and its likelihood of success with various inflation
factors; handicap value nhand=5, all-data case; the probabilistic optimum decreases
with increased Challenger inflation factor and increases with increased Bidder
inflation factor, while the possibilistic optimum exhibits the opposite trend

Inflation factor        Probabilistic optimum    Probabilistic optimum    Possibilistic optimum
(Bidder, Challenger)    (shifted gamma fit)      (normal fit)             (triangular fit)
                        Optimum  Likelihood      Optimum  Likelihood      Optimum  Likelihood
                                 of success               of success               of success
0, 0                    31       0.4107          32       0.4333          32       0.4333
0, 1                    30       0.4240          31       0.4107          32       0.4033
0, 2                    30       0.4240          31       0.4107          32       0.4033
0, 15                   28       0.3920          29       0.3987          33       0.4020
1, 0                    31       0.4107          32       0.4333          31       0.4107
1, 1                    31       0.4107          32       0.4333          32       0.4333
2, 0                    31       0.4107          32       0.4333          31       0.4107
2, 2                    30       0.4240          32       0.4333          32       0.4333
15, 0                   33       0.4020          34       0.3578          31       0.4107
15, 15                  30       0.4240          32       0.4333          32       0.4333




Figure 8. Modification in the normal fit and probability of success with Challenger
inflation factor, inflChal = {0, 2, 15}; no Bidder inflation factor; nhand=5; the
value of the probabilistic optimum decreases with increased Challenger
inflation factor, and so does its likelihood of success.








The behavior is explained from an examination of the probability of success:

Pro(success(n)) = Pro(ndel ≥ n) × Pro(nchal < n + nhand) = (1 − FBid(n)) × FChal(n + nhand),

where by FBid(n) and FChal(n) we denote the cumulative distribution functions for Bidder and Challenger, respectively.

As can be seen in Fig. 8, when the Challenger inflation factor increases, its fitted cumulative distribution function FChal(n) decreases and gets flatter, so the maximum probability of success will be influenced more by 1 − FBid(n), which favors a smaller value of n. Therefore the optimum in the probabilistic case decreases with increased

Challenger inflation factor. In other words, with two conflicting failure modes, by

inflating one distribution, we spread the support of the distribution and (because the total

integral of the function is constant) we decrease the values of PDF. Consequently, the

inflated failure mode becomes insensitive to changes in the decision variable, and the

optimum is influenced more by the other mode.

The same phenomenon is evident in Fig. 9, where the effect of the Bidder inflation factor is shown. With the increase in the inflation factor, FBid(n) decreases, so 1 − FBid(n) increases, and therefore the optimum bid will move to the right.

We observe this behavior for both probabilistic cases (the shifted gamma and the normal

fit).

Surprisingly, the possibilistic optimum displays the opposite trend, increasing

with increased Challenger inflation factor and decreasing with increased Bidder inflation

factor. To explain this, we recall that the set of possibilistic optima is obtained as the

intersection of the set of designs that minimize the possibility of failure and the set of designs that maximize the possibility of success.








Figure 9. Modification in the shifted gamma fit and probability of success with Bidder
inflation factor inflBid={0, 2, 15}; no Challenger inflation factor, nhand=5; when
we inflate the Bidder distribution, the optimum bid tends to increase



As seen in Fig. 10, increasing the Challenger inflation factor increases the

possibility of success for the challenger for any bid value except the mean. In contrast,

inflating the Challenger distribution reduces the probability of challenger success for any

bid value because the total probability must sum to one. Thus, inflation increases the

importance of a failure mode in the possibilistic approach!

The contrast between the effect of inflation on probability and possibility is

clearly due to the non-additivity of possibilities. In probability, increasing the chance of

an event must come at the cost of reducing the chance of another event. In possibility, we

can increase the possibility of all events simultaneously. Even though we have been

comparing probability and possibility for the past few years, we needed this experimental result to point out to us this important difference between probability and possibility.
































Figure 10. Variation of possibility of success with Challenger inflation factor; no Bidder
inflation, nhand=5

Scarce data - small sample size

For the scarce data case, we use only a randomly selected small sample of the data

for fitting a distribution and selecting a bid. The process is repeated 80 times to average

out the effect of chance in the selection of the sample. Rather than presenting all 80

examples of optima, we present their average (over the 80 samples) likelihood of success.
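A sketch of this resampling loop for the normal fit, reusing the helpers sketched earlier in the chapter and assuming that the arrays bidder_heights and challenger_heights hold the 50 and 90 experimental toppling heights (the sample size, seed, and handicap shown are illustrative):

import numpy as np

rng = np.random.default_rng(0)
scores = []
for _ in range(80):
    b = rng.choice(bidder_heights, size=5, replace=False)
    c = rng.choice(challenger_heights, size=5, replace=False)
    bid = probabilistic_optimum(np.mean(b), np.std(b, ddof=1),
                                np.mean(c), np.std(c, ddof=1), n_hand=5)
    scores.append(likelihood_of_success(bid, 5, bidder_heights, challenger_heights))
print(np.mean(scores), np.std(scores, ddof=1))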

Table 7 presents the number of cases where the probabilistic bids fared better than

the possibilistic bids for the two probabilistic distributions for a sample size of five with

no inflation. It is seen that for most cases the probabilistic bids are more successful than

the possibilistic bids, but that the difference is not very large.








Table 7: Count of cases (out of 80) when the probabilistic bids have a better, worse, or
equal likelihood of success compared to the possibilistic bids; sample size is five,
inf_bid=0, inf_chal=0.

        Probabilistic optimum (shifted gamma    Probabilistic optimum (normal
        fit) vs. possibilistic optimum          fit) vs. possibilistic optimum
nhand   Better   Worse   Equal                  Better   Worse   Equal
2       31       29      20                     27       31      22
5       28       37      15                     29       27      24
8       39       31      10                     34       19      27
11      40       30      10                     39       23      18
15      41       33      6                      41       27      12

Table 8 presents the average and standard deviation of the likelihood of success for the same case of a sample size of five with no inflation. For both possibilistic

and probabilistic methods, increasing the handicap value increases the mean of the

likelihood of success of the computed optimum bid. As in Table 7, the probabilistic

optimum bids enjoy a small advantage overall compared to the possibilistic bids. This

result surprised us, as we expected the possibilistic approach to do better in the scarce

data case than the full data case. In the full-data case, the probabilistic approach was

slightly poorer than the possibilistic approach due to the fitting error. For the scarce-data

case, the poorer results of the possibilistic approach may reflect the fact that using the

lowest and highest values for the support of the membership function is too crude.

The standard deviation of the likelihood increases with the handicap value from 2 to 11, then decreases for nhand=15. This last decrease can be

explained by Fig. 7, the graph of likelihood of success by betting n versus n. For a

handicap of 15, the likelihood of success is almost flat near its maximum, that is, in the

region 22-28, while for the other handicap values it has a sharper decrease around the

maximum. Consequently, the variation of likelihood of success near the real maximum

(28) is smaller for a handicap of 15 than for the other handicap values. This smaller








variation explains why the standard deviation of the likelihood of success of the optima

obtained from sample is smaller for a handicap of 15 than for the other handicap values.



Table 8: The mean and standard deviation (computed over the 80 cases) of the likelihood
of success for the probabilistic optimum (shifted gamma and normal fit) and the
possibilistic optimum (triangular fit); sample size of five, inf_bid=0, inf_chal=0

        Likelihood of success        Likelihood of success        Likelihood of success
        for probabilistic optimum    for probabilistic optimum    for possibilistic optimum
        (shifted gamma fit)          (normal fit)                 (triangular fit)
nhand   Mean (of    Standard         Mean (of    Standard         Mean (of    Standard
        80 runs)    deviation        80 runs)    deviation        80 runs)    deviation
2       0.2850      0.0361           0.2822      0.0398           0.2896      0.0290
5       0.3924      0.0441           0.3924      0.0451           0.3917      0.0478
8       0.4995      0.0552           0.5031      0.0496           0.4921      0.0576
11      0.5967      0.0622           0.5993      0.0513           0.5875      0.0656
15      0.6997      0.0559           0.7069      0.0412           0.6937      0.0586

Scarce data - small sample size: influence of the inflation factor at different handicap
values

We repeat the fitting and optimization procedure for the case of the sample size of

five, but this time we vary both the inflation factor and the handicap. We present in Table

9 only the results for sample size of five and symmetrical inflation (same inflation factor

for both Bidder and Challenger). More details are given in Appendix E.

If we compare Tables 8 and 9, we see that inflation had a detrimental effect on the

probabilistic optimum. Indeed, for all but the handicap value of 2, the mean likelihood of

success of the optimum given by the inflated shifted gamma distribution is smaller than

the corresponding non-inflated one. The same effect is observed for the normal

distribution for all but a handicap of 11. For symmetrical inflation, little or no effect is

observed on the likelihood of success of the possibilistic optimum, as the possibilistic

optimum does not change with symmetrical inflation.








Table 9: The mean and standard deviation (computed over the 80 cases) of the likelihood
of success for the probabilistic optimum (shifted gamma and normal fit) and the
possibilistic optimum (triangular fit); sample size of five, symmetrical inflation
(Bidder inflation factor of 2, Challenger inflation factor of 2), various handicap
values

        Likelihood of success        Likelihood of success        Likelihood of success
        for probabilistic optimum    for probabilistic optimum    for possibilistic optimum
        (shifted gamma fit)          (normal fit)                 (triangular fit)
nhand   Mean (of    Standard         Mean (of    Standard         Mean (of    Standard
        80 runs)    deviation        80 runs)    deviation        80 runs)    deviation
2       0.2687      0.0520           0.2797      0.0466           0.2896      0.0290
5       0.3662      0.0591           0.3899      0.0496           0.3917      0.0478
8       0.4732      0.0758           0.4980      0.0554           0.4917      0.0575
11      0.5717      0.0849           0.6017      0.0481           0.5879      0.0645
15      0.6922      0.0637           0.7075      0.0360           0.6920      0.0606


Concluding Remarks

A technique for using existing data for testing the effectiveness of methods of

design against uncertainty has been developed. The technique requires two data sets that provide data on one property (here, domino stack height) for two groups. It then

becomes possible to create a decision problem that involves finding an optimum in terms

of a single decision variable. We expect that it will be possible to generalize the

technique to multiple decision variables, so that it could be used to test methods applied

to complex decision and optimum design problems under uncertainty. The example here

was used to simulate the decision on the guaranteed performance of a microprocessor that

a high-tech company must face when announcing a new product.

The utility of the experimental testing of methodologies was evidenced by several

results that surprised us, even though we have been exploring the methods we evaluated

for several years. These include the following:








* Small fitting errors in the probabilistic distributions were sufficient to give an
advantage to possibilistic decision-making, even though the metric of success was
probabilistic. This may indicate that these fitting errors deserve further study.

* In contrast, the probabilistic approach suffered less than the possibilistic approach
from small sample size. This may indicate that a better way of selecting membership
functions based on small samples may be needed.

* The process of magnification of standard deviation, which is commonly used with
small sample size, proved to be counterproductive. This may indicate that the
usefulness of magnification should be studied further analytically.

* Magnification of uncertainty had opposite effects on probability and
possibility. Inflation of uncertainty reduces the effect of a failure mode on
the probabilistic decision while it increases the effect of the mode on the
possibilistic decision. This result was shown to reflect the difference in
additivity between possibility and probability. We expect it to allow us to
create problems with extreme differences between probabilistic and
possibilistic decisions.

For this study we used data from domino experiments. However, it is worth

noting that there is a wealth of other data readily available for such simulations. For

example, student records of physical traits (such as height or weight) can be used instead

of domino heights. The effects of test scores, high-school grades, and other factors used

in admission can be matched to graduation rates. Such data can be used to compare

methods for design against uncertainty for more complex problems with several design

variables instead of the single variable considered in the present chapter.













CHAPTER 7
CONCLUSIONS

This dissertation started by comparing the theoretical basis of two methods for

design against uncertainty, namely probability theory and possibility theory. The fact that

the probability measure is additive while the possibility measure is sub-additive proves to be

the difference of most practical importance. A two-variable design problem is then used

to illustrate the differences. It is concluded that for design problems with two or more

modes of failure with very different costs (for example, losing the use of a car to an

empty gas tank versus an engine failure), probability theory divides resources in a more intuitive way than

possibility theory.

The dissertation continues with the description of simple experiments (building

towers of dominoes) that can be used to compare methods for decision under uncertainty.

Then it presents a methodology to increase the amount of information that can be drawn
from an experimental data set. The methodology is illustrated on the
Bidder-Challenger problem, a simulation of the decision a company that makes
microprocessors faces when setting a target for the announced speed of a new
chip. The simulations use the domino experimental data.

The utility of the experimental testing of methodologies was evidenced by several

results that surprised us, even though we have been exploring the methods we evaluated

for several years. These include the following:








* Small fitting errors in the probabilistic distributions were sufficient to give an

advantage to possibilistic decision-making, even though the metric of success was

probabilistic. This may indicate that these fitting errors deserve further study.


* In contrast, the probabilistic approach suffered less than the possibilistic approach

from small sample size. This may indicate that a better way of selecting membership

functions based on small samples may be needed.


* The process of magnification of standard deviation, which is commonly used with

small sample size, proved to be counterproductive. This may indicate that the

usefulness of magnification should be studied further analytically.


* Magnification of uncertainty had opposite effects on probability and
possibility. Inflation of uncertainty reduces the effect of a failure mode on
the probabilistic decision while it increases the effect of the mode on the
possibilistic decision. This result was shown to reflect the difference in
additivity between possibility and probability. We expect it to allow us to
create problems with extreme differences between probabilistic and
possibilistic decisions.













APPENDIX A
COMPUTATION OF TILT AND SWAY ANGLE OF DOMINOES FROM DOMINO
MEASUREMENTS



In order to compute the tilt and sway angles of a domino block, we measured the
height of the domino at the corners and at the middle of the lateral edges, as
shown in Fig. 11.


[Figure 11. Height measurements for a domino block]



Having these eight measurements and the width of the domino (considered, in the
following, equal to the average value b = 0.83 in), we compute the tilt angle γ
using the formula

$$\tan\gamma = \frac{CE}{b},$$







then we average γ over the values given by the Left, the Right and the Center
measurements. Above, CE is the difference between the height measurements at
the Top and at the Bottom.
If we consider that, in a cross-section, the lower part of the domino is an arc
of a circle of radius R and central angle 2ε, the sway angle ε can be found by
eliminating R between the equations

$$R\sin\varepsilon = AM, \qquad R(1-\cos\varepsilon) = MN.$$

We obtain (using AM = b/2, half the domino width)

$$\tan\varepsilon = \frac{4b\,MN}{b^2 - 4\,MN^2},$$

where MN can be expressed as a function of the height measurements at the Top,
Bottom and Center of the domino:

$$MN = FN - FM = FN - (BC + AD)/2.$$



[Figure 12. The height-width domino cross-section is a rounded trapezoid; the
lower part of the domino is an arc of a circle of radius R and central angle 2ε]


For each domino, we can compute the sway angle in the left and right sections.
The average of these two values is considered the sway angle for the domino.
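The two formulas can be coded directly. The sketch below is ours; the dummy Top/Bottom readings are for illustration only and are not taken from the measurement records.

```python
import math

B = 0.83  # average domino width, in inches, as above

def tilt_angle(h_top, h_bottom, b=B):
    # tan(gamma) = CE / b, with CE the Top-minus-Bottom height difference
    return math.atan((h_top - h_bottom) / b)

def sway_angle(mn, b=B):
    # tan(eps) = 4*b*MN / (b^2 - 4*MN^2), obtained by eliminating R above
    return math.atan(4.0 * b * mn / (b * b - 4.0 * mn * mn))

# Average gamma over the Left, Center and Right measurement lines, as the
# appendix prescribes (dummy (Top, Bottom) readings, in inches):
readings = [(0.376, 0.370), (0.375, 0.371), (0.374, 0.370)]
gamma = sum(tilt_angle(t, b_) for t, b_ in readings) / len(readings)
```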














APPENDIX B
COMPUTATION OF THE CENTER OF MASS OF A DOMINO BLOCK


We assume that our dominoes are rounded trapezoids, and that we know their
width b, height h, the tilt angle γ, and the sway angle ε.

We compute the coordinates of the center of mass of the domino by dividing it
into a triangle, a rectangle and a circular segment. Assuming that the length
of the domino is constant, and its distribution of mass is uniform, we have

$$x_G = \frac{x_T A_T + x_R A_R + x_S A_S}{A_T + A_R + A_S}, \qquad
y_G = \frac{y_T A_T + y_R A_R + y_S A_S}{A_T + A_R + A_S} \tag{A1}$$

where (x_T, y_T), (x_R, y_R), and (x_S, y_S) are the coordinates of the
centers of mass of the triangle, rectangle and segment, respectively; and
A_T, A_R, and A_S are the areas of the triangle, the rectangle and the
circular segment, respectively.

Because the tilt and sway angles γ and ε are small, we approximate the
quantities in (A1) as

$$x_T = \frac{b}{3}, \quad x_R = \frac{b}{2}, \quad x_S = \frac{b}{2},$$
$$y_T = h + \frac{b}{3}\gamma, \quad y_R = \frac{h}{2}, \quad y_S = O(b\varepsilon),$$
$$A_T = \frac{b^2}{2}\gamma, \quad A_R = bh, \quad A_S = -\frac{b^2}{6}\varepsilon. \tag{A2}$$

The segment area carries a negative sign because the segment is removed from
the cross-section; the product $y_S A_S$ is of second order in the small
angles and drops out of the results below.


Substituting now (A2) in (A1), we obtain the coordinates of the center of mass

$$x_G = \frac{b}{2}\,\frac{6h + 2b\gamma - b\varepsilon}{6h + 3b\gamma - b\varepsilon},
\qquad
y_G = \frac{h\,(h + b\gamma)}{2h + b\gamma - b\varepsilon/3},$$

and the area of the cross-section

$$A = bh + \frac{b^2\gamma}{2} - \frac{b^2\varepsilon}{6}.$$
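A direct transcription of these closed-form results, as a sketch under the small-angle assumptions above (the function name is ours):

```python
def domino_center_of_mass(b, h, gamma, eps):
    # First-order center of mass and cross-section area of the rounded
    # trapezoid; gamma (tilt) and eps (sway) are in radians.
    x_g = (b / 2.0) * (6*h + 2*b*gamma - b*eps) / (6*h + 3*b*gamma - b*eps)
    y_g = h * (h + b*gamma) / (2*h + b*gamma - b*eps / 3.0)
    area = b*h + b*b*gamma / 2.0 - b*b*eps / 6.0
    return x_g, y_g, area

# Sanity check: with zero tilt and sway the formulas reduce to the
# rectangle values (b/2, h/2, b*h).
print(domino_center_of_mass(0.83, 0.37, 0.0, 0.0))
```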













APPENDIX C
IDEALIZED MODEL OF STACKING PROCESS USED IN NUMERICAL
SIMULATION


Inputs to the idealized model are the nominal width and height of a domino
block, b and h, respectively. In the numerical simulation presented below, b
and h were set equal to their mean values, which were estimated from the
experiments. This is a reasonable assumption because the variation in the
height has hardly any effect on the stability of the column, and the variation
in the width is negligible compared to the translation error.

At step K, the construction error, s_K, tilt angle, γ_K, and curvature (sway)
angle, ε_K, are generated as random variables uniformly distributed over the
ranges [−s_max, s_max], [−γ_max, γ_max] and [−ε_max, ε_max], respectively. We
consider γ_max and ε_max fixed throughout the Monte Carlo simulation. To
account for the difference in the skills of different builders, we consider
that the maximum construction (translation) error varies from one stack to
another. Specifically, we assume that s_max is uniformly distributed over
[0, s], where s is the range of a particular group of builders, and it is
fixed for the entire Monte Carlo simulation.

Once the errors s_K, γ_K, and ε_K are generated, we can compute the position
of the corners and of the center of mass of the K-th block as described in
Appendix B. The positions of the corners are specified with respect to a
coordinate system whose origin is at the lower left corner of the K-th block
and whose axes are parallel to the horizontal edges of the lower face of this
block. Then we transform these coordinates into a set corresponding to a
system whose origin is at the lower left corner of the first block and whose
x- and y-axes are horizontal and vertical, respectively. We determine if the
column is stable by performing the following checks: a) compute the center of
mass of each sub-column (J:K), with 1 ≤ J ≤ K−1, assuming that there is no
sway, and check if its projection onto the horizontal plane is inside the
base of the J-th block; b) compute the position of the stack when swaying
takes place, and check the toppling criterion in the swayed position. If one
of the above two criteria is not satisfied, then the column topples and the
procedure returns K as the number of blocks at toppling. The number of
available blocks is finite, n_max, thus 1 ≤ K ≤ n_max. If the column is
stable even when all the blocks are used, the procedure returns n_max + 1 as
the number of blocks at toppling.

In the following, we describe how we calculate the contribution of the sway to
the total tilt angle. First we decide if the stack is going to sway to the
left or to the right by comparing the inclination of the upper face of the top
block with the horizontal. If that angle, Alfa(I), is positive, then the stack
will sway to the left; otherwise it will sway to the right. We specify these
two cases using a parameter sign, which assumes the value of +1 if the tower
sways to the left and the value of −1 if the tower sways to the right.

Next we can compute the total tilt in the swayed position, s_Alfa, from the
base to block J, for each J up to the current block I:

$$s\_Alfa(J) = \sum_{k=1}^{J}\gamma_k + sign\cdot\sum_{k=1}^{J}\varepsilon_k,
\qquad J = 1,\ldots,I.$$

Once the tilt angles in the swayed position are known, we repeat the
calculation of block position and the checks on toppling described above for
the swayed position.








[Figure 13: Stacking process flowchart. The procedure reads b, h, γ_max,
ε_max, s and n_max; generates the tilt, sway and translation errors for all
blocks I = 1..n_max; then adds blocks one at a time, computing the coordinates
of the corners and of the center of mass of each block, and checks the
toppling criterion for every sub-column, both without sway and in the swayed
position. If the horizontal projection of a sub-column's center of mass falls
outside the horizontal projection of the sub-column's base, the toppling
criterion is satisfied and the tower with I blocks topples; otherwise another
block is added until all n_max blocks are used.]
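To show the loop structure of Fig. 13 compactly, the sketch below keeps only the translation error and the no-sway centroid check; the tilt and sway bookkeeping of the full model is omitted, and the stability test (each sub-column balancing on the block beneath it) is our simplification of the criteria described above.

```python
import numpy as np

def stack_once(b, s, n_max, rng):
    """Return the number of blocks at toppling (n_max + 1 if the column
    is still standing when all blocks are used)."""
    s_max = rng.uniform(0.0, s)        # skill of this particular builder
    left = [0.0]                       # left-edge x of each placed block
    for k in range(1, n_max):
        # translation error of the newly placed block
        left.append(left[-1] + rng.uniform(-s_max, s_max))
        centroids = np.asarray(left) + b / 2.0
        # every sub-column j+1..k must balance on the top face of block j
        for j in range(k):
            x_cm = centroids[j + 1 : k + 1].mean()
            if not (left[j] <= x_cm <= left[j] + b):
                return k + 1           # the tower with k+1 blocks topples
    return n_max + 1

rng = np.random.default_rng(1)
heights = [stack_once(b=0.83, s=0.1, n_max=60, rng=rng) for _ in range(1000)]
```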














APPENDIX D
DEFINITION OF INFLATION FACTOR


Consider {x_1, ..., x_n}, a sample of values of a random variable X. Use of
small sample sizes (say, five) for estimating the variance of X may lead to
large statistical errors. It is important to estimate the error in the
variance and to adjust the variance to account for the error.


If the mean value of the population is unknown, then an unbiased estimator of
the variance of the variable is

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2, \tag{D1}$$

where $\bar{x}$ is the sample mean, $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$.
The variance of the above estimator is (Freund and Williams (1966), p. 151,
formula (F.7a))

$$\sigma_{s^2}^2 = \frac{\mu_4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)},$$

where $\mu_4$ is the fourth moment of the population about the mean,
$\mu_4 = \frac{1}{n}\sum (x_i - \mu)^4$, and $\sigma^4$ is the square of the
variance of the population.


The following equation is used to inflate the unbiased estimate of the
variance obtained from equation (D1):

$$s'^2 = s^2 + r\,\sigma_{s^2}
      = s^2 + r\sqrt{\frac{\mu_4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)}}, \tag{D2}$$

where r is called the inflation factor.

When both the mean and the standard deviation of the population are unknown,
we replace them with the corresponding estimates in Eq. (D2). Then the
variance of the estimated variance becomes

$$\hat{\sigma}_{s^2}^2
 = \frac{1}{n}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^4\right]
 - \frac{(n-3)}{n(n-1)}\left[\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2\right]^2,$$

and the inflated estimate of the variance becomes

$$s'^2 = s^2 + r\,\hat{\sigma}_{s^2}.$$
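The sample-based inflation can be written in a few lines. The sketch below is a minimal reading of Eq. (D2) with all moments replaced by their sample estimates; the function name and the guard against a negative estimated variance are ours.

```python
import numpy as np

def inflated_variance(x, r):
    """Unbiased sample variance inflated by r standard errors of s**2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)                       # unbiased estimate, Eq. (D1)
    m4 = np.mean((x - x.mean()) ** 4)        # sample fourth central moment
    var_s2 = m4 / n - (n - 3) * s2**2 / (n * (n - 1))
    return s2 + r * np.sqrt(max(var_s2, 0.0))

# For example, an illustrative sample of five with inflation factor r = 2:
print(inflated_variance([31, 28, 33, 30, 29], r=2))
```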













APPENDIX E
EFFECT OF INFLATION ON PROBABILISTIC OPTIMA AND THE POSSIBILISTIC
OPTIMA, FOR VARIOUS VALUES OF HANDICAP AND INFLATION FACTOR


In the main body of the dissertation we showed results for a handicap of five with

different values of inflation. This appendix provides complete results for all handicap

values. No results are presented for an inflation factor of 15, which was used in the main

body to facilitate seeing trends by exaggeration. The results shown in Tables 10-14 of

this appendix follow the same general trend observed in the main body. Inflating the

bidder distribution pushes the probabilistic optimum to higher towers and inflating the

challenger to lower ones. The effect on the possibilistic optimum is the reverse. We also

observe that for low handicap values the probabilistic design is more sensitive to

inflation, while for high handicap values the possibilistic design is more sensitive. It must

be noted, however, that an inflation factor of two is grotesquely high for the membership

function and is not a reasonable inflation factor when so much data is available.



Table 10: Effect of inflation on optimum height when all data is used,
handicap of two (n_hand = 2)

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
 factors                of success              of success              of success
 0 / 0        32        0.3067        33        0.3180        33        0.3180
 0 / 2        32        0.3067        33        0.3180        33        0.3180
 2 / 0        33        0.3180        34        0.2862        33        0.3180
 2 / 2        32        0.3067        34        0.2862        33        0.3180








Table 11: Effect of inflation on optimum height when all data is used,
handicap of five (n_hand = 5)

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
 factors                of success              of success              of success
 0 / 0        31        0.4107        32        0.4333        32        0.4333
 0 / 2        30        0.4240        31        0.4107        32        0.4333
 2 / 0        31        0.4107        32        0.4333        31        0.4107
 2 / 2        30        0.4240        32        0.4333        32        0.4333



Table 12: Effect of inflation on optimum height when all data is used,
handicap of eight (n_hand = 8)

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
 factors                of success              of success              of success
 0 / 0        29        0.5633        30        0.5360        31        0.5133
 0 / 2        29        0.5633        30        0.5360        32        0.4733
 2 / 0        29        0.5633        30        0.5360        29        0.5633
 2 / 2        29        0.5633        31        0.5133        31        0.5133



Table 13: Effect of inflation on optimum height when all data is used,
handicap of 11 (n_hand = 11)

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
 factors                of success              of success              of success
 0 / 0        27        0.6402        29        0.6153        29        0.6153
 0 / 2        27        0.6402        28        0.6533        31        0.5573
 2 / 0        27        0.6402        29        0.6153        27        0.6402
 2 / 2        27        0.6402        29        0.6153        29        0.6153



Table 14: Effect of inflation on optimum height when all data is used,
handicap of 15 (n_hand = 15)

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Optimum   Likelihood    Optimum   Likelihood    Optimum   Likelihood
 factors                of success              of success              of success
 0 / 0        26        0.7236        27        0.7262        28        0.7373
 0 / 2        26        0.7236        27        0.7262        30        0.6560
 2 / 0        25        0.7258        27        0.7262        25        0.7258
 2 / 2        25        0.7258        27        0.7262        28        0.7373








Tables 15-19 of this appendix show the effect of inflation with a sample size
of five. Samples were drawn at random and the process was repeated 80 times.
The tables show that inflation has a detrimental effect on the mean and
increases the standard deviation of the probabilistic results, particularly
for the lower values of the handicap. The effect on the possibilistic results
is milder, even though an inflation factor of two is extreme for the
membership function. From the tables it appears that inflating the Challenger
membership function damages the possibilistic results much more than inflating
the Bidder's.




Table 15: The mean and standard deviation (computed over the 80 cases) of the
likelihood of success for the probabilistic optimum (shifted gamma and normal
fit) and the possibilistic optimum (triangular fit); sample size of five,
handicap of two (n_hand = 2), various inflation factors

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Mean      Standard      Mean      Standard      Mean      Standard
 factors                deviation               deviation               deviation
 0 / 0        0.2850    0.0361        0.2822    0.0398        0.2896    0.0290
 0 / 2        0.2729    0.0476        0.2766    0.0509        0.2857    0.0386
 2 / 0        0.2740    0.0447        0.2784    0.0407        0.2850    0.0351
 2 / 2        0.2687    0.0520        0.2797    0.0466        0.2896    0.0290


Table 16: The mean and standard deviation (computed over the 80 cases) of the
likelihood of success for the probabilistic optimum (shifted gamma and normal
fit) and the possibilistic optimum (triangular fit); sample size of five,
handicap of five (n_hand = 5), various inflation factors

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Mean      Standard      Mean      Standard      Mean      Standard
 factors                deviation               deviation               deviation
 0 / 0        0.3924    0.0441        0.3924    0.0451        0.3917    0.0478
 0 / 2        0.3751    0.0557        0.3923    0.0497        0.3848    0.0556
 2 / 0        0.3837    0.0494        0.3883    0.0453        0.3896    0.0398
 2 / 2        0.3662    0.0591        0.3899    0.0496        0.3917    0.0478








Table 17: The mean and standard deviation (computed over the 80 cases) of the
likelihood of success for the probabilistic optimum (shifted gamma and normal
fit) and the possibilistic optimum (triangular fit); sample size of five,
handicap of eight (n_hand = 8), various inflation factors

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Mean      Standard      Mean      Standard      Mean      Standard
 factors                deviation               deviation               deviation
 0 / 0        0.4995    0.0552        0.5031    0.0496        0.4921    0.0576
 0 / 2        0.4865    0.0695        0.5046    0.0539        0.4744    0.0785
 2 / 0        0.4873    0.0675        0.4943    0.0537        0.5036    0.0543
 2 / 2        0.4732    0.0758        0.4980    0.0554        0.4917    0.0575



Table 18: The mean and standard deviation (computed over the 80 cases) of the
likelihood of success for the probabilistic optimum (shifted gamma and normal
fit) and the possibilistic optimum (triangular fit); sample size of five,
handicap of 11 (n_hand = 11), various inflation factors

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Mean      Standard      Mean      Standard      Mean      Standard
 factors                deviation               deviation               deviation
 0 / 0        0.5967    0.0622        0.5993    0.0513        0.5875    0.0656
 0 / 2        0.5882    0.0707        0.6003    0.0540        0.5537    0.0919
 2 / 0        0.5877    0.0718        0.5970    0.0478        0.5886    0.0647
 2 / 2        0.5717    0.0849        0.6017    0.0481        0.5879    0.0645



Table 19: The mean and standard deviation (computed over the 80 cases) of the
likelihood of success for the probabilistic optimum (shifted gamma and normal
fit) and the possibilistic optimum (triangular fit); sample size of five,
handicap of 15 (n_hand = 15), various inflation factors

 Bidder /     Probabilistic optimum   Probabilistic optimum   Possibilistic optimum
 Challenger   (shifted gamma fit)     (normal fit)            (triangular fit)
 inflation    Mean      Standard      Mean      Standard      Mean      Standard
 factors                deviation               deviation               deviation
 0 / 0        0.6997    0.0559        0.7069    0.0412        0.6937    0.0586
 0 / 2        0.6979    0.0561        0.7069    0.0417        0.6470    0.1013
 2 / 0        0.6935    0.0619        0.7065    0.0386        0.6824    0.0741
 2 / 2        0.6922    0.0637        0.7075    0.0360        0.6920    0.0606













APPENDIX F
DIFFERENCE BETWEEN THE SHIFTED GAMMA AND NORMAL CUMULATIVE
DISTRIBUTION FUNCTION FITTED TO EXPERIMENTAL DATA, WITH AND
WITHOUT INFLATION


As can be seen in Fig. 14, for the all-data case there is little difference
between the fit obtained with the normal distribution and the one obtained
with the shifted gamma function. Both functions fit the data very well.


[Figure 14. Cumulative distribution function (CDF) for the Challenger data
(bars), together with the fitted shifted gamma CDF (o) and the fitted normal
CDF (*); the confidence in the shifted gamma fit, computed from a χ² test with
8 intervals, is 89.69%, where 100% corresponds to a perfect, zero-error
approximation]
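The 89.69% figure in the caption comes from a χ² goodness-of-fit test with 8 intervals. Below is a hedged sketch of such a test; the mapping of the χ² statistic to the quoted "confidence" percentage is not spelled out here, so the p-value returned by this sketch is only one plausible reading, and tail mass outside the extreme bins is ignored.

```python
import numpy as np
from scipy import stats

def chi_square_fit(data, fitted_dist, n_bins=8, n_fitted_params=3):
    # Bin the observations, compare with the counts predicted by the
    # fitted CDF, and return the chi-square statistic and its p-value.
    counts, edges = np.histogram(data, bins=n_bins)
    expected = len(data) * np.diff(fitted_dist.cdf(edges))
    chi2 = np.sum((counts - expected) ** 2 / expected)
    dof = n_bins - 1 - n_fitted_params   # 3 parameters for a shifted gamma
    return chi2, stats.chi2.sf(chi2, dof)
```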


Even if we strongly inflate the distribution (as in Fig. 15), the difference between the fits
by the shifted gamma and normal CDF is still small.












[Figure 15. Cumulative distribution function (CDF) for the un-inflated
Challenger data (bars), together with the shifted gamma CDF (o) and the normal
CDF (*) fitted to the inflated data (inflation factor of 15)]














LIST OF REFERENCES


Allen, J. K., Krishnamachari, R. S., Masseta, J., Pearce, D., Rigby, D., and Mistree, F.
(1992). "Fuzzy Compromise: An Effective Way to Solve Hierarchical Design
Problems." Structural Optimization, 4, 115-120.
Ambady, N., and Rosenthal, R. (1992). "Thin slices of expressive behavior as predictors
of interpersonal consequences: A meta-analysis." Psychological Bulletin, 111,
256-274.
Ang, A., and Tang, W. (1984). Probability Concepts in Engineering Planning and
Design, John Wiley and Sons, New York.
Arakawa, M., and Yamakawa, H. (1995). "A study on structural optimum design based
on qualitative sensitivities." JSME International Journal Series C- Dynamics,
Control, 38(1) (March), 190-198.
Arakawa, M., and Yamakawa, H. (1997). "Derivation of Qualitative Sensitivities using
Existing Optimum Design Results of Simple Structures." JSME International
Journal Series C- Dynamics, Control, 40(2), 366-373.
Arnold, D. L., Grice, H. C., and Krewski, D. R. (1990). Handbook of in Vivo Toxicity
Testing, Academic Press.
Ben-Haim, Y. (1996). Robust Reliability, Springer Verlag.
Ben-Haim, Y., and Elishakoff, I. (1990). Convex Models of Uncertainty in Applied
Mechanics, Elsevier, Amsterdam.
Bhatnagar, R. K., and Kanal, L. N. (1986). "Handling Uncertain Information: A review
of Numeric and Non-numeric Methods." Uncertainty in Artificial Intelligence, L.
N. Kanal and J. F. Lemmer, eds., North -Holland, 3-26.
Chen, Q., Nikolaidis, E., Cudney, H., Rosca, R., and Haftka, R. T. (1998). "Comparison
of Probabilistic and Fuzzy Set-Based Methods for Designing under Uncertainty,"
40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and
Materials Conference and Exhibit, St. Louis, MO, 2860-2874.
Chiang, W.-L., and Dong, W.-M. (1987). "Dynamic Response of Structures with
Uncertain Parameters: A Comparative Study of Probabilistic and Fuzzy Set
Models." Probabilistic Engineering Mechanics, 2(2), 82-91.
Davis, F. D., Lohse, G. L., and Kottemann, J. E. (1994). "Harmful effects of seemingly
helpful information on forecasts of stock earnings." Journal of Economic
Psychology, 15, 253-267.
Dong, W. M., Chiang, W.-L., and Wong, F. S. (1987). "Propagation of uncertainties in
deterministic systems." Computers and Structures, 26(3), 415-423.
Dubois, D., and Prade, H. (1988). Possibility Theory: An approach to Computerized
Processing of Uncertainty, Plenum Press, New York.
Dubois, D., Prade, H., and Yager, R. R. (1997). Fuzzy Information Engineering: A
Guided Tour of Applications, John Wiley & Sons, New York.








Elishakoff, I., Lin, Y. K., and Zhu, L. P. (1994). Probabilistic and Convex Models of
Uncertainty in Acoustically Excited Structures, Elsevier, Amsterdam.
Elseifi, M. A., Gurdal, Z., and Nikolaidis, E. (1999). "Convex and Probabilistic Models
of Uncertainties in Geometric Imperfections of Stiffened Composite Panels."
AIAA Journal, 37(4), 468-474.
Ferson, S. (1996). "What Monte Carlo Methods Cannot Do." Human and Ecological Risk
Assessment, 2(4), 990-1007.
Fox, E. P., and Safie, F.(1992). "Statistical Characterization of Life Drivers for a
Probabilistic Analysis," AIAA/SAE/ASME/ASEE, 28th Joint Propulsion
Conference and Exhibit, Nashville, TN, AIAA-92-3414
French, S. (1986). Decision theory: An introduction to the Mathematics of Rationality,
Ellis Horwood Ltd., Chichester.
Freund, John E., and Williams, Frank J. (1966). Dictionary/Outline of Basic Statistics,
McGraw-Hill, New York, 151.
Gad, S., and Weil, C. S. (1986). Statistics and Experimental Design for Toxicologists,
The Telford Press.
Gigerenzer, G., and Richter, H. R. (1990). "Context effects and their interaction with
developments: Area judgements." Cognitive Development, 5, 235-264.
Gigerenzer, G., and Todd, P. M. (1999). Simple Heuristics that Make Us Smart, Oxford
University Press.
Giles, R. (1982). "Foundations for a Theory of Possibility." Fuzzy Information and
Decision Processes, North-Holland.
Haftka, R.T., Scott, E.P., and Cruz, J.R. (1998). "Optimization and Experiments: A
Survey," Applied Mechanics Reviews, 51(7), 435-448.
Kiureghian, A., and Liu, P.-L. (1985). "Structural Reliability under Incomplete
Probability Information." CEE-8205049, Division of Structural Engineering and
Mechanics, University of California at Berkeley, Berkeley, CA.
Klir, G. J., and Yuan, B. (1995). Fuzzy Sets and Fuzzy Logic-Theory and Applications,
Prentice Hall, Upper Saddle River.
Laviolette, M., and Seaman, J. W. (1994). "The Efficacy of Fuzzy Representations of
Uncertainty." IEEE Transactions on Fuzzy Systems, 2(1), 4-15.
Lombardi, M. (1998). "Optimization of uncertain structures using non-probabilistic
models." Computers and Structures, 67(1-3), 99-103.
Lombardi, M., and Haftka, R. T. (1998). "Anti-optimization technique for structural
design under load uncertainties." Computer Methods in Applied Mechanics and
Engineering, 157, 19-31.
Maglaras, G., Nikolaidis, E., Haftka, R. T., and Cudney, H. H. (1997). "Analytical-
Experimental Comparison of Probabilistic Methods and Fuzzy-Set based Methods
for Designing under Uncertainty." Structural Optimization, 13(2-3), 69-80.
Maglaras, G., Ponslet, P., Haftka, R. T., Nikolaidis, E., Sensharma, P., and Cudney, H. H.
(1996). "Analytical and Experimental Comparison of Probabilistic and
Deterministic Optimization." AIAA Journal, 34(7), 1512-1518.
McKenzie, C. R. M. (1994). "The accuracy of intuitive judgment strategies: Covariation
assessment and Bayesian inference." Cognitive Psychology, 26, 209-239.
Mukhtasor, R., Sharp, J. J., and Lye, L. M. (1999). "Uncertainty analysis of ocean
outfalls." Canadian Journal of Civil Engineering, 26(4), 434-444.








Nataf, A. (1962). "Détermination des distributions dont les marges sont données."
Comptes Rendus de l'Académie des Sciences, 255, 42-43.
Nikolaidis, E., Chen, Q., and Cudney, H. (1999). "Comparison of Bayesian and
Possibility-based Methods for Design Under Uncertainty." 13th ASCE
Engineering Mechanics Division Conference, ASCE, Baltimore, Maryland.
Nikolaidis, E., Hernandez, R. R., and Maglaras, G. (1995). "Comparison of Methods for
Reliability Assessment under Incomplete Information." AIAA/ASME/ASCE/
AHS/ASC, 36th Structures, Structural Dynamics and Materials Conference,
1346-1353.
Pantelidis, C. (1995). "Uncertainty-based Optimal Structural Design." National Science
Foundation Grant.
Papoulis, A. (1965). Probability, Random Variables and Stochastic Processes, McGraw-
Hill, New York.
Pedrycz, W., and Gomide, F. (1998). An Introduction to Fuzzy Sets: Analysis and Design,
The MIT Press, Cambridge, Massachusetts.
Qiu, Z., and Elishakoff, I. (1998). "Antioptimization of structures with large uncertain
but non-random parameters via interval analysis." Computer Methods in Applied
Mechanics and Engineering, 152, 362-372.
Rouvray, D. H. (1997). "The treatment of uncertainty in the sciences." Endeavour, 21(4),
154-158.
Sadananda, R. (1991). "A probabilistic approach to bone-fracture analysis." Journal of
Materials Research, 6(1), 202-206.
Shafer, G. (1986). "Probability Judgement in Artificial Intelligence." Uncertainty in
Artificial Intelligence, L. N. Kanal and J. F. Lemmer, eds., Elsevier, North-
Holland, 127-135.
Seifi, A., Ponnambalam, K., and Vlach, J. (1999). "Probabilistic Design of Integrated
Circuits with Correlated Input Parameters." IEEE Transactions on Computer-
Aided Design of Integrated Circuits and Systems, 18(8), 1214-1218.
Sinopoli, A. (1989). "Kinematic approach in the impact problem of rigid bodies." Applied
Mechanics Reviews, 42(11), S233-244.
Slowinski, R. (1998). Fuzzy Sets in Decision Analysis, Operations Research and
Statistics, The Handbooks of Fuzzy Sets, D. Dubois and H. Prade, eds., Kluwer
Academic Publishers, Boston.
Sugeno, M. (1977). "Fuzzy Measures and Fuzzy Intervals: A Survey." Fuzzy Automata
and Decision Processes, M. M. Gupta, G. N. Saridis, and B. R. Gaines, eds.,
North-Holland, Amsterdam and New York, 89-102.
Tchobanoglous, G., Loge, F., Darby, J., and Devries, M. (1996). "UV design:
Comparison of probabilistic and deterministic design approaches." Water Science
and Technology, 33(10-11), 251-260.
Thurston, D. L., and Carnahan, J. V. (1992). "Fuzzy Ratings and Utility Analysis in
Preliminary Design Evaluation of Multiple Attributes." Journal of Mechanical
Design, 114(4), 648-658.
Wilson, T. D., and Schooler, J. W. (1991). "Thinking too much: Introspection can reduce
the quality of preferences and decisions." Journal of Personality and Social
Psychology, 60, 181-192.
Winterstein, S. R. (1988). "Nonlinear Vibration Models for Extremes and Fatigue."
Journal of Engineering Mechanics, ASCE, 114, 1772-1790.






Zadeh, L. A. (1965). "Fuzzy sets." Information and Control, 8, 338-353.
Zadeh, L. A. (1978). "Fuzzy Sets as a basis for a theory of possibility." Fuzzy Sets and
Systems, 1, 3-28.
Zhou, W., Hong H. P., and Shang, J. Q. (1999). "Probabilistic Design Method of
Prefabricated Vertical Drains for Soil Improvement." Journal of Geotechnical
and Geoenvironmental Engineering, 125(8), 658-664.
Zimmermann, H. J. (1996). Fuzzy Set Theory, Kluwer Academic Publishers, Norwell,
Massachusetts.














BIOGRAPHICAL SKETCH

Born and raised in Bucharest, Romania, Raluca Ioana Rosca graduated in 1995
with a B.S. from the Mathematics Department of the University of Bucharest,
Bucharest, Romania, in the specialty of Mathematics-Mechanics. For her
graduation project, titled 'Stability of pre-stressed non-linear elastic
plates,' she was fortunate to have as advisor the late Dr. Eugen Soós. In 1996
she graduated from the same department with a 'Diploma of Further Studies'
(equivalent to an M.Sc.) in the specialty 'Fluid Mechanics and Solid
Mechanics.' The advisor of her M.S. thesis, titled 'Contact Problems in
Elasticity,' was Dr. Sanda Cleja-Tigoiu. Part of the thesis was written at the
University of Perpignan, Perpignan, France, during a three-month stay
supported by a European Community TEMPUS scholarship.

From September 1995 to December 1996, Ms. Rosca was employed as a researcher
at the Metallurgical Research Institute in Bucharest. In this capacity she won
a competitive 'Young Researcher Grant' awarded by the Romanian Ministry of
Research for 'Contact Problems in Elasticity.' From September to December
1996, she also taught Algebra and Calculus discussion classes at the
Politehnica University of Bucharest.

Deciding to pursue a Ph.D. degree in Engineering Mechanics, in January 1997 she

joined the Structural and Multidisciplinary Optimization group at the University of Florida.

While working as a research assistant under Dr. Haftka's supervision and as a

departmental teaching assistant, she was also an officer of the UF International Folkdance

Club and the Gainesville Romanian Student Association.








I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.


Raphael T. Haftka, Chair
Distinguished Professor of Aerospace
Engineering, Mechanics and
Engineering Science


I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.


Ulrich H. Kurzweg
Professor of Aerospace Engineering,
Mechanics and Engineering Science

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.


Andrew J. Kurdila
Professor of Aerospace Engineering,
Mechanics and Engineering Science

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.


William Hager
Professor of Mathematics

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.


Efstratios Nikolaidis
Professor of Mechanical, Industrial and
Manufacturing Engineering, University
of Toledo









This dissertation was submitted to the Graduate Faculty of the College of
Engineering and to the Graduate School and was accepted as partial fulfillment of the
requirements for the degree of Doctor of Philosophy.

December 2001
Dean, College of Engineering



Dean, Graduate School
















