
DISCRETE SYSTEM SENSITIVITY

AND

VARIABLE INCREMENT OPTIMAL SAMPLING








By

ARCHIE WAYNE BENNETT













A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA
December, 1966













ACKNOWLEDGMENTS


The author wishes to express his appreciation to
Dr. A. P. Sage, chairman of the supervisory committee,
whose guidance and advice helped make this dissertation
possible.

The author wishes to express his deepest gratitude

to his wife, Shirley. In addition to typing the disser-

tation, she has provided the inspiration needed for the

attainment of this degree.

He also wishes to acknowledge the support and encour-

agement of his parents.

Finally, the author would like to express his grati-

tude to the Graduate School of the University of Florida,

and to the National Aeronautics and Space Administration

for their financial assistance during the course of this

work.













TABLE OF CONTENTS

                                                        Page

ACKNOWLEDGMENTS  ii
LIST OF FIGURES  v
ABSTRACT  vii

Chapter
1. INTRODUCTION  1
     Sensitivity Analysis In Automatic Control  1
     Research Objectives  2
     Plan of the Dissertation  2
     Notation  3
     References  5

2. A SURVEY OF SENSITIVITY ANALYSIS IN OPTIMAL CONTROL  6
     Introduction and Background  6
     Sensitivity In Optimal Control  12
     Summary  21
     References  23

3. SENSITIVITY ANALYSIS FOR DISCRETE SYSTEMS  28
     Introduction  28
     Parameter Sensitivity of Discrete Systems  29
       Perturbation matrix approach  30
       Sensitivity vector-function approach  34
     Sampling Interval Sensitivity of Sampled Systems  36
       Global sampling interval sensitivity  36
       Local sampling interval sensitivity  38
     Example Problems  40
       Problem 1  40
       Problem 2  45
     Summary  50
     References  51

4. VARIABLE INCREMENT OPTIMAL SAMPLING  52
     Introduction  52
     Performance Criteria for Variable Increment Sampling  53
     Sampling Interval Sensitivity  54
       Local state variable sampling interval sensitivity  55
       Local error sampling interval sensitivity  56
     Variable Increment Sampling Based on Sensitivity  59
       Linear system simulation with an exact discrete model  60
       Linear system simulation with an approximate discrete model  67
       Nonlinear system simulation  70
     Calculation Procedure for System Simulation  72
     System Analysis for Variable Increment Sampling  74
     Variable Increment Sampling for Optimal Control  77
     Example Problems  79
       Problem 1  79
       Problem 2  88
       Problem 3  96
       Problem 4  101
       Problem 5  104
     Summary  109
     References  111

5. CONCLUSIONS AND RECOMMENDATIONS  113
     Conclusions  113
     Recommendations  116

BIOGRAPHICAL SKETCH  117














LIST OF FIGURES

Figure                                                  Page
3-1   First-Order System  42
3-2   Unit Step Response  42
3-3   Generalized First-Order Difference and Sensitivity  42
3-4   Exact and First-Order Change  44
3-5   Global and Local Sensitivity  44
3-6   Second-Order System  46
3-7   Unit Step Response  46
3-8   Generalized First-Order Change  48
3-9   Sensitivity Vector Components  48
3-10  Local Sensitivity  49
3-11  Global Sensitivity  49
4-1   Zero-Order Data Reconstruction  63
4-2   First-Order Data Reconstruction  63
4-3   Approximate Model Response  68
4-4   Calculation Procedure  68
4-5   System Diagram  75
4-6   First-Order System  80
4-7   Response and Sensitivity  80
4-8   Data Reconstruction Error: Zero-Order Hold  80
4-9   Sampling Interval: Zero-Order Hold  83
4-10  Zero-Order Reconstruction Error vs. Number of Intervals  83
4-11  Reconstruction Error: Fractional-Order Hold  85
4-12  Modeling Error of Approximate Model  85
4-13  Reconstruction and Modeling Error  87
4-14  Sampling Interval: Approximate Model  87
4-15  Second-Order System  90
4-16  Input and Response  90
4-17  Sensitivity  91
4-18  Reconstruction Error  91
4-19  Reconstruction Error  93
4-20  Sampling Interval  93
4-21  Reconstruction and Modeling Error  95
4-22  Sampling Intervals  95
4-23  Reactor Optimal Response  98
4-24  Sensitivity  98
4-25  Modeling Error  100
4-26  Sampling Interval  100
4-27  Response and Sensitivity  103
4-28  Modeling Error  103
4-29  Sampling Interval  103
4-30  Model for Van der Pol's Equation  106
4-31  Response of Van der Pol's Equation  106
4-32  Sensitivity for Van der Pol's Equation  106
4-33  Modeling Error  107
4-34  Sampling Interval  107







Abstract of Dissertation Presented to the Graduate Council
in Partial Fulfillment of the Requirements for the
Degree of Doctor of Philosophy


DISCRETE SYSTEM SENSITIVITY
AND
VARIABLE INCREMENT OPTIMAL SAMPLING

By

Archie Wayne Bennett

December, 1966



Chairman: Dr. A. P. Sage
Major Department: Electrical Engineering


Discrete system sensitivity is investigated and a

scheme presented for the optimal adjustment of the sam-

pling rate of a sampled-data system. As background for

the sensitivity study, a survey of the historical develop-

ment of sensitivity analysis is presented. The survey

includes recent developments toward a generalized ap-

proach to sensitivity analysis. Also included are some

of the applications of sensitivity to optimal control.

The investigation of discrete system sensitivity

includes variations in system parameters and sampling

intervals. Two approaches to parameter sensitivity are

used. One method makes use of a perturbation matrix and

is only applicable to linear systems. The general ap-

proach, using partial derivatives, is useful in linear

and nonlinear systems. Discrete sensitivity equations

are derived for both methods.







Sampling interval sensitivity is investigated for

global and local effects. For the global sensitivity

function, all sampling intervals are equal and undergo

the same variation. In local sampling interval sensi-

tivity, only one interval is assumed to change.

A method for the optimal adjustment of the sampling

rate of a sampled-data system is presented. The technique

uses local and error sampling interval sensitivity in

sampling interval formulas that constrain the magnitude

of the reconstruction and modeling errors of the discrete

system response. The resulting sample adjustment scheme

is suitable for real-time digital computer simulation.

The possibility of using the sampling interval adjust-

ment scheme for optimal control is discussed. Several

example problems are used to illustrate the technique.

In each case, variable increment sampling improved sam-

pling efficiency and computer utilization.









CHAPTER 1


INTRODUCTION


Sensitivity Analysis In Automatic Control


A general definition of sensitivity analysis states

that it is the development and use of equations for the

partial derivatives of the system response with respect

to system parameters [1].¹ Sensitivity was probably first

applied to control systems by Bode [2] in 1945. From 1945

until about 1960, only a few people were interested in

sensitivity theory and thus advances occurred rather

slowly. However, the advent of adaptive control and its

associated problems of identification and parameter adjust-

ment brought a renewed interest in sensitivity. Interest

in sensitivity was also stimulated by the need to know

more about system dynamics and the effect of perturbations.

In the last few years, considerable progress has been

made concerning the theory of sensitivity analysis. It

now includes a wide variety of perturbations and has been

extended to optimal control. It is also being used much

earlier in the design process. The future of sensitivity
appears to be very promising, particularly in discrete and
hybrid systems where work is just beginning.

¹ Bracketed numbers refer to the list of references
collected at the end of each chapter.



Research Objectives


This dissertation has three basic goals. The first

objective is to survey the existing applications of sensi-

tivity to control systems. The second aim is to develop

sensitivity procedures for discrete systems. The third

objective is to use the discrete sensitivity techniques

that have been developed to implement variable increment

optimal sampling.

The project is computer oriented and the aim is to

develop algorithms for various sensitivity functions that

are convenient for digital simulation. All of the tech-

niques will be illustrated and recommendations regarding

accuracy and convenience will be made.



Plan of the Dissertation


Chapter 1 is an introduction to the dissertation. It

includes a brief discussion of sensitivity analysis and

outlines the objectives of the research. Chapter 1 also

presents a summary of each chapter and gives the notation

to be used throughout the dissertation.

Chapter 2 surveys the existing work on sensitivity

analysis. A brief historical development is presented

first as an introduction and background for a more







detailed look at sensitivity in optimal control. The

various approaches that have been suggested are presented

and their uses and interrelations are discussed.

In Chapter 3, the application of sensitivity to

discrete systems is investigated. Discrete sensitivity

equations are developed for parameter and sampling interval

sensitivity. The chapter also includes example problems.

Chapter 4 illustrates the application of techniques

developed in Chapter 3. Sampling interval sensitivity

is used to implement variable increment optimal sampling.

The use of variable increment sampling in optimal control

is also discussed. Several example problems are also

included.

Chapter 5 contains the conclusions and recommendations

for discrete sensitivity and variable increment sampling.



Notation


Throughout this dissertation, vector-matrix notation

is used to represent system dynamics. Scalars are indi-

cated by lower case Greek or Roman letters. The only

exception is the performance index J of an optimal control

system. Column vectors are indicated by underlined, lower

case Greek or Roman letters; e.g., x. Capital Greek or

Roman letters are used to denote matrices.

Subscripts have a number of uses. Two subscripts

indicate the row and column of a component of a matrix;







e.g., x_{ij}. A single subscript is used to denote the com-
ponent of a vector; e.g., x_i. A single subscript also
denotes a particular column of a matrix; e.g., v_i. In
discrete systems, subscripts also indicate a function
during a particular sampling interval; e.g., t_k.

The arguments of functions are explicit, except in
two instances. The time argument on continuous functions
of time is often omitted for convenience; e.g., x(t) = x.
In the other instance, the sampling interval index (k) is
used in place of the time t_k.














REFERENCES


1. R. Tomović, "Modern sensitivity analysis," IEEE
   Convention Record, vol. 13, pt. 6, pp. 81-86,
   March 1965.

2. H. W. Bode, Network Analysis and Feedback Amplifier
   Design, D. Van Nostrand Co., Inc., New York, N.Y.,
   1945.













CHAPTER 2


A SURVEY OF SENSITIVITY ANALYSIS
IN OPTIMAL CONTROL


Introduction and Background


The sensitivity of a control system was perhaps first

mentioned by Bode [1] in his book published in 1945. His

definition of the sensitivity of the system gain, to vari-
ation in a parameter, was

S = \frac{dp/p}{dT/T} = \frac{dp}{dT}\,\frac{T}{p},    (1)

where T is the system gain and p is a parameter of the

system. For almost ten years after its introduction, there

was very little written on sensitivity.

Beginning in 1955, work began to appear in which Bode's

definition of sensitivity was inverted and related to other

system characteristics. The principal contributors to this

early work were Horowitz [2], Truxal [2,3] and Mason [4].

During the years 1957 through 1962, sensitivity re-

ceived increased attention. The pole-zero sensitivity was

studied [5,6,7], and sensitivity was related to root-locus

properties [8]. The use of sensitivity analysis in linear

system theory was presented by a number of people [9-15].

It was extended to sampled-data systems [16,17], and a







number of articles were written on the use of the sensi-

tivity coefficients for system identification and adaptive

control [18-22]. It was during this period that the need

for a more general approach to sensitivity became more

apparent. As a result, even more articles on the appli-

cation of sensitivity analysis began to appear.

A number of important developments took place in 1963.

Tomović published the first book devoted entirely to sensi-

tivity analysis [23]. The book contains a formulation of

theoretical and practical problems with the aim of encour-

aging more work on a general theory of sensitivity analysis.

The close relationships, as well as the distinct differences

between stability and sensitivity, are discussed.

Tomović considers a dynamic system with the mathemat-

ical model


F(\ddot{x}, \dot{x}, x, t, p) = 0,    (2)


where x is the state of the system and p is a single system

parameter. (Note that the nomenclature used by Tomović has

been changed for convenience.) The dynamic sensitivity co-

efficient is defined as the change in the state x due to

variations in the parameter. This is expressed as


v(t,p) = \frac{dx(t,p)}{dp}.    (3)

The sensitivity equation is obtained by first taking the

partial derivative of equation (2) with respect to p,


\frac{\partial F}{\partial \ddot{x}}\frac{\partial \ddot{x}}{\partial p} + \frac{\partial F}{\partial \dot{x}}\frac{\partial \dot{x}}{\partial p} + \frac{\partial F}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial F}{\partial p} = 0,    (4)

and then substituting the relations

\frac{\partial \ddot{x}}{\partial p} = \ddot{v}, \quad \frac{\partial \dot{x}}{\partial p} = \dot{v}, \quad \text{and} \quad \frac{\partial x}{\partial p} = v.

This yields the sensitivity equation

\frac{\partial F}{\partial \ddot{x}}\,\ddot{v} + \frac{\partial F}{\partial \dot{x}}\,\dot{v} + \frac{\partial F}{\partial x}\,v = -\frac{\partial F}{\partial p}.    (5)


The sensitivity equation is a linear differential

equation and can be solved by analytical means. However,

the book deals only with machine and experimental solution

methods and gives several examples. One method is the

simultaneous solution of the system, equation (2), and the

sensitivity equation, equation (5), on an analog computer.

This makes use of the connections between the two equations.

Tomović also notes that the structural similarity of the

two equations facilitates solution by a digital computer.

The book also includes a discussion on the solution

of the sensitivity equations by simulation. This method,

based on earlier work by Bihovski [24], assumes that the

dynamic system has been realized and the sensitivity co-

efficients are to be obtained by direct measurement.

Bihovski's work is also the basis for the structural

method of obtaining the sensitivity coefficients of linear

systems.

The structural method has been presented in a number

of places [25] and its primary advantage is that the

sensitivity coefficients for a number of parameter variations

are obtained simultaneously. Another feature is the







simplicity of the analog model. The only portion of the

system that must be simulated in detail is the portion

that is to be studied closely.

The sensitivity coefficient (as a function of time)

about a particular parameter p is useful, but a knowledge

of its values in parameter space is even more useful. Such

problems as structural sensitivity and the effect of the

variations of a number of parameters can be studied by

means of the sensitivity coefficients in parameter space.

The problem of inverse sensitivity is also discussed

in the book. In this formulation, the variations in x are

known, and it is desired to determine the corresponding

variations in the parameter p. The book also includes

brief discussions on adaptive control of invariant systems

and performance adjustment of dynamic systems.

In 1963, another book written by Horowitz [26] gave

considerable space to the subject of sensitivity. Horowitz

used sensitivity in presenting the design and synthesis of

linear multivariable continuous and sampled-data systems.

Another indication of the growing importance of sensitivity

was a session on it at a circuit and system theory confer-

ence [27]. It was also in 1963 that sensitivity was first

applied to optimal control systems [28,29]. The subject

of sensitivity in optimal control has received a great deal

of attention since its introduction and its development will

be discussed in the next section.







To see the development for a system more general than

that of equation (2), consider the system


\dot{x} = f[x(t), p], \qquad x(0) = x_0,    (6)


where x is an n dimensional state vector and p is an m

dimensional parameter vector. A sensitivity equation and

sensitivity coefficients for parameters other than those

that change system order and initial conditions can be

derived [25]. For small changes in p, a first-order

approximation for the corresponding change in x is

\Delta x = \sum_{j=1}^{m} v_j\,\Delta p_j,    (7)

where v_j is the sensitivity vector v_j = \partial x / \partial p_j. The
i-th component of v_j is the sensitivity coefficient

v_{ij} = \frac{\partial x_i}{\partial p_j}\bigg|_{\Delta p_1 = \cdots = \Delta p_m = 0},    (8)


which represents the variation in the i-th component of the
state vector x due to a change in the j-th component of the
parameter vector p. The sensitivity equation is

\dot{v}_{kj} = \sum_{i=1}^{n} \frac{\partial f_k}{\partial x_i}\,v_{ij} + \frac{\partial f_k}{\partial p_j},    (9)

where k = 1,..., n, j = 1,..., m, and the initial con-
ditions are v_{kj}(0) = 0.
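For a concrete illustration of equation (9), the short Python sketch below
propagates the sensitivity equation together with a scalar system
xdot = -p*x + 1 by Euler integration; the system, parameter value, and step
size are assumptions chosen only for this example.

```python
# Minimal sketch (assumed example): solve xdot = -p*x + 1 together with its
# sensitivity equation (9), vdot = (df/dx)*v + df/dp = -p*v - x, by Euler steps.
def simulate(p=1.0, dt=0.01, t_end=5.0):
    x, v = 0.0, 0.0              # x(0) = 0, v(0) = 0 per equation (9)
    t = 0.0
    history = []
    while t <= t_end:
        history.append((t, x, v))
        xdot = -p * x + 1.0      # the system equation (6) for this example
        vdot = -p * v - x        # sensitivity equation (9) for this system
        x += dt * xdot
        v += dt * vdot
        t += dt
    return history

if __name__ == "__main__":
    t, x, v = simulate()[-1]
    dp = 0.05                    # small parameter change
    print(f"x({t:.2f}) = {x:.4f}, v = {v:.4f}, predicted dx = {v * dp:.4f}")
```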







If the sensitivity to changes in initial conditions,

x(0), of equation (6) is desired, the initial conditions

for equation (9) must be v_{kj}(0) = 1 [25]. In order to

handle parameters that change the order of the system, the

system equations


\dot{x}_i = f_i(x_1,\ldots,x_n;\, x_{n+1},\ldots,x_{n+r};\, t) \qquad (i = 1,\ldots,n)
\lambda\dot{x}_{n+s} = f_{n+s}(x_1,\ldots,x_n;\, x_{n+1},\ldots,x_{n+r};\, t) \qquad (s = 1,\ldots,r),    (10)


are used. Note that \lambda changes the order of the system and
when \lambda = 0, equation (10) represents the original system.

The sensitivity equations are

\dot{v}_k = \sum_{i=1}^{n+r} \frac{\partial f_k}{\partial x_i}\,v_i \qquad (k = 1,\ldots,n)

\lambda\dot{v}_{n+s} = \sum_{i=1}^{n+r} \frac{\partial f_{n+s}}{\partial x_i}\,v_i - \frac{dx_{n+s}}{dt} \qquad (s = 1,\ldots,r)    (11)

with v_h(0) = 0, where h = 1,\ldots,n+r [25]. Several other

papers on parameters that change the order of the system

have been published [30,31].

It should be noted that the sensitivity functions

can be determined by other methods. For example, the

method of undetermined coefficients, difference equations,

and asymptotic expansions can be used and do not require

equation (6) to be regular in p [32]. They also allow a

much wider class of perturbations to be treated. Some of

the other types of perturbations that have recently been







incorporated into the framework of sensitivity are: fre-

quency of oscillation, time delay, sampling rate, inte-

gration step, and "amount" of nonlinearity [32].

The rapid progress has been made possible, in part,

by drawing from other areas such as network theory, error

analysis in analog computers, and the theory of differential

equations. These recent developments in sensitivity analy-

sis have been covered by several articles [25,32], and an

international symposium on sensitivity analysis [33]. Also,

sensitivity was the subject of several sessions at a con-

ference on circuit and system theory [34].



Sensitivity In Optimal Control


Since 1963, when sensitivity was first applied to

optimal control, a number of uses for sensitivity in

optimal control have been presented. The sensitivity of

such quantities as the performance index, the state vector,

and the terminal state have been studied for variations in

the plant parameters, the state vector, and the control

vector. The following section will present some of the

results of the work in this area.

Dorato [28] used sensitivity to determine the variation

in the performance index


J = \int_0^T F(x,u)\,dt,    (12)

due to changes in the plant parameters. The plant state






vector x(t) is related to the plant control vector u(t) by

the vector differential equation


\dot{x}(t) = f\{x(t), u(t), p\},    (13)

where p represents the set of plant parameters. For this

case, the optimal closed-loop control law is of the form


u(t) = Q\{x(t), p; t\}.    (14)

For changes in p from the nominal p_o, the change in the
performance index is

\Delta J = J(p) - J(p_o).

Dorato considers small variations in p and writes


\Delta J \approx dJ = \frac{\partial J}{\partial p_1}\,dp_1 + \cdots + \frac{\partial J}{\partial p_m}\,dp_m,    (15)

or

\Delta J \approx \left[\frac{\partial J}{\partial p}\right]^T dp,    (16)


where \partial J / \partial p is the performance index sensitivity vector. This
can be written as


\frac{\partial J}{\partial p} = \int_0^T \left[\frac{\partial x}{\partial p}\right]^T \frac{\partial F}{\partial x}\,dt,    (17)

where \partial x / \partial p is a matrix and is the solution to the sensitivity
equation [35]. Using this notation, equation (9) can be

rewritten as


\frac{d}{dt}\left[\frac{\partial x}{\partial p}\right] = \frac{\partial f}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial f}{\partial p}.    (18)
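One way to see equations (17) and (18) at work is the sketch below, which
numerically evaluates the performance-index sensitivity for an assumed scalar
plant xdot = -p*x + 1 with F = x^2; the plant, cost, and step size are
illustrative assumptions, not Dorato's example.

```python
# Sketch (assumed scalar example): dJ/dp = integral of (dF/dx)*(dx/dp) dt,
# equation (17), with the trajectory sensitivity propagated by equation (18).
def index_sensitivity(p=1.0, dt=0.001, T=5.0):
    x, s = 0.0, 0.0          # s approximates dx/dp, with s(0) = 0
    dJdp = 0.0
    t = 0.0
    while t < T:
        dJdp += (2.0 * x) * s * dt   # dF/dx = 2x for F = x^2
        xdot = -p * x + 1.0
        sdot = -p * s - x            # equation (18): (df/dx)*s + df/dp
        x += dt * xdot
        s += dt * sdot
        t += dt
    return dJdp

if __name__ == "__main__":
    p0, dp = 1.0, 1e-3
    print(f"first-order change in J for dp = {dp}: {index_sensitivity(p0) * dp:.6e}")
```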







Dorato suggested that sensitivity might be a useful

criterion to use in comparing open-loop and closed-loop

control. Cruz and Perkins have investigated this problem

and the more general problem of the sensitivity of multi-

variable systems [36-40]. They introduce a sensitivity

matrix S(s) which relates Eo(s) to Ec(s) by the relation-

ship

E_c(s) = S(s)\,E_o(s),    (19)

where E_c(s) is the output error due to plant parameter
variations of a closed-loop realization of the system.
The output error of the open-loop system is represented
by E_o(s). Note that in order to have a meaningful com-

parison, the output of both the open-loop and the closed-

loop systems must be equal if there are no plant parameter

variations.

Cruz and Perkins have related the sensitivity matrix

S(s) to a matrix generalization of return difference for

multivariable, linear, time-invariant feedback systems.

Also, for single-input, single-output systems, the sensi-

tivity matrix is compatible with the classical definition

of sensitivity. However, in the application of sensitivity

to optimal control systems, one of the most useful aspects

of their work has been the idea of a "comparative" sensi-

tivity.

Another definition of sensitivity that is of a "com-

parative" nature was introduced by Rohrer and Sobral [41].







In order to avoid having to completely specify the con-

ditions associated with the normal or "absolute" definitions

of sensitivity, they define a relative sensitivity


S_R[u(t),p] = \frac{J[u(t),p] - J[u_o(t),p]}{|J[u_o(t),p]|},    (20)

where J[u_o(t),p] represents the performance index asso-
ciated with the system when driven by the optimum control,
u_o(t), for the given set of plant parameters p. J[u(t),p]

is the performance index when the control, u(t), is not

the optimum for the given set of parameters p. They use

the calculus of variations to show that,


S_R[u(t),p] \approx \frac{\delta J[u_o(t),u(t),p]}{|J[u_o(t),p]|}    (21)

for u_o(t) interior to its allowable set U, and

S_R[u(t),p] \approx \frac{\delta^2 J[u_o(t),u(t),p]}{|J[u_o(t),p]|}    (22)

when u_o(t) is on the boundary of U. Note that relative

sensitivity approaches zero as a system approaches its

optimal performance. Also, it should be pointed out that

the relative sensitivity is a function of both the control

u(t) and the parameters p.

Rohrer and Sobral use relative sensitivity to define
a plant sensitivity


S_M[u(t)] = \max_{p \in P} S_R[u(t),p],    (23)

which is the maximum value of relative sensitivity for all







parameters of the allowable set P. This definition of

plant sensitivity could be used as a design criterion

and the optimization would seek the control u^o(t) which
minimizes the plant sensitivity S_M[u(t)]. Thus, if the
plant parameters were known to vary over a certain range,
the control u^o(t) would minimize the maximum deviation

from optimal performance for parameter variations over

the specified range.

Rohrer and Sobral have also defined a plant sensi-

tivity that is useful in systems in which the plant

parameters are given as random variables. This definition

is based on the expected value of the relative sensitivity

and can be stated as


S_E[u(t)] = E_{p \in P}\{S_R[u(t),p]\},    (24)

where "E" indicates the expected value. Here the opti-
mization would seek the control u^o(t) which minimizes
S_E[u(t)]. This would minimize the average or expected

deviation from the optimal performance.

A similar optimization procedure using a game theory

approach has been formulated [42,43,44]. This method is

useful since both large and small plant parameter variations

can be considered and the controller structure need not be

fixed [42]. Dorato and Kestenbaum [43] consider a fixed

controller structure with a controller parameter p_c. The
plant dynamics are given by

\dot{x}(t) = f\{x(t), u(t), p_p\},    (25)







where p_p is the plant parameter. If p_c = p_p, then the
controller generates the control

u(t) = Q\{x(t), p_c; t\},    (26)


which is optimal and the performance index

J = \int_0^T F(x,u)\,dt,    (27)


is minimized. In the problem they formulate, all that is
known about p_p is that it lies somewhere in the range
p_1 \le p_p \le p_2, and, therefore, the performance index is a
function of p_p and p_c. The object of the optimization
is to determine the "best" value of p_c.

Since p_p is known only to range from p_1 to p_2, the
desirable controller parameter p_c^o should keep the per-
formance index equal to or less than some value, J_o, for
all values of p_p in its expected range. This can be
expressed as

J(p_c^o, p_p) \le J_o \quad \text{for} \quad p_1 \le p_p \le p_2.    (28)

Also, the inequality

J_o \le J(p_c, p_p^o), \quad \text{for} \quad p_1 \le p_c \le p_2    (29)


must also hold if J_o is to be as low as possible.

In the game theory interpretation, J(p_c, p_p) is the
value of the game or the "pay-off function" and the
players or "antagonists" are p_p and p_c. The pair
(p_c^o, p_p^o) is an optimal or pure strategy and the type of







game is infinite or continuous. The conditions for optimal

strategies to be pure are [43],


\min_{p_c}\max_{p_p} J(p_c,p_p) = \max_{p_p}\min_{p_c} J(p_c,p_p)    (30)

and the existence of numbers p_c^o, p_p^o, and J_o such that

J(p_c^o, p_p) \le J_o \le J(p_c, p_p^o).    (31)
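A minimal sketch of the minimax selection implied by equations (28)-(31)
follows, using an assumed pay-off function J(p_c, p_p) and a simple grid
search; the cost surface and parameter range are placeholders, not taken from
the cited work.

```python
# Sketch (assumed cost surface): pick the controller parameter pc that
# minimizes the worst-case cost over the plant-parameter range, in the
# spirit of equations (28)-(31).
def J(pc, pp):
    return (pc - pp) ** 2 + 0.1 * pp          # assumed pay-off function

def minimax(pc_grid, pp_grid):
    best_pc, best_worst = None, float("inf")
    for pc in pc_grid:
        worst = max(J(pc, pp) for pp in pp_grid)   # max over plant parameter
        if worst < best_worst:
            best_pc, best_worst = pc, worst        # min over controller parameter
    return best_pc, best_worst

if __name__ == "__main__":
    grid = [0.5 + 0.01 * i for i in range(101)]    # assumed range p1 = 0.5 ... p2 = 1.5
    pc_opt, J0 = minimax(grid, grid)
    print(f"pc = {pc_opt:.2f}, guaranteed cost J0 = {J0:.4f}")
```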


Recently, Pagurek [45] has presented some interesting

results for linear systems. He formulates the sensitivity

of the performance index into the structure of the

Hamilton-Jacobi equation and shows that the open- and

closed-loop performance index sensitivity functions are

the same. This approach is useful in that sensitivity

analysis can be carried out by the same technique used

to obtain the optimal control law. His work has been

extended to the nonlinear case by Witsenhausen [46].

Siljak and Dorf [47] point out that most applications

of sensitivity to optimal control do not use sensitivity

as a criterion for determining the optimal control, but

determine sensitivity after the optimal control has been

synthesized. In order to avoid this, they use the time-

domain sensitivity technique [19,48] and introduce a

general index of optimality which includes both sensitivity

and performance characteristics. Thus, the optimal

control synthesized satisfies sensitivity and optimality

requirements simultaneously.








In order to include sensitivity in a general index,

the usual index of performance


J = \int_0^T F(x, u, t)\,dt,    (32)


is altered to also include the sensitivity functions.

The resulting generalized index of optimality is


J = \int_0^T G(x, u, v_1, \ldots, v_m, t)\,dt,    (33)

where the sensitivity functions are v_i = \partial x / \partial p_i and p_i is
the i-th variable parameter. The sensitivity functions,
v_i, should appear in the index as squares or magnitudes

to avoid the canceling effects of a change in sign. Also,

the authors indicate the usefulness of a weighting function

which would allow certain sensitivity functions to receive

different emphasis. The resulting control law uo(t)

optimizes both sensitivity and performance.

In addition to the sensitivity of the performance

index, the sensitivity of the terminal state of an optimal

control system has been studied. Gavrilović, Petrović,

and Siljak [29] investigated it by the adjoint method in

one of the first articles applying sensitivity to optimal

control. More recently, Holtzman and Horing [49] used

variational techniques to study the sensitivity of the

terminal conditions of both open- and closed-loop optimal

systems. An important part of their work is the inclusion

of sensitivity prior to optimization for the open-loop








system. This allows the sensitivity of terminal con-

ditions to be prespecified or constrained. The results

of their work confirm that the closed-loop configuration

has superior sensitivity characteristics.

The sensitivity to variations in the plant parameters

has been the object of most of the work on sensitivity

in optimal control. However, Bélanger [50] has investi-

gated the effects of variations in the control. He points

out that this is useful in suboptimal control and also in

studies of the sensitivity of the computation of the de-

sired control. Control variations have also been studied

by Gavrilović, Petrović, and Siljak [29]. They consider

variations in control by changes in the initial conditions

of the adjoint system instead of letting the control vary

directly.

Bélanger considers both "weak" and "strong" variations

in the control. For the weak or "continual" variation,

the actual control differs from the desired control by an

infinitesimal amount \epsilon\eta(t). Tolerances on the control can

be set by limiting \eta(t). The strong or "intermittent"

variation causes actual control to differ from the desired

control by large amounts, but only during infinitesimal

intervals of time. The continual variation is applicable

when the control is continuous and the intermittent var-

iation is useful for "bang-bang" control.

There are two effects of a variation in control. One

result of a variation in the control would be the failure








to hit a desired target or terminal state. The other

effect would be variations in the value of the performance

index. Bélanger has considered both effects for the case

in which the actual target has been replaced by an ideal

target. This is useful since control tolerances necessary

to hit the actual target can be determined and variation

in cost calculated. For example, if the target were a

small sphere, the control to hit a point at its center

would be calculated and then tolerances determined for

this control to insure that the sphere will always be

reached.



Summary


The first portion of this survey presents a historical

development of sensitivity analysis in automatic control.

The various sensitivity functions, vectors, and coeffi-

cients are defined and several methods for their calcula-

tion are discussed. Also, some of the recent contributions

toward a generalized approach to sensitivity analysis are

presented. With this background information, the appli-

cation of sensitivity to optimal control is discussed next.

A variety of uses of sensitivity in optimal control

has been formulated in recent years. The discussion in-

cludes the sensitivity of the performance index, the state

vector and the terminal state for variations in plant

parameters, controller parameters, the state vector, and

the control vector. Also included are some of the







optimization schemes which make use of sensitivity to

optimize system performance. The use of sensitivity in

establishing the control tolerance necessary for target

states is discussed.

In looking over the variety of methods and defi-

nitions that has been used for the sensitivity analysis

of optimal systems, it is obvious that no single method

is completely satisfactory. The subject is still in the

early stages of its development and there is much need

of a general, comprehensive approach. There is a great

deal of interest in this subject and no doubt considerable

progress will be made in the near future.














REFERENCES


1. H. W. Bode, Network Analysis and Feedback Amplifier
   Design, D. Van Nostrand Co., Inc., New York, N.Y., 1945.


2. J. G. Truxal and I. M. Horowitz, "Sensitivity con-
siderations in active network synthesis," Proc.
Midwest Symposium on Circuit Theory, Michigan State
University, East Lansing, Michigan, December 1956.

3. J. G. Truxal, Automatic Feedback Control System
Synthesis, McGraw-Hill Book Co., Inc., New York, N.Y.,
1955.

4. S. J. Mason, "Feedback theory: some further prop-
erties of signal flow graphs," Proc. IRE, vol. 44,
pp. 920-926, July 1956.

5. R. Huang, "The sensitivity of the poles of linear
closed-loop systems," Trans. AIEE (Applications and
Industry), pt. 2, vol. 77, pp. 182-187, September 1958.

6. F. F. Kuo, "Pole-zero sensitivity in network func-
tions," IRE Trans. on Circuit Theory, vol. CT-5,
pp. 372, December 1958.

7. Gh. Cartianu and D. Poenaru, "Variation of transfer
functions with the modification of pole location,"
IRE Trans. on Circuit Theory, vol. CT-9, pp. 98-99,
March 1962.

8. H. Ur, "Root locus properties and sensitivity rela-
tions in control systems," IRE Trans. on Automatic
Control, vol. AC-5, pp. 57-65, January 1960.

9. W. Lynch, "A formulation of the sensitivity function,"
IRE Trans. on Circuit Theory (Correspondence), vol.
CT-4, p. 289, September 1957.

10. I. M. Horowitz, "Fundamental theory of automatic linear
feedback control systems," IRE Trans. on Automatic
Control, vol. AC-4, pp. 5-19, December 1959.

11. I. M. Horowitz, "Synthesis of linear, multivariable
feedback control systems," IRE Trans. on Automatic
Control, vol. AC-5, pp. 94-105, June 1960.








12. W. M. Mazer, "Specification of the linear feedback
system sensitivity function," IRE Trans. on Automatic
Control, vol. AC-5, pp. 85-93, June 1960.

13. S. L. Hakimi and J. B. Cruz, "Measure of sensitivity
for linear systems with large multiple parameter
variations," IRE WESCON Cony. Record, pt. 2, pp. 109-
115, August 1960.

14. A. J. Goldstein and F. F. Kuo, "Multiparameter sensitiv-
ity" IRE Trans. on Circuit Theory, vol. CT-8, pp. 177-
178, June 1961.

15. I. M. Horowitz, "Design of multiple-loop feedback control
systems,"IRE Trans. on Automatic Control, vol. AC-7,
pp. 47-57, April 1962.

16. J. B. Cruz, Jr., "Sensitivity considerations for time-
varying sampled-data feedback systems," IRE Trans. on
Automatic Control, vol. AC-6, pp. 228-236, May 1961.

17. I. M. Horowitz, "The sensitivity problem in sampled-
data feedback systems," IRE Trans. on Automatic Control,
vol. AC-6, pp. 251-259, September 1961.

18. M. Margolis and C. T. Leondes, "A parameter tracking
servo for adaptive control systems," IRE Trans. on Auto-
matic Control, vol. AC-4, pp. 100-111, November 1959.

19. H. F. Meissinger, "The use of parameter influence co-
efficients in computer analysis of dynamic systems,"
Proc. Western Joint Computer Conf., San Francisco,
California, pp. 181-192, May 1960.

20. M. Margolis and C. T. Leondes, "On the theory of adaptive
control systems: method of teaching models," Proc. First
Int'l. Congress of IFAC, vol. 2, pp. 555-563, 1960.

21. W. Brunner, "An iterative procedure for model building
and boundary value problems," Proc. of the Western Joint
Computer Conf., Los Angeles, California, pp. 519-525,
May 1961.

22. P. E. Fleisher, "Optimum design of passive-adaptive,
linear feedback systems with varying plants," IRE Trans.
on Automatic Control, vol. AC-7, pp. 117-128, March 1962.

23. R. Tomović, Sensitivity Analysis of Dynamic Systems,
McGraw-Hill Book Co., Inc., New York, N.Y., 1963.

24. M. L. Bihovski, "Dynamic accuracy of electrical and
mechanical circuits," Akademia Nauk USSR, Moscow, 1958.
(in Russian)








25. P. V. Kokotovic and R. S. Rutman, "Sensitivity of auto-
    matic control systems (survey)," Automation and Remote
    Control, vol. XXVI, pp. 727-748, Moscow, April 1965.

26. I. M. Horowitz, Synthesis of Feedback Systems, Academic
Press, Inc., New York, N.Y., 1963.

27. Proc. First Annual Allerton Conference on Circuit and
System Theory, University of Illinois, Allerton House,
Monticello, Illinois, November 1963.

28. P. Dorato, "On sensitivity in optimal control systems,"
IEEE Trans. on Automatic Control (Correspondence),
vol. AC-8, pp. 256-257, July 1963.

29. M. Gavrilović, R. Petrović, and D. Siljak, "Adjoint
method in the sensitivity analysis of optimal systems,"
J. Franklin Inst., vol. 276, pp. 26-38, July 1963.

30. R. Tomović, N. S. Parezanović, and M. J. Merritt,
"Sensitivity of dynamic systems to parameters which
increase the order of mathematical models," IEEE Trans.
on Electronic Computers, vol. EC-14, pp. 890-897,
December 1965.

31. P. Kokotovic and R. Rutman, "Sensitivity functions with
respect to a small parameter," Proc. Int'l. Symposium
on Sensitivity Analysis, Dubrovnik, Yugoslavia, September
1964.

32. R. Tomović, "Modern sensitivity analysis," IEEE Conven-
tion Record, vol. 13, pt. 6, pp. 81-86, March 1965.

33. Proc. Int'l. Symposium on Sensitivity Analysis,
Dubrovnik, Yugoslavia, September 1964.

34. Proc. Third Annual Allerton Conference on Circuit and
System Theory, University of Illinois, Allerton House,
Monticello, Illinois, October 1965.

35. S.S.L. Chang, Synthesis of Optimum Control Systems,
McGraw-Hill Book Co., Inc., New York, N.Y., 1961.

36. J. B. Cruz, Jr., and W. R. Perkins, "A new approach to
the sensitivity problem in multivariable feedback
systems design," IEEE Trans. on Automatic Control, vol.
AC-9, pp. 216-223, July 1964.

37. J. B. Cruz, Jr., and W. R. Perkins, "The role of sensi-
tivity in the design of multivariable linear systems,"
Proc. of the Nat'l. Elec. Conf., vol. 20, pp. 742-745,
1964.








38. W. R. Perkins and J. B. Cruz, Jr., "Sensitivity opera-
tors for linear time varying systems," Proc. Int'l.
Symposium on Sensitivity Analysis, Dubrovnik, Yugo-
slavia, September 1964.

39. W. R. Perkins and J. B. Cruz, Jr., "The parameter vari-
ation problem in state feedback control systems," J. of
Basic Engineering (Trans. ASME, Series D), vol. 87,
pp. 120-124, March 1965.

40. J. B. Cruz, Jr., and W. R. Perkins, "Sensitivity com-
parison of open-loop and closed-loop systems," Proc.
Third Allerton Conf. on Circuit and System Theory,
University of Illinois, pp. 607-612, 1965.

41. R. A. Rohrer and M. Sobral, Jr., "Sensitivity considera-
tions in optimal system design," IEEE Trans. on Auto-
matic Control, vol. AC-10, pp. 45-48, January 1965.

42. R. F. Drenick and P. Dorato, "Optimality, insensitivity
and game theory," Proc. Int'l. Symposium on Sensitivity
Analysis, Dubrovnik, Yugoslavia, September 1964.

43. P. Dorato and A. Kestenbaum, "Application of game theory
to the sensitivity design of systems with optimal con-
troller structures," Proc. Third Allerton Conf. on
Circuit and System Theory, University of Illinois, pp.
35-45, October 1965.

44. A. Kestenbaum, "Optimal control sensitivity design,"
Research Rept. PIB MRI 1266-65, Polytechnic Institute
of Brooklyn, N.Y., 1965.

45. B. Pagurek, "Sensitivity of the performance of optimal
control systems to plant parameter variations," IEEE
Trans. on Automatic Control, vol. AC-10, pp. 178-180,
April 1965.

46. H. S. Witsenhausen, "On the sensitivity of optimal
control systems," IEEE Trans. on Automatic Control
(Correspondence), vol. AC-10, pp. 495-496, October 1965.

47. D. D. Siljak and R. C. Dorf, "On the minimization of
sensitivity in optimal control systems," Proc. Third
Allerton Conf. on Circuit and System Theory, University
of Illinois, pp. 225-229, October 1965.

48. R. C. Dorf, "System sensitivity in the time domain,"
Proc. Third Allerton Conf. on Circuit and System Theory,
University of Illinois, pp. 46-62, October 1965.







49. J. M. Holtzman and S. Horing, "The sensitivity of
terminal conditions of optimal control systems to pa-
rameter variations," IEEE Trans. on Automatic Control,
vol. AC-10, pp. 420-426, October 1965.

50. P. R. Bélanger, "Some aspects of control tolerances
and first-order sensitivity in optimal control systems,"
IEEE Trans. on Automatic Control, vol. AC-11, pp. 77-83,
January 1966.













CHAPTER 3


SENSITIVITY ANALYSIS FOR DISCRETE SYSTEMS


Introduction


The utility of sensitivity analysis in discrete

systems has been recognized [1] and a few papers have

been presented [2,3,4]. The early work on sensitivity

was influenced by the analog computer, and as a result,

most methods developed to determine sensitivity "factors"

are in the form of solutions of continuous equations.

In view of the widespread use of discrete and hybrid

systems, a discrete approach to sensitivity analysis

would be useful. It would certainly be beneficial for

the sensitivity of discrete systems and might also be

useful in analysis of continuous systems.

The existing continuous sensitivity equations can

be solved on a digital computer. However, the discrete

version should be more convenient since an increasing

amount of system simulation is being done on digital

computers.

In developing a discrete approach to sensitivity,

discrete versions of the existing continuous factors

and equations will be formulated. Also, factors that

are only useful for discrete systems will be investigated.








Parameter Sensitivity of Discrete Systems


One of the most useful sensitivity factors is the

sensitivity of the state variable to changes in system

parameters. In the following discussion, state-parameter

sensitivity is developed by two methods. The change in

state is first expressed in terms of a perturbation matrix

and then developed by means of a sensitivity-vector

function.

The most general representation of a lumped discrete

system is given by the equation


x(k) = f\{x(k-1), u(k-1), p, T_{k-1}\},    (1)

where x(k) is an n-dimensional state vector, u(k) is an
m-dimensional control vector, and p is a constant r-dimen-
sional parameter vector. The sampling interval T_k is the

time between two consecutive sampling instants,


T_k = t_{k+1} - t_k.    (2)

For convenience, the arguments of x and u will not include

the time t explicitly. Instead, time will be designated

by the particular sampling interval. For example,
x(t_k) = x(k).

If the discrete system is linear, it can be repre-

sented by the equation

x(k) = A(k-1)\,x(k-1) + B(k-1)\,u(k-1),    (3)

where A(k-1) and B(k-1) are A(t_{k-1}, t_k) and B(t_{k-1}, t_k)
during the sampling interval T_{k-1} = (t_k - t_{k-1}).







For linear systems, this is often more convenient than

the form given in equation (1). If the system of equation

(3) is also stationary, the matrices A and B are constant;


A(j) = A(i) = A and B(j) = B(i) = B.


In the following developments, some of the methods

are only applicable for linear systems, and equation (3)

will be used.



Perturbation matrix approach


One method for determining the variation in the

state x(k) for changes in system parameters makes use

of a perturbation matrix. This method is applicable

only to linear systems and will be developed from

equation (3) and its solution, which is [5],

x(k) = \Phi(k,0)\,x(0) + \sum_{j=0}^{k-1} \Phi(k,j+1)\,B(j)\,u(j),    (4)

where the transition matrix is given by

\Phi(k,j) = \prod_{i=j}^{k-1} A(i) = A(k-1)A(k-2)\cdots A(j+1)A(j).    (5)

If the system is stationary, the transition matrix is


\Phi(k) = A \cdot A \cdots A = A^k,    (6)


and the solution vector can be written as

x(k) = \Phi(k)\,x(0) + \sum_{j=0}^{k-1} \Phi(j)\,B\,u(k-j-1).    (7)
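As a quick check of the notation, the sketch below compares the recursion of
equation (3) with the closed-form solution of equation (7) for an assumed
stationary system; the matrices are test data, not taken from the example
problems.

```python
# Sketch comparing the recursion (3) with the closed-form solution (7) for a
# stationary linear discrete system; the 2x2 matrices are assumed test data.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])
x0 = np.array([[1.0], [0.0]])
u = lambda k: np.array([[1.0]])              # unit step input

def by_recursion(k):
    x = x0.copy()
    for i in range(k):
        x = A @ x + B @ u(i)                 # equation (3)
    return x

def by_transition_matrix(k):
    x = np.linalg.matrix_power(A, k) @ x0    # Phi(k) x(0)
    for j in range(k):                       # summation in equation (7)
        x = x + np.linalg.matrix_power(A, j) @ B @ u(k - j - 1)
    return x

if __name__ == "__main__":
    print(by_recursion(10).ravel(), by_transition_matrix(10).ravel())
```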







The derivation will be carried out for a stationary

system. The results will also be given for time-varying

systems. To determine the effects of a change in param-

eters, let the matrix A of a stationary system change by
an amount \epsilon C. Equation (3) can be written for the perturbed
system [6],

\tilde{x}(k) = [A + \epsilon C]\,\tilde{x}(k-1) + B\,u(k-1).    (8)


If equation (8) is written for successive values of k,

and certain substitutions made, the system transition

equation can be derived for \tilde{x}(k) in terms of the control
over the interval, the initial value x(0), and the system
parameters. The resulting equation is

\tilde{x}(k) = \Phi(k)\,x(0) + \sum_{j=0}^{k-1} \Phi(j)\,B\,u(k-j-1)
       + \epsilon \sum_{j=0}^{k-1} \Phi(k-j-1)\,C\,\Phi(j)\,x(0)
       + \epsilon \sum_{i=0}^{k-2} \sum_{j=0}^{k-i-2} \Phi(k-j-i-2)\,C\,\Phi(j)\,B\,u(i) + O(\epsilon^2).    (9)


The change in x(k), \Delta x(k) = \tilde{x}(k) - x(k), can be

determined by subtracting equation (7) from equation (9).

The first-order change in x(k) is obtained by neglecting

the higher order terms. For a stationary linear system,







the first-order change is

\Delta x(k) = \epsilon \sum_{j=0}^{k-1} \Phi(k-j-1)\,C\,\Phi(j)\,x(0)
       + \epsilon \sum_{i=0}^{k-2} \sum_{j=0}^{k-i-2} \Phi(k-j-i-2)\,C\,\Phi(j)\,B\,u(i).    (10)

A similar equation gives the first-order change in x(k)

for a time-varying linear system.

\Delta x(k) = \epsilon \sum_{j=0}^{k-1} \Phi(k,j+1)\,C(j)\,\Phi(j,0)\,x(0)    (11)
       + \epsilon \sum_{i=0}^{k-2} \sum_{j=0}^{k-i-2} \Phi(k,j+i+2)\,C(j+i+1)\,\Phi(j+i+1,i+1)\,B(i)\,u(i).


The first-order effects of \epsilon C can also be derived in
the form of a difference equation

\Delta x(k) = A(k-1)\,\Delta x(k-1) + \epsilon C(k-1)\,x(k-1),    (12)


with \Delta x(0) = 0. Thus, the first-order change at each

sampling instant can be calculated from the state and the

first-order error at the preceding sampling instant.

An exact difference equation for the change in a

state x(k) due to the perturbations \epsilon C(k) is

\Delta x(k) = [A(k-1) + \epsilon C(k-1)]\,\Delta x(k-1) + \epsilon C(k-1)\,x(k-1),    (13)

where \Delta x(0) = 0. Note that the exact change given by

equation (13) differs by only one term from the first-

order change of equation (12). An equation of the form

of equation (11) could be derived for the exact change,

but it is too complex to be useful in computations.

Equation (12) is not as accurate as equation (13),
but it is more general. If equation (12) is divided by
\epsilon, the resulting difference equation is

\frac{\Delta x(k)}{\epsilon} = A(k-1)\,\frac{\Delta x(k-1)}{\epsilon} + C(k-1)\,x(k-1).    (14)

The vector \Delta x(k)/\epsilon can be calculated along with x(k), and
then the first-order error for any small \epsilon can be deter-
mined from \Delta x(k) = \epsilon\,[\Delta x(k)/\epsilon]. This generalization cannot be made on
equation (13).

The equations derived in this section are not

sensitivity equations by definition. However, they do

give the change in the state due to known perturbations

of system parameters. Note that by carefully selecting

the elements of the matrix C, any number of the system

parameters of A can be varied. Equations (10) and (11)

give the first-order change in any state in terms of

x(0) and the control over the interval. Equations (12)

and (13) give the first-order and exact change in any state

in terms of the preceding state and change. These equa-

tions will be illustrated in an example problem and then

will be used to check the accuracy of other sensitivity

methods.
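The sketch below illustrates the first-order perturbation recursions of
equations (12) and (14) for an assumed two-state stationary system; the
matrices A, B, and C are placeholders chosen only for the example.

```python
# Sketch of the first-order perturbation recursions (12) and (14) for a linear
# discrete system x(k) = A x(k-1) + B u(k-1); the 2x2 numbers are assumptions.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])
C = np.ones((2, 2))                  # direction of the perturbation eps*C

def first_order_change(eps, steps=50):
    x = np.zeros((2, 1))
    dx_over_eps = np.zeros((2, 1))   # solution of equation (14)
    u = np.ones((1, 1))              # unit step input
    for _ in range(steps):
        dx_over_eps = A @ dx_over_eps + C @ x    # equation (14), uses x(k-1)
        x = A @ x + B @ u
    return eps * dx_over_eps         # first-order change, equation (12)

if __name__ == "__main__":
    print(first_order_change(eps=-0.009).ravel())
```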







Sensitivity vector-function approach


Another approach to the sensitivity of discrete

systems is very similar to the sensitivity vectors and

sensitivity functions of continuous systems [7]. This

method does not require the system to be stationary or

linear. A sensitivity vector can be defined as


v_i(k) = \frac{\partial x(k)}{\partial p_i} = \left[\frac{\partial x_1(k)}{\partial p_i}\;\;\frac{\partial x_2(k)}{\partial p_i}\;\cdots\;\frac{\partial x_n(k)}{\partial p_i}\right]^T,    (15)

where the sensitivity functions are the components,

v_{ij}(k) = \frac{\partial x_j(k)}{\partial p_i}\bigg|_{\Delta p_1 = \cdots = \Delta p_r = 0}.    (16)

There will be a sensitivity vector for each of the r

parameters of the parameter vector p. Using the sensi-

tivity vectors, a first-order approximation of the change

in x(k) can be written as

\Delta x(k) = \sum_{j=1}^{r} v_j(k)\,\Delta p_j.    (17)

The method used to calculate the sensitivity co-

efficients is of primary importance. In continuous

analysis, a differential equation is formulated and its

solution yields the sensitivity functions. For the

discrete approach, a difference equation would be desirable.







To formulate a sensitivity difference equation, take

the partial derivative of equation (1) with respect to p_i.

This yields the equation

\frac{\partial x(k)}{\partial p_i} = \frac{\partial f(k-1)}{\partial x(k-1)}\,\frac{\partial x(k-1)}{\partial p_i} + \frac{\partial f(k-1)}{\partial p_i}.    (18)


Note that the terms involving u are not present, as it is

assumed that the control is not dependent upon the changing

parameters. Substituting equation (15) into equation (18)

results in a difference equation for the sensitivity vectors

of a discrete system,


v_i(k) = \frac{\partial f(k-1)}{\partial x(k-1)}\,v_i(k-1) + \frac{\partial f(k-1)}{\partial p_i},    (19)

where v_i(0) = 0. The sensitivity functions are determined
from an equation for the components of equation (19),

v_{ij}(k) = \sum_{s=1}^{n} \frac{\partial f_j(k-1)}{\partial x_s(k-1)}\,v_{is}(k-1) + \frac{\partial f_j(k-1)}{\partial p_i},    (20)

where v_{ij}(0) = 0.

The sensitivity vector function approach of equations

(19) and (20) is in general more useful than the transition

matrix approach discussed in the previous section. The

primary advantage of the sensitivity vector function

approach is that the varying parameters are not restricted

to the elements of A. Both methods are illustrated by an

example problem in a later section.
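A minimal sketch of the sensitivity difference equation (19) for an assumed
scalar nonlinear discrete model follows; the update function f and its partial
derivatives below are placeholders supplied only for the illustration.

```python
# Sketch of the sensitivity difference equation (19) for a scalar nonlinear
# discrete system x(k) = f(x(k-1), u(k-1), p); the particular f is assumed.
def f(x, u, p):
    return x + 0.1 * (-p * x ** 3 + u)            # assumed nonlinear update

def df_dx(x, u, p):
    return 1.0 + 0.1 * (-3.0 * p * x ** 2)

def df_dp(x, u, p):
    return 0.1 * (-x ** 3)

def run(p=1.0, steps=100):
    x, v = 0.5, 0.0                               # v(0) = 0 per equation (19)
    for _ in range(steps):
        v = df_dx(x, 1.0, p) * v + df_dp(x, 1.0, p)   # equation (19), at x(k-1)
        x = f(x, 1.0, p)
    return x, v

if __name__ == "__main__":
    x, v = run()
    print(f"x(100) = {x:.5f}, sensitivity dx/dp ~ {v:.5f}")
```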








Sampling Interval Sensitivity of Sampled Systems


The performance of a sampled-data system is usually

very sensitive to changes in the sampling period. There-

fore, knowledge of the sampling interval sensitivity

should be quite useful. Sampling interval sensitivity

could be used as a basis for selecting the universal

sampling interval or for selecting individual sampling

intervals. These two applications are significant in

that they illustrate two distinct sampling interval

sensitivity functions. In selecting a universal sampling

interval, the fixed sampling rate that optimizes system

performance can be determined with the aid of a "global"

sensitivity function. Individual sampling intervals

would be selected by means of a "local" sensitivity

function [3]. In the following work, both "global" and

"local" sampling interval sensitivity functions are

defined, and the equations necessary for their calculation

will be presented.



Global sampling interval sensitivity


If a system has a fixed sampling interval, then

T_i = T_j in equations (1) and (3), and a global sampling

interval sensitivity function can be defined as


v_T(k) = \lim_{\Delta T \to 0} \frac{x[k(T+\Delta T)] - x[k(T)]}{\Delta T}.    (21)








Equation (21) is an expression for the change in the state

at the k-th sampling instant if every sampling interval is
changed by \Delta T. Note that if K samples are required to
cover the interval of interest, then the size of \Delta T must
obey the inequality

\Delta T \le \frac{\text{Total Time}}{K}.    (22)

This is necessary since the i-th sampling instant is
shifted by i\,\Delta T and the maximum shift K\,\Delta T must not move
the k-th sampling instant into another sampling interval

[3].

If the state x(k) is a continuous function of T, the

global sampling interval sensitivity function of equation

(21) can be written as


v_T(k) = \frac{\partial x(k)}{\partial T}.    (23)

A discrete sensitivity equation for v_T(k) can be

derived by taking the partial derivative of equation (1)

with respect to T. If parameter variations are not

considered,


\frac{\partial x(k)}{\partial T} = \frac{\partial f(k-1)}{\partial x(k-1)}\,\frac{\partial x(k-1)}{\partial T} + \frac{\partial f(k-1)}{\partial u(k-1)}\,\frac{\partial u(k-1)}{\partial T} + \frac{\partial f(k-1)}{\partial T}.    (24)


Substituting with equation (23) yields the discrete

sensitivity equation


v_T(k) = \frac{\partial f(k-1)}{\partial x(k-1)}\,v_T(k-1) + \frac{\partial f(k-1)}{\partial u(k-1)}\,\frac{\partial u(k-1)}{\partial T} + \frac{\partial f(k-1)}{\partial T},    (25)








subject to the initial conditions v_T(0) = 0.
If the system is linear, then equation (3) is used to
represent the system, and the sensitivity equation is

v_T(k) = A(k-1)\,v_T(k-1) + \frac{\partial A(k-1)}{\partial T}\,x(k-1) + \frac{\partial B(k-1)}{\partial T}\,u(k-1) + B(k-1)\,\frac{\partial u(k-1)}{\partial T},    (26)

with the initial conditions v_T(0) = 0.
The n-dimensional global sensitivity vector will have

a set of values for each sampling interval. The i-th com-
ponent of v_T(k) indicates the effect of changes in every
sampling interval on the i-th component of the state vector
x(k) at the k-th sampling instant. Therefore, it can be
used to extrapolate solutions about a nominal sampling
interval T_0. For small values of k, the first two terms
of a Taylor's series,

x[k(T_0 + \Delta T)] \approx x(kT_0) + v_T(k)\,\Delta T,    (27)


will give good results. The accuracy of the approximation

deteriorates as k increases. This is characteristic of

sensitivity methods that involve frequency [3]. Global

sampling interval sensitivity will be illustrated in an

example problem.
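The recursion of equation (26) and the extrapolation of equation (27) can be
sketched as follows for a scalar zero-order-hold model with A(T) = exp(-aT)
and B(T) = (1 - exp(-aT))/a and a step input (so the du/dT terms vanish); the
values a = 1 and T = 0.1 anticipate the first example problem, while the rest
is an illustrative assumption.

```python
# Sketch of the global sampling-interval sensitivity recursion (26) for a
# scalar model A(T) = exp(-a*T), B(T) = (1 - exp(-a*T))/a with a step input.
import math

def global_sensitivity(a=1.0, T=0.1, steps=50):
    A = math.exp(-a * T)
    B = (1.0 - A) / a
    dA_dT = -a * math.exp(-a * T)
    dB_dT = math.exp(-a * T)
    x, vT = 0.0, 0.0
    u = 1.0                                     # unit step, du/dT = 0
    for _ in range(steps):
        vT = A * vT + dA_dT * x + dB_dT * u     # equation (26), uses x(k-1)
        x = A * x + B * u
    return x, vT

if __name__ == "__main__":
    x, vT = global_sensitivity()
    dT = 0.01
    print(f"x = {x:.4f}, vT = {vT:.4f}, extrapolated x for T+dT: {x + vT * dT:.4f}")
```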


Local sampling interval sensitivity


If the sampling interval is not held constant, then
the global sensitivity defined in the preceding section

does not apply. Therefore, a local sensitivity function

is required to determine the effect of a change in the

k-th sampling period on the state at the (k+1)-st sampling

instant. A local sampling interval sensitivity function

can be defined as


v(k) = \lim_{\Delta t \to 0} \frac{x(t_k + \Delta t) - x(t_k)}{\Delta t} = \frac{\partial x(t_k)}{\partial t_k}.    (28)


To derive a sensitivity equation for local sampling

interval sensitivity, the system equation is first written

in the form


x(k) = f\{x(k-1), u(k-1), t_{k-1}, t_k\}.    (29)


To determine v(k), differentiate equation (29) with respect

to t_k,

v(k) = \frac{\partial}{\partial t_k}\,f\{x(k-1), u(k-1), t_{k-1}, t_k\}.    (30)

If the system is linear, the sensitivity equation is

v(k) = \frac{\partial A(t_{k-1},t_k)}{\partial t_k}\,x(k-1) + \frac{\partial B(t_{k-1},t_k)}{\partial t_k}\,u(k-1).    (31)

Note that the terms A(t_{k-1},t_k)\,\frac{\partial x(k-1)}{\partial t_k} and B(t_{k-1},t_k)\,\frac{\partial u(k-1)}{\partial t_k}
are missing from equation (31) because both \frac{\partial x(k-1)}{\partial t_k} and
\frac{\partial u(k-1)}{\partial t_k} are zero. This is true since x(k-1) and u(k-1)







are insensitive to changes in the following k-th sampling

instant.

Equation (31) gives the sensitivity of x(k) to changes

in T_{k-1} only. Similar to global sensitivity, local sensi-
tivity can be used for extrapolation, but only about t_k.

Also, additional insight into the relationship between

local and global sensitivity is gained by noting that for

a linear system, the global sensitivity is the sum of all

preceding local effects [3].
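A sketch of equation (31) for the same assumed scalar zero-order-hold model
follows, where dA/dt_k = -aA and dB/dt_k = A; the state and input values
passed in are arbitrary and serve only as an illustration.

```python
# Sketch of the local sampling-interval sensitivity (31) for a scalar
# zero-order-hold model A = exp(-a*(t_k - t_{k-1})), B = (1 - A)/a.
import math

def local_sensitivity(x_prev, u_prev, a=1.0, T=0.1):
    A = math.exp(-a * T)
    dA_dtk = -a * A          # partial of A with respect to t_k
    dB_dtk = A               # partial of B with respect to t_k
    return dA_dtk * x_prev + dB_dtk * u_prev      # equation (31)

if __name__ == "__main__":
    # v(k) at a point where x(k-1) = 0.5 and the step input u(k-1) = 1:
    print(f"v(k) = {local_sensitivity(0.5, 1.0):.4f}")
```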



Example Problems


The purpose of this section is to illustrate the

calculation of the various sensitivity functions that

have been defined.



Problem 1


As a first example, consider the system of figure

3-1. The equation for the continuous system is


\dot{x}(t) = -a\,x(t) + u(t).    (32)


The discrete version of the system is


x(k) = Ax(k-1) + Bu(k-1), (33)


where


A = \exp[-a(t_k - t_{k-1})],    (34)








and

B = \int_{t_{k-1}}^{t_k} \exp[-a(t_k - \tau)]\,d\tau.    (35)


For a sampling interval of 0.1 seconds and a = 1.0, the
coefficients are A = 0.90484 and B = 0.09516. The system
was simulated on a digital computer and the response for
a unit step is shown in figure 3-2.
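The coefficients of equations (34) and (35) and the step response of
figure 3-2 can be checked with a few lines; this is a minimal sketch, not the
original simulation program.

```python
# Sketch reproducing the discrete coefficients (34)-(35) and the unit step
# response for a = 1.0 and T = 0.1 s.
import math

a, T = 1.0, 0.1
A = math.exp(-a * T)                  # equation (34): 0.90484
B = (1.0 - math.exp(-a * T)) / a      # equation (35): 0.09516

x = 0.0
for k in range(50):                   # 5 seconds of response
    x = A * x + B * 1.0               # unit step input, equation (33)

print(f"A = {A:.5f}, B = {B:.5f}, x(5 s) = {x:.5f}")
```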

To illustrate the perturbation matrix approach for
parameter variations, equation (14) for the generalized
first-order change was programmed on a digital computer.
The solution, \Delta x(k)/\epsilon, for a unit step input is plotted in
figure 3-3. It should be noted that the response is not

continuous as shown in figure 3-3, but is actually a series

of discrete data points.

The sensitivity vector approach given in equation

(19) was also used. For variations in A, this first-

order system will have only one sensitivity vector with

one component. The solution to equation (19) is identical

to that of equation (14). Thus, the sensitivity vector

and the perturbation matrix methods have the same solu-

tion. Figure 3-3 represents the sensitivity of state x

to variations in the parameter A when determined by either

method.

To check the accuracy of both methods, the system

was simulated with A = 0.89584, B = 0.09516 and T = 0.1

seconds. This is equivalent to perturbing A by -0.009.








Figure 3-1 First-Order System [block diagram: input u(t), zero-order hold
(1 - e^{-sT})/s, plant 1/(s + a), output x(t) sampled as x(kT)]







Figure 3-2 Unit Step Response [plot: x versus time in seconds]




Figure 3-3 Generalized First-Order Difference and Sensitivity (Δx/ε vs. time in seconds)








Figure 3-4 contains a plot of the difference between the original response x and the perturbed response. For comparison, the figure also includes a curve for the first-order change calculated by equation (12) with εC = -0.009. The exact change, given by equation (13) with εC = -0.009, is identical to the difference in x from the two simulations. The first-order change was calculated by equation (10) for several values of k, and the results agree with those of equation (12).

The accuracy of the sensitivity vector approach was

checked by using equation (17) with p = -0.009. The cal-

culated first-order change in x is identical to that of

the perturbation matrix approach. Thus, the first-order

error curve of figure 3-4 represents both methods.

The global sampling interval sensitivity function of equation (26) is shown in figure 3-5 for a unit step input and T = 0.1 seconds. The coefficients are A = 0.90484 and B = 0.09516. Figure 3-5 also contains a plot of the local sampling interval sensitivity function as given in equation (31). The extrapolation by equation (27) of Δx(k), for several values of T₀ and ΔT, was compared with the actual change in x(k) when the system was simulated with T = T₀ + ΔT. The results were excellent as long as inequality (22) held. It should be pointed out that the Δx(k) predicted by global or local sensitivity is not an error, but the change in x(k) due to a variation in T or T_k.
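For readers who wish to reproduce the curves of figure 3-5, the following hedged Python sketch propagates the two sensitivities for this example. The recursion used for the global sensitivity is the natural one for the linear model of equation (33) and is assumed here to correspond to equation (26); the local sensitivity follows equation (31).

```python
import math

a, T, u = 1.0, 0.1, 1.0
A = math.exp(-a * T)
B = (1.0 - math.exp(-a * T)) / a
dA = -a * math.exp(-a * T)           # dA/dT
dB = math.exp(-a * T)                # dB/dT

x, vT = 0.0, 0.0                     # state and global sensitivity, vT(0) = 0
for k in range(50):
    v_local = dA * x + dB * u        # local sensitivity, equation (31)
    vT = A * vT + dA * x + dB * u    # global sensitivity recursion (assumed form of equation (26))
    x = A * x + B * u                # equation (33)

dT = 0.01
print(v_local, vT, vT * dT)          # vT*dT extrapolates the change in x for T -> T + dT, as in equation (27)
```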










Figure 3-4 Exact and First-Order Change (Δx vs. time in seconds; curves for the exact change and the first-order change)










Figure 3-5 Global and Local Sensitivity (global sensitivity v_T and local sensitivity v(k) vs. time in seconds)







Problem 2


The sensitivity functions have also been calculated

for the system in figure 3-6. The discrete version of

the system is


x(k) = A x(k-1) + b u(k-1).    (36)


The coefficient matrix A has the components:


a11 = exp(-3η)[cos(4η) + 0.75 sin(4η)]    (37)

a12 = (25/24) exp(-3η) sin(4η)    (38)

a21 = -1.5 exp(-3η) sin(4η)    (39)

a22 = exp(-3η)[cos(4η) - 0.75 sin(4η)],    (40)

where η = (t_k - t_{k-1}). The coefficient of u(k-1) is a vector b with the elements

b1 = ∫_{t_{k-1}}^{t_k} 6.25 exp(-3δ) sin(4δ) dτ    (41)

b2 = ∫_{t_{k-1}}^{t_k} 6 exp(-3δ)[cos(4δ) - 0.75 sin(4δ)] dτ,    (42)

where δ = (t_k - τ). For a sampling interval of 0.01 seconds, a11 = 0.99877, a12 = 0.040424, a21 = -0.058211, a22 = 0.94056, b1 = 0.0012251, and b2 = 0.058211. Figure 3-7 contains a plot of the response of x1(k) and x2(k) for a unit step input.
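A short Python sketch (illustrative only, not the original program) that assembles A and b for T = 0.01 seconds and steps the model is given below. The integrals of equations (41) and (42) are evaluated with the closed forms that reappear as equations (54) and (55) in Chapter 4; these agree with the numerical values quoted above.

```python
import numpy as np

T = 0.01
e = np.exp(-3 * T)
s, c = np.sin(4 * T), np.cos(4 * T)

# equations (37)-(40)
A = np.array([[e * (c + 0.75 * s), (25.0 / 24.0) * e * s],
              [-1.5 * e * s,       e * (c - 0.75 * s)]])

# equations (41)-(42), evaluated in closed form
b = np.array([0.25 * (4.0 - e * (3.0 * s + 4.0 * c)),
              1.5 * e * s])

x, u = np.zeros(2), 1.0
history = [x.copy()]
for k in range(200):                 # 2 seconds of unit step response
    x = A @ x + b * u                # equation (36)
    history.append(x.copy())

print(A, b)                          # matches the values quoted in the text
```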











Figure 3-6 Second-Order System (zero-order hold followed by the plant)

Figure 3-7 Unit Step Response (x1 and x2 vs. time in seconds)








The perturbation matrix method was used to calculate

the effects of changing the components of the matrix A.

In this example, every element of A was changed the same

amount by setting all elements of the matrix C equal to

one. The resulting first-order changes in x1(k) and x2(k), calculated by equation (14) for a step input, are shown in figure 3-8. The change in x(k) for a given value of ε can be determined from figure 3-8 by multiplying the curves by ε.

The accuracy for a specific perturbation was deter-

mined by calculating the change in x(k) for various values

of εc_ij, and then simulating the system with the elements of A actually changed by εc_ij. The actual changes in x1(k) and x2(k) from the simulations and the first-order

changes calculated by equation (12) are very close. The

exact changes given by equation (13) are identical to the

changes observed in the simulations.

The sensitivity vectors were calculated for each

element of A as a variable parameter. The elements of

the sensitivity vectors for changes in a11 are shown in figure 3-9. To avoid confusion, the vectors for changes in the other elements of A are not included. The changes in x1(k) and x2(k), due to changing all elements of A by

0.02, were calculated by equation (17). The results are

identical to the first-order changes predicted by the

perturbation matrix approach.

















Figure 3-8 Generalized First-Order Change (Δx1/ε and Δx2/ε vs. time in seconds)


Figure 3-9 Sensitivity Vector Components (v11 and v12 vs. time in seconds)















Figure 3-10 Local Sensitivity (v1(k) and v2(k) vs. time in seconds)


Figure 3-11 Global Sensitivity (components of the global sensitivity vector vs. time in seconds)



The global and local sampling interval sensitivity

vectors were calculated by equations (26) and (31) for a

step input and T = 0.01 seconds. Figure 3-10 contains a

plot of each element of the local sensitivity vector. The

global sensitivity vector is shown in figure 3-11. Excel-

lent results were obtained when the two vectors were

checked by extrapolating values of x(k) about T = 0.01

for AT = 0.005 and comparing them with values obtained

from a simulation with T = 0.015.



Summary


Two approaches to parameter sensitivity for discrete

systems have been presented. In each case, difference

equations for the sensitivity function were developed

and their calculation and use demonstrated.

The sensitivity to local and global changes in sampling interval was investigated, and formulas for

their calculation were given. Example problems were

worked and the results compared with the changes observed

when the sampling interval was actually changed.

Every method discussed was found to be accurate, and

no problems were encountered in calculating and using the

functions. Based on these observations, parameter and

sampling interval sensitivity should be quite useful in

the analysis and design of discrete systems.














REFERENCES


1. R. Tomović, "Modern sensitivity analysis," IEEE Convention Record, vol. 13, pt. 6, pp. 81-86, March 1965.

2. R. Tomović, W. Karplus and J. Vidal, "Sensitivity of discrete-continuous systems," Third IFAC Congress, London, England, June 1966.

3. G. A. Bekey and R. Tomović, "Sensitivity of discrete systems to variation of sampling interval," USCEE Report 127, University of Southern California, School of Engineering, Los Angeles, California, March 1965.

4. J. Vidal, W. Karplus and G. Keludjian, "Application of sensitivity analysis to hybrid computations," Proc. Int'l. Symposium on Sensitivity Analysis, Dubrovnik, Yugoslavia, September 1964.

5. H. Freeman, Discrete-Time Systems: An Introduction to the Theory, John Wiley and Sons, Inc., New York, N.Y., 1965.

6. R. C. Dorf, "System sensitivity in the time-domain," Proc. Third Allerton Conference on Circuit and System Theory, University of Illinois, pp. 46-62, October 1965.

7. R. Tomović, Sensitivity Analysis of Dynamic Systems, McGraw-Hill Book Co., Inc., New York, N.Y., 1963.














CHAPTER 4


VARIABLE INCREMENT OPTIMAL SAMPLING


Introduction


The purpose of this work is to develop a method for

optimal adjustment of the sampling intervals of a sampled-

data system. The object of adjustment is to obtain a low

sampling rate, subject to a performance criterion that re-

flects system fidelity. This is appealing for such appli-

cations as time-sharing of a digital computer, control of

discrete systems in which energy is to be conserved, and

selection of a set of optimal sampling instants.

Adjustable sampling has been investigated by several

approaches. Dorf, et al., [1] used the absolute value of

the derivative of the error signal in Type I and Type II

unity feedback systems to adjust the sampling rate. Gupta

[2] used the ratio of the first and second derivative of

the error signal to determine the sampling intervals. More

recently, Tomovic and Bekey [3] have used amplitude sensi-

tivity to adjust sampling rate. The algorithm is based on

the sensitivity of the system output to changes in output

of a zero-order data hold for the error signal. They have

also used the sensitivity of the system output to changes








in sampling interval to adjust the sampling rate [4]. The

related problem of quantization error in hybrid systems has

also been studied [5].

In this work, performance criteria for sampling inter-

val adjustment will be selected, and error and state

variable sampling interval sensitivity functions defined.

Also included will be sampling interval formulas that in-

corporate sensitivity and the performance criteria to

determine sampling intervals that maintain desired system

fidelity.



Performance Criteria for Variable Increment Sampling


The performance criteria for sampling interval selec-

tion should reflect the effects of sampling interval

variations on the discrete system state variables. The

criterion should also include the type of data reconstruc-

tion, since this can affect the fidelity of the response.

Consider first the effects of sampling interval

variations on the discrete system state variables at the

sampling instants. If the discrete model of a continuous

system is exact, then the values of the state variables

are correct for any sampling interval size. However, if

the discrete model is not exact, then the error between

the continuous and discrete state variables at the sam-

pling instants will, in general, depend upon the sampling

intervals. The performance criteria should include this

modeling error. This can be done analytically for linear








systems. For nonlinear systems, a combination of analytical

and experimental methods must be used.

When the reconstructed response must meet certain

specifications, then the error of the data reconstruction

device must also be included. For a linear continuous

system with an exact discrete model, the only error in

the response is that of the reconstruction device. If an

approximate discrete model is used, then both types of

errors are present and must be considered. Since the data

reconstruction device operates over a full sampling inter-

val, a suitable criterion would be the magnitude or the

mean- or integral-square value of the difference between

the continuous and the reconstructed response over each

sampling interval. If the exact solution is unknown, the

response for very small sampling intervals could be used

for the comparison. The exact solution will, of course,

seldom be known in an on-line application.



Sampling Interval Sensitivity


In order to select the sampling interval, T_{k-1} = t_k - t_{k-1}, so that x(t_k) satisfies a sensitivity performance criterion, the effect of variations in T_{k-1} on x(t_k) must

be known. This can be determined from local sampling

interval sensitivity. If an approximate model is used,

it would also be helpful to know the effects of variations

in Tk-1 on the error between continuous and discrete state







variables. This section will be devoted to an investigation

of these effects.



Local state variable sampling interval sensitivity


The sensitivity of x(t_k) to variations in T_{k-1} is given by the local sampling interval sensitivity function. Variations in T_{k-1} = t_k - t_{k-1} are obtained by perturbing t_k while t_{k-1} is held constant. A definition for local sampling interval sensitivity can be stated,

v(k) = ∂x(t_k)/∂t_k,    (1)

where v(0) = 0.

For the system represented by the equation


x(t_k) = f[x(t_{k-1}), u(t_{k-1}), t_{k-1}, t_k],    (2)


the sensitivity, v(k), can be obtained by differentiating

with respect to tk,


v(k) = ∂f[x(t_{k-1}), u(t_{k-1}), t_{k-1}, t_k]/∂t_k.    (3)

If the system is linear, sensitivity can be calculated

from the equation


v(k) = [∂A(t_{k-1}, t_k)/∂t_k] x(t_{k-1}) + [∂B(t_{k-1}, t_k)/∂t_k] u(t_{k-1}).    (4)


It should be noted that local sampling interval sensi-

tivity gives the effect on the state vector x(tk) of

changing only T_{k-1}. It is not directly related to modeling







error at sampling instants nor reconstruction error between

sampling instants. Unlike parameter sensitivity, it is not

desirable to have sampling interval sensitivity approach or

equal zero, for this would mean that the state vector never

changes.



Local error sampling interval sensitivity


If the discrete model is approximate, it would be

desirable to base sampling interval selection on some

function of the error of the approximation. In order to

determine sampling intervals that maintain certain error

criteria, the sensitivity of the error to variations in

sampling intervals is needed. This can be determined

analytically for a linear system, using the exact discrete

model and the discrete approximation used to obtain the

approximate model.

As an example, consider the effect of using the

rectangular rule to derive the discrete model of the

linear continuous system,


ẋ(t) = A(t) x(t) + B(t) u(t),    (5)


where x(0) = xo. If the rectangular rule for integration

is used, an approximate discrete model for the system is,


x̂(t_k) = [I + T_{k-1} A(t_k, t_{k-1})] x̂(t_{k-1}) + T_{k-1} B(t_k, t_{k-1}) u(t_{k-1}),    (6)







where T_{k-1} = t_k - t_{k-1}, I is the identity matrix, and the circumflex denotes the state of the approximate discrete model.

An exact discrete model can be determined by dis-

cretizing the state transition equation,


x(t) = Φ(t, t₀) x(t₀) + ∫_{t₀}^{t} Φ(t, η) B(η) u(η) dη.    (7)

For the interval T_{k-1}, equation (7) becomes

x(t_k) = Φ(t_k, t_{k-1}) x(t_{k-1}) + ∫_{t_{k-1}}^{t_k} Φ(t_k, η) B(η) u(η) dη.    (8)

Equation (8) can be further simplified if u(t) and B(t)

are constant over the interval Tkl. This implies that

a zero-order sample and hold is being used at the input,

and that B(t) is constant during sampling intervals. An

additional simplification can be made by defining,



Γ(t_k, t_{k-1}) = ∫_{t_{k-1}}^{t_k} Φ(t_k, γ) dγ.    (9)

Equation (8) can now be written,


x(t_k) = Φ(t_k, t_{k-1}) x(t_{k-1}) + Γ(t_k, t_{k-1}) B(t_k, t_{k-1}) u(t_{k-1}).    (10)

The error, x̃(t_k), in x(t_k), due to the approximate discrete model, is the difference

x̃(t_k) = x(t_k) - x̂(t_k).    (11)


Using equations (6) and (10), an equation for the error

is,







x̃(t_k) = [Φ(t_k, t_{k-1}) - I - T_{k-1} A(t_k, t_{k-1})] x̂(t_{k-1}) + Φ(t_k, t_{k-1}) x̃(t_{k-1}) + [Γ(t_k, t_{k-1}) - T_{k-1} I] B(t_k, t_{k-1}) u(t_{k-1}).    (12)


By varying t_k with t_{k-1} constant, the sampling interval T_{k-1} is changed, and the sensitivity of the error in x(t_k), due to variations in the sampling interval, can be defined as

ṽ(k) = ∂x̃(t_k)/∂t_k.    (13)

At k = 0, the error, x̃(t₀), and the error sensitivity, ṽ(0), are zero, since the initial conditions are identical for both the exact and the approximate discrete models. Equation (12) can be used to determine the equation for the error sampling interval sensitivity,

ṽ(k) = [∂Φ(t_k, t_{k-1})/∂t_k - A(t_k, t_{k-1}) - T_{k-1} ∂A(t_k, t_{k-1})/∂t_k] x̂(t_{k-1}) + [∂Φ(t_k, t_{k-1})/∂t_k] x̃(t_{k-1}) + [∂Γ(t_k, t_{k-1})/∂t_k - I] B(t_k, t_{k-1}) u(t_{k-1}).    (14)

If the allowable error, x̃(t_k), is specified, equation (12) could be used to determine the sampling intervals








required to keep x̃(t_k) within tolerance. However, in this

work, sampling interval equations will be based on the

error sampling interval sensitivity of equation (14). It

should be noted that the error formulas were derived for an

approximate discrete model based on the rectangular inte-

gration formula. Similar formulas can be derived for other

approximations.

Since it is not always possible to determine an exact

discrete model, the analysis of the effects of an approxi-

mate discrete model must often be determined by experi-

mental means.



Variable Increment Sampling Based on Sensitivity


The accuracy of the response of certain components of

the state vector might be more important than others, and

sampling interval calculation should reflect these require-

ments. One way to do this is to weigh the components of

the sensitivity vector. Each component of the local sam-

pling interval sensitivity vector, v(k), is the sensitivity

of the corresponding component in the state vector to

changes in sampling interval. Another way of incorporating

different performance requirements would be to specify a

performance criterion for each element of the state vector.

For each interval, a sampling increment could be calculated

from each element of the sensitivity vector and the

corresponding performance criterion. The desired interval

would be selected from the resulting set.








Sampling interval calculation and selection could

possibly be simplified somewhat by using the quadratic

form, vᵀQv, for sampling interval calculations. There

are several advantages. Only one sampling interval would

be calculated for each sampling period, and there would be

no need to select an interval from a set of n sampling

intervals. The different performance requirements on the

elements of the state vector can be maintained by selecting

the elements of the matrix Q to give different weighting to

the elements of the sensitivity vector.



Linear system simulation with an exact discrete model


Consider the simulation of a linear continuous system

with an exact discrete model derived from the state transi-

tion equation. The error due to discretization of the in-

put will be neglected. Under these conditions, the values

of the discrete system state variables at the sampling

instants are identical to the response of the continuous

system. The only error in the response of the discrete

model is between sampling instants,and it depends upon the

sampling interval and the type of data reconstruction used.

In this section, formulas that use sampling interval

sensitivity to calculate sampling intervals will be derived.

The sampling interval formulas will incorporate performance

criteria that constrain the difference between the con-

tinuous system response and reconstructed response of the

discrete model.








To derive sampling interval formulas, consider first

zero-order hold reconstruction. The zero-order hold main-

tains the value of the state at the preceding sampling

instant until a new value is obtained at the next sampling

instant. For an exact discrete model, the continuous and

sampled response for a zero-order hold is shown in figure

4-1.

The derivation of sampling interval formulas will be

based on two approximations. First, it will be assumed

that the continuous response can be approximated by a

straight-line over a single interval. This is shown in

figure 4-1. For this assumption, the error of zero-order

hold data reconstruction is approximated by the shaded

area of figure 4-1. The other assumption is that x(tk)

can be determined from the first two terms of a Taylor's

series,


x(t_k) ≈ x(t_{k-1}) + v(k)(t_k - t_{k-1}),    (15)


where v(k) is the local sampling interval sensitivity.

In order to maintain the error within specified limits,

the performance criterion should be related to the error

triangle of figure 4-1. The maximum difference, the inte-

gral of the difference, or the average of the difference

squared, could be used. Since all three are based on the

shaded triangle of figure 4-1, they lead to equivalent constraints. If the

performance criterion is the maximum difference, then the

sampling interval Tkl should be selected so that,








|x_i(t_k) - x_i(t_{k-1})| ≤ m_i    (i = 1, ..., n).    (16)

The elements, mi, of the performance vector m will be

selected to maintain the desired accuracy of the corre-

sponding state variable. For the approximations used in

this derivation, the integral of the difference is related to the maximum difference by id_i = (1/2) T_{k-1} m_i. The maximum difference is related to the average of the difference squared by ads_i = (1/3) T_{k-1} m_i^2.

A sampling interval equation that maintains the in-

equality of equation (16) can be derived with the use of

equation (15). The ith sampling interval equation for

Tk-1 is,

T_(k-1)i = m_i / |v_i(k)|,    (17)

where T_{k-1} must be selected from the set of n intervals. If the minimum value is used,

T_{k-1} = min_i T_(k-1)i    (i = 1, ..., n),    (18)


then all elements of the state vector should be within the

limits of the performance criteria. Different performance

requirements on certain elements of the state vector can

be handled by using different values for mi or by weighting

the elements of the sensitivity vector vi(k).
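A minimal sketch of this selection rule follows. The function name, the guard against a zero sensitivity component, and the clamping to preset lower and upper limits (limits on interval size are discussed later in this chapter) are illustrative choices, not part of the original derivation.

```python
import numpy as np

def sampling_interval(v, m, t_min, t_max):
    """Select T_{k-1} from equations (17) and (18).

    v     -- local sampling interval sensitivity vector v(k)
    m     -- performance vector; m[i] bounds |x_i(t_k) - x_i(t_{k-1})|
    t_min -- smallest interval considered worthwhile
    t_max -- largest interval allowed (accuracy/stability limit)
    """
    candidates = m / np.maximum(np.abs(v), 1e-12)   # equation (17), guarded against v_i = 0
    T = np.min(candidates)                          # equation (18)
    return float(np.clip(T, t_min, t_max))

# example call with illustrative numbers
T = sampling_interval(np.array([0.9, -0.06]), np.array([0.05, 0.05]), 1e-3, 0.5)
```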

If the quadratic form vᵀ(k) Q v(k) is used, a different

approach must be used to derive a sampling interval equa-

tion. To investigate the use of the quadratic form to














Figure 4-1 Zero-Order Data Reconstruction (x_i vs. time; held values over the intervals T_{k-3}, T_{k-2}, T_{k-1})









Figure 4-2 First-Order Data Reconstruction (x_i vs. time; straight-line extrapolation over the intervals T_{k-3}, T_{k-2}, T_{k-1})







determine sampling intervals, consider a second-order system

and let Q be a diagonal matrix with diagonal elements q11 and q22. For these conditions, the quadratic form is

vᵀ(k) Q v(k) = q11 v1^2(k) + q22 v2^2(k).    (19)

If the approximation of equation (15) is used, a sampling interval equation can be derived,

T_{k-1} = { q11[x1(t_k) - x1(t_{k-1})]^2 / [vᵀ(k) Q v(k)] + q22[x2(t_k) - x2(t_{k-1})]^2 / [vᵀ(k) Q v(k)] }^{1/2}.    (20)


Once the desired difference between x_i(t_k) and x_i(t_{k-1}) is

established, equation (20) can be used to calculate the

sampling interval. The use of the quadratic form for sam-

pling interval calculations will be illustrated with an

example problem.
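A brief sketch of equation (20) is given below; the weights q11, q22 and the desired state differences are illustrative values, and the function name is an assumption of this discussion.

```python
import numpy as np

def quadratic_form_interval(v, q, dx_desired):
    """Sampling interval from equation (20) for a diagonal Q.

    v          -- local sampling interval sensitivity vector v(k)
    q          -- diagonal elements of Q
    dx_desired -- desired magnitudes of x_i(t_k) - x_i(t_{k-1})
    """
    vQv = float(np.sum(q * v * v))                       # equation (19)
    return float(np.sqrt(np.sum(q * dx_desired**2) / vQv))

T = quadratic_form_interval(np.array([0.9, -0.06]),
                            np.array([1.0, 1.0]),
                            np.array([0.05, 0.05]))
```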

If a first-order hold is used for data reconstruction,

the response during a given interval is the straight-line

extension of the sampled values from the two preceding

sampling instants. This is shown in figure 4-2. The

reconstructed sampled response during the interval

(tk tk-1) is given by the equation


x_i(t_{k-1} + η) = {[x_i(t_{k-1}) - x_i(t_{k-2})]/T_{k-2}} η + x_i(t_{k-1}),    (21)







where η = t - t_{k-1}.

In deriving sampling interval equations for first-

order hold reconstruction, the same approximations used

in the zero-order hold case will be assumed. That is, the

continuous response will be approximated by a straight-line

over a single interval. Also, it is assumed that x(tk) is

given by the approximation of equation (15). The difference

between the assumed continuous response and approximate

response is represented by the shaded triangle of figure

4-2. Based on these assumptions, the maximum difference

will occur when η = (t_k - t_{k-1}). Thus, the maximum error occurs when the reconstructed response of equation (21) reaches the value

x̄_i(t_k) = {[x_i(t_{k-1}) - x_i(t_{k-2})]/T_{k-2}} T_{k-1} + x_i(t_{k-1}).    (22)

The value of the maximum difference for the interval (t_k - t_{k-1}) is x_i(t_k) - x̄_i(t_k), where the overbar denotes the reconstructed response.

If the performance criterion is based on limiting

the maximum difference, a sampling interval formula can

be derived for the criterion,


|x_i(t_k) - x̄_i(t_k)| ≤ m_i    (i = 1, ..., n).    (23)


Using equations (15), (22) and (23), the sampling interval is

T_(k-1)i = m_i / |v_i(k) - [x_i(t_{k-1}) - x_i(t_{k-2})]/T_{k-2}|,    (24)

where i = 1, ..., n. From the approximation for x(t_k) in equation (15), the sensitivity is

v_i(k-1) ≈ [x_i(t_{k-1}) - x_i(t_{k-2})]/T_{k-2},    (25)

and equation (24) can be changed to

T_(k-1)i = m_i / |v_i(k) - v_i(k-1)|    (i = 1, ..., n).    (26)

The actual interval T_{k-1} is selected from the set of n intervals calculated from equation (26).

The quadratic form vᵀ(k) Q v(k) can also be used to

determine the sampling interval for an exact discrete

model with first-order data reconstruction. Consider a

second-order system and use equations (15),(19) and (22).

The resulting sampling interval equation is

T_{k-1} = { q11([x1(t_k) - x1(t_{k-1})]^2 / [vᵀ(k) Q v(k)] - [x̄1(t_k) - x1(t_{k-1})]^2 / [vᵀ(k-1) Q v(k-1)]) + q22([x2(t_k) - x2(t_{k-1})]^2 / [vᵀ(k) Q v(k)] - [x̄2(t_k) - x2(t_{k-1})]^2 / [vᵀ(k-1) Q v(k-1)]) }^{1/2}.    (27)

Note that in order to use equation (27), both the maximum

difference between the sampled and the continuous response,

and the maximum difference of the state variables,over an

interval, must be specified. For this reason, it may not

be as useful as other formulations.

A more general approach for determining the re-

constructed response of a discrete system would be to use








a fractional-order hold [6]. For a fractional-order hold, the reconstructed response during the interval (t_k - t_{k-1}) is given by the equation

x_i(t_{k-1} + η) = F {[x_i(t_{k-1}) - x_i(t_{k-2})]/T_{k-2}} η + x_i(t_{k-1}),    (28)

where η = t - t_{k-1} and 0 ≤ F ≤ 1. If F = 0, equation (28) is for a zero-order hold, and for F = 1.0, equation (28) is identical to the first-order hold of equation (21). If the fractional-order hold is used, the sampling interval equation is

T_(k-1)i = m_i / |v_i(k) - F v_i(k-1)|,    (29)

where i = 1, ..., n, and 0 ≤ F ≤ 1. If the quadratic form is used, equation (27) will have the terms q11[x1(t_k) - x1(t_{k-1})]^2 and q22[x2(t_k) - x2(t_{k-1})]^2 divided by F^2.
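The sketch below evaluates equation (29); setting F = 0 recovers equation (17) and F = 1 recovers equation (26). The guard against a zero denominator and the example numbers are illustrative assumptions.

```python
import numpy as np

def fractional_hold_intervals(v_k, v_km1, m, F):
    """Candidate intervals from equation (29); F = 0 gives equation (17),
    F = 1 gives equation (26)."""
    denom = np.abs(v_k - F * v_km1)
    return m / np.maximum(denom, 1e-12)   # guard against a zero denominator

# pick the most restrictive component, as in equation (18)
T = float(np.min(fractional_hold_intervals(np.array([0.9, -0.06]),
                                            np.array([0.95, -0.05]),
                                            np.array([0.05, 0.05]), 0.5)))
```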


Linear system simulation with an approximate discrete model


In a previous section, it was pointed out that both

modeling and reconstruction errors are present if an

approximate model is being used. In this section, sampling

interval formulas that constrain these errors will be

derived.

First, consider only the error in the discrete system

variables at the sampling instants. In figure 4-3, the

response for the discrete model is shown along with the











Figure 4-3 Approximate Model Response (continuous response x_i(t) and approximate discrete response x̂_i over the interval t_{k-1} to t_k)

Figure 4-4 Calculation Procedure (flow chart)







exact or approximate continuous response. A suitable

criterion would be to constrain the magnitude of the error

by the relationship,


|x̃_i(t_k)| ≤ m_i.    (30)

For a linear system, an equation for the error sensitivity,

similar to equation (13), could be derived for the approxi-

mate discrete system. Then the approximation,


x̃_i(t_k) ≈ x̃_i(t_{k-1}) + T_{k-1} ṽ_i(k),    (31)

could be used to derive a relationship for the sampling

intervals,


|x̃_i(t_{k-1}) + T_{k-1} ṽ_i(k)| ≤ m_i    (i = 1, ..., n).    (32)

The error sensitivity could also be used to derive sampling

interval formulas that constrain the change in error,


|x̃_i(t_k) - x̃_i(t_{k-1})| ≤ m_i.    (33)

The resulting sampling interval formula is


T_(k-1)i = m_i / |ṽ_i(k)|    (i = 1, ..., n).    (34)

If the difference between the reconstructed response

and the exact or approximate continuous response is to be

constrained, the sampling interval formulas must include

both data reconstruction and modeling errors. With zero-

order hold reconstruction, figure 4-3 indicates that a







suitable criterion for sample interval selection would be
the constraint


|[x_i(t_k) - x_i(t_{k-1})] + x̃_i(t_k)| ≤ m_i.    (35)

Substitution of the approximations of equations (15) and
(31) into equation (35) gives


|[v_i(k) + ṽ_i(k)] T_{k-1} + x̃_i(t_{k-1})| ≤ m_i.    (36)

If first-order data reconstruction is used, the per-
formance criterion takes the form of the constraint


|[x_i(t_k) - x̄_i(t_k)] + x̃_i(t_k)| ≤ m_i.    (37)

Equations (15),(22) and (31) can be used to determine an
approximate sampling interval relationship,

|[v_i(k) - v_i(k-1) + ṽ_i(k)] T_{k-1} + x̃_i(t_{k-1})| ≤ m_i.    (38)

If a fractional-order hold is used, the relationship is


|[v_i(k) - F v_i(k-1) + ṽ_i(k)] T_{k-1} + x̃_i(t_{k-1})| ≤ m_i,    (39)

where 0 ≤ F ≤ 1.0.


Nonlinear system simulation

If the continuous system is nonlinear, the discrete
form of the system will, in general, be an approximate
model. Therefore, both modeling and reconstruction errors
will be present. Both types of simulation errors have








already been discussed for linear system simulation. State

variable sampling interval sensitivity was used to derive

sampling interval formulas that constrained the reconstruc-

tion error. The modeling error of linear system simulation,

with an approximate model, was limited by sampling intervals

derived from an error sampling interval sensitivity.

Since state variable sampling interval sensitivity is

also applicable for nonlinear systems, it can be used for

adjusting sampling intervals to constrain the reconstruction

error. However, error sensitivity is not readily available

for nonlinear systems, and, therefore, it cannot be used to

determine sampling intervals that constrain modeling error.

Instead, state variable sampling interval sensitivity will

be used to calculate sampling intervals. The resulting

modeling error will be controlled by adjusting parameters

of the sampling interval calculator. This is not as

appealing as the variable increment sampling technique

worked out for linear system simulation. However, the

sampling efficiency can be improved without seriously

impairing the accuracy of the simulation.

For nonlinear system simulation, with an approximate

discrete model, equation (29) can be used to calculate

sampling intervals. The values of m and F are then

adjusted experimentally to achieve the desired system

fidelity. The process will be illustrated with example

problems.








Calculation Procedure for System Simulation


The order of the calculation sequence is very impor-

tant. Several problems were encountered in applying the

formulas that have been derived. In order to provide

additional insight into sample adjustment, these problems

and their solutions will be presented along with the cal-

culation procedure of simulation.

The first problem encountered was with calculating

the interval between x(k-l) and x(k). Before any of the

formulas for sampling interval can be used to calculate

T_{k-1}, the sensitivities v(k) and ṽ(k) must be known. However, v(k), the sensitivity of x(k) to changes in T_{k-1}, and ṽ(k), the sensitivity of x̃(k) to changes in T_{k-1}, are functions of t_k, which depends on T_{k-1}. The easiest way to avoid this problem is to use values from the preceding interval to calculate the predicted values v'(k) and ṽ'(k). For example, ∂A(t_{k-2}, t_{k-1})/∂t_{k-1} and ∂B(t_{k-2}, t_{k-1})/∂t_{k-1} are used in place of ∂A(t_{k-1}, t_k)/∂t_k and ∂B(t_{k-1}, t_k)/∂t_k in equation (4). This pro-

cedure was used in actual simulation and gave good results

for some example problems.

An improvement can be obtained by using the predicted value of T_{k-1} to calculate values for ∂A(t_{k-1}, t_k)/∂t_k and ∂B(t_{k-1}, t_k)/∂t_k, and then using these values to recalculate v(k), ṽ(k), and T_{k-1}. This predictor-corrector scheme could be repeated to further improve the accuracy of T_{k-1}. An additional refinement can be obtained by comparing the prediction with the correction. A large difference would indicate a need to reduce the sampling interval.

The order of calculation for simulation is as follows:


o Calculate v(k) and ṽ(k) using values from
  the previous interval.

o Use sensitivity to calculate T_{k-1} as out-
  lined in the preceding paragraphs.

o Calculate x(k) with equation (2).

o Repeat the cycle for the next interval.


The method is not self-starting, and the first interval

must be preset to start the simulation.
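A hedged Python skeleton of this cycle is given below. It is only an outline of the procedure, not the original program; the callable names and the optional input monitor are assumptions of this discussion.

```python
def variable_increment_simulation(x0, T0, t_end, sensitivity, interval, advance, monitor=None):
    """Skeleton of the calculation cycle described above.

    sensitivity(x, T_prev) -- returns v(k) computed from previous-interval values
    interval(v)            -- returns T_{k-1} from one of the formulas, e.g. equations (17)-(18)
    advance(x, T)          -- returns x(k) from equation (2)
    monitor(t, T)          -- optional; returns a shortened interval after a sudden input change
    """
    t, x, T = 0.0, x0, T0                 # the first interval T0 must be preset
    history = [(t, x)]
    while t < t_end:
        v = sensitivity(x, T)             # step 1: predictor sensitivities
        T = interval(v)                   # step 2: sampling interval formula
        if monitor is not None:
            T = monitor(t, T)             # catch sudden input changes
        x = advance(x, T)                 # step 3: advance the state
        t += T
        history.append((t, x))
    return history
```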

Another problem encountered in developing the vari-

able sampling technique was brought about by sudden changes

in the input. When the system is operating in a state of

low sensitivity, the sampling intervals will be large. If

a sudden change in input occurred just after a large sam-

pling interval had been selected, a considerable portion

of the ensuing transient response might be missed. This

can be avoided by monitoring the input and sampling immedi-

ately after any change in input greater than a suitable

threshold. Figure 4-4 contains a flow chart for the cal-

culation procedure.








System Analysis for Variable Increment Sampling


If the techniques developed in the preceding sections

are used to determine sampling rate, the analysis of the

resulting system is quite involved. A block diagram for

the complete system is shown in figure 4-5. Block A rep-

resents the actual system. The sensitivities are deter-

mined in block B. Block B would also represent the com-

parison of the prediction and correction if this is

included. Block C calculates the sampling interval from

the sensitivities (predicted and corrected), and block D

limits the size of the sampling intervals. The limits on

the size of the sampling interval will be discussed further

in the next paragraph. Block E provides To to start sim-

ulation, begins correction if the predictor-corrector

scheme is used, and initiates sampling if a sudden change

in input occurs.

In discussing limits on sampling interval size,

consider first the lower limit. Since the purpose of

varying the sampling increment is to increase sampling

efficiency, the sampling rate is only as high as is

necessary to obtain an acceptably performing system. The

minimum value of sampling interval is that sampling in-

terval beyond which further reduction in size is not

considered worthwhile. Certainly, the highest sampling

frequency (smallest sampling interval) must be several

times the highest frequency in the input and output of

the system [6]. In actual practice, the minimum is


















Figure 4-5 (block diagram of the complete variable increment sampling system, blocks A through F)








determined by the system bandwidth and the required accu-

racy. Therefore, the lower limit on sampling interval

cannot be set in general.

The maximum allowable sampling interval for a closed-

loop system is usually determined by steady-state sta-

bility requirements. However, if an approximate model is

being used, the accuracy of the approximation will require

that the maximum allowable sampling interval be somewhat

below the stability limit. Thus, the upper limit on sam-

pling interval can be set only after careful study of the

particular system.

The introduction of variable increment sampling com-

pletely changes the stability considerations for the

sampled-data system. In addition to the stability of

the system of block A, the sampling interval selection

loop stability (blocks B, C, D, and E) must also be

studied. In addition, there is certainly coupling be-

tween loops.

The system stability (block A) can be studied by

opening the sampling interval selection loop and deter-

mining limits on the allowable values of sampling interval

for stability. For a linear system, the allowable values

of sampling interval can be determined by root-locus

techniques. However, a procedure based on the "second-

method" of Lyapunov is more general, since it applies to

nonlinear systems. The use of the "second-method" to

determine asymptotic stability of difference equations







has been studied [7,8,9], and it has also been applied to

frequency and pulse-width modulated sampling systems [2,

10,11].

The stability theorem can be stated [10] as follows: a sufficient condition for asymptotic stability (in the large) of the vector difference equation, x(k) = f[x(k-1)], is the existence of V(x), a scalar function of the state variables, such that:

1. V(0) = 0

2. V(x) > 0 when x ≠ 0

3. V[x(k)] < V[x(k-1)] for x(k-1) ≠ 0

4. V(x) is continuous in x

5. V(x) → ∞ when ||x|| → ∞,


where V(x) is a Lyapunov function for the system.
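When an analytical Lyapunov function is not at hand, a numerical spot check of condition 3 along simulated trajectories can still be informative. The sketch below does this for a linear closed-loop model x(k) = A x(k-1) with the candidate quadratic function V(x) = xᵀPx; the matrices and trial counts are illustrative, and a passing check is only evidence, not a proof.

```python
import numpy as np

def lyapunov_decrease_check(A, P, n_trials=100, n_steps=50, seed=0):
    """Numerically check condition 3, V[x(k)] < V[x(k-1)], for the candidate
    V(x) = x^T P x along trajectories of x(k) = A x(k-1)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        x = rng.standard_normal(A.shape[0])
        for _ in range(n_steps):
            x_next = A @ x
            if x_next @ P @ x_next >= x @ P @ x and np.any(x != 0):
                return False          # decrease condition violated
            x = x_next
    return True

# closed-loop matrix for one fixed sampling interval, with P = I as the candidate
A = np.array([[0.9, 0.1], [-0.2, 0.8]])
print(lyapunov_decrease_check(A, np.eye(2)))
```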

The stability of the sampling interval selection loop

is more difficult to determine. Probably the best approach

is computer simulation.



Variable Increment Sampling for Optimal Control


The application of variable increment optimal sam-

pling to an optimal controller is very appealing. Sam-

pling interval sensitivity could be used to determine

the sampling interval for both the system and the optimal

control computer. A system of this type is shown in

figure 4-5. The optimal control computer (block F) is

shown with dotted connections.








The scheme in figure 4-5 could possibly be improved

by determining the sampling interval sensitivity of both

the optimal controller and the system. The two sensi-

tivities could then be used to determine a sampling interval

for the entire system. The resulting sampling interval

should be the "best" for both the system and the optimal

controller.

There are several ways to determine a sampling interval

from the two sensitivity vectors. One method is to use

the quadratic form v_aᵀ(k) Q v_a(k), where v_a(k) is an (n + m)-dimensional column vector constructed from the sensitivity vectors

of the system and the controller. Another method is to

weigh the elements of the sensitivity vectors. Both methods

have already been discussed. The calculation procedure is

as follows:


o Determine the sampling interval sensitivity
  vectors. There are n components for the
  system and m for the optimal controller.

o From the n + m sensitivity components, cal-
  culate a sampling interval.

o Use the resulting sampling increment for
  the next interval of both system and con-
  troller.

o Repeat the procedure.


Although this procedure requires m additional sensi-

tivity calculations, the possibility of obtaining the

"best" sampling increment for both system and controller

may justify the extra calculations.








Example Problems


The techniques discussed in the preceding sections

will be illustrated in the following example problems.



Problem 1


For the system shown in figure 4-6, the form of the

difference equation is,


x(k) = b x(k-1) + c u(k-1).


If the state transition equation is discretized, the exact

(except for the effect of discretizing the input) model

will have the coefficients


b = exp[-a(t_k - t_{k-1})],


c = (1/a)[1 - exp(-a(t_k - t_{k-1}))].


The input selected for checking the variable increment

technique was


u(t) = 1.0,    0 ≤ t ≤ 5.0 and 6.0 ≤ t ≤ 7.0,


u(t) = 0,


for all other values of t. This is shown with a dashed

line in figure 4-7. This particular input was selected









Figure 4-6 First-Order System (zero-order hold followed by the plant 1/(s + a); output x(t) sampled as x(kT))




Figure 4-7 Response and Sensitivity (input u, response x, and local sampling interval sensitivity v vs. time in seconds)





Figure 4-8 Data Reconstruction Error: Zero-Order Hold (predictor and predictor-corrector error magnitudes and the error criterion vs. time in seconds)








since it can eliminate the error of discretizing the input

and, at the same time, provide a rather severe test of the

sampling adjustment process. The response, x, and the

local sampling interval sensitivity, v, for an initial

condition x(0) = 0, are shown in figure 4-7.

For this exact discrete model, the only error is that

of data reconstruction. The magnitude of the reconstruction

error of a zero-order hold is shown in figure 4-8. Sampling

intervals were determined from equation (17) with m = 0.05.

The figure includes curves for both predictor and predictor-

corrector calculating methods.
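A self-contained sketch of this experiment (an approximation of the procedure, not the original program) follows. It generates the variable increment samples with equations (31) and (17), approximates the input monitor by not stepping past a known switching instant, and then measures the peak zero-order hold reconstruction error against a finely stepped exact response.

```python
import math

a, m = 1.0, 0.05
changes = [5.0, 6.0, 7.0]                       # instants where the test input switches
u = lambda t: 1.0 if (t <= 5.0 or 6.0 <= t <= 7.0) else 0.0
step = lambda x, T, ui: math.exp(-a * T) * x + (1.0 - math.exp(-a * T)) / a * ui

# predictor-style variable increment sampling, equations (31) and (17)
t, x, T = 0.0, 0.0, 0.01                        # the first interval is preset
samples = [(t, x)]
while t < 10.0:
    v = -a * math.exp(-a * T) * x + math.exp(-a * T) * u(t)   # equation (31)
    T = min(max(m / max(abs(v), 1e-12), 1e-3), 0.5)           # equations (17), (18)
    nxt = min((c + 1e-6 for c in changes if c + 1e-6 > t), default=10.0)
    T = min(T, max(nxt - t, 1e-6))              # crude stand-in for the input monitor
    x = step(x, T, u(t))
    t += T
    samples.append((t, x))

# peak zero-order hold reconstruction error against a finely stepped exact response
worst, x_ref, dt, idx = 0.0, 0.0, 1e-3, 0
for n in range(int(10.0 / dt)):
    t_ref = n * dt
    while idx + 1 < len(samples) and samples[idx + 1][0] <= t_ref:
        idx += 1
    worst = max(worst, abs(x_ref - samples[idx][1]))          # held value vs. exact response
    x_ref = step(x_ref, dt, u(t_ref))

print(len(samples), worst)                      # sample count and peak reconstruction error
```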

The reconstruction error for the predictor scheme is

closer to the desired value, 0.05, than that of the pre-

dictor-corrector method. The reason for this can be seen

in figure 4-7. Sensitivity is a decreasing function, and

if data from the preceding interval are used, a smaller

sampling interval is calculated. If the sensitivity is

corrected, the sampling interval will be larger, and as

a result, the error is increased. If the sensitivity were

an increasing function, the predictor-corrector scheme

would give smaller intervals and the reconstruction error

would be reduced.

The large initial error shown in figure 4-8 is due to

the size of the first sampling interval. The first interval

is preset, and, therefore, the initial error can be con-

trolled. In figure 4-8, the initial sampling interval was

selected to give a large initial error to show that the








error is reduced at the next sampling interval. The sudden

decreases in the error at 5, 6 and 7 seconds are due to the

small sampling intervals selected by the sudden change

provision. Also,note that the sign of the reconstruction

error changes at 5, 6 and 7 seconds.

Figure 4-9 contains a plot of sampling interval versus

time for several different values of reconstruction error

magnitude. The discontinuities at 5, 6 and 7 seconds are

due to sudden changes in the input.

Figure 4-10 presents the trade-off between the number

of samples required to cover 10 seconds of the response and

the magnitude of the reconstruction error. The curve shown

is for predictor sampling. The number of samples required

is also strongly dependent on the nature of the input.

Since the main objective of variable increment sam-

pling is to save computer time, it is necessary to compare

variable and fixed increment sampling for the same magni-

tude of reconstruction error. For a zero-order hold and

variable increment sampling, 130 samples were required to

cover 10 seconds of the response. The maximum magnitude

of the reconstruction error was 0.02635. In order to main-

tain the same maximum error with fixed interval sampling,

476 samples would be required to cover 10 seconds. The

ratio of the number of samples is 0.273.

The reduction in the number of sampling intervals

does not give a complete evaluation. Each iteration of

variable increment sampling requires more computer time













Figure 4-9 Sampling Interval: Zero-Order Hold (sampling interval in seconds vs. time in seconds, for several values of the error criterion m)




Figure 4-10 Zero-Order Reconstruction Error vs. Number of Intervals (error magnitude vs. number of intervals over 10 seconds)








than an iteration for fixed interval sampling. For this

particular system, the ratio of calculation time is

approximately 3.11. Thus, a more accurate figure of merit,

for the first 10 seconds, would be the product of the two

ratios, 0.273 x 3.11 = 0.848. If the same input is used,

and the first 20.0 seconds considered, variable increment

sampling is even more appealing. The combined figure of

merit drops to 0.438.

The effect of varying F in fractional-order hold

reconstruction is shown in figure 4-11. For values of F

greater than 0.5, the sampling intervals were alternately

large and small and the results are not useful.

The system was simulated with an approximate model

based on the rectangular rule for integration. The

difference equation for the system is


x(k) = [1 - a(t_k - t_{k-1})] x(k-1) + (t_k - t_{k-1}) u(k-1).    (45)


The sampling interval sensitivity equation is


v(k) = -a x(k-1) + u(k-1).    (46)


The difference equation for the error is


x̃(k) = [exp(-aη) - 1 + aη] x̂(k-1) + exp(-aη) x̃(k-1) + [(1/a)(1 - exp(-aη)) - η] u(k-1),    (47)











Figure 4-11 Reconstruction Error: Fractional-Order Hold (error magnitude vs. time in seconds for F = 0, 0.3, 0.5, and 0.55)




Figure 4-12 Modeling Error of Approximate Model (error magnitude vs. time in seconds)








and the error sampling interval sensitivity is


ṽ(k) = a[1 - exp(-aη)] x̂(k-1) - a exp(-aη) x̃(k-1) + [exp(-aη) - 1] u(k-1),    (48)

where η = t_k - t_{k-1}.

The error sensitivity of equation (48) was used in

equation (34) to calculate sampling intervals for the

approximate model. The magnitude of the resulting modeling

error is shown in figure 4-12 for m = 0.02. Figure 4-14

contains a plot of sampling interval size versus time.

Sampling interval size is varied by 20 to 1 to keep the

magnitude of the modeling error within the desired level,

0.02.
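A hedged sketch of this calculation follows; it propagates the exact and rectangular-rule models side by side, evaluates the error sensitivity of equation (48) with values from the preceding interval, and selects intervals with equation (34). The preset first interval and the limits on interval size are illustrative choices.

```python
import math

a, m_err = 1.0, 0.02
u = lambda t: 1.0 if (t <= 5.0 or 6.0 <= t <= 7.0) else 0.0

t, T = 0.0, 0.01                      # preset first interval
x_exact, x_approx, err = 0.0, 0.0, 0.0
while t < 10.0:
    ui = u(t)
    # error sampling interval sensitivity, equation (48), using the previous interval
    v_err = (a * (1.0 - math.exp(-a * T)) * x_approx
             - a * math.exp(-a * T) * err
             + (math.exp(-a * T) - 1.0) * ui)
    # interval that bounds the change in modeling error, equation (34)
    T = min(max(m_err / max(abs(v_err), 1e-12), 1e-3), 0.5)
    # advance the approximate (rectangular rule) and exact discrete models
    x_approx = (1.0 - a * T) * x_approx + T * ui          # equation (45)
    x_exact = math.exp(-a * T) * x_exact + (1.0 - math.exp(-a * T)) / a * ui
    err = x_exact - x_approx                              # actual modeling error, equation (11)
    t += T

print(err)
```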

In figure 4-13, the magnitude of the algebraic sum

of the reconstruction and modeling errors is shown for two

values of F, the fraction of the data-hold circuit. Sam-

pling intervals were determined from equation (39). The

discontinuities at t = 5, 6 and 7 seconds are due to sudden

changes in the input.

The corresponding curves for sampling interval size

are shown in figure 4-14. It is interesting to compare

the F = 0 sampling interval curve of figure 4-14 with the

m = 0.02 curve of figure 4-9. If the modeling and re-

construction errors are of opposite sign, the sampling

interval curve of figure 4-14 gives a larger sampling















Figure 4-13 Reconstruction and Modeling Error (magnitude of the combined reconstruction and modeling error vs. time in seconds for F = 0 and F = 0.3; error magnitude set at 0.02)

Figure 4-14 Sampling Interval: Approximate Model (sampling interval in seconds vs. time in seconds for F = 0 and F = 0.3)


interval than the curve of figure 4-9. When both errors

are of the same sign, the curve of 4-14 gives lower values

of sampling interval.



Problem 2


Variable increment sampling has also been applied to

the system of figure 4-15. The exact difference equation

for the system is


x(k) = A x(k-1) + b u(k-1),    (49)


where the components of A are,


a11 = exp(-3T)[cos(4T) + 0.75 sin(4T)]    (50)

a12 = (25/24) exp(-3T) sin(4T)    (51)

a21 = -1.5 exp(-3T) sin(4T)    (52)

a22 = exp(-3T)[cos(4T) - 0.75 sin(4T)].    (53)


The elements of the vector b are


b1 = 0.25[4.0 - exp(-3T){3 sin(4T) + 4 cos(4T)}]    (54)


and

b2 = 1.5exp(-3T)sin(4T). (55)


The local sampling interval equation is


v(k) = C x(k-1) + d u(k-1),    (56)








where the components of C are

c11 = -exp(-3T)[6.25 sin(4T)]    (57)

c12 = exp(-3T)[(25/6) cos(4T) - (25/8) sin(4T)]    (58)

c21 = exp(-3T)[4.5 sin(4T) - 6 cos(4T)]    (59)

c22 = -exp(-3T)[6 cos(4T) + 1.75 sin(4T)].    (60)

The elements of d are

d1 = 6.25 exp(-3T) sin(4T)    (61)

and

d2 = exp(-3T)[6 cos(4T) - 4.5 sin(4T)].    (62)

Variable increment sampling was investigated for the

input


u(t) = 1.0,    0 ≤ t ≤ 1.0 and 1.4 ≤ t ≤ 1.6,


u(t) = 0,


for all other values of t. The input is shown with dashed

lines in figure 4-16. This input is similar to the input

used in problem 1. Figure 4-16 also contains the response

curves,x1 and x2,for the initial conditions xl(0) = 0

and x2(0) = 0. The sensitivities, vl and v2, are shown

in figure 4-17.
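The sketch below collects equations (50) through (62) into a single routine and evaluates the local sensitivity of equation (56) for one step; the performance value m_i = 0.05 used to form a candidate interval from equation (17) is an illustrative choice.

```python
import numpy as np

def model_and_sensitivity(T):
    """A(T), b(T) from equations (50)-(55) and C = dA/dT, d = db/dT
    from equations (57)-(62)."""
    e, s, c = np.exp(-3 * T), np.sin(4 * T), np.cos(4 * T)
    A = np.array([[e * (c + 0.75 * s),    (25 / 24) * e * s],
                  [-1.5 * e * s,          e * (c - 0.75 * s)]])
    b = np.array([0.25 * (4.0 - e * (3 * s + 4 * c)), 1.5 * e * s])
    C = np.array([[-6.25 * e * s,           e * ((25 / 6) * c - (25 / 8) * s)],
                  [e * (4.5 * s - 6 * c),   -e * (6 * c + 1.75 * s)]])
    d = np.array([6.25 * e * s, e * (6 * c - 4.5 * s)])
    return A, b, C, d

x, u, T = np.zeros(2), 1.0, 0.01
A, b, C, d = model_and_sensitivity(T)
v = C @ x + d * u                        # local sensitivity, equation (56)
T_next = float(np.min(0.05 / np.maximum(np.abs(v), 1e-12)))   # equation (17) with m_i = 0.05
x = A @ x + b * u                        # equation (49)
print(v, T_next)
```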













Figure 4-15 Second-Order System (zero-order hold followed by the plant; output x(t) sampled as x(kT))

Figure 4-16 Input and Response (input u and responses x1, x2 vs. time in seconds)



