Title: Model and identification theory for discrete systems
Permanent Link: http://ufdc.ufl.edu/UF00097881/00001
 Material Information
Title: Model and identification theory for discrete systems
Physical Description: viii, 125 leaves. : illus. ; 28 cm.
Language: English
Creator: Smith, Stanley Louis, 1934-
Publication Date: 1966
Copyright Date: 1966
 Subjects
Subject: Numerical analysis   ( lcsh )
Programming (Mathematics)   ( lcsh )
Linear programming   ( lcsh )
Mathematical optimization   ( lcsh )
Electronic digital computers   ( lcsh )
Electrical Engineering thesis Ph. D
Dissertations, Academic -- Electrical Engineering -- UF
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
 Notes
Thesis: Thesis -- University of Florida.
Bibliography: Bibliography: leaves 119-124.
Additional Physical Form: Also available on World Wide Web
General Note: Manuscript copy.
General Note: Vita.
 Record Information
Bibliographic ID: UF00097881
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000568502
oclc - 13675220
notis - ACZ5236










MODEL AND IDENTIFICATION THEORY

FOR
DISCRETE SYSTEMS










By
STANLEY LOUIS SMITH












A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY










UNIVERSITY OF FLORIDA
August, 1966








ACKNOWLEDGMENTS


The author wishes to express his appreciation to the members of

his supervisory committee for their advice and cooperation. In par-

ticular, acknowledgment is made to the Chairman, Dr. A. P. Sage, for

his introduction to the research topic and the stimulating motivation

of his discussions and professional example. Special appreciation is

expressed to the Co-Chairman, Prof. W. F. Fagen, for his counsel and

continued confidence throughout the author's graduate program. Grati-

tude is expressed for the support of the late Dr. M. J. Larson who made

it possible for the author to extend his graduate study.

Expressions of gratitude cannot suffice to adequately acknowledge

the inspiration and support of the author's wife, Annie Laura, certainly

the ideal of Proverbs 31:10b-31.

The financial support of the Department of Electrical Engineering

and National Aeronautics and Space Administration Grant A 26 NsG-542

is gratefully noted.







TABLE OF CONTENTS



Page

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . .   ii

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . .  iv

LIST OF FIGURES  . . . . . . . . . . . . . . . . . . . .   v

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . vii

CHAPTER

1. INTRODUCTION  . . . . . . . . . . . . . . . . . . . .   1

2. DISCRETE APPROXIMATION TECHNIQUES . . . . . . . . . .   7

3. IDENTIFICATION  . . . . . . . . . . . . . . . . . . .  67

4. CONCLUSIONS . . . . . . . . . . . . . . . . . . . . .  99

APPENDICES

APPENDIX I . . . . . . . . . . . . . . . . . . . . . . . 102

APPENDIX II  . . . . . . . . . . . . . . . . . . . . . . 105

APPENDIX III . . . . . . . . . . . . . . . . . . . . . . 110

APPENDIX IV  . . . . . . . . . . . . . . . . . . . . . . 113

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . 119

BIOGRAPHICAL SKETCH  . . . . . . . . . . . . . . . . . . 125







LIST OF TABLES


Table                                                          Page

1. Examples of Discrete Forms for Integrating Operators  . . .   13

2. Examples of Optimum Discrete Operators (Appendix II)  . . .  105

3. Computed Initial Conditions for the Linear System
   Simulations . . . . . . . . . . . . . . . . . . . . . . . .   38

4. Normalized Error Square Criterion for the Ramp
   Response Experiment . . . . . . . . . . . . . . . . . . . .   39

5. Gain Parameter Convergence  . . . . . . . . . . . . . . . .   82

6. Convergence of Quasilinearization for Nonlinear
   System with 10sin2t Input and T = 0.2 Second  . . . . . . .   91

7. Convergence of Quasilinearization Process
   for the Nonstationary System Example  . . . . . . . . . . .   94







LIST OF FIGURES


Figure                                                         Page

1. Impulse Modulator-Sampler . . . . . . . . . . . . . . . . .    9

2. Open-loop Sampled-data System . . . . . . . . . . . . . . .   10

3. Nonlinear System with a Separable Nonlinearity  . . . . . .   18

4. Discrete Model Configuration for the Fowler
   Approximation Method  . . . . . . . . . . . . . . . . . . .   19

5. System Configuration for Error Determination  . . . . . . .   20

6. Linear Example System . . . . . . . . . . . . . . . . . . .   26

7. Error Criterion for the Output of the Linear System . . . .   35

8. Linear System Unit Step Response  . . . . . . . . . . . . .   37

9. Linear System Ramp Response . . . . . . . . . . . . . . . .   40

10. System for Nonlinear Example . . . . . . . . . . . . . . .   41

11. Tustin Model for the Nonlinear System  . . . . . . . . . .   43

12. Error Criterion for x1 with 10 Unit Step Input . . . . . .   50

13. Error Criterion for x2 with 10 Unit Step Input . . . . . .   51

14. Nonlinear System Response to 10 Unit Step Input  . . . . .   52

15. Nonlinear System x2(t) Response for 10 Unit Step Input . .   53

16. Error Criterion for x1 of Nonlinear System for
    Step Input with Modified Fowler Result . . . . . . . . . .   55

17. Error Criterion for x2 of Nonlinear System for
    Step Input with Modified Fowler Result . . . . . . . . . .   56

18. Sine Wave Response of Nonlinear System for
    T = 0.2 Sec. . . . . . . . . . . . . . . . . . . . . . . .   59

19. Error Criterion for x1 of Nonlinear System
    with 10sin2t Input . . . . . . . . . . . . . . . . . . . .   60

20. Error Criterion for x2 of Nonlinear System
    with 10sin2t Input . . . . . . . . . . . . . . . . . . . .   61

21. Nonlinear System x2(t) Response for 10sin2t Input  . . . .   62

22. Response of the Nonlinear System to a Ramp Input . . . . .   63

23. Nonlinear System for Parameter Identification  . . . . . .   77

24. Discrete Model Gain Adjustment via Quasilinearization  . .   83

25. Error Criterion for x1 with Identified Gain Parameters,
    Step Input . . . . . . . . . . . . . . . . . . . . . . . .   84

26. Error Criterion for x2 with Identified Gain Parameters,
    Step Input . . . . . . . . . . . . . . . . . . . . . . . .   85

27. Effect of Gain Parameter Change on Step Response of
    Nonlinear System . . . . . . . . . . . . . . . . . . . . .   87

28. Gain Adjustment via Differential Approximation . . . . . .   88

29. Gain Adjustment for Input Step Change  . . . . . . . . . .   88

30. Error Criterion for x1 with Sine Input and Identified
    Gains  . . . . . . . . . . . . . . . . . . . . . . . . . .   90

31. Time-Varying System for Identification . . . . . . . . . .   93






Abstract of Dissertation Presented to the Graduate Council
in Partial Fulfillment of the Requirements for
the Degree of Doctor of Philosophy



MODEL AND IDENTIFICATION THEORY FOR DISCRETE SYSTEMS


by

Stanley Louis Smith

August 13, 1966

Chairman: Dr. A. P. Sage
Major Department: Electrical Engineering

Discrete-time systems have assumed increasing importance with the

advent of high-speed digital computers. Digital simulation of continu-

ous systems, digital implementation of control strategies, and identi-

fication of discrete systems require techniques for accurate model-

ling of system dynamics. Classical methods for discrete representation

of systems continue to be employed while recently introduced methods

offer decided advantages but have not received wide attention. Compre-

hensive studies of both classical and modern methods for discrete

modelling have not appeared to permit evaluation of their relative

merits. An intensive study is undertaken to determine the effective-

ness of the different discretization techniques for the digital simu-

lation of linear and nonlinear continuous systems. A number of digital

experiments are performed to obtain quantitative data for comparison of

the capabilities of the discrete modelling methods.

The improved performance of recently developed methods for digital





simulation is related to certain features of the discretization pro-

cedure. These characteristics indicate an approach for the improvement

of other discrete representations. This approach requires the identi-

fication of parameters within the discrete model to obtain desired

response characteristics. Formulation of a procedure for the parameter

identification leads to a two-point boundary-value problem. This

problem is resolved via a discrete version of the method of quasi-

linearization. A procedure for digital computer implementation of this

technique is developed and a number of digital experiments performed.

Improved discrete models of continuous systems are shown to be obtained

with the technique presented. A discrete formulation for the method of

differential approximation is presented and the effectiveness of this

approach for parameter identification investigated through a series of

computer experiments.






CHAPTER 1

INTRODUCTION



Application of digital computers to the analysis and study of dy-

namic systems requires a discrete formulation of the relationships

describing the system behavior; hence every system, whether naturally

of discrete-time form, continuous with sampled-data variables, or

purely continuous, is represented as a discrete-time system within the

digital machine. Methods of numerical analysis are routinely employed

in implementing the digital computer solution of differential equations

for continuous systems. This approach achieves the system discreti-

zation in an indirect manner and may obscure certain properties of

interest in the physical system, even while yielding quite accurate

solutions to the system differential equations. In a study of system

behavior in response to different inputs and nonstationary system

parameters, methods for digital computer implementation of the system

differential equations have been sought, permitting a discrete model of

the physical system to be obtained which will yield accurate represen-

tation of the physical system behavior in the physical time domain,

i.e. real-time digital simulation.

Discrete modelling techniques developed over the past several

years are in general based on standard methods of numerical analysis

and related approximate techniques arising in the engineering sciences.

While satisfactory digital simulations have long been achieved for such


-1-




-2-


cases as relatively slow chemical processes, real-time simulation of

many dynamic systems was not possible for some time with the digital

equipment and computation methods available. With advances in digital

hardware design and increased sophistication of computer organization,

real-time digital simulation of many complex dynamic processes has been

achieved and increasing effort placed on more efficient and more accu-

rate computational techniques. Any desired degree of accuracy within

machine capability may be attained by traditional numerical analysis

techniques for solution of differential equations [1,2,3] but the

time required to produce the solution is usually prohibitive for real-

time simulation of the majority of systems of interest, even with

high-speed machines. Over approximately the last 20 years, many

approximate techniques for the discrete representation of continuous

systems have been proposed and have been employed for digital simu-

lations with varying degrees of success. Tustin's method, one of the

earliest techniques [4] developed within the engineering field, has

been perhaps the best accepted approach for digital simulation and is

probably the prevalent method in current applications. Most recently,

the discrete modelling technique introduced by Fowler [5] has provided

a significant advance in the capability of methods for digital simu-

lation, especially for nonlinear systems.

While many papers have appeared in the general literature of

engineering and mathematics discussing specific methods for discrete

modelling or digital simulation, there have been few attempts to




*Numbers in square brackets refer to entries in the references.








compare a number of techniques and to evaluate their relative effective-

ness. In a paper on numerical transform calculus, Boxer [6] discusses

the fundamental aspects of some of the earlier techniques but gives

major emphasis to the Boxer-Thaler approach and results obtained via

this method; he makes only general reference to the comparative accu-

racy of other methods. A more recent study of digital simulation

methods has been reported by Fryer and Shultz [7]. In this study are

included those methods mentioned by Boxer and other methods later

introduced. The authors attempt to compare the effectiveness of the

discretization techniques studied by application of each method to the

modelling of a specified example with certain other common constraints

on the simulation characterizations. The examples considered were those

of linear stationary systems, and the simulations were made for a

sampling interval time of rather large magnitude in relation to the

system dynamics. This study of digital simulation techniques remains

the most comprehensive reported in the open literature, to this writer's

knowledge. Introduction of the so-called IBM method of digital simu-

lation, Fowler's method, by Hurt [8], reflected research accomplished

concurrently with the work of Fryer and Shultz. The depth of experi-

ence indicated by the initial reports of Fowler's technique and re-

flected in subsequent company documents [9] reveals perhaps the major

research endeavor in the digital simulation area to this time. The

implication of the results obtained with the Fowler method, as reported

in the above-referenced papers, is that extensive comparative studies

were made of different discrete methods, but no comprehensive sup-

porting data are presented. There have been no published reports of

work accomplished with this method from sources other than those cited

above.




-4-


The study of discrete modelling techniques reported herein is

initiated by presenting some of the basic concepts of discrete-time

system theory. A state space approach is presented for the de-

scription of discrete systems which facilitates formulation of problems

for digital computer solution. Following this introduction to discrete-

time systems, the more significant of the classical discrete modelling

techniques are discussed. The fundamental concepts of each approach to

digital simulation are summarized in a procedure for application of

each method. Fowler's method of discrete modelling is presented and the

procedure for application of the method illustrated in detail. The

technique for optimum discrete representation of the integration

operation recently reported by Sage and Burt [10] was extended by Sage

[11] for modelling of higher-order operators. As yet, neither has

significant computational experience with this technique been reported,

nor have practicable procedures and computation algorithms been defined.

This technique is herein developed and a procedure presented for formu-

lation of the discretization of more general systems.

After discussing the essential features of each of the digital

simulation techniques, the methods are employed for the simulation of

two example systems. Since all of the techniques presented have been

developed for modelling of linear systems, the first example is a

second-order linear system for which the step response is sought. Each

simulation method is employed to formulate a digital computer algorithm

for determination of the desired response. The response of each simu-

lation to a step input is investigated for a number of sampling inter-

vals, and from these data, the relation of the discretization error of




-5-


each simulation to change in sampling interval is evaluated, permitting a

comparison of the relative effectiveness of the different techniques.

The second example considered is that of a second-order nonlinear

system. The discretization procedures are here applied to the modelling

of the linear portions of the system, and a study made of the simu-

lation sensitivity to sampling interval size for a step input. Ad-

ditional experiments are performed with the Fowler and optimum discrete

approximation methods in which evaluation is made of the simulation

performance for sine and ramp function inputs.

The experimental results obtained in the comparative study of

digital simulation techniques indicated above, reveal a possible manner

in which a discrete model may be improved. The model is tailored for

changing sampling intervals and inputs by adjusting selected parameters

in the discrete representation. Formulation of this approach leads to

a two-point boundary-value problem which is resolved via a discrete

form of the generalized Newton-Raphson method, or quasilinearization

[12,13,14]. Evaluation of the desired parameters may also be achieved

by means of a discrete form of Bellman's differential approximation

technique [12,15,16]. This latter approach may also be employed to

estimate parameters for initiating the quasilinearization procedure.

Procedures for implementation of these techniques are formulated and

computational algorithms developed for digital computer solution of the

identification problem. Development of this portion of the research

comprises the principal effort of the work undertaken.

The approach taken in the discussion of discrete modelling

techniques is that of considering a linear transfer function as an

operator. The result of a procedure for discretization of a continuous




-6-


system transfer function is then a pulse transfer function which will

hopefully perform the same operation on an input signal. Such a dis-

crete operation can be implemented on a digital computer by expressing

the pulse transfer function in its difference equation form. The

resulting equations provide recursive relationships for efficient

digital computer representation of a continuous system and hence pro-

vide the most probable avenue to real-time digital simulation.

An important class of digital simulation techniques exists in the

simulation languages or digital analog simulators. These techniques

in general emphasize convenience for the programmer at the expense of

computation time. With a simulation for temporary study having no

emphasis on real-time operation, convenience and speed in programming

are a decided advantage even at the expense of computation time.

Despite the significance of simulation languages, they will not be dis-

cussed here since the present emphasis is on computation techniques

which hopefully result in real-time simulation.






CHAPTER 2

DISCRETE APPROXIMATION TECHNIQUES


State-Space Representation of Discrete-Time Systems

Convenient digital implementation of discrete-time models for

systems may be achieved through the concepts of state-space represen-

tation of discrete-time systems. The principal impetus for this

approach to discrete system characterization has been provided by the

work of Kalman and Bertram [17], and later, by the work of Zadeh [18].

The more recent publications of Bekey [19] and Freeman [20] are es-

sentially drawn from earlier referenced works, principally those above.

Given the state x(kT) of a system at time t = kT, the state at

time greater than kT for a system input u(kT) is given by


    x((k+1)T) = A(kT)x(kT) + B(kT)u(kT),                          (2.1)

where x is an n-vector, u is an m-vector, A is an nxn system operator

matrix, and B is an nxm input weighting matrix. Having the state

vector defined for k = 0, the system state for k > 0 is obtained by

repeated application of the recursion formula of Equation (2.1) to

yield


    x(kT) = [∏_{n=0}^{k-1} A(nT)] x(0)
            + Σ_{j=0}^{k-1} [∏_{n=j+1}^{k-1} A(nT)] B(jT)u(jT).   (2.2)


-7-




-8-


The system state transition matrix, or fundamental matrix, is defined by


    Φ(k,m) = ∏_{n=m}^{k-1} A(nT),  for k > m,                     (2.3)

where


    Φ(k,k) = I,                                                   (2.4)

I an nxn identity matrix. With this definition of the transition

matrix, Equation (2.2) may be written as


    x(kT) = Φ(k,0)x(0) + Σ_{j=0}^{k-1} Φ(k,j+1)B(jT)u(jT).        (2.5)

For a stationary system with constant matrices A and B, the transition

matrix becomes


    Φ(k,m) = A^{k-m},  and  Φ(k,0) = A^k,

so that the state transition equation is written as


    x(kT) = Φ(k)x(0) + Σ_{j=0}^{k-1} Φ(j)Bu((k-j-1)T).            (2.6)


The formulation exemplified by Equation (2.1) in general provides

the basis for efficient digital computer algorithms for a discrete

representation of system behavior. This characterization permits con-

venient incorporation of initial conditions, and representation of non-

stationary, nonlinear systems through proper definition of the system

operator and input weighting matrices. The discrete models for con-

tinuous systems derived by the approximation methods to be discussed

may be represented by difference equations of the form of Equation (2.1).
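The recursion of Equation (2.1) can be sketched directly in present-day code. The following minimal illustration treats a stationary system with a scalar input; the matrices A and B and the unit-step input are illustrative values, not drawn from the examples of this study.

```python
# Propagate x((k+1)T) = A x(kT) + B u(kT) for a stationary system.
# A, B, and the unit-step input below are illustrative placeholders.

def propagate(A, B, x0, u_seq):
    """Repeatedly apply the state recursion; return the state history."""
    n = len(x0)
    x = list(x0)
    history = [list(x)]
    for u in u_seq:            # one recursion step per input sample
        x = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
             for i in range(n)]
        history.append(list(x))
    return history

A = [[0.9, 0.1],
     [0.0, 0.8]]               # 2x2 system operator matrix
B = [0.0, 0.1]                 # input weighting for a scalar input
hist = propagate(A, B, [0.0, 0.0], [1.0] * 5)   # unit-step input
```

Repeated application of this loop is exactly the expansion of Equation (2.2) for the stationary case.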




- 9 -


Fundamental Concepts of z-transform Theory

Representation of a dynamic system by a digital computer computa-

tional algorithm permits the development of a discrete-time system

description which approaches the idealized concepts of conventional

sampled-data theory. The operation representing an ideal impulse

sampler or modulator such as shown in Figure 1 is achieved by the

ordinary computation processes within a digital machine. The con-

tinuous signal x(t) may be viewed as producing pulse amplitude modu-

lation on the impulse train g(t), or alternately, the impulse train

may be viewed as gating a unity gain amplifier to produce output at

distinct instants of time. Assuming the impulse train to consist of

delta functions uniformly spaced an interval T in time, the impulse

train may be represented by


    g(t) = Σ_{n=-∞}^{+∞} δ(t - nT).                               (2.7)






Figure 1. Impulse Modulator-Sampler


With the continuous function x(t) defined for t ≥ 0, the modulator out-

put x*(t) is




- 10 -


    x*(t) = Σ_{n=0}^{∞} x(nT) δ(t - nT).                          (2.8)


Taking the Laplace transform of Equation (2.8) results in the form


    X*(s) = Σ_{n=0}^{∞} x(nT) e^{-nTs}.                           (2.9)


Defining the change of variable as introduced by Hurewicz [21],

z = e^{sT}, leads to

    X(z) = Σ_{n=0}^{∞} x(nT) z^{-n},                              (2.10)


the z-transform of the time function x(t).
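As a numerical illustration of Equation (2.10) (a present-day sketch added here, not part of the original development): for the unit-step sequence x(nT) = 1 the z-transform has the closed form 1/(1 - z^{-1}), which a truncated partial sum approaches for |z| > 1.

```python
# Partial sum of Equation (2.10) for a sampled sequence, compared with
# the closed-form z-transform of the unit step, X(z) = 1/(1 - 1/z).

def z_transform_partial(x_samples, z):
    """Truncated z-transform: sum of x(nT) * z**(-n) over the samples."""
    return sum(x * z**(-n) for n, x in enumerate(x_samples))

z = 2.0
approx = z_transform_partial([1.0] * 50, z)   # unit step, 50 terms
exact = 1.0 / (1.0 - 1.0 / z)                 # closed form, |z| > 1
```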

Consider the linear, stationary, open-loop, sampled-data system

of Figure 2 with synchronized ideal samplers on input and output and

impulse response h(t). With the input to G(s) appearing as a sequence





Figure 2. Open-loop Sampled-data System


of impulses, the output y*(t) = y(nT) at any sampling instant for a

relaxed system is given by the convolution summation


    y(nT) = Σ_{k=0}^{∞} h((n-k)T) x(kT),   n = 0, 1, 2, . . .,    (2.11)




- 11 -


where the sequence given by the h(nT) is termed the weighting sequence

for the sampled-data system and is zero for negative argument. If the

z-transform rule of Equation (2.10) is now applied to Equation (2.11),

and the index change j = n-k made, the z-transform Y(z) is given by



    Y(z) = Σ_{j=0}^{∞} Σ_{k=0}^{∞} h(jT) x(kT) z^{-(j+k)},        (2.12)


where h(jT) exists for j > 0. Recognizing the expression for the

z-transform of the weighting sequence and the z-transform of x(nT)

imbedded in Equation (2.12), and denoting the weighting sequence trans-

form by G(z), the pulse transfer function, Y(z) is given by


Y(z) = G(z)X(z). (2.13)


The broad application of z-transform theory since its introduction by

Hurewicz has yielded many comprehensive treatments of the subject

[22,23,24] and extensive tabulations of time functions and pulse

transfer functions, facilitating application of the z-transform tech-

niques. For analysis problems of great complexity, computer programs

are available [9] to aid in performing the z-transform analysis.
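The convolution summation of Equation (2.11) translates directly into code; the weighting sequence and input below are illustrative values, not taken from the systems studied here.

```python
# y(nT) = sum over k of h((n-k)T) x(kT): output of a relaxed
# sampled-data system as the convolution of its weighting sequence
# h (zero for negative argument) with the input samples x.

def convolve_output(h, x, n_points):
    y = []
    for n in range(n_points):
        y.append(sum(h[n - k] * x[k]
                     for k in range(min(n + 1, len(x)))
                     if 0 <= n - k < len(h)))
    return y

h = [1.0, 0.5, 0.25]      # illustrative weighting sequence h(nT)
x = [1.0] * 6             # unit-step input samples
y = convolve_output(h, x, 6)
```

Once the weighting sequence is exhausted, the step response settles at the sum of its terms, consistent with Y(z) = G(z)X(z) evaluated for a step input.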



Discrete Modelling Methods



Transform Methods

The pulse transfer function resulting from the z-transform

operation may be implemented on a digital computer as a difference

equation relating the input and output variables of the system under

study. Consider a system represented by Figure 2 where the input x(t)




- 12 -


and output y(t) are sampled; hence the pulse transfer function may be

represented as G(z) = Y(z)/X(z), or


    G(z) = (a_0 + a_1 z^{-1} + a_2 z^{-2} + . . . + a_n z^{-n})
           / (1 + b_1 z^{-1} + b_2 z^{-2} + . . . + b_n z^{-n}).  (2.14)


Knowing that z^{-n} F(z) = [f(t - nT)]*, where [ ]* represents the

z-transform of the function within the brackets and F(z) = [f(t)]*,

leads to a difference equation relating y(kT) and x(kT) when G(z) is

replaced by Y(z)/X(z) and the inverse transform obtained for the

resulting expression.

This difference equation may now be written as a recursion formula

for y(kT),


    y(kT) = a_0 x(kT) + a_1 x((k-1)T) + . . . + a_n x((k-n)T)

            - b_1 y((k-1)T) - . . . - b_n y((k-n)T),              (2.15)

which is the desired relationship for digital computer implementation

of the discrete approximation resulting from the z-transform method.

It is noted that the form of Equation (2.15) is not in the state

variable form shown in Equation (2.1). The exact form employed for

computer solution depends upon the available computer memory storage.
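A direct implementation of the recursion of Equation (2.15) might take the following form; the example coefficients correspond to the simple pulse transfer function G(z) = 1/(1 - 0.5 z^{-1}) and are purely illustrative.

```python
# Difference-equation form of G(z) = Y(z)/X(z):
# y(kT) = a0 x(kT) + ... + an x((k-n)T)
#         - b1 y((k-1)T) - ... - bn y((k-n)T)

def pulse_tf_response(a, b, x):
    """a: numerator coefficients a0..an; b: denominator b1..bn
    (the leading denominator coefficient is 1, as in Equation (2.14))."""
    y = []
    for k in range(len(x)):
        acc = sum(a[i] * x[k - i] for i in range(len(a)) if k - i >= 0)
        acc -= sum(b[j - 1] * y[k - j] for j in range(1, len(b) + 1)
                   if k - j >= 0)
        y.append(acc)
    return y

# G(z) = 1/(1 - 0.5 z^-1), i.e. y(kT) = x(kT) + 0.5 y((k-1)T);
# an impulse input produces the weighting sequence 1, 0.5, 0.25, ...
y = pulse_tf_response([1.0], [-0.5], [1.0, 0.0, 0.0, 0.0])
```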

The z-transform expressions for integration operators, s^{-n}, may

be employed in an alternate approach to discretization of a continuous

system transfer function. Having expressed a transfer function G(s)

as a ratio of polynomials in s^{-1}, substitution is made for the s^{-n} by

the corresponding z-transforms. Typical z-transforms for integration

operators are given in Table 1. This approach to discretization of a

continuous system transfer function is aptly termed integrator







Table 1


Examples of Discrete Forms for Integrating Operators


z-Transform:

    1/s    ->  T z^{-1} / (1 - z^{-1})
    1/s^2  ->  T^2 z^{-1} / (1 - z^{-1})^2
    1/s^3  ->  T^3 z^{-1} (1 + z^{-1}) / (2(1 - z^{-1})^3)

Tustin:

    1/s    ->  T (1 + z^{-1}) / (2(1 - z^{-1}))
    1/s^2  ->  [T (1 + z^{-1}) / (2(1 - z^{-1}))]^2
    1/s^3  ->  [T (1 + z^{-1}) / (2(1 - z^{-1}))]^3

Madwed-Truxal:

    1/s    ->  T (1 + z^{-1}) / (2(1 - z^{-1}))
    1/s^2  ->  T^2 (1 + 4z^{-1} + z^{-2}) / (6(1 - z^{-1})^2)
    1/s^3  ->  T^3 (1 + 11z^{-1} + 11z^{-2} + z^{-3}) / (24(1 - z^{-1})^3)

Boxer-Thaler:

    1/s    ->  T (1 + z^{-1}) / (2(1 - z^{-1}))
    1/s^2  ->  T^2 (1 + 10z^{-1} + z^{-2}) / (12(1 - z^{-1})^2)
    1/s^3  ->  T^3 z^{-1} (1 + z^{-1}) / (2(1 - z^{-1})^3)




- 14 -


substitution, or internal substitution, and is the approach commonly

taken with several other techniques to be discussed.

Multiplication of the z-transform of s^{-n} by the sampling interval

T is required to obtain the z-transform integration operator. Fryer

and Shultz [7] have reported experimental results for the discreti-

zation of a system utilizing Blum's technique for derivation of digital

filters [25]. The authors observed an unanticipated reduction in the

steady-state gain of the simulation employing this method, but offer

no conclusion regarding the source of the gain loss. It is shown in

Appendix I that the discrete system gain for this approach is reduced

by a multiplicative factor, the sampling interval T, from the

z-transform approximation. This technique for discrete modelling is

mentioned only for its connection with the basic z-transform theory

and to resolve the dilemma posed by Fryer and Shultz; for experimental

results obtained with this approach the reader is referred to the

previously referenced paper [7].

Tustin Method

Originally presented as a general approach to linear system

analysis through the representation of time functions in terms of

sequences of numbers [26], the practical application of the Tustin

method may be reduced to use of Tustin's definition of the differ-

entiating and integrating operators. A linear transfer function G(s)

expressed as a ratio of polynomials in s is readily digitized by

substituting for s^n the Tustin operator expressed as


    s^n = [2(1 - z^{-1}) / (T(1 + z^{-1}))]^n,




- 15 -


where T is the sampling interval. It should be noted that this corre-

sponds to repeated usage of the trapezoidal integration rule.
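For the integrator G(s) = 1/s the Tustin substitution yields G(z) = T(1 + z^{-1})/(2(1 - z^{-1})), whose difference equation is the trapezoidal rule. A brief numerical check follows; the ramp input and sampling interval are chosen only for illustration.

```python
# Tustin model of 1/s: y(kT) = y((k-1)T) + (T/2)(x(kT) + x((k-1)T)),
# i.e. trapezoidal integration of the input samples.

def tustin_integrator(x, T):
    y = [0.0]                       # relaxed system: y(0) = 0
    for k in range(1, len(x)):
        y.append(y[-1] + (T / 2.0) * (x[k] + x[k - 1]))
    return y

T = 0.1
x = [k * T for k in range(11)]      # ramp input x(t) = t sampled at T
y = tustin_integrator(x, T)
# Trapezoidal integration is exact for a ramp: the integral of t is
# t^2/2, so y at t = 1 should be 0.5.
```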

Madwed-Truxal Method

Madwed extended the Tustin time series approach to system analysis

and developed higher order integrating operators of increased accuracy

in his comprehensive treatment [27] of this technique. The complex

notation developed by Madwed for the polygonal approximation of time

functions was clarified by Truxal [28] in his formulation of the

numerical convolution operation for system analysis using z-transform

notation. The first order integration operator of Madwed is the

trapezoidal rule encountered in the Tustin method; however, higher-

order Madwed integration operators assume different forms as shown by

the examples of Table 1. To digitize a transfer function using this

approach, the integrator substitution technique is employed, and the

appropriate Madwed integrating operator substituted for s^{-n} to obtain

G(z).

Boxer-Thaler Method

The Boxer-Thaler method was presented as a technique for numerical

inversion of Laplace transforms [29,6]. The procedure for application

of this approach follows that of the previous methods in that substi-

tutions are made for the complex variable s, but the integrating

operators employed are the "z-forms" developed by Boxer and Thaler.

These special forms were developed in the frequency domain in contrast

to the time domain development of Tustin and Madwed. It was noted

that polynomial approximation for s-1 P T/ln z could be obtained by

expanding In z in a rapidly convergent series and then expressing the

operator s-l as




- 16 -


T T
s-1
In z 2(u + u3/3 + u5/5 + . .)


where


1z-1
1 + z-1
u =



From this expression there results by synthetic division



s-1 -(u1/3 -4u/45- . .)
2

which leads to z-forms for s-n when both sides of the above expression

are raised to the nth power and the constant term and principal part

of the resulting series retained. Table 1 contains z-forms for

several orders of integrating operators. A linear system transfer

function may be discretized by integrator substitution employing the

appropriate z-forms for the s^-n.
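The synthetic division behind these z-forms can be checked mechanically. The sketch below (an illustrative aid, not part of the original development) inverts the parenthesized series in ln z = 2u(1 + u^2/3 + u^4/5 + . . .) with exact rational arithmetic; the names N, g, and h are arbitrary.

```python
# Sketch of the Boxer-Thaler synthetic division with exact rationals.
from fractions import Fraction

N = 6                                          # number of series terms carried
# ln z = 2u(1 + u^2/3 + u^4/5 + ...): coefficients of the series in u^2
g = [Fraction(1, 2*k + 1) for k in range(N)]

# Invert the power series g by synthetic division: h(x)*g(x) = 1
h = [Fraction(1)]
for n in range(1, N):
    h.append(-sum(g[k] * h[n - k] for k in range(1, n + 1)))

# s^-1 = T/ln z = (T/2u)*h(u^2); the leading h values give the z-form series
print(h[:3])   # [Fraction(1, 1), Fraction(-1, 3), Fraction(-4, 45)]
```

The recovered coefficients 1, -1/3, -4/45 are exactly those appearing in the series for s^-1 above.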

Anderson-Ball-Voss Method

This technique was presented [30] as an approach to discretization

of linear differential equations. If the input to a system can be

approximated by a polynomial in time, this approximation is substituted

into the system differential equation for the input and the analytical

solution determined. For system differential equations of the form


A_n d^n y/dt^n + A_{n-1} d^{n-1} y/dt^{n-1} + . . . + A_0 y = x(t) ,     (2.16)


the input, x(t), is approximated by a low-order polynomial h(t) per-

mitting the solution, y(t) to be written as




y(t) = Σ_{i=1}^{n} c_i e^{a_i t} + H(t)


where the a_i are assumed to be distinct. A recursion formula for y(t)

can be obtained by writing the solution as


y((m+1)T) = α_1 y(mT) + α_2 y((m-1)T) + . . . + α_j y((m-j+1)T)

+ β_1 x((m+1)T) + β_2 x(mT) + . . . + β_{k+2} x((m-k+1)T)     (2.17)


where j is the order of the differential equation, and

k is the degree of the input approximation polynomial.

Writing the solution y(t) as

y(t) = Σ_{i=1}^{n} c_i e^{a_i (t - t_m)} + H(t)     (2.17a)

and representing the input approximation by


h(t) = h_0 + h_1(t - t_m) + h_2(t - t_m)^2 + . . .

permit the coefficients h_i to be evaluated when y(t) and h(t) are

substituted into Equation (2.16) and t made equal to zero. If t is

successively made equal to t_m, t_{m-1}, . . ., t_{m-j+1}, a set of j

equations is obtained from y(t) for evaluation of the c_i and subse-

quently the α_i.

Fowler Method

The approach to digital simulation developed by Fowler [31]

employs root-locus techniques in conjunction with the z-transforms for

the continuous system under consideration. For a nonlinear system

such as in Figure 3, the z-transforms of the individual linear transfer

functions are first determined.






Figure 3. Nonlinear System with a Separable Nonlinearity


The nonlinearity is replaced by a representative gain, frequently

unity, and the poles of the resulting closed-loop pulse transfer

function

G(z) = G_1(z)G_2(z) / [1 + G_1(z)G_2(z)H(z)]     (2.18)


made equal to the poles of the z-transform of the closed-loop

continuous linearized system

G(z) = Z[ G_1(s)G_2(s) / (1 + G_1(s)G_2(s)H(s)) ]     (2.19)


by adjusting gain parameters in the forward and feedback paths. This

may be accomplished by equating the denominator terms of Equations

(2.18) and (2.19) and solving for the unknown gain parameters. The use

of the notation F_1(z) and F_2(z) in Figure 4 reflects the changed

character of G_1(z) and G_2(z) when the gain parameters are inserted.






Figure 4. Discrete Model Configuration for the Fowler
Approximation Method



Additional requirements of steady-state gain and system input

approximation are met by determining an input pulse transfer function

Fi(z) of Figure 4 so that the product of this transfer function and the

closed-loop expression (2.18) equals the z-transform of the product of

the desired input approximation or data hold and the linearized closed-

loop transfer function, i.e.


F_i(z)F_1(z)F_2(z) / [1 + F_1(z)F_2(z)H(z)] = Z[ H_0(s)G_1(s)G_2(s) / (1 + G_1(s)G_2(s)H(s)) ]     (2.20)


where H_0(s) is the data hold transfer function. With the discrete

model completed, it can be implemented by a digital computer program

as shown earlier for pulse transfer functions.

Optimum Digital Simulation

Discrete approximation of a continuous linear system may be

approached by a direct effort to minimize the error between the output

of the discrete system and the sampled output of the continuous system,

as shown by Sage and Burt [10,11]. The signal comparison is made as






illustrated in Figure 5, where r(t) is the input, I(s) is the ideal

continuous operation, H(z) is the unknown pulse transfer function, and

F(z) is a fixed portion of the system.


Figure 5. System Configuration for Error Determination


The resulting error sequence is

e(nT) = c_i(nT) - c_d(nT)


where ci(nT) is the ideal output after sampling, and cd(nT) is the

actual discrete system output. Taking the z-transform of e(nT) yields


E(z) = [R(s)I(s)]* - R(z)F(z)H(z) .     (2.21)


Since the desired approximation is sought to minimize the error above,

the criterion for optimization is chosen as the sum of error squared

which may be expressed as



Σ_{n=0}^{∞} e^2(nT) = (1/2πj) ∮_Γ E(z) E(z^-1) z^-1 dz


where the contour of integration Γ is the unit circle. Substituting






the expression for E(z) from Equation (2.21) into the integral yields



Σ_{n=0}^{∞} e^2(nT) = (1/2πj) ∮_Γ [A(z) - R(z)F(z)H(z)] [A(z^-1) - R(z^-1)F(z^-1)H(z^-1)] z^-1 dz     (2.22)

where

A(z) = [R(s)I(s)]* .


The sum of error squared may be minimized by applying the calculus of

variations to the integral of Equation (2.22), yielding the result [32],


H_0(z) = (1 / [R(z)R(z^-1)F(z)F(z^-1)]^+) { R(z^-1)F(z^-1)A(z) / [R(z)R(z^-1)F(z)F(z^-1)]^- }_P.R.     (2.23)


where the symbol P.R. refers to the physically realizable portion of

the term within the braces and the + and - subscripts refer to the

conventional spectrum factorization operator denoting extraction of

the multiplicative term containing poles and zeroes either inside (+)

or outside (-) the unit circle.

In the digital simulation of closed-loop systems use of the

techniques discussed earlier requires the introduction of a delay in

the feedback path to conveniently implement the closed-loop approxi-

mation, particularly in the nonlinear case. The need for the delay

can be eliminated if transfer functions are approximated in a manner

such that computation of the output requires knowledge of only previous

values of the output variable. This is achieved in the approach under






discussion by taking the fixed portion of the system as



F(z) = e^-snT = z^-n ,     n = 1, 2, . . .     (2.24)


If F(z) is taken to be z^-1, the pulse transfer function resulting from

this approximation technique is determined to give the least sum of

error squared when the present value of the dependent variable is not

known. Pulse transfer functions with delay are termed closed-loop

realizable and those without delay as open-loop realizable. Simulation

of a single loop requires only one closed-loop realizable pulse transfer

function. Some typical optimum pulse transfer functions are shown in

Table 2. The discrete forms for the first order integrator are

those developed by Burt [32]; the higher order expressions were

derived in the course of this research. It is noted that there is some

correspondence to the classical approximations for integrators in

Table 1. Sage and Burt [10] have presented a study of the discrete

integrator representations.

While system discretization utilizing this approach might be

accomplished through integrator substitution, it seems advantageous

to obtain the optimum approximation for complete linear transfer

functions. Applied to a general system configuration such as illus-

trated in Figure 3, the method would require obtaining the optimum

discrete approximations for the linear transfer functions Gl(s),

G2(s), and H(s). Having determined the discrete model, difference

equations can then be written for the system state variables and a

computational algorithm developed for a digital computer study of

system performance.






Numerical Analysis Methods

The differential equations describing system dynamics may be

solved by any of the standard numerical integration methods; however,

these methods cannot generally satisfy the requirement for real-time

computation and simulation. In the comparison of discrete methods

for approximating continuous system response, such integration schemes

do provide a convenient means of obtaining results for the continuous

system, especially in the nonlinear case. One of the best known

approaches to integrating differential equations is the Runge-Kutta

method, for which there are many modifications [1,2,3]. A commonly

employed formulation is shown here. Given a differential equation


dx/dt = f(t, x) ,


where the sampling interval T is the increment in t, and x is an n-

vector, a set of coefficients a_i is computed, where the a_i are

n-vectors and


a_1 = T f(nT, x(nT)) ,

a_2 = T f(nT + T/2, x(nT) + a_1/2) ,
                                                         (2.25)
a_3 = T f(nT + T/2, x(nT) + a_2/2) ,

a_4 = T f((n+1)T, x(nT) + a_3) .


The new value of x is then computed from




x((n+1)T) = x(nT) + (1/6)(a_1 + 2a_2 + 2a_3 + a_4) .     (2.26)

This formulation is that of a fourth-order Runge-Kutta method, having

a truncation error proportional to T5. Selection of a sufficiently

small increment in the independent variable produces a solution of the

desired accuracy; however, the very small increment size sometimes

required for a suitable result and the calculation of a complete set

of coefficients at every iteration combine to frustrate attempts to

utilize such a technique for real-time simulation for most applications.
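The step of Equations (2.25) and (2.26) can be transcribed directly. The sketch below (function and variable names are illustrative) applies it to the servo of Equation (2.27) rewritten in first-order form with state [x, dx/dt] and a unit step input.

```python
# A direct transcription of the fourth-order Runge-Kutta step.
import numpy as np

def rk4_step(f, t, x, T):
    """One step of Equations (2.25)-(2.26): advance x(nT) to x((n+1)T)."""
    a1 = T * f(t, x)
    a2 = T * f(t + T/2, x + a1/2)
    a3 = T * f(t + T/2, x + a2/2)
    a4 = T * f(t + T, x + a3)
    return x + (a1 + 2*a2 + 2*a3 + a4) / 6

# The servo of Equation (2.27) with a unit step input, in first-order form
def servo(t, x):
    return np.array([x[1], 25.0 - 6.0*x[1] - 25.0*x[0]])

T = 0.001
x = np.array([0.0, 0.0])
for n in range(5000):           # integrate out to t = 5 sec
    x = rk4_step(servo, n*T, x, T)
# x[0] settles at the unit steady-state value
```

This small-interval solution is exactly the reference against which the discrete models are compared below.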



Experimental Study of the Discrete Modelling Methods



As discussed earlier, few comprehensive, comparative studies on

discrete modelling methods or digital simulation techniques have been

reported, and none which include more recently introduced methods.

Detailed results published for the optimum discrete approximation

technique [10,32] consist largely of a study of discrete forms for

integrators. Sage [11] has presented a formulation for the optimum

approximation approach to digital simulation which included some brief

results. For those interested in digital simulation techniques, a

comparison of the newer methods with the older, classical approaches

to digital simulation may offer some significant insight into progress

being made in this area of research. Additional evidence of experience

with the modern techniques alone is also of merit. To obtain data for

a critical comparison of discrete modelling methods, and to better

define the application of the optimum approximation technique, a

program of experiments was developed for the digital computer. The






experiments consist of the digital simulation of two example systems

by each of the several discrete modelling methods discussed earlier,

and of determining the simulation sensitivity to changing sampling

intervals and different inputs. Those methods commonly implemented

by integrator substitution are briefly treated in obtaining the

describing difference equations for the example systems, while the

more involved procedures are more completely developed.

Linear System Approximation

Since all the discrete modelling techniques have been developed

primarily for linear transfer function approximation, a fundamental

basis for comparison of the different methods should be their ability

to model a linear system. The example chosen for this purpose is the

second-order linear system shown in Figure 6. The differential

equation describing the system dynamics is


d^2x/dt^2 + 6 dx/dt + 25 x = 25 r(t)     (2.27)


which describes a position servomechanism with a damping factor of 0.6

and an undamped natural frequency of 5 radians per second. The

difference equations for simulation of the example system on the

digital computer will be developed for each method. After obtaining

the several representations for the approximating model, the results

for the complete group of experiments will be shown.






Figure 6. Linear Example System



The system closed-loop transfer function, G(s), given by

G(s) = 25 / (s^2 + 6s + 25)     (2.28)


is placed in the form


G(s) = 25 s^-2 / (1 + 6 s^-1 + 25 s^-2)     (2.29)


in preparation for obtaining the pulse transfer function by means of

integrator substitution.

The Tustin approximation for the example transfer function is

obtained through integrator substitution employing the trapezoidal

rule integrator form


s^-1 = (T/2) (1 + z^-1)/(1 - z^-1)


in Equation (2.29), yielding the pulse transfer function

G(z) = (β_1 + β_2 z^-1 + β_3 z^-2) / (1 - α_1 z^-1 - α_2 z^-2)     (2.30)


where


α_1 = (8 - 50 T^2) Δ ,     α_2 = (12 T - 25 T^2 - 4) Δ ,

β_1 = 25 T^2 Δ ,     β_2 = 2 β_1 ,     β_3 = β_1 ,

and Δ = (4 + 12 T + 25 T^2)^-1.


The desired difference equation can now be obtained in the form of

Equation (2.15) as


x((n+1)T) = α_1 x(nT) + α_2 x((n-1)T) + β_1 r((n+1)T) + β_2 r(nT)

+ β_3 r((n-1)T)     (2.31)
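As a numerical check, the sketch below steps the Tustin difference equation for a unit step input at T = 0.1 sec; the coefficients follow from substituting the trapezoidal integrator into Equation (2.29). The computed initial value agrees with the Tustin entry of Table 3. Names are illustrative.

```python
# Stepping the Tustin model of the second-order servo for a unit step.
T = 0.1
D  = 1 / (4 + 12*T + 25*T**2)
a1 = (8 - 50*T**2) * D
a2 = (12*T - 25*T**2 - 4) * D
b1 = 25*T**2 * D                # the remaining numerator terms are 2*b1, b1

def step_response(n_steps):
    x = [0.0, 0.0]                           # x(-2T), x(-T)
    r = [0.0, 0.0] + [1.0]*(n_steps + 1)     # unit step applied at t = 0
    for k in range(2, n_steps + 3):
        x.append(a1*x[k-1] + a2*x[k-2] + b1*(r[k] + 2*r[k-1] + r[k-2]))
    return x[2:]                              # x(0), x(T), x(2T), ...

out = step_response(50)
print(round(out[0], 5))    # 0.04587, the Tustin initial value of Table 3
print(round(out[-1], 3))   # approaches the unit steady-state value
```

The nonzero initial value illustrates the lack of delay in the forward path discussed later for the classical methods.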


The Madwed-Truxal form for the pulse transfer function approxi-

mating G(s) of Equation (2.28) is obtained by integrator substitution

in Equation (2.29) making use of the integrator forms


s^-1 = (T/2) (1 + z^-1)/(1 - z^-1)

and

s^-2 = (T^2/6) (1 + 4z^-1 + z^-2)/(1 - z^-1)^2 .


The resulting pulse transfer function is given by



G(z) = (β_1 + β_2 z^-1 + β_3 z^-2) / (1 - α_1 z^-1 - α_2 z^-2)     (2.32)


where


α_1 = (12 - 100 T^2) Δ ,     α_2 = (18 T - 25 T^2 - 6) Δ ,

β_1 = 25 T^2 Δ ,     β_2 = 4 β_1 ,     β_3 = β_1 ,

and Δ = (6 + 18 T + 25 T^2)^-1.


The related difference equation then assumes the form of Equation

(2.31) with the α_i and β_i as defined here.

Integrator substitution utilizing the z-forms of Boxer and Thaler

in Equation (2.29) requires substitution of


s^-1 = (T/2) (1 + z^-1)/(1 - z^-1) ,

s^-2 = (T^2/12) (1 + 10z^-1 + z^-2)/(1 - z^-1)^2 ,


yielding the pulse transfer function



G(z) = (β_1 + β_2 z^-1 + β_3 z^-2) / (1 - α_1 z^-1 - α_2 z^-2)


for which


α_1 = (24 - 250 T^2) Δ ,     α_2 = (36 T - 25 T^2 - 12) Δ ,

β_1 = 25 T^2 Δ ,     β_2 = 10 β_1 ,     β_3 = β_1 ,

and

Δ = (12 + 36 T + 25 T^2)^-1.     (2.33)


The Boxer-Thaler approximation may then be completed by implementing

the difference equation of Equation (2.31), with the α_i and β_i of

Equation (2.33).

Since the procedure for application of the Anderson-Ball-Voss

method is somewhat lengthy, and has been stated in complete form

earlier in this chapter, only the principal steps and results are

offered here. The system input time function, r(t), in Equation

(2.27) is replaced by an approximating polynomial of second-order


r(t) = K_1 + K_2(t - nT) + K_3(t - nT)^2     (2.34)

where K_1 = r(nT) ,     K_2 = [r((n+1)T) - r((n-1)T)] / 2T ,

and K_3 = [r((n+1)T) - 2r(nT) + r((n-1)T)] / 2T^2 .

The solution to Equation (2.27) with r(t) as defined by Equation

(2.34) may now be written in the form of Equation (2.17a), and the
necessary coefficients evaluated by substituting the solution into

Equation (2.27), with the r(t) of Equation (2.34), and then succes-

sively imposing the conditions t = nT, t = (n-1)T on the solution.

The difference equation for the solution to Equation (2.27) is now

obtained in the form of Equation (2.17) by letting t = (n+1)T, and by


employing the coefficients evaluated above, so that


x((n+1)T) = α_1 x(nT) + α_2 x((n-1)T) + β_1 r((n+1)T) + β_2 r(nT)

+ β_3 r((n-1)T)     (2.35)

where α_1 = 2 e^-3T cos 4T ,     α_2 = -e^-6T ,

β_1 = [(11 - 75T)/(625T^2)] (1 - α_1 - α_2) + [(25T - 12)/(50T)] (1 + α_2) + (1/2)(1 - α_2) ,

β_2 = [(625T^2 - 22)/(625T^2)] (1 - α_1 - α_2) + [12/(25T)] (1 + α_2) - (1 - α_2) ,

and

β_3 = [(75T + 11)/(625T^2)] (1 - α_1 - α_2) - [(25T + 12)/(50T)] (1 + α_2) + (1/2)(1 - α_2) .


For a completely linear system, the application of Fowler's

method reduces to the derivation of the z-transform for the linear

system transfer function with some desired input data hold or input

approximation. If the input data hold for the present example is

taken as a zero-order data hold, H_0(s), with an adjusting lead of

one-half sample period to compensate for the lag of the basic hold

operation, the pulse transfer function sought to model G(s) of

Equation (2.28) is



Z[ H_0(s) e^0.5sT G(s) ] .


Carrying out the indicated z-transformation results in the pulse

transfer function




G(z) = (β_1 + β_2 z^-1 + β_3 z^-2) / (1 - α_1 z^-1 - α_2 z^-2)


for which the related difference equation is

x((n+1)T) = α_1 x(nT) + α_2 x((n-1)T) + β_1 r((n+1)T) + β_2 r(nT)

+ β_3 r((n-1)T)     (2.36)


where α_1 = 2 e^-3T cos 4T ,     α_2 = -e^-6T ,

β_1 = 1 - e^-1.5T (cos 2T + .75 sin 2T) ,

β_2 = e^-1.5T (cos 2T + .75 sin 2T) + e^-4.5T (cos 2T - .75 sin 2T) - α_1 ,

and β_3 = -α_2 - e^-4.5T (cos 2T - .75 sin 2T) .


Since the test input function to be employed in the computer

implementation of the equations developed here is a unit step function,

the optimum discrete approximation is derived for this input.

Desiring an open-loop realizable approximation for the closed-loop

transfer function of Equation (2.28), the fixed portion of the

discrete model F(z) of Equation (2.24) is made unity, F(z) = F(z^-1)
= 1, and the term [R(z) R(z^-1) F(z) F(z^-1)] of Equation (2.23) is

[(1 - z^-1)(1 - z)]^-1. Factoring this term as required by the procedure to utilize Equation (2.23) results in




[R(z) R(z^-1) F(z) F(z^-1)]^+ = 1 / (1 - z^-1)

and

[R(z) R(z^-1) F(z) F(z^-1)]^- = 1 / (1 - z) .


The term A(z) of Equation (2.23) is given by


A(z) = Z[ 25 / (s (s^2 + 6s + 25)) ] .


The expression for the optimum pulse transfer function then becomes


H_0(z) = [A(z)]_P.R. / R(z)


and, since A(z) is physically realizable, H_0(z) results immediately

as


H_0(z) = (β_1 z^-1 + β_2 z^-2) / (1 - α_1 z^-1 - α_2 z^-2)

where

α_1 = 2 e^-3T cos 4T ,     α_2 = -e^-6T ,

β_1 = 1 - e^-3T (cos 4T + .75 sin 4T) ,

β_2 = e^-6T - e^-3T (cos 4T - .75 sin 4T) .

The requisite difference equation for the discrete model output is given by


x((n+1)T) = α_1 x(nT) + α_2 x((n-1)T) + β_1 r(nT) + β_2 r((n-1)T) .     (2.37)
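Because H_0(z) here was obtained from the z-transform of the sampled step response, the model of Equation (2.37) should reproduce the continuous step response exactly at the sample instants, which accounts for the very small error the optimum model shows in Figure 7. A numerical check (names are illustrative):

```python
# The optimum-for-step model matches the sampled step response exactly.
import math

T = 0.1
a1 = 2*math.exp(-3*T)*math.cos(4*T)
a2 = -math.exp(-6*T)
b1 = 1 - math.exp(-3*T)*(math.cos(4*T) + 0.75*math.sin(4*T))
b2 = math.exp(-6*T) - math.exp(-3*T)*(math.cos(4*T) - 0.75*math.sin(4*T))

x_prev, x = 0.0, 0.0              # x(-T), x(0): zero initial conditions
for n in range(20):               # unit step: r(nT) = 1 for n >= 0
    r_n  = 1.0
    r_nm = 1.0 if n >= 1 else 0.0
    x_prev, x = x, a1*x + a2*x_prev + b1*r_n + b2*r_nm

# closed-form continuous step response sampled at the same instant
t = 20*T
exact = 1 - math.exp(-3*t)*(math.cos(4*t) + 0.75*math.sin(4*t))
print(abs(x - exact))             # agreement to machine precision
```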

Solution of the difference equations derived above for the several

approximation methods was carried out by programming them for the IBM

1401-709 data processing system. The organization of the programs

developed is shown by the flow charts of Appendix IV. A comparison of

the discrete models developed for the example system was made by

determining the response of the model to a unit step function input at

a number of sampling intervals. The approximate model response was

compared to the solution of the system differential equation obtained

by a fourth-order Runge-Kutta integration method for a small sample

interval, .001 second. The standard of comparison for the different

methods was chosen as a normalized sum of error squared, NSES, between

the approximate model response and that obtained via the Runge-Kutta

integration scheme. The NSES criterion is given by


NSES = (1/N) Σ_{k=0}^{N-1} [x_i(kT) - x_d(kT)]' R(kT) [x_i(kT) - x_d(kT)]     (2.38)


where x_i(kT), an n-vector, is the continuous system state vector at

t = kT, the n-vector x_d(kT) is the discrete model state vector at the

same sample instant, R(kT) is an nxn positive semi-definite weighting

matrix, and N is the number of sample points over the observation

interval. The observation interval for the computer experiments was

taken as five seconds. An analytical determination of the sum of error

squared in Equation (2.38) is possible for linear systems through the






integral relationship of Equation (2.22) for the case where the

weighting matrix, R, is constant and where N → ∞. A bilinear trans-

formation for a mapping from the z plane into a w plane,


z = (1 + w)/(1 - w) ,     z^-1 = (1 - w)/(1 + w) ,


maps the contour of integration Γ, the unit circle, into an inte-

gration about the left half of the w plane. The requisite integration

in the w plane can be accomplished by employing the integral tables

found in Appendix E of Newton, Gould, and Kaiser [33]. The NSES

criterion is stated in general form here for convenient application to

systems other than that of the immediate example. For the present

examples the weighting matrix is the nxn identity matrix.
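With the identity weighting, Equation (2.38) reduces to an average of squared state errors over the sample points. A minimal transcription (the function name and toy histories are illustrative):

```python
# NSES of Equation (2.38) with the identity weighting matrix.
import numpy as np

def nses(x_ref, x_model):
    """Normalized sum of squared error between two (N, n) state histories."""
    err = np.asarray(x_ref) - np.asarray(x_model)
    return np.mean(np.sum(err**2, axis=1))

# e.g. two scalar histories sampled at N = 3 points
val = nses([[0.0], [1.0], [1.0]], [[0.1], [0.9], [1.0]])
```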

Results of the digital computer experiments for sampling intervals

ranging from .01 to 0.3 seconds are shown in Figure 7. The classical

approximate methods and the Fowler method yield quite the same error in

the simulation result for sampling intervals within what would normally

be a practical magnitude. In order to adequately display the system

dynamic behaviour for inputs with high frequency content, the sampling

interval would probably be chosen not greater than 0.2 of the system

rise time, and perhaps as small as 0.1 of the rise time. In this range

of sampling interval size, the majority of the discrete models yield

essentially the same result for the step input. For sampling periods

greater than 0.1 second, the methods become increasingly distinctive in

response, though still not greatly different, with exception of the

Boxer-Thaler approximation, which demonstrates increasing sensitivity

to change in the sampling interval and was found to be unstable for a




[Figure: NSES versus sampling interval T, 0 to 0.2 sec, for the Anderson-Ball-Voss, Boxer-Thaler, Madwed-Truxal, Tustin, Fowler, and Optimum models]

Figure 7. Error Criterion for the Output of the Linear System


sampling period of 0.5 second. Quite distinguishable from the tightly

grouped methods of higher error is the optimum approximation error curve

of Figure 7. It is noted that the vertical scale is broken, for con-

venience in plotting, and that the error for the optimum discrete model

is lower than for the other methods by a factor of 10^6. Also of

importance is the insensitivity of the model to change in sampling

interval.

Figure 8 illustrates the step response of the continuous system

and the discrete approximations. For the sampling interval of 0.1

second employed for the case shown, the response of the models de-

termined by the Tustin, Madwed-Truxal, Boxer-Thaler, Anderson-Ball-Voss,

and Fowler methods are nearly indistinguishable on the drawing scale

except for the initial values. The Tustin model response is chosen as

representative of the group for improved clarity in the presentation.

The initial values for the different methods which are the principal

differences in the responses at the stated sampling interval are shown

in Table 3. The lack of delay in the forward path of the approximate

models for these methods contributes a major portion of the simulation

error which distinguishes these techniques from the optimum method. As

the sampling interval increases, the initial value increases very

rapidly, with an accompanying detrimental effect on the simulation

accuracy.

Further comparison of the linear discrete approximations was made

for selected methods by computing the response of each method to a ramp

function input, the ramp function having a five to one slope. Of the

discrete models developed previously, the experiment was conducted for

the Tustin, Fowler, and optimum representations. The results are





[Figure: x(t) versus time, 0 to 1.0 sec - continuous response with the Tustin and Optimum model outputs, T = 0.1 sec]

Figure 8. Linear System Unit Step Response


Table 3

Computed Initial Conditions for the
Linear System Simulations


Method x(O)


Tustin .04587


Madwed-Truxal .03106


Boxer-Thaler .01577


Anderson-Ball-Voss .02026


Fowler .02820


Optimum .00000


Continuous .00000








displayed in Figure 9 where the data for the response of the Fowler

model are the same as for the Tustin model within the resolution of the

plot. It is apparent that the model which was optimum for a unit step

input is not so for the ramp input. The unit delay in the optimum

discrete model for a step input which permitted a realistic approxi-

mation of physical system delay now appears as a damaging parameter. A

second linear optimum approximation was developed to be optimum for a

ramp input, with the response also shown in Figure 9. This discrete

approximation is developed in Appendix III as a detailed example of the

optimum approximation procedure. For some applications all the models

tested here might be acceptable in performance, but it is clear that

the simulation optimized for the low order input exhibits less accurate

performance with higher order inputs. The values of the NSES criteria

for the discrete approximations employed in the ramp response experiment

are given in Table 4.



Table 4

Normalized Error Square Criterion
for the Ramp Response Experiment


Method NSES


Tustin                  4.2 x 10^-5

Fowler                  8.2 x 10^-6

Optimum for step        2.6 x 10^-2

Optimum for ramp        5.8 x 10^-6




[Figure: ramp response versus time - continuous response with the Fowler, Optimum-for-step, and Optimum-for-ramp model outputs]

Figure 9. Linear System Ramp Response


Nonlinear System Approximation

Digital simulation and discrete modelling of physical systems can

seldom be accomplished for the majority of modern analysis work through

purely linear approximations. While some segments of complex systems

may certainly be adequately represented by means of a linear model, it

is unlikely that a complete analysis could in general be satisfactorily

completed on this basis. Digital simulation of aircraft dynamics has

been an active area of inquiry and has stimulated the efforts for more

accurate discrete modelling of nonlinear systems [34,35,36]. The air-

craft systems contain nonlinear elements in such forms as amplifier

response and actuator characteristics which must be included in a model

intended for adequate system representation. Consideration is here

given to the discrete modelling of a nonlinear system in order that

additional insight may be gained into the usefulness of methods for

discrete approximation of physical systems. The example system is

shown in Figure 10.


Figure 10. System for Nonlinear Example






The state equations for the example system may be given utilizing

the state variable identification of Figure 10 as follows


dx_1/dt = (25/6) [x_2 + .01 x_2^3]     (2.39)

and

dx_2/dt = 6 [r - (x_1 + x_2)] ,


thus permitting immediate application of the Runge-Kutta integration

method for first-order differential equations.

Discretization of the system is best effected by first replacing

the minor loop by its closed-loop form. The integrator substitution

methods are then applied to the transfer functions


G_1(s) = 6/(s + 6)     and     G_2(s) = (25/6)/s .     (2.40)


Since the integrator substitution here requires only the first-order

integrator form, the Tustin, Madwed-Truxal, and Boxer-Thaler techniques

yield the same discrete model. This common formulation of the discrete

representation will be termed the Tustin approximation. Substitution

of the trapezoidal rule integration operator into Gl(s) and G2(s) of

Equations (2.40) yields pulse transfer functions


G_1(z) = 3T(1 + z^-1) / [(1 + 3T) - (1 - 3T) z^-1]

and

G_2(z) = (25/12) T (1 + z^-1)/(1 - z^-1) .     (2.41)






The discrete model for the example system then appears as shown in

Figure 11.




[Figure: block diagram - sampled input r(t) driving G_1(z), the nonlinearity N.L., and G_2(z) in cascade, with the output fed back]










Figure 11. Tustin Model for the Nonlinear System



The difference equations for the state variables of the system may now

be written as follows:


x_1((n+1)T) = x_1(nT) + (25/12) T [f((n+1)T) + f(nT)] ,

x_2((n+1)T) = [(1 - 3T)/(1 + 3T)] x_2(nT) + [3T/(1 + 3T)] [e((n+1)T) + e(nT)] ,

f(nT) = x_2(nT) + .01 x_2^3(nT) ,

e(nT) = r(nT) - x_1((n-1)T) .     (2.42)
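The update order implied by Equations (2.42) - error, then x_2, then f, then x_1 - can be sketched as follows; names and the 10 unit step input are illustrative.

```python
# Iterating the Tustin model of the nonlinear example system.
def simulate_tustin(T, n_steps, r=10.0):
    """Step Equations (2.42) for a constant input r applied at t = 0."""
    c1 = (1 - 3*T) / (1 + 3*T)
    c2 = 3*T / (1 + 3*T)
    x1 = x2 = f = 0.0
    e = r                                   # e(0): step applied, zero state
    out = [x1]
    for n in range(n_steps):
        e_new  = r - x1                     # feedback delayed one sample
        x2_new = c1*x2 + c2*(e_new + e)
        f_new  = x2_new + 0.01*x2_new**3    # the separable nonlinearity
        x1     = x1 + (25/12)*T*(f_new + f)
        x2, e, f = x2_new, e_new, f_new
        out.append(x1)
    return out

out = simulate_tustin(0.01, 500)   # t = 5 sec at T = 0.01
print(out[-1])                     # settles at the 10-unit steady state
```

At larger sampling intervals the same loop exhibits the approaching instability noted later in the experiments.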


Development of the difference equations for the Anderson-Ball-Voss

approximation for the nonlinear system requires obtaining of the

discrete forms of the solutions to Equations (2.39). This is ac-

complished by carrying out the requisite procedure for each differential






equation as was done for the linear system previously treated in this

chapter. The input function approximation employed for the procedure

is the linear time function


r(t) = r(nT) + {[r((n+1)T) - r((n-1)T)] / 2T} (t - nT).     (2.43)


The difference equations obtained from this procedure describe a

discrete model of the form of Figure 11, and may be stated as


x_1((n+1)T) = x_1(nT) + (25/24) T [f((n+1)T) + 4 f(nT) - f((n-1)T)] ,

x_2((n+1)T) = a x_2(nT) + β_1 e((n+1)T) + β_2 e(nT) + β_3 e((n-1)T) ,

where f(nT) and e(nT) are defined as in Equation (2.42) above, and

a = e^-6T ,     β_1 = 1/2 - (1/12T)(1 - a) ,

β_2 = 1 - a ,     β_3 = -β_1 .     (2.44)


The procedure for the Fowler method is shown more explicitly by

the development of the discrete model for the nonlinear system than by

the previous linear system approximation with the method. The z-

transforms for the transfer functions of Equations (2.40) are obtained




G_1(z) = 6 / (1 - e^-6T z^-1)     and     G_2(z) = (25/6) T / (1 - z^-1) .


Multiplication of G2(z) by the sampling interval T is effectively

replacing the integrator of the continuous system by a backward differ-

ence rectangular integration rule. The numerator of Gl(z) is replaced

by a parameter K which is to be adjusted so that the eigenvalues of the

closed-loop discrete system correspond to the eigenvalues of the closed-

loop linearized continuous system. With these modifications Gl(z) and

G2(z) become the Fl(z) and F2(z) of Figure 4 and appear as

F_1(z) = K / (1 - e^-6T z^-1) ,     and     F_2(z) = (25T/6) / (1 - z^-1) .     (2.45)


Letting the nonlinearity of the system be replaced by a simple unity

gain, the z-transform for the resulting closed-loop linear system is


G(z) = z^-1 (25/4) e^-3T sin 4T / (1 - z^-1 2e^-3T cos 4T + z^-2 e^-6T) .     (2.46)


Similarly linearizing the discrete system, the closed-loop pulse transfer

function is

F(z) = (25/6) K T / [1 - z^-1 (1 + e^-6T - (25/6) K T) + z^-2 e^-6T]     (2.47)
6


where H(z) in Figure 4 is z-1. The requirement that the closed-loop

eigenvalues of the two discrete systems be equal is met if the denomi-

nators of Equations (2.46) and (2.47) are the same. This condition is




realized if

(25/6) K T = 1 + e^-6T - 2 e^-3T cos 4T ,

which yields

K = (6/25T)(1 + e^-6T - 2 e^-3T cos 4T) .     (2.48)


The final determination for the Fowler model is the input transfer function F_i(z) of Figure 4. This element is sought so that when multiplied times the F(z) of Equation (2.47) the result will yield

G(z) = Z[ H_0(s) e^0.5sT G(s) ]     (2.49)

where H_0(s) is a zero-order data hold transfer function. Development of the z-transform indicated by Equation (2.49) gives

G(z) = (β_1 + z^-1 β_2 + z^-2 β_3) / (1 - z^-1 α_1 - z^-2 α_2)     (2.50)

where

α_1 = 2 e^-3T cos 4T ,     α_2 = -e^-6T ,

β_1 = 1 - e^-1.5T A_1 ,

β_2 = e^-1.5T A_1 + e^-4.5T A_2 - α_1 ,

β_3 = -α_2 - e^-4.5T A_2 ,

A_1 = cos 2T + .75 sin 2T ,     A_2 = cos 2T - .75 sin 2T .


The input transfer function requirement is met if

(25/6) K T F_i(z) = β_1 + z^-1 β_2 + z^-2 β_3     (2.51)

where the β_i are defined in Equations (2.50). Incorporating the evaluation of K from Equation (2.48) gives

F_i(z) = ρ_1 + z^-1 ρ_2 + z^-2 ρ_3

where

ρ_1 = β_1/D ,     ρ_2 = β_2/D ,     ρ_3 = β_3/D ,

and

D = 1 - α_1 - α_2 .     (2.52)


Information is now complete to permit expression of the difference

equations for the Fowler approximation to be written, so that

x_1((n+1)T) = x_1(nT) + (25/6) T [x_2((n+1)T) + .01 x_2^3((n+1)T)] ,

x_2((n+1)T) = e^-6T x_2(nT) + K e((n+1)T) ,

and

e((n+1)T) = ρ_1 r((n+1)T) + ρ_2 r(nT) + ρ_3 r((n-1)T) - x_1(nT)     (2.53)

with the ρ_i and K as defined above.

The difference equations for the optimum discrete approximation

to the nonlinear system are readily developed through utilization of

the discrete forms of Table 2. The integrator of the continuous system






is modelled by the closed-loop realizable integration rule developed

for a ramp input, i.e.

G_2(z) = (25/12) T z^-1 (4 - 3z^-1 + z^-2) / (1 - z^-1) .     (2.54)


The minor loop is approximated by the open-loop realizable form derived

for a ramp input which appears as


G_1(z) = [(6T - 1 + e^-6T) + z^-1 (1 - e^-6T - 6T e^-6T)] / [6T (1 - e^-6T z^-1)] .     (2.55)



The requisite difference equations may be written immediately as

x_1((n+1)T) = x_1(nT) + (25/12) T [4 f(nT) - 3 f((n-1)T) + f((n-2)T)] ,

x_2((n+1)T) = a x_2(nT) + β_1 e((n+1)T) + β_2 e(nT) ,

where

e(nT) = r(nT) - x_1(nT) ,

f(nT) = x_2(nT) + .01 x_2^3(nT) ,

a = e^-6T ,     β_1 = (6T - 1 + a) / 6T ,

and β_2 = (1 - a - 6Ta) / 6T .     (2.56)
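The closed-loop realizable structure of Equations (2.54)-(2.56) can be stepped in the same way; note that x_1((n+1)T) uses only past values of f, so no artificial feedback delay is needed. The sketch below (illustrative names) drives the model with a 10 unit step.

```python
# Iterating the optimum discrete model of the nonlinear example system.
import math

def simulate_optimum(T, n_steps, r=10.0):
    """Step Equations (2.56) for a constant input r applied at t = 0."""
    a  = math.exp(-6*T)
    b1 = (6*T - 1 + a) / (6*T)
    b2 = (1 - a - 6*T*a) / (6*T)
    x1 = x2 = 0.0
    e = r - x1                      # e(0): step applied with zero state
    f0 = f1 = f2 = 0.0              # f(nT), f((n-1)T), f((n-2)T)
    out = [x1]
    for n in range(n_steps):
        x1 = x1 + (25/12)*T*(4*f0 - 3*f1 + f2)   # needs only past f values
        e_new = r - x1                           # current value: no delay
        x2 = a*x2 + b1*e_new + b2*e
        e = e_new
        f0, f1, f2 = x2 + 0.01*x2**3, f0, f1
        out.append(x1)
    return out

out = simulate_optimum(0.05, 100)   # t = 5 sec at T = 0.05
print(out[-1])                      # settles at the 10-unit steady state
```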






Having developed the difference equations needed to implement the

discrete models as digital computer programs, a series of computer

experiments was conducted similar to those undertaken for the linear

example. The sampling interval sensitivity of the different discrete

models was determined by computing the step response for each model at

a number of different sampling intervals. The results of this sequence

of experiments are shown in Figures 12 and 13, where the NSES criterion

defined in Equation (2.38) has been computed for sampling intervals in

the range 0.01 second to 0.3 seconds for each state variable. It is

clear that the performance of the classical methods has deteriorated

from that achieved for the linear system approximation. With the test

input for this experiment being a 10 unit step function, the system is

driven so that the effect of the nonlinearity is evident although not

dominating. The Anderson-Ball-Voss approximation becomes unstable

very rapidly as the sampling interval increases, being unstable for

a sampling interval of 0.09 second. This is attributable in part to

the low order input approximation employed in deriving the discrete

model for this method. The Tustin approximation becomes unstable at a

sampling interval of 0.225 second. Response of the different simulations

to a 10 unit step function input is shown in Figures 14 and 15 for a

sampling interval of 0.1 second, which seems a maximum reasonable value

in relation to the example system dynamics. The approaching insta-

bility of the Tustin approximation is evident from Figure 14. Results

from these experiments also indicate that the improved approximation

of the output state by the optimum discrete model is due largely to the

more accurate integration operator employed in that model.
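The NSES figure of merit used in these comparisons can be sketched as follows. Equation (2.38) itself is not reproduced in this section, so normalization of the sum of error squared by the number of sample intervals, as indicated in the later discussion of the sine wave results, is an assumption here.

```python
def nses(model_states, reference_states):
    """Normalized sum of error squared between a discrete-model state
    trajectory and the continuous reference sampled at the same
    instants.  Division by the number of samples is the assumed
    normalization (Eq. (2.38) itself is not shown in this section)."""
    if len(model_states) != len(reference_states):
        raise ValueError("trajectories must share the same sample instants")
    n = len(model_states)
    return sum((m - c) ** 2 for m, c in zip(model_states, reference_states)) / n
```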




[Figure: log-scale NSES criterion vs. sampling interval T (sec) for the Anderson-Ball-Voss, Tustin, Fowler, and Optimum models]

Figure 12. Error Criterion for x1 with 10 Unit Step Input




[Figure: log-scale NSES criterion vs. sampling interval T (sec) for the Anderson-Ball-Voss, Tustin, Fowler, and Optimum models]

Figure 13. Error Criterion for x2 with 10 Unit Step Input




[Figure: x1(t) step response vs. time (sec) for the Continuous, Tustin, Optimum, Fowler, and modified Fowler models, T = 0.1 sec]

Figure 14. Nonlinear System Response to 10 Unit Step Input





[Figure: x2(t) step response vs. time (sec) for the Continuous, Optimum, Fowler, Tustin, and modified Fowler models, T = 0.1 sec]

Figure 15. Nonlinear System x2(t) Response for 10 Unit Step Input








Perhaps the most significant result of the initial experiment is

the evidence in Figure 12 of the lower sensitivity of the Fowler

approximation to changes in sampling interval. Although the optimum

discrete approximation is indeed optimum for an adequate range of

sample interval size, the sensitivity of this model to change in

sampling interval is greater than for the Fowler model and is a

damaging characteristic. A principal distinguishing feature of the

Fowler model is the input transfer function. Since input data holds

were not treated as part of the other methods considered here,

but only in the Fowler method, whose procedure explicitly in-

cludes this feature, it is of interest to observe the performance of a

modified Fowler model in which the input transfer function is not

present. Such a model is readily derived from the original Fowler

approximation, Equations (2.53), by replacing the error term in the

equations by


e(n+1 T) = r(n+1 T) - x1(nT)          (2.57)


This modified Fowler approximation was programmed for the series of

computer experiments conducted previously. Figures 16 and 17 display

the values of the sum of error square for the response of this model

to the 10 unit step function input for a study of sampling period

sensitivity. The results shown emphasize the significance of a proper

input approximation in any discrete model, a fact long recognized.

These results do, however, weaken the implication above that the input

transfer function might be the determining factor in the sampling

interval sensitivity of the model, for while the overall accuracy of

the simulation is decreased by deletion of this part of the model, the





[Figure: NSES criterion vs. T (sec)]

Figure 16. Error Criterion for x1 of Nonlinear System for
Step Input with Modified Fowler Result




[Figure: log-scale NSES criterion vs. T (sec)]

Figure 17. Error Criterion for x2 of Nonlinear System for
Step Input with Modified Fowler Result







sensitivity to sampling interval change is approximately the same, as

evidenced in Figure 16. The assertion by Fowler of the importance of

matching the discrete model eigenvalues to those of the continuous

system appears to be the critical factor in the modelling procedure

for this characteristic. The computational algorithm written for the

complete Fowler model is made to achieve the matching of the eigen-

values at each new sampling interval for constant improvement of the

model dynamics. This feature of the Fowler method suggests that

similar adjustment of parameters might be of value in the other methods.

Such a scheme for refinement of the optimum discrete approximation will

be considered in the next chapter.

Discrete model response to sine wave inputs is a common test of

simulation performance and is a characteristic of importance in

practical application of discrete systems. Response of the Fowler

simulation, complete and modified, and response of the optimum simu-

lation to such an input have been determined in a computer experiment.

A typical sine wave response of these models shown in Figure 18 reveals

the excellent ability of the complete Fowler model and optimum model to

simulate the system. The effect of removing the input transfer function

from the Fowler model appears principally as a time lead in the dis-

crete representation, a characteristic observable also in Figure 14 for

the step response. That this time lead is an undesirable character-

istic is also evident from the plots of the NSES criterion in Figures

19 and 20. Sensitivity of the different simulations revealed by the

NSES data is not so disparate as for the step input response. The

apparent reduction in the NSES criterion of the complete Fowler simu-

lation for an increase in sample interval at small interval size is a






result of the definition of the criterion. The total simulation error

in this region actually doubles, but the normalizing factor, the number

of sample intervals, is now 0.2 times its original value for a sampling

interval of 0.05 second; consequently the normalized sum of error

squared appears to be reduced. Although the NSES criterion for x1

reflects badly upon the optimum discrete approximation for all values

of sample interval with the sine wave input, the actual simulation

result is not at all objectionable. The optimum discrete model

response shown in Figure 18 for a sample interval of 0.2 second is

quite acceptable for many applications, and for smaller sampling

intervals such as would normally be used in simulating this system, the

optimum discrete simulation output response cannot be distinguished

from the Fowler model response on a scale of resolution such as that of

Figure 18. The results for the state variable x2 in Figures 20 and 21

give additional support to the capability of the optimum approximation.

For a ramp function input to the example system, the discrete

simulations by the Fowler and optimum approximations yield excellent

results as shown in Figure 22. The modified Fowler simulation

possesses the characteristic lead in the response and accompanying high

simulation error. Evaluation of the NSES criterion for the optimum

simulation state vector yields 7.2 x 10^-5, for the Fowler simulation

1.5 x 10^-3, and for the modified Fowler simulation .22.

Discrete modelling via direct z-transform replacement of linear

system transfer functions has not been experimentally studied. Every

discrete model represents in effect a z-transform approximation, and

the Fowler method is considered to be z-transform modelling at its

best. The modified Fowler model studied does in fact yield results





















[Figure: x1(t) sine wave response, roughly -10 to +10 units vs. time (sec), for the Continuous, Optimum, Fowler, and modified Fowler models]

Figure 18. Sine Wave Response of Nonlinear System for T = 0.2 Sec.




[Figure: log-scale NSES criterion vs. T (sec) for the Optimum, Fowler, and modified Fowler models]

Figure 19. Error Criterion for x1 of Nonlinear System
with 10sin2t Input




































[Figure: NSES criterion vs. T (sec)]

Figure 20. Error Criterion for x2 of Nonlinear System
with 10sin2t Input




[Figure: x2(t) sine response vs. time (sec) for the Continuous, Optimum, Fowler, and modified Fowler models, T = 0.2 sec]

Figure 21. Nonlinear System x2(t) Response for 10sin2t Input








[Figure: x1(t) ramp response vs. time (sec) for the Continuous, Optimum, Fowler, and modified Fowler models, T = 0.1 sec]

Figure 22. Response of the Nonlinear System to a Ramp Input




for what may be equally well termed a modified z-transform model.

Because of the close association of the Fowler method and z-transform

concepts, concentration on the former method was felt to be most

rewarding.

Uncertainty in the time of occurrence of input signals is a source

of error in any discrete system or discrete simulation. Consideration of

this problem places additional limits on the permissible magnitude of

sampling interval. By making some assumptions regarding the statisti-

cal properties of the time of arrival of signals, it is possible to

determine bounds for the error associated with given types of input

signals and sampling interval size. Results regarding research on this

aspect of system discretization have been discussed by Sage and Melsa

[37].

Summary

A comprehensive study of significant classical and modern tech-

niques for discrete modelling of differential systems has been pre-

sented. Discussion of the basic application procedures for each method

has been supported by extensive digital computer experimentation,

providing a basis for comparison of the different techniques. It has

been demonstrated that recently introduced methods for discretization

of differential systems permit accurate digital simulation of such

systems and yield performance at least equal to that of the classical

methods for linear systems simulations. For discretization of non-

linear systems, the Fowler method and optimum discrete approximation

method have been shown to lead to superior discrete models. The

specific nature of the examples investigated is recognized, as well as

the corresponding limitations impressed on possible conclusions re-

garding the experimental results. Such limitations are inherent in any






study of nonlinear and discrete systems and are accepted as normal

constraints on the discussion of results.

Due to the limitations of the timing facilities in the present

IBM 1401-709 data processing system, and the necessity for programming

economy, it has not been possible to establish absolute evidence of

real-time digital simulation. The coarse time increment available on

the digital machine only permits bounds to be set on the simulation

running time. Indications obtained from timing program segments for

simulations of different sampling interval size are that real-time

simulation is achieved by the optimum discrete model and, by impli-

cation, other techniques for the same order difference equation. For

the example considered here, the optimum discrete model permits imple-

mentation of a more rapid simulation since a larger sample interval

size may in some cases be employed for a given simulation error.

The experimental data presented for the discrete approximation

techniques considered give evidence of decided advantages for the

optimum discretization approach. The model derived via this method has

been shown to be of adequate and frequently superior simulation capa-

bility. In particular, the optimum discrete representation compares

favorably with the Fowler approximation. In addition to desirable

performance characteristics, the optimum discrete model is readily

developed for most systems through utilization of discrete forms availa-

ble in Table 2 and others which may be derived in the course of work

with the method and retained. For complex systems, the discretization

may be accomplished on a segmental basis, not requiring extensive

digital computer analysis as has been suggested for the Fowler method

formulation. For these reasons the optimum discrete approximation







method appears to be particularly promising for many applications.

Experimental results obtained in the course of this study indicate

that further improvement is possible in the discrete model characteri-

zations. The reduced sensitivity to sampling interval change of the

Fowler model appears related to the parameter adjustment carried out in

developing the discrete system. Similar adjustment of parameters in the

discrete representations derived by other methods seems a promising

avenue for improved modelling.















CHAPTER 3

IDENTIFICATION



The experimental study of discrete approximation techniques

discussed in the last chapter revealed a possible avenue of approach

to the improvement of discrete models. Adjustment of gain parameters

in a model for improvement of the approximation is not a new concept,

but the procedure normally used has been based on the intuition and

experience of the experimenter. The Fowler method of discretization

presents a first approach to an orderly attack on the problem.

Efforts made in the areas of system identification and control have

employed parameter identification techniques [38,39,40,41] which

may be adapted for a new approach to the parameter adjustment in

discrete models. In recent research on this aspect of discrete

modelling, Sage [10,11] has introduced a formulation for the

problem of parameter identification employing the methods of quasi-

linearization and differential approximation. Since these techniques

have not been previously applied in this manner for the study of

discrete systems, an intensive effort has been made to examine the

problem formulation and experimentally study the properties of the

identification procedures.

Quasilinearization

The method of quasilinearization for the resolution of boundary-




value problems arising in the solution of nonlinear differential

equations has been widely applied by Bellman in earlier referenced

works. A formulation of the method for two-point boundary-value

problems has been discussed by McGill and Kenneth [42], who refer to the

method, perhaps more properly, as the generalized Newton-Raphson tech-

nique. Application of this technique to difference equations appears

to have been lightly treated. Henrici [43] discusses this approach in

relation to a finite difference scheme for solution of a class of non-

linear boundary-value problems of second-order, and offers a proof for

convergence of the proposed scheme. A similar approach to solution

of two-point boundary-value problems via finite difference techniques

and use of quasilinearization has been presented by Sylvester and

Meyer [44].

Consider a system of nonlinear difference equations


x(k+1 T) = f(x(kT), k),   k ∈ [L,K]          (3.1)


with boundary conditions


&lt;ci(jT), x(jT)&gt; = di(jT),   j = L,K
                            i = 1,2, . . ., m/2 ,          (3.2)


where x and c are m-vectors, &lt; , &gt; denotes the inner product, and

the period of observation of the solution vector is t = LT to t = KT.

Utilizing the method of quasilinearization, an iterative procedure is

established for successively approximating the solution to Equation






(3.1) by solutions of the system of linear equations


x^(q+1)(n+1 T) = f(x^q(nT), n) + J(x^q(nT), n) [x^(q+1)(nT) - x^q(nT)] ,          (3.3)

where J is the Jacobian matrix of partial derivatives having as its
ijth element the partial derivative ∂fi/∂xj, and x^q indicates the

solution at the qth iteration. Equation (3.3) may be stated as

x^(q+1)(n+1 T) = A(nT) x^(q+1)(nT) + B(nT) ,          (3.4)

where

A(nT) = J(x^q(nT), n) ,

B(nT) = f(x^q(nT), n) - J(x^q(nT), n) x^q(nT) .


Since Equation (3.4) is linear in the (q + 1)st approximation, a

solution may be determined by generating the homogeneous and particular

solutions and imposing the boundary conditions of Equation (3.2).
Let Φ^(q+1)(nT) be the fundamental matrix of

Φ^(q+1)(n+1 T) = A(nT) Φ^(q+1)(nT) ,          (3.5)

with

Φ^(q+1)(LT) = I, the m x m identity matrix.

The particular solution p^(q+1)(nT) is generated by the equation

p^(q+1)(n+1 T) = A(nT) p^(q+1)(nT) + B(nT) ,          (3.6)




- 70 -


with

p^(q+1)(LT) = 0 .

The solution for Equation (3.4) is now given by

x^(q+1)(nT) = Φ^(q+1)(nT) v^(q+1) + p^(q+1)(nT) ,          (3.7)

with the constant vector v^(q+1) obtained from the boundary conditions

by solving the equations



&lt;ci(jT), Φ^(q+1)(jT) v^(q+1) + p^(q+1)(jT)&gt; = di(jT),   j = L,K          (3.8)
                                                        i = 1,2, . . ., m/2 .

An initial or zeroth trajectory x^0(nT) may be generated by selecting

initial conditions for the unknown elements of x(LT) and solving

Equation (3.1) forward in time. In practice, Equation (3.7) is

seldom used to obtain the (q + 1)st trajectory. Such an approach

requires retention of the complete homogeneous and particular solutions

trajectories with a corresponding requirement for computer memory

storage. An approach having reduced memory storage requirements

consists of retaining only pq+1(LT), p+l(LT), (q+l(KT),

and pq+l(KT) in memory until evaluation of vq+l by Equations (3.8).

The (q + 1)st trajectory q+1(nT) is then generated from Equation (3.4)

with requisite initial condition vector elements obtained from yq+.

This trajectory is stored in computer memory as the final solution






or for evaluation of A(nT) and B(nT) for the next iteration.
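The procedure of Equations (3.3) through (3.8) can be illustrated for a scalar difference equation with a single terminal boundary condition. The map f, its derivative df, and the numerical values in the usage below are illustrative choices, not drawn from the text.

```python
def quasilinearize(f, df, K, target, x0_guess, iters=20):
    """Generalized Newton-Raphson (quasilinearization) iteration for a
    scalar difference equation x(k+1) = f(x(k)), k = 0, ..., K-1, with
    the terminal boundary condition x(K) = target.  df plays the role
    of the Jacobian J in Eqs. (3.3)-(3.8); the unknown is x(0)."""
    # zeroth trajectory generated from a guessed initial condition
    x = [x0_guess]
    for _ in range(K):
        x.append(f(x[-1]))
    for _ in range(iters):
        A = [df(xk) for xk in x[:-1]]                  # A(kT) = J(x^q(kT))
        B = [f(xk) - a * xk for xk, a in zip(x[:-1], A)]
        phi, p = 1.0, 0.0                              # phi(0) = 1, p(0) = 0
        for a, b in zip(A, B):                         # final values suffice
            phi, p = a * phi, a * p + b
        v = (target - p) / phi                         # impose x(K) = target
        x = [v]                                        # regenerate via Eq. (3.4)
        for a, b in zip(A, B):
            x.append(a * x[-1] + b)
    return x
```

With the illustrative map f(x) = x - 0.1 x^2 and a terminal value generated from the initial condition 1.0, the iteration recovers that initial condition from a poor starting guess, mirroring the retained-final-values scheme described above.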

Differential Approximation

The method of differential approximation introduced by Bellman

[12,15,16,38] offers a direct approach to the problem of parameter

estimation [45]. Because of the straightforward manner in which differ-

ential approximation is applied, it may serve as a convenient means for

obtaining parameter estimates to initialize the quasilinearization

procedure. This adds particular value to this procedure since the

parameter values obtained from differential approximation are

frequently quite poor and in need of refinement.

Consider an equation


y(n+1 T) = f(y(nT), b),   n ∈ [L,K] ,

where


y(nT) is an m-vector, and b is a parameter vector, an r-vector, to

be determined so that y(nT) closely approximates some vector z(nT).

If some suitable parameter vector, bo, can be found so that


z(n+1 T) = f(z(nT), b0) ,


this set of parameters with the initial conditions z(LT) will make

y(nT) identical with z(nT). Such a set of parameters may not exist;

however, b may be determined to make


z(n+1 T) - f(z(nT), b)


as near zero as possible. The set of parameters may then be chosen

so that




- 72 -


Σ_{n=L}^{K} || z(n+1 T) - f(z(nT), b) ||²_R(nT)          (3.9)


is minimized with respect to b, where R(nT) is an m x m positive

semi-definite weighting matrix. The minimization may be accomplished

by equating to zero the partial derivatives of Equation (3.9) with

respect to the components of b, yielding equations


Σ_{n=L}^{K} [∇b f'(z(nT), b)] [z(n+1 T) - f(z(nT), b)] = 0 ,          (3.10)

where ∇b f' = [∂fj/∂bi], R = I, and 0 is the r-dimensional

null vector. Solution of these r equations in the components of b

produces the parameter estimates.
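When the model right-hand side is linear in a single parameter, these r equations collapse to ordinary least squares. A minimal sketch with R = I, using the illustrative scalar model z(n+1 T) = b z(nT), which is not drawn from the text:

```python
def differential_approximation(z):
    """Estimate the single parameter b of the illustrative model
    z(n+1 T) = b * z(nT) by setting the derivative of the summed
    squared residual to zero, i.e. Eq. (3.10) with R = I."""
    num = sum(z[n + 1] * z[n] for n in range(len(z) - 1))
    den = sum(z[n] ** 2 for n in range(len(z) - 1))
    return num / den
```

Noise-free data generated with b = 0.8 is recovered essentially exactly; with measured data the estimate is only a starting point, in keeping with the remark above that such estimates frequently require refinement.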

Formulation of an Approach to Discrete System Parameter Identification

It has been noted that the dependence of discrete approximation

error on sampling interval size may be reduced by modification of the

approximation in relation to changes in the sampling interval. A

similar dependence of approximation error on input signals occurs

in the simulation of nonlinear systems, and it appears that improve-

ment in the approximation error may also be achieved for this case by

suitable model adjustment. Having a discrete model, represented by

state vector yd(nT), for a continuous system with state vector ya(t),

it is desired to determine a parameter vector, b, for the discrete






system so that yd(nT) ≈ ya(nT) for a given input signal and sampling
interval T. Knowing the continuous system response for a given input,

the parameters are to be adjusted to minimize the sum of error squared

between the continuous system state and the discrete system state,

J = (1/2) Σ_{k=0}^{N-1} || ya(kT) - yd(kT) ||²_R(kT)          (3.11)


subject to the constraints


yd(n+1 T) = f(yd(nT), b),   n ∈ [0,N]          (3.12)

b(n+l T) = b(nT), (3.13)

and


yd(0) = ya(0) ,          (3.14)

where yd(nT) and ya(nT) are m-vectors, b(nT) is an r-vector, and R

is an m x m positive semi-definite weighting matrix.

The minimization of J is accomplished via variational calculus

procedures, utilizing a Lagrange multiplier formulation for discrete

systems [46,47]. Adjoining the system constraints to J yields an

augmented criterion


J* = Σ_{k=0}^{N-1} { (1/2) || ya(kT) - yd(kT) ||²_R

     + p'(k+1 T) [yd(k+1 T) - f(yd(kT), b)]

     + β'(k+1 T) [b(k+1 T) - b(kT)] } ,          (3.15)


where the argument of R(kT) has been omitted for convenience in

writing and where p' denotes the transpose of the vector p. The

resulting equations for the adjoint variables become

p(nT) = [∇y f'(yd(nT), b)] p(n+1 T) + R(nT) [ya(nT) - yd(nT)] ,          (3.16)


and


β(nT) = β(n+1 T) + [∇b f'(yd(nT), b)] p(n+1 T) ,          (3.17)

with boundary conditions


p(N-1 T) = β(N-1 T) = β(0) = 0 .          (3.18)

Equations (3.12), (3.13), (3.16), and (3.17) together with the

boundary conditions pose a two-point boundary-value problem which may

be resolved by the quasilinearization approach discussed above. The

difference equations are transformed into a new family of variables

by defining a 2(m + r) dimensional vector

x'(nT) = [yd'(nT), b', p'(nT), β'(nT)] .          (3.19)

The combined Equations (3.12), (3.13), (3.16), and (3.17) may

now be written as






x(n+1 T) = g(x(nT)) (3.20)

with boundary conditions from Equations (3.14) and (3.18)

&lt;ci(jT), x(jT)&gt; = di(jT),   j = 0, N-1
                            i = 1,2, . . ., (m + r) .          (3.21)


An initial estimate of the parameter vector b permits Equation (3.20)

to be solved, yielding an initial trajectory x^0(nT). The (q + 1)st

approximation to the solution is given by Equation (3.3), which here
becomes


x^(q+1)(n+1 T) = [∇x g'(x^q(nT))]' [x^(q+1)(nT) - x^q(nT)] + g(x^q(nT)) ,          (3.22)


where [∇x g']' is the Jacobian matrix earlier defined. The
successive approximations to x(nT) and estimates for the parameter

vector b are obtained by the procedure of Equations (3.5) through (3.8).

The computational load required by the implied matrix inversion

of Equation (3.8) may be alleviated by reducing the order of the
coefficient matrix from the a priori knowledge of the boundary
conditions

xi(0) = yai(0),   i = 1,2, . . ., m          (3.23a)

xi(0) = 0,   i = 2m + r + 1, . . ., 2(m + r)          (3.23b)






xi(N-1 T) = 0,   i = m + r + 1, m + r + 2, . . ., 2(m + r) .          (3.23c)


The necessary values for the b and p(0) may be obtained by

inverting only an (m + r)-square matrix consisting of the ijth terms of

Φ(N-1 T), where i ranges from m + r + 1 to 2(m + r) and j ranges

from m + 1 to 2m + r. The corresponding constant vector consists

of the final values of the particular solution vector, pi(N-l T),

for i = m + r + 1 to i = 2(m + r). This modification of the procedure

requires that only the final values of the homogeneous and particular

solutions be retained in the computer memory. Termination of the

iteration process is caused by satisfaction of some desired criterion

such as a specified rate of change of parameter values.

Differential approximation may be readily applied to this problem

for parameter identification or initialization of the quasilineariza-

tion procedure. This is achieved by applying the conditions of

Equation (3.10) to Equation (3.12) with the result


Σ_{n=0}^{N-1} [∇b f'(ya(nT), b)] [ya(n+1 T) - f(ya(nT), b)] = 0 .          (3.24)


Experimental Study of Parameter Identification

Digital computer experiments designed to test the effectiveness

of the identification procedures formulated above have been organized

in relation to the work in the last chapter. Since studies were made






there of the dependence of discrete approximation error on sampling

interval size, the data from that study supplies a convenient basis

for comparison in an attempt to improve discrete model characteris-

tics through parameter adjustment. The first experiment undertaken

is an application of the quasilinearization procedure to the nonlinear

system example of Chapter 2. The system configuration is repeated in

Figure 23 for convenient reference, where the system state variables

are now termed ya1(t) and ya2(t). The parameter identification

procedure is implemented for the optimum discrete approximation to

the continuous system. The difference equations for this approximation



[Figure: block diagram of the example system, showing the nonlinearity N.L., the minor loop 25/6 / (s + 6), and the output ya1]

Figure 23. Nonlinear System for Parameter Identification



were stated in Equations (2.56) and are restated here, calling the

states yd1(nT) and yd2(nT) and including in the expressions parameters

b1 and b2 which are to be identified.


yd1(n+1 T) = yd1(nT) + (25 T / 12) b1(nT) [4h(nT) - 3h(n-1 T) + h(n-2 T)] ,

yd2(n+1 T) = a yd2(nT) + b2(nT) [d1 e(n+1 T) + d2 e(nT)] ,

h(nT) = yd2(nT) + .01 yd2^3(nT) ,

e(nT) = r(nT) - yd1(nT) ,

d1 = (6T - 1 + a) / 6T ,

d2 = (1 - a - 6Ta) / 6T ,

a = e^-6T .          (3.25)


Development of the equations for the adjoint variables gives rise

to a problem in the formulation. Desiring to obtain an expression

for p(n+1 T) from Equation (3.16), the matrix ∇y f'(yd(nT), b) must

be inverted. When Equations (3.25) are placed in true discrete

state variable form, as in Equation (3.12), so that all terms on the

right hand side are of argument nT, the matrix resulting from the

gradient operation, ∇y f'(yd(nT), b), is "almost" singular. Such

a poorly conditioned matrix produces equations of correspondingly

poor stability. When implemented on the computer, the effects of

rounding numbers within the machine are apparently enough to cause

certain singularity of the matrix.

The problem of instability of the adjoint equations may be

circumvented, for values of the bi near the optimum, by considering

variables with delayed arguments in Equation (3.25) as coefficient

terms, and obtaining the gradient matrix ∇y f' by taking






derivatives with respect to yd1(nT) and yd2(nT). For choices of

the bi which are not sufficiently near optimum, instability appears

to be generally unavoidable. The approximate gradient matrix

∇y f' now appears as

           |  1       -b2(1 - a)   |
∇y f'  =   |                       |          (3.26)
           |  s       a - b2 d1 s  |

s = (25/3) T b1 [1 + .03 yd2^2(nT)] ,


where a and d1 are as defined in Equations (3.25). Since detailing

of all the matrices involved in developing the examples would appear

to add little to the presentation, they will not be generally shown.
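As one concrete instance, the approximate gradient matrix of Equation (3.26) is straightforward to evaluate numerically; note that the factor 25/3 in s (i.e. 4 · 25/12) is an assumption recovered here from the difference equations (3.25), not a value stated explicitly in the text.

```python
import math

def grad_f(T, b1, b2, yd2):
    """Approximate gradient matrix of Eq. (3.26), obtained by treating
    the delayed arguments of Eqs. (3.25) as coefficients; entry (i, j)
    is the partial derivative of f_j with respect to y_di(nT)."""
    a = math.exp(-6.0 * T)
    d1 = (6.0 * T - 1.0 + a) / (6.0 * T)
    s = (25.0 / 3.0) * T * b1 * (1.0 + 0.03 * yd2 ** 2)
    return [[1.0, -b2 * (1.0 - a)],
            [s, a - b2 * d1 * s]]
```

Since the adjoint recursion (3.27) inverts this matrix at every step, a practical sanity check is that its determinant stays well away from zero along the trajectory.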

A concise statement for p(n+1 T) and β(n+1 T) is obtained from

Equations (3.16) and (3.17) as

p(n+1 T) = [∇y f'(yd(nT), b)]^-1 [p(nT) + yd(nT) - ya(nT)] ,          (3.27)

and

β(n+1 T) = β(nT) - [∇b f'(yd(nT), b)] p(n+1 T) ,          (3.28)


where R has been made a 2 x 2 identity matrix. A new state vector

is obtained in the form of Equation (3.20) by adjoining Equations (3.27)






and (3.28) to Equations (3.25) together with the vector [b1 b2]'.

The boundary conditions are given by


yd(0) = ya(0) = 0

and

p(N-1 T) = β(N-1 T) = β(0) = 0 .          (3.29)


The linearized equations for the iteration procedure now require

derivation of the Jacobian matrix [∇x g']', which for this example

is an 8 x 8 matrix whose ijth element is ∂gi/∂xj. The system of

linear, time-varying equations is again stated as


x^(q+1)(n+1 T) = [∇x g'(x^q(nT))]' [x^(q+1)(nT) - x^q(nT)] + g(x^q(nT)) .

The homogeneous and particular solutions of the linearized equations

are now sought. The homogeneous solution is obtained from

Φ^(q+1)(N-1 T) = Π_{k=0}^{N-2} [∇x g'(x^q(kT))]' Φ^(q+1)(0) ,          (3.30)

where Φ^(q+1)(0) = I, an 8 x 8 identity matrix. It is noted

here that only Φ^(q+1)(N-1 T) and p^(q+1)(N-1 T) need be retained for

evaluation of the initial condition vector v^(q+1). The particular

solution vector is given by


p^(q+1)(N-1 T) = Σ_{j=0}^{N-2} H(N-1, j+1) B(jT) ,          (3.31)






where

H(N-1, j+1) = Π_{k=j+1}^{N-2} [∇x g'(x^q(kT))]'

and

B(jT) = g(x^q(jT)) - [∇x g'(x^q(jT))]' x^q(jT) ,



since p^(q+1)(0) = 0. Flow diagrams for the computer programs to

accomplish these operations are presented in Appendix IV. From

the boundary conditions of Equations (3.29), it is recognized that

vi^(q+1) = 0 for i = 1,2,7,8. It is then necessary to evaluate only

the remaining four initial conditions at each iteration.

For the example system with a 10 unit step input, the gain

parameters b1 and b2 were determined for sampling intervals of

.01 to 0.3 second for an observation time of 5 seconds. The most

rapid convergence of the process was achieved by starting the experi-

ments at the low sample period where the bi are approximately unity.

For successive programs with increased sample periods, the parameter

values obtained from the previous experiment were employed as starting

estimates. The iteration process for each experiment was terminated

on a successful convergence by halting the iterative procedure when
parameter values changed by less than 10^-4 between successive iterations.

Since no "true" value of parameter was known, the procedure was

forced to run for a minimum number of iterations, usually three to

five. Table 5 illustrates the convergence of the gain parameters for






the present example for the case where the sampling interval is

.1 second. The numbers shown have been rounded to six decimal

places and in the actual result change by less than 10^-6 between

iterations 3 and 4.



Table 5

Gain Parameter Convergence

Iteration        b1              b2

    0         1.000000        1.000000

    1          .846688        1.035062

    2          .820133        1.041022

    3          .819920        1.040923

    4          .819920        1.040923



Figure 24 illustrates the parameter adjustment obtained via

the quasilinearization procedure as the sampling interval was changed.

The related effect on the approximation error dependence on sampling

interval is shown by the data of Figures 25 and 26 which display the

value of the NSES criteria for x1 and x2. In these figures the data

from the experiments of Chapter 2 are repeated to demonstrate the

effect of gain parameter change. It is apparent that there has been




[Figure: gain parameters b1 (integrator) and b2 (minor loop) vs. T (sec)]

Figure 24. Discrete Model Gain Adjustment via Quasilinearization




[Figure: log-scale NSES criterion vs. T (sec) for the Optimum, Fowler, Optimum with QL gains, and Optimum with DA gains models]

Figure 25. Error Criterion for x1 with Identified
Gain Parameters, Step Input




[Figure: log-scale NSES criterion vs. T (sec) for the Fowler, Optimum, Optimum with QL gains, and Optimum with DA gains models]

Figure 26. Error Criterion for x2 with Identified
Gain Parameters, Step Input






some achievement in reducing the error in xl, but at the expense

of x2. It should be recalled that the weighting matrix in the

criterion of Equation (3.11) was taken to be an identity matrix

for this example. Some further experimentation with the form of the

weighting matrix appears to hold promise as a means of tailoring

model response for desired error distribution. The step response

of the system is shown in Figure 27 further illustrating the effect

of parameter adjustment.

For the same conditions of input signal and sampling interval,

the differential approximation method was used to determine the

parameter adjustment. The expression of Equation (3.24) was formed

employing for ya(nT) values of system response obtained via the

Runge-Kutta integration method. The effect of these gain settings

on the simulation error is shown in Figures 25 and 26. Like the

quasilinearized gains, these have negligible effect on the error at

small sampling intervals where the error is very low, but there is

improvement at the larger sampling intervals. The error reduction

achieved by differential approximation is not so beneficial, as seen

in the data for x1, but the effect on x2 is somewhat less damaging.

Here again uniform weighting was employed in the problem formulation.

The gain parameter variation with sampling interval as determined

via differential approximation is shown in Figure 28.

A second experiment utilizing the method of quasilinearization

was performed to study the manner in which the gain parameter values

are affected by changing input magnitude of the nonlinear system.




[Figure: x1(t) step response vs. time (sec) for the Continuous, Optimum, Optimum with QL gains, and Fowler models]

Figure 27. Effect of Gain Parameter Change on Step
Response of Nonlinear System




[Figure: gain parameters b1 (integrator) and b2 (minor loop) vs. T (sec)]

Figure 28. Gain Adjustment via Differential Approximation


[Figure: gain parameters b1 (integrator) and b2 (minor loop) vs. input step amplitude]

Figure 29. Gain Adjustment for Input Step Change






With a sampling interval size of .1 second, the input step function

amplitude was varied from .5 unit to 20. units. The gain parameter

values obtained with the quasilinearization method are shown in Figure

29. The principal result of the gain change to larger magnitude

input step functions is a reduction in the initial overshoot in the

response of the optimum discrete simulation. This is reflected in

the NSES criterion for the output state x1, for which the value with a 20. unit step function input is reduced from 1.659 to 0.904. Here again the gain adjustment is of no benefit to x2, for which the criterion value is slightly increased from a unity-gain magnitude of 0.808 to an adjusted magnitude of 0.821.
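The NSES values quoted above can be illustrated in form, though not in value, by a short sketch. The exact normalization of the criterion is not restated in this section, so the definition used below, the sum of squared simulation errors divided by the sum of squared system samples, is an assumption, and the trajectories are invented.

```python
import numpy as np

# Assumed definition: the exact normalization of the NSES criterion is not
# restated in this section, so NSES is taken here as the sum of squared
# simulation errors divided by the sum of squared system samples.

def nses(y_sys, y_model):
    y_sys = np.asarray(y_sys, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return float(np.sum((y_model - y_sys) ** 2) / np.sum(y_sys ** 2))

# Invented trajectories: a system record and two candidate simulations.
y_sys    = [1.0, 2.0, 3.0, 4.0]
y_coarse = [1.2, 2.5, 3.1, 4.4]
y_tuned  = [1.1, 2.1, 3.0, 4.1]

# A successful gain adjustment lowers the criterion, as the adjustment in
# the text reduced NSES for x1 from 1.659 to 0.904 for the 20. unit step.
assert nses(y_sys, y_tuned) < nses(y_sys, y_coarse)
```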

Experimentation performed with a sine wave input to the system

yielded less promising results than for the step input and made

apparent the convergence problems which constitute a major weakness

in the quasilinearization procedure. Figure 30 displays the data

for the NSES criterion for x1 with an input of 10sin2t. The data

for the quasilinearized gains are not extended beyond a sampling

interval of .15 second due to the uncertainty introduced into the

results by the poor convergence properties of the method as the

sampling interval is increased. For the sampling interval range in

which data points are given, some problems occurred, but reliable

convergence appeared to be obtained after some experimentation with

initial values. The convergence problem is illustrated by the data

of Table 6. Displayed are the gain parameter values obtained from

different initial estimates for a sampling interval of 0.2 second.






Figure 30. Error Criterion for x1 with Sine Input
and Identified Gains

[Figure: NSES criterion for x1 versus sampling interval, logarithmic scale.]

Table 6

Convergence of Quasilinearization
for Nonlinear System with 10sin2t Input and T = 0.2 Second

                    First Run               Second Run
Iteration        b1          b2          b1          b2
    0         .700000    1.200000    .945781    1.022408
    1         .824744    1.866403    .900483    1.036902
    2         .781149    1.183690    .905106    1.035109
    3         .783074    1.181670    .904996    1.035158
    4         .782964    1.181708    .904995    1.035158
    5         .782963    1.181708    .904991    1.035158

NSES (x1)      .1033                  .0370
NSES (x2)      .2456                  .0223






and the accompanying values of the NSES criteria for the state

variables. It is noted that convergence is obtained for both cases

with quite different resulting gains and correspondingly quite different criteria values, none of which improves on the unity-gain case.
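The behavior recorded in Table 6, in which different initial estimates converge reliably to different gain values, is characteristic of Newton-type descent on a multimodal error surface. The sketch below uses an invented scalar criterion with two local minima, not the thesis criterion, to reproduce that behavior.

```python
# Invented illustration: a scalar criterion J(b) with two local minima,
# minimized by Newton's iteration on J'(b) = 0.  This is not the thesis
# criterion; it only reproduces the local character of quasilinearization,
# in which different initial estimates converge to different stationary
# points, as in Table 6.

def J(b):
    return (b * b - 1.0) ** 2 + 0.3 * b      # two unequal local minima

def dJ(b):
    return 4.0 * b * (b * b - 1.0) + 0.3

def d2J(b):
    return 12.0 * b * b - 4.0

def newton(b0, iters=20):
    b = b0
    for _ in range(iters):
        b -= dJ(b) / d2J(b)                  # Newton step toward J'(b) = 0
    return b

b_left = newton(-1.2)    # settles near the local minimum at b close to -1
b_right = newton(1.2)    # settles near the other minimum at b close to +1
```

Each run converges cleanly, yet the two answers and their criterion values differ; only a good initial estimate selects the better minimum, which is the situation the experimentation with initial values described above had to contend with.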

Identification of the gain parameters via differential

approximation was also attempted for the case of the sine wave input.

These NSES criterion values for x1 are also shown in Figure 30. The

results obtained reveal the complete failure of the approach in this

case. The gain parameter values obtained are so far from the expected

neighborhood of optimum values that they could not serve as starting

values for quasilinearization with convergence resulting even at

small sample intervals.

Application of the quasilinearization and differential

approximation methods to identification of time-varying system

parameters is a matter of interest in attempting to obtain a useful

discrete approximation to physical systems. As an example of such

a case, the nonlinear system studied earlier was modified by

replacing the nonlinearity with a time-varying gain. The resulting

system is shown in Figure 31, for which it is desired to evaluate the parameter b in the gain, given records of the state variable trajectories for a step function input. The discrete

system state equations are given by Equations (3.25), where the b1 and b2 there are made unity and h(nT) becomes b sin 2nT.
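Since Equations (3.25) are not reproduced in this excerpt, the recursion below is only an invented two-state stand-in indicating where a time-varying gain h(nT) = b sin 2nT would enter the simulation loop in place of the former nonlinearity.

```python
import math

# Invented stand-in: Equations (3.25) are not reproduced in this excerpt,
# so the two-state Euler recursion below only indicates where the
# time-varying gain h(nT) = b*sin(2nT) replaces the former nonlinearity,
# with the remaining gains set to unity as in the text.

def simulate(b, T, steps, u=1.0):
    """Step response of the assumed discrete system with gain h(nT)."""
    x1, x2 = 0.0, 0.0
    traj = [(x1, x2)]
    for n in range(steps):
        h = b * math.sin(2.0 * n * T)             # time-varying gain
        x1_next = x1 + T * x2                     # integrator state
        x2_next = x2 + T * (u - h * x1 - x2)      # minor loop state
        x1, x2 = x1_next, x2_next
        traj.append((x1, x2))
    return traj

# Records such as these would be the data from which b is identified.
traj = simulate(b=1.0, T=0.1, steps=50)
```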



