Title: Identification and control of nonlinear differential systems with stiction and coulomb friction
Permanent Link: http://ufdc.ufl.edu/UF00097818/00001
 Material Information
Title: Identification and control of nonlinear differential systems with stiction and coulomb friction
Alternate Title: Nonlinear differential systems
Physical Description: viii, 95 leaves : illus. ; 28 cm.
Language: English
Creator: Acker, William Fred, 1930-
Publisher: University of Florida
Place of Publication: Gainesville
Publication Date: 1967
Copyright Date: 1967
 Subjects
Subject: Servomechanisms   ( lcsh )
Simulation methods   ( lcsh )
Electrical Engineering thesis Ph. D
Dissertations, Academic -- Electrical Engineering -- UF
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
 Notes
Bibliography: leaves 89-92.
Additional Physical Form: Also available on World Wide Web
General Note: Manuscript copy.
General Note: Thesis - University of Florida.
General Note: Vita.
 Record Information
Bibliographic ID: UF00097818
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000537986
oclc - 13026269
notis - ACW1192














IDENTIFICATION AND CONTROL

OF NONLINEAR DIFFERENTIAL SYSTEMS

WITH STICTION AND COULOMB FRICTION








By

WILLIAM FRED ACKER


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY











UNIVERSITY OF FLORIDA


April, 1967












ACKNOWLEDGMENTS


I thank all those who helped me obtain the NASA fellowship

which paid for the first two years of this four-year effort. That

includes two people who helped me obtain the forms, six who wrote

letters of recommendation, dozens who processed and approved forms,

and millions who paid taxes. There is not room here to mention them

all individually, but my gratitude is sincere.

Many fellow Honeywell employees have helped me by arranging

a leave of absence and allowing me to work as a part-time employee.

A few of those people who helped in special ways are: F. X. Pesuth,

G. T. Bynum, M. J. Conway, S. F. Sando, T. M. Berlage, and

T. W. Christensen. A particular debt is owed to Dr. E. Tims who

triggered my decision to start on this project and showed me how to

start. Thanks are also expressed to R. Meredith who came out in the

middle of the night on his own time to repair the computer for me, and

to many others who were equally helpful in other ways such as

Roy Nurse and Leo Spiegel.

A debt is acknowledged to my family which has gotten along on

half of my normal income and a tiny fraction of my normal time and

attention for the past four years.






I deeply appreciate the cooperative attitude of all of the faculty

at Gainesville, particularly the members of my committee. I thank

the chairman of my committee, Dr. A. P. Sage, and whole-heartedly

recommend him to any graduate student who wants an education in

control theory and is willing to work for it.

Thanks are also given to Mrs. Gertrude Wharton, who

proofread and typed the final draft of this document.











TABLE OF CONTENTS


Page

ACKNOWLEDGMENTS ...................... ii

LIST OF FIGURES ................... ..... v

ABSTRACT ..................... ......... vi

Chapter

1 INTRODUCTION ..................... 1

2 BACKGROUND AND DEFINITION OF THE PROBLEM 9

3 DEVELOPMENT OF THE MULTIMODE-LINEARIZATION
TECHNIQUE FOR THE PRECISE HIGH-SPEED
SIMULATION OF NONLINEAR SYSTEMS . . .. 25

4 EXTENSION OF MULTIMODE LINEARIZATION TO
INCLUDE STOCHASTIC NOISE .......... 36

5 SOLUTION OF A MULTIMODE-IDENTIFICATION
PROBLEM BY USING MODE ESTIMATORS,
PARTITIONED AND MODIFIED KALMAN FILTERS,
AND BOUNDARY-CONDITION MATCHING ..... 49

6 DEFINING, EVALUATING, AND MINIMIZING THE
COST FUNCTION ... ......... ....... 64

7 THE DIGITAL SIMULATION ............... 72

8 CONCLUSIONS, APPLICATIONS, AND TOPICS FOR
FURTHER STUDY .................. 85

KEYED BIBLIOGRAPHY ..................... 89

ADDITIONAL BIBLIOGRAPHY. ................. 91

BIOGRAPHICAL SKETCH .................... 93












LIST OF FIGURES

Figure Page

2-1 BLOCK DIAGRAM OF THE NONLINEAR-PLANT
MATH MODEL SHOWING THREE PIECEWISE-
LINEAR MODES ................... 22

7-1 GREATLY SIMPLIFIED AND ABBREVIATED
COMPUTER FLOWCHART . . . . . .... .75

7-2 ERROR VERSUS TIME .................. 79

7-3 ERROR VERSUS TIME, CONTINUED . . . ... 80

7-4 FRICTION ESTIMATES VERSUS TIME . . . ... 81

7-5 COST AND Ac RATIO VERSUS TIME . .. . .. 82












Abstract of Thesis Presented to the Graduate Council
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy

IDENTIFICATION AND CONTROL
OF NONLINEAR DIFFERENTIAL SYSTEMS
WITH STICTION AND COULOMB FRICTION

By

William Fred Acker
April 22, 1967

Chairman: Dr. A. P. Sage
Major Department: Electrical Engineering

The object of this study is to develop theoretical techniques

which will be helpful in designing a more accurate servomechanism

in order to control the angular position of an output shaft so as to

track a slowly moving reference position. It is assumed that motion

is slow enough to excite stick-slip, nonlinear oscillations. The

specific application motivating this study is the desire to build more

accurate gimbal-position control systems for testing inertial-guidance

components; however, the analysis and control techniques are

developed and discussed from a general rather than specific point

of view. The results are applicable to a wide range of problems.

Using previous results, an inequality is derived which specifies

the maximum positional accuracy obtainable with a particular set of

plant parameters when using classical, linear, time-invariant control

strategies. The assumptions in that derivation are examined, and it








is discovered that they can be circumvented by using semi-open-loop,

nonlinear, identification and control strategies, hereafter called

solnic strategies. The solnic strategies are called "semi-open-loop"

because an entire corrective-manipulation cycle of the control variable

is completed on an open-loop, predictive basis before any feedback

concerning the results of those manipulations is received. When feed-

back information is eventually received, it is used to improve the

parameter and state-variable estimates with which the controller pre-

dicts the next appropriate sequence of control-variable manipulations;

therefore, the control loop is not completely open. As a consequence

of the optimization requirements, the control variable is switched

rather than controlled linearly. The control process can be no more

accurate than the identification process; therefore, great emphasis is

placed on the development of an accurate state and parameter

estimation technique for highly nonlinear systems.

A discrete simulation technique for noisy nonlinear systems

is developed which is at least a thousand times faster than standard,

fixed-increment numerical integration for this particular system.

Using time domain, piecewise linearization, the nonlinear equations

are partitioned into linear sets which are called modes. State-

transition matrices are used to update the state variables from mode-

switching time to mode-switching time. Boundary conditions are

matched, state-transition matrices changed, and noise vectors added

at the mode-switching times. A completely general technique for







constructing vector, stochastic samples with the desired variances

and covariances from a minimum number of independent, zero-mean

unity-variance, Gaussian samples is derived and explained in

considerable detail.

The Kalman-filtering technique is modified using the piecewise,

multiple-mode linearization approach sketched above; however, non-

linearities remain. Techniques for handling these nonlinearities and

for estimating the system mode are discussed and specific solutions

presented. The multiple-mode Kalman-filtering technique has such

great generality and is so conceptually simple that it has implications

much broader than those presented here.

Digital simulations were performed using all of the techniques

sketched above. When started with large initial estimation errors,

the controller rapidly improves the estimates and converges to an

optimal strategy in order to minimize a predefined cost function. It

was demonstrated that, by use of the solnic strategy, it is possible to

control the angular position of the output shaft with at least an order

of magnitude more accuracy, and with less energy than when using

classical, time-invariant, linear control strategies. The ultimate

accuracy limitation when using a solnic strategy has not yet been

determined.













CHAPTER 1


INTRODUCTION

1. 1 Motivation

The objective of this study was to develop theoretical

techniques which would be helpful in designing a more accurate

servomechanism for controlling the angular position of an output

shaft so as to track a slowly moving position. It was assumed that

the motion would be slow enough to excite stick-slip, nonlinear-

frictional oscillations. The application motivating this study was

the desire to build more accurate gimbal-position control systems

for use in the laboratory testing and calibration of inertial guidance

components. The same techniques would also be useful for precision

pointing of astronomical telescopes, radar antennas, and machine

tools.

Most of the techniques which were developed have far greater

generality than would be expected from the narrowness of the stated

objective. Actually, a conscientious effort has been made throughout

this study and the reporting on this study to generalize as much as

possible in order to keep from getting lost in a mass of detail. After

all, as R. W. Hamming said, "The purpose of computing is insight,

not numbers" (I, p. v). Chapter 4 is a specific example of the effort






spent in generalizing. After the specific solution required for simu-

lating the plant noise used in this study had already been obtained,

several weeks were spent developing a completely general technique

for doing the same thing. Only the general technique is included in

this report. The derivation of the general technique is presented in

Chapter 4. The specific solution, which was omitted, is very im-

pressive because it involves contour integration; the formation and

integration of joint density functions to obtain marginal variances,

conditional variances, and covariances; and the guessing of a par-

ticular nonunique solution out of an infinite set. The general solution

is much less impressive; in fact, it would be a trivial task to program

a digital computer to perform the entire solution automatically. On

the other hand, that simplicity is the very thing which makes the

general solution much more valuable. It is hoped that enough detail

has been omitted that the reader will find "insight, not numbers," and

yet that enough detail has been retained to allow him to continue from

the point where this study stopped without the need for rediscovering

anything.


1. 2 Precis

In Chapter 2, an inequality is derived which specifies the

maximum accuracy obtainable using classical, linear, time-invariant

control techniques for the position control servomechanism with

inertia, stiction, and coulomb friction acting at the load. A semi-

open-loop, nonlinear, identification and control strategy is then





proposed for breaking the theoretical accuracy bound. The strategy

is called "semi-open-loop" because an entire cycle of corrective

manipulations of the control variable is completed before any feedback

concerning the results of those manipulations is received. Each

individual correction cycle is handled on an open-loop predictive

basis. The strategy is only semi-open-loop because feedback data

is eventually received and used for improving the accuracy of the

plant model which is used for estimating the appropriate open-loop

actions. The strategy is called nonlinear because the control variable

is switched to conserve energy rather than varied proportionally.

When stiction is present, the typical linear control will waste large

quantities of energy by pushing gently on a shaft which will not come

unstuck until a much larger torque is applied. To facilitate commun-

ication, the term "semi-open-loop, nonlinear, identification and

control strategy" is abbreviated to "solnic strategy. "

Having introduced the solnic concept and proposed that concept

as a possible means for exceeding accuracy obtainable with classical,

linear, time-invariant control strategy for a specific plant, Chapter 2

goes on to define a specific plant. Some of the limitations of the

classical, nonlinear friction model are discussed and a more detailed

though still imperfect model is constructed. Typical effects which

limit the bandwidth of physical plants are mentioned and two are

chosen for the plant model. Then, the total plant model is specified

in detail.







Chapter 3 presents a new technique for simulating nonlinear

systems with a digital computer. The prime prerequisite for using

this technique is that it must be possible to approximate the plant

by piecewise linearization of the equations in the time domain. The

plant model being studied satisfies the piecewise linearization

requirement. In fact, as will be shown, the nonlinear plant equations

can be partitioned into three sets of linear equations: one set to

describe the plant when the output shaft velocity is positive, a second

set to describe the plant when the output shaft is stuck, and a third

set to describe the plant when the output motion is negative. These

three operating conditions are called three plant modes. By parti-

tioning the plant equations into linear modes, determining the times

when mode changes occur, solving for the response from mode-

switching time to mode-switching time using state-transition matrices,

and matching the boundary conditions, the plant dynamics can be sim-

ulated with greater accuracy and with less computer time than when

using numerical integration. The new simulation technique is called

multimode linearization. It was necessary to add extra switching

points to account for the times at which the control variable was

switched, and it was necessary to develop techniques for determining

the mode-switching times; however, the multimode-linearization

technique worked very well in the digital simulation. The technique

should have wide application, because it was faster and more accurate

than numerical integration. Most nonlinear problems can be





approximated, if not represented precisely, by piecewise

linearization in the time domain.

The technique developed in Chapter 4 is considered to be a

very solid contribution not only to the simulation art but also to

practicing error analysts. Using the multimode-linearization

technique combined with the fairly standard techniques of computing

covariance matrices for noise in linear systems, it should be a

trivially simple matter to solve for the statistics of the errors in

piecewise-linearizable nonlinear systems. In fact, that is exactly

what was done in the simulation of the multimode-modified Kalman

filter. The greater contribution is in the second half of the chapter.

Instead of simulating the noise by numerical integration using hundreds

or thousands of random samples from a normal population, random

vectorial samples can be generated to fit any physically realizable

covariance matrix using no more samples than the number of com-

ponents in the vector. The saving in the amount of computer time

spent merely in the generation of random samples is in itself

extremely significant. As a bonus, there are also the same savings

in time and improvements in accuracy which were found when applying

the multimode-linearization technique to noise-free problems. The

addition of noise alters the switching times at which mode changes

occur but a fast, accurate solution to the problem was found.

Once the multimode-linearization technique had been developed

for simulating the piecewise linear plant, it was probably inevitable







that an attempt would be made to apply the same multimode-

linearization concept to the estimation problem. It was not inevitable,

however, that the attempt would be successful. The prime obstacle

to the application was the task of estimating which mode the system

was in at any specific time. The general optimum solution to the

mode estimation problem was not worked out or applied because of

the computational difficulties in applying Bayes' strategy decision-

making techniques in such a highly nonlinear system. A maximum-

likelihood decision maker was considered but abandoned because the

cost of rejecting truth seemed to be far less than that of accepting

falsehood. The decision-making scheme which was finally chosen

predicts mode changes using the estimated plant model and rejects

data when the time of the data observation is within three standard

deviations of a mode-change point. Because of the bandwidth limita-

tion, no usable observations were obtained when the plant was in

mode 1 or mode 3. No claims of optimality are made for the strategy;

but it is claimed that it took extremely little computer time and space,

and that it produced excellent results. Using the predictive mode

estimation scheme, it was possible to modify the Kalman-filtering

process into two separate parts which were connected by setting the

boundary conditions equal at the predicted nominal switching times.

The scheme for partitioning and connecting the modified Kalman

filters is simple in retrospect when explained in detail; however, it

will not be summarized here because of the large number of definitions

and simple steps involved.
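A minimal sketch of the data-rejection rule just described is given below. It is
offered only as orientation in modern Python (the original work used FORTRAN on an
SDS 930), and the function and argument names are invented for illustration; the
full filter logic is developed in Chapter 5.

def accept_observation(t_obs, predicted_switch_times, sigma_switch):
    """Use an observation only if its time is not within three standard
    deviations of any predicted mode-change time (the gating rule above)."""
    return all(abs(t_obs - t_sw) > 3.0 * sigma_switch
               for t_sw in predicted_switch_times)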







Because of the complexity and extreme nonlinearity of certain

of the filtering equations, it was not feasible to obtain partial deriva-

tives of the observations with respect to the estimated state variables

(at some points the derivatives are infinite). As a consequence of this

nonlinearity, some of the standard-Kalman-filtering equations had to

be modified by replacing the partials with quasipartials (first-order

partial differences). When sufficient nonlinear constraints are added

to make the estimation process convergent for large errors, then the

errors eventually become small enough that the linearized error

equations become accurate and the modified Kalman filter begins to

function optimally.

The task of defining a cost function for the control was trivial,

but the task of solving for the optimal strategy analytically presents

formidable computational barriers. It was decided to make the con-

trol system evaluate cost as it operated, and to make it readjust its

strategy periodically in order to operate at minimum cost. The

optimization system worked quite well but suffered from what seems

to be a universal defect. In order to avoid singularity problems in

the plant-identification process, it was necessary to perturb the

control strategy slightly about the optimum point and that increased

the cost. This will be called the identification singularity versus cost

nonoptimality dilemma.

Using all of the techniques just discussed, the stochastic,

nonlinear plant and the solnic system were simulated using FORTRAN





programming and an SDS 930 computer. The initial estimates for

stiction, coulomb friction, and the control strategy perturbation

amplitude were set in error by factors of two from their proper

values. All other estimates including that of the optimal corrective

action were set to zero. It took about 20 seconds for the solnic system

to obtain enough information to compute one set of cost functions and

modify its strategy once. At 54 seconds, the solnic system had iden-

tified the system parameters within 6 percent, changed strategy 13

more times and had effectively reached steady-state operation at the

optimum cost point. The 5 percent estimation error was caused by

the identification singularity versus cost nonoptimality dilemma. The

estimate for ks was 5.7 percent low, that for kc was 1.4 percent high

and the effects of the errors effectively cancelled each other for

operation near the optimum cost point. The modified-Kalman-filter

equations had detected this singularity through estimating the covar-

iance between the errors and had reduced the filter gains accordingly.

The ks estimate had only improved 1 percent in the last 20 seconds.

The largest errors were two-sevenths as large as the minimum

obtainable error using classical, time-invariant, linear control.

When the cost ratio was changed to charge more for error and

less for control effort, stable consistent operation with peak errors

an order of magnitude below the classical limit was demonstrated.

The original objectives had been attained, and several unplanned by-

products had been developed. Simulation work was terminated at this

point with no effort being made to find the lower limit.











CHAPTER 2


BACKGROUND AND DEFINITION OF THE PROBLEM

2.1 Motivation

The prime objective of this study was to find a more accurate

means of controlling the angular position of a mechanical shaft so as

to follow a slowly moving commanded position with minimum error.

The theoretical techniques developed herein are by-products of that

effort. The application which first motivated this study was the labor-

atory calibration of inertial-guidance platforms and components such

as ESG's (electrically suspended gyros). In a typical application, a

series of concentric gimbals are mounted on a cement pier and a set

of gyros and accelerometers are mounted in the center of the gimbal

system. The angular velocities which the gimbal servo systems must

track are relatively constant and seldom much greater than 15 degrees-

per-hour. In order to minimize errors in instrument calibration, it

is desirable to make the gimbal control servomechanisms as accurate

as possible. One of the prime causes of gimbal servo error is non-

linear friction and every effort is made to minimize this friction. In

some cases, pressurized oil is pumped into specially designed hydrau-

lic gimbal bearings so that the mechanical parts do not touch but float

on an oil film with respect to each other. Even so, nonlinear friction








remains because of the need for conducting electrical power and

signals to and from the inertial components. Having failed to elim-

inate nonlinear friction, the control system designer must learn to

minimize the angular error caused by that friction. The error mini-

mization problem would be trivial if all of the equations were linear

and well defined. The equations are neither linear nor well defined.

One of the first steps in solving a problem is to define it. In

this chapter, the classical definitions of the problem will be reviewed.

Then an equation stating the minimum friction-induced angular error

for classical, time-invariant, linear controllers will be presented and

the physical meaning of that equation discussed. At that point, intui-

tion takes over and a predictive, nonlinear, time-variant control

strategy presents itself. The classical error equations, nonlinear

friction models, and bandwidth specifications become insufficient with

respect to the new concept. The remainder of this chapter is spent

in redefining the friction model, choosing a new bandwidth limitation,

and defining a specific plant which can be used to test the new strategy.


2. 2 Classical Definitions and Solutions

Classically, the friction in servomechanisms is considered to

be made up of three components: viscous friction, coulomb friction,

and stiction (2, pp. 481-484; 3, pp. 82-83). The magnitude of the

torque exerted on the output shaft by viscous friction is equal to a

constant times the shaft velocity, and it is in such a direction as to







resist motion. Viscous friction is a linear effect. At the 15 degree-

per-hour shaft velocities considered in this study, the effect of

viscous friction is negligible.

Stiction and coulomb friction are complementary elements of

the classical, nonlinear friction model. When the shaft is moving,

stiction torque is zero by definition. When the shaft is stopped,

coulomb-friction torque is zero by definition. Let the symbols ks and

kc represent the stiction coefficient and the coulomb-friction coeffi-

cient, respectively. The values of ks and kc are positive constants.

When the shaft has stopped, then by the definition of stiction, the

shaft will not begin to move again until the magnitude of the motive

torque exceeds the magnitude of ks. When the shaft is moving, the

coulomb-friction torque is equal in magnitude to kc and in the proper

direction to oppose motion. The ratio of ks to kc is never less than

unity, and for bearings it is very seldom greater than two.
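The classical model just described can be summarized in a few lines of code. The
sketch below is a modern Python illustration, not part of the dissertation's own
FORTRAN program; the function name and the handling of the breakaway instant are
assumptions made for the example.

def classical_friction_torque(motive_torque, velocity, k_s, k_c):
    """Classical stiction / coulomb-friction model (viscous term neglected).

    When the shaft is moving, the friction torque has magnitude k_c and opposes
    the motion.  When the shaft is stopped, the friction torque cancels the
    motive torque until the motive torque's magnitude exceeds k_s.
    """
    if velocity != 0.0:                     # shaft moving: coulomb friction only
        return -k_c if velocity > 0.0 else k_c
    if abs(motive_torque) <= k_s:           # shaft stuck: stiction holds the load
        return -motive_torque
    # breakaway: the shaft starts to move and coulomb friction takes over
    return -k_c if motive_torque > 0.0 else k_c

# With the magnitudes later listed in table (2-1): k_s = 2.1, k_c = 1.4 ozf-in
print(classical_friction_torque(1.0, 0.0, 2.1, 1.4))   # -1.0  (stuck, torque cancelled)
print(classical_friction_torque(3.0, 0.0, 2.1, 1.4))   # -1.4  (breaks loose)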

The magnitude of the static error of a linear servomechanism

is equal to the disturbance torque magnitude divided by the static

stiffness. In all physical servomechanisms, there is some frequency

above which the loop gain must be kept below one. This frequency

will be represented by the symbol, ω_b, defined as the bandwidth limit,

and discussed in detail later. For an uncompensated linear servo-

mechanism with a pure inertial load, the static stiffness is proportional to
the square of the system bandwidth, and the static error is therefore
inversely proportional to it.

Therefore, the bandwidth limitation is of extreme interest to the servo





designer. For a fixed bandwidth, it is possible to increase the static

stiffness by adding a low-frequency lag network to produce integral

action (4, pp. 172-209; 5, p. 17). Even if the system is carefully

designed to appear conditionally stable under Bode-plot or root-locus

analysis, it will oscillate if excessive lag is used and nonlinear fric-

tion is present. An excellent article by Bohacek and Tuteur presents

a mathematical description of these oscillations in the time domain

and derives an inequality limiting the maximum allowable lag ratio (6).

The following equation is a free but accurate translation of the

Bohacek and Tuteur result when the corner frequencies of the lag

network are set low enough that the network does not contribute

excessive phase shift at the bandwidth limitation frequency, ω_b.

    L_max = 2 k_s / (k_s - k_c)                                        (2-1)

L_max represents the maximum permissible magnitude for the ratio

of the two lag network corner frequencies. Attempts to use the

describing-function technique in examining the stick-slip oscillations

caused by the interaction of lag and nonlinear friction do not yield

quantitatively accurate results because the waveform is far from

sinusoidal (7). The oscillations have been accurately analyzed using

phase-plane and phase-space techniques although the task is tedious

(8). Fortunately for the reader, it will not be necessary to reproduce

either the time-domain or phase-plane analyses. By combining

insights from the articles referenced above and a related article by





George Biernson, a simpler derivation can be constructed (9). The

maximum accuracy obtainable for a time-invariant linear controller

with a bandwidth, ω_b, controlling a load with inertia, J, and non-

linear friction coefficients, kc and ks, can be derived by inspection.

The peak steady-state error, θ_e, of a linear servomechanism as a result of a
stiction torque, k_s, is given by

    θ_e = k_s / S.S.                                                   (2-2)

where S.S. represents the static stiffness. For a pure inertial load and a
linear control with no compensation,

    S.S. = J ω_b^2                                                     (2-3)

When lead networks are added to stabilize the system, clearly

    S.S. ≤ J ω_b^2                                                     (2-4)

When the maximum permissible low-frequency integration defined by equation
(2-1) is added,

    S.S. ≤ 2 k_s J ω_b^2 / (k_s - k_c)                                 (2-5)

Combining equation (2-2) and inequality (2-5),

    θ_e ≥ (k_s - k_c) / (2 J ω_b^2)                                    (2-6)

Inequality (2-6) gives the minimum obtainable peak error using a

time-invariant linear controller under the conditions specified above.

The derivation of inequality (2-6) by the above process, including the

derivation of equation (2-1) is a lengthy process punctuated by

numerous assumptions. Now the result will be derived by inspection.





Assuming that the load is moving along in a unidirectional manner with
stick-slip motion, the frictional torque will oscillate in a sawtooth manner
between the magnitude of k_s and that of k_c. When the torque drops
instantaneously from magnitude k_s to that of k_c, a step disturbance torque
with a peak-to-peak amplitude equal to that of (k_s - k_c) is produced. Below
the frequency, ω_b, the servomechanism can respond to counteract the step
torque. At the bandwidth frequency the loop gain is unity and the phase angle
is usually greater than 90 degrees, so the servo is seldom helpful and
typically harmful for disturbances of that frequency. Neglecting the efforts
of the control system at frequencies equal to and greater than ω_b, the
resulting peak error, θ_e', for a sine-wave disturbance torque with a
peak-to-peak amplitude equal to that of (k_s - k_c) and a frequency of ω
acting on a pure inertia, J, is given by

    θ_e' = (k_s - k_c) / (2 J ω^2)                                     (2-7)

The maximum value of θ_e' is obtained by setting ω equal to ω_b. The equation
is turned into an inequality by examining the waveform of the disturbance
torque compared with that of a sine wave. Thus, inequality (2-6) is
intuitively obvious--in retrospect. When classical, linear, time-invariant
control techniques are applied to the nonlinear friction servomechanism
problem, inequality (2-6) states the minimum obtainable peak error. Even if
square-wave dither is added to the system, the inequality still stands unless
the dither frequency exceeds the bandwidth limitation frequency, ω_b.
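To give inequality (2-6) a rough numerical feel, the short Python computation below
uses the friction and inertia magnitudes listed later in table (2-1). The bandwidth
limit ω_b is not specified numerically in this chapter, so the 100 rad/sec used here
is an assumed, purely illustrative figure.

k_s = 2.1          # average stiction torque, ozf-in (table 2-1)
k_c = 1.4          # average coulomb-friction torque, ozf-in (table 2-1)
J   = 0.5          # load inertia, ozf-in-sec^2 (table 2-1)
omega_b = 100.0    # assumed bandwidth limit, rad/sec (illustrative only)

# Inequality (2-6): minimum peak error of a classical, time-invariant, linear controller
theta_e_min = (k_s - k_c) / (2.0 * J * omega_b**2)
print(theta_e_min)   # 7.0e-05 rad, i.e. 70 microradians for these assumed numbers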





2.3 The Classical Control System Versus a Solnic
(semi-open-loop, nonlinear, identification and control) System

The weakness of the classical, time-invariant, linear control

strategy becomes obvious after examining inequality (2-6). Neither

the magnitude of the step torque disturbance, (k_s - k_c), nor the load
inertia, J, can be changed by the control designer; the only thing left
to change is ω_b, the bandwidth limit. The addition of high-frequency
dither is one excellent means for increasing ω_b; however, dither

expends control system energy, causes undesirable torque-motor

heating, creates noise, and can cause erratic gyro drifts through the

effect called "gyro coning. "

The irreducible frictional error bound, inequality (2-6), for

classical linear controllers results from the fact that these controllers

reduce the control effort too slowly once stiction has been overcome

and the load begins to move. Ideally, the control effort should be

instantaneously reduced to precisely the value of coulomb friction the

instant that the load reaches the desired velocity. If this were done

precisely and the plant were noiseless, then the load would continue

moving at exactly the desired velocity until some change were made.

Noise must be added to the plant model to keep this unrealistic solu-

tion from appearing. The exact reason for the slowness of time-

invariant, linear controllers in reducing the level of the control effort

follows as a consequence of the definition chosen for the bandwidth

limitation. That definition will be chosen later. However, from an





intuitive point of view, it is obvious that classical, time-invariant,

linear controllers respond slowly because they must wait until the

optimal linear filters have detected a significant load velocity relative

to the noise level before reducing the control effort. Rate networks

do not solve the problem because they increase the relative noise to

signal ratio.

All that is needed to speed up the reduction of control effort is

a little predictive action such as the farmer uses in fertilizing corn.

The farmer does not keep pouring ammonia on his corn crop until he

sees a change in the corn. He predicts how much ammonia will be

required, applies it, and waits for the result. If the result is not

satisfactory, he tries a different strategy on the next crop. Most

batch processes were controlled in this manner years before the

terms control theory, predictive control, or systems science were

ever coined. The concept is not new, the context is. Since it is not

possible to keep the output shaft of a servomechanism moving con-

tinuously at extremely low velocities when stiction, coulomb friction,

and frictional noise are present, the new strategy is to abandon

continuous, frequency-domain control techniques and treat the problem

as a series of batch processes (step corrections) in the time domain.

The new semi-open-loop, identification and control strategy

will be called a solnic strategy to save time. It is called semi-open-

loop because the entire sequence of corrective commands for a step

correction is truly completed before a feedback signal concerning





the results of that correction is received. In the example to be con-

sidered, the entire corrective action is completed between the sam-

pling instants of the sample-data-type position error signal. The

corrective actions therefore appear to be open loop if looked at within

a single corrective cycle; however, the controller gradually identifies

the plant by comparing predicted results with observed results. The

gradual identification of the plant enables the controller to optimize

the heretofore open-loop control sequences, hence the strategy is

called a semi-open-loop, identification and control strategy. The word,

nonlinear, was added because switched control action was utilized in

order to minimize fuel and error costs.

Having outlined the solnic strategy, a general strategy for

breaking the classical accuracy limit, inequality (2-6), the remainder

of this dissertation is concerned with the development, implementa-

tion, and testing of that strategy. One of the objectives is to find out

under what conditions, if any, the solnic strategy is superior to the

classical, time-invariant, linear strategy. The first three steps

toward doing this are: re-examination of friction models, a survey of

bandwidth-limiting factors, and then explicit definition of the plant

model to be used for comparison purposes. The remainder of this

chapter reports on those three steps.


2.4 The Friction Math Model

The classical math model for friction including viscous fric-

tion, coulomb friction, and stiction has already been described. The






necessity of adding noise to the model to prevent a trivial control

solution has also been explained. When viscous friction is neglected,

and white and correlated friction noises are added, the model becomes

essentially that which was chosen for plant model in section 2. 6;

nevertheless, a few other factors are worth mentioning.

Very little accurate data exists concerning stiction effects for

shaft motions of microradian amplitudes and 15 degree-per-hour

average velocities; however, on the basis of limited experimentation,

it has been possible to build the following picture of the nonlinear

friction effects acting in an inertial-platform gimbal structure. First,

because of structural flexibilities, small reversible, zero-hysteresis

motions occur before any sliding or rolling motion takes place.

Second, ball bearings break loose and begin to roll or slide before

electrical brushes in the torque motors and resolvers. Third, the

ball bearings, torque-motor brushes, and resolver brushes can be

moved back and forth through unexpectedly large angles before the

electrical "wipers" on the slip rings begin to slide. Fourth, the

scores of slip-ring wipers can come unstuck at different times creat-

ing a rather stochastic model. Fifth, the two gimbal bearings do not

move together; in fact, if a step torque is applied to the torque motor

at one bearing, the other bearing may even back up a slight amount

before beginning to follow. This reversal effect was not observed

until after it was predicted theoretically; it is not obvious either in

theory or practice. Sixth, neither ks nor kc is constant. Seventh,






magnetic effects within the torque motors are significant. No numbers

are given for the above effects because they vary from platform to

platform, and the numbers associated with any specific type of plat-

form are frequently considered to be company proprietary.

The above effects are pointed out merely to keep any readers

from taking the friction model used herein too literally. The friction

model with merely three effects, stiction, coulomb friction, and fric-

tional noise, is considered sufficient for development of the solnic

technique. Before attempting to apply the solnic technique to any

real physical plant, however, the reader is cautioned to consider the

friction model very carefully.


2. 5 The Bandwidth Limitation

The bandwidth limitation is a sufficiently broad topic for a

rather lengthy discussion. In the following paragraphs only a

few of the most salient features of this subject are mentioned.

It was decided that before any bandwidth limitation would

be considered for the nonlinear plant math model, it would have

to satisfy four requirements. First, it would have to be a limita-

tion which occurs frequently in current practice. Second, the

mathematical model of the limitation would have to be simple enough

to be mathematically tractable. Third, the mathematical model

would have to represent the real physical system accurately enough

for the results to be meaningful, at least in a qualitative manner.






Lastly, to facilitate comparison between the linear, time-invariant

controller and the nonlinear, time-varying controller, the bandwidth

limitation would have to be compatible with the analytical techniques

used for synthesizing both types of controllers. Because the solnic

concept was still quite nebulous, and because no list of applicable

analytical techniques was available, conformance or nonconformance

to the last requirement was difficult to assess. What made the prob-

lem particularly difficult was that the plant was now not only nonlinear

but also stochastic and noisy. Booton's "quasi-linearization" tech-

nique was considered but rejected because it does not apply to systems

with lag or feedback (10). Fortunately, the desired list of analysis

techniques suitable for stochastic, nonlinear control systems was

discovered in Pervozvanskii's newly translated book (11, pp. 88-91).

Piecewise linearization in the time domain is the analysis technique

which was eventually chosen.

Concurrently with the study of analysis techniques, a list of

possible bandwidth limitations was prepared and examined. Seven

items from that list are worthy of mention here.

1. Transfer functions of miniature, floated, integrating

gyroscopes

2. Arbitrary specification of the frequency at which the

loop gain equals unity

3. Dead time or transportation lag

4. Limited carrier frequency for the feedback signal





5. Limited sampling rate for the feedback signal

6. Stochastic noise mixed with the feedback signal

7. Variable structural flexibilities

Conformity to precedent demanded the use of limitation 2;

however, that limitation was rejected because of the philosophical

problems involved in defining the loop gain of a semi-open-loop,

nonlinear, identification and control system. Finally, limitation 6

was selected because it seemed to be the one encountered most often

in practice. Limitation 5 was also added, which simplifies the task

of simulating one of the Kalman filters. The presence of two separate

bandwidth limiting factors in the same plant is quite reasonable since

most real physical plants have several significant bandwidth limiting

effects acting simultaneously.


2. 6 Definition of the Plant

The plant model which was selected for study is an ultra-

simplified version of the gimbal servo system for an inertial-guidance

platform using a sample-data-type error signal. A block diagram of

the math model and a mode-switching table are shown in figure (2-1).

The switches are in position 1, 2, or 3 depending on the velocity, x2,

and the torque, x3, as shown by the mode table. The switches mathe-

matically simulate the piecewise-linear behavior of the nonlinear plant.

There are no such switches in the real physical plant. The control

variable, u(t), represents the voltage applied across the inductive















coil of the torque motor multiplied by the torque-motor scale factor

measured in inch-ounces of torque per volt; thus, u(t) and u_max are

stated in inch-ounces of torque rather than volts. This normalization

eliminates the torque-motor scale factor coefficient from the problem.

Notice that r(t) and c(t) are not computed separately and then subtracted,
because that would involve the subtraction of two large numbers to compute a
very small one. By forming the difference between r(t) and c(t) directly, full
use is made of the computer accuracy, and the outputs of all integrators are
bounded or have finite variances when the controller is operating.

The infinite-gain feedback path from x2(t) through the mode

switch to x2(t) is merely a reminder that x2(t) is clamped to zero

when the system is in mode 2. Once the math model had been reduced

to the three-linear-mode form shown in figure (2-1), the multimode-

linearization technique described in the next chapter was an obvious

next step in solving the problem.

Before going on to the next chapter, it is necessary to comment

on how the magnitudes of the math-model parameters were obtained

for table (2-1). The values of J, m, u_max, k_c, and k_s were obtained

from unclassified publications by companies other than Honeywell.

The value of Ts is fictitious but plausible. The statistics of x5 were

chosen in the range of earth rate and Schuler oscillations through

initial misalignment errors. The statistics of wl(n) and x5(t) are

fictitious but, hopefully, the reader will find them credible.







Table 2-1

Magnitudes of the Plant Parameters

Description                                   Symbol          Magnitude
Variance of position measurement
  noise, w1(n)                                VAR(y - x1)     [(10)^-6 rad]^2 per sample
Time between samples                          T_s             0.01 sec
Load inertia                                  J               0.5 ozf-in-sec^2
Torque-motor time constant                    τ_3             8(10)^-3 sec
Maximum torque                                u_max           20 ozf-in
Average coulomb-friction torque               k_c = x6        1.4 ozf-in
Average stiction torque                       k_s = x7        2.1 ozf-in
Power spectral density of white
  frictional noise, w2(t)                     σ_2^2
Power spectral density of white
  noise, w5(t)                                σ_5^2           1.8(10) ozf^2-in^2-sec
Correlation period of correlated
  frictional noise                            τ_5             1,000 sec
Variance of correlated frictional noise       VAR(x5)         (0.3 ozf-in)^2
Power spectral density of white
  noise, w4(t)                                σ_4^2           (10)^-14
Correlation period of commanded velocity      τ_4             1,000 sec
Variance of commanded velocity                VAR(x4)         [3(10)^-6 rad/sec]^2

ozf = ounce of force = 1 ounce mass x 386 in/sec^2









CHAPTER 3

DEVELOPMENT OF THE MULTIMODE-LINEARIZATION
TECHNIQUE FOR THE PRECISE HIGH-SPEED SIMULATION
OF NONLINEAR SYSTEMS

3. 1 Introduction

The ideal equipment for simulating the nonlinear plant and

the solnic system would have been a hybrid computer facility with

interconnected digital and analog computers. Another fairly good

combination would have been to use a general-purpose digital com-

puter for flexible programming, decision making, and control com-

bined with the use of a digital differential analyzer to perform

numerical integration. Because of equipment reliability and sched-

uling problems, it was decided to perform the entire simulation with

a single general-purpose SDS 930 digital computer using the FORTRAN

language.

Having decided to perform the simulation using a general-

purpose computer, the traditional technique to have chosen would have

been numerical integration. The disadvantage of using numerical

integration is the speed versus accuracy dilemma. If the time incre-

ments for numerical integration are chosen too small, the simulation

runs too slowly. If they are chosen too large, the results are inaccur-

ate. Accuracy is not always improved by making the time intervals

smaller because the finite word length in the computer causes






roundoff or truncation errors. The multimode-linearization technique

was developed while attempting to find the optimum iteration interval

to use for the numerical simulation. The unanticipated result was

that one should partition the solution into a series of piecewise-linear

steps in the time domain and compute from boundary to boundary in

single jumps using the appropriate state-transition matrices. The

technique is called multimode linearization. The dynamic response

of the nonlinear plant is described using a series of linear math

models. Each linear math model is called a mode. Only three modes

were required to describe the plant defined in figure (2-1). By using

a sufficiently large number of modes, it should be possible to approx-

imate almost any real nonlinear plant; therefore, the technique is

considered to be quite general.

The advantages of multimode linearization over numerical in-

tegration are both speed and accuracy. The multimode-linearization

technique is faster because it can step from mode-switching boundary

to mode-switching boundary in a single iteration. It is more accurate

than numerical integration because it uses an exact, rather than an

approximate, set of equations and because the reduction in the number

of computations reduces the roundoff errors.

The disadvantages of multimode linearization are that the non-

linear equations must be partitioned into linear modes, the state-

transition matrices for these modes must be determined, and

mode-switching times must be found. Techniques for partitioning,





determining state-transition matrices, and finding switching times

are presented in the remainder of this chapter.


3. 2 Partitioning the System Equations

The math model for the plant which is to be simulated is

defined by figure (2-1) and table (2-1). The plant equations have

already been partitioned into three linear modes. All of the noise

sources are denoted by the small letter, w, with various numerical

subscripts. In this chapter, stochastic effects are ignored so the

magnitudes of all of the noise sources are assumed to be zero. (The

effects of the noise will be considered in the next chapter.) The plant

state vector contains seven scalars denoted by the small letter, x,

with various numerical subscripts. For any time period during which

the control effort, u(t), remains equal to zero, the plant equations are

linear and homogeneous so the magnitude of the plant state vector at

the end of that period may be determined by multiplying its initial

magnitude by the appropriate state-transition matrix. If u(t) were

always equal to zero, the simulation problem would be trivial.

If the control system for the nonlinear plant is optimized, then

the control effort, u(t), will be switched rather than varied continu-

ously. As stated previously, the control effort, u(t), is a normalized

measurement of the voltage applied across the coils of a d-c, gimbal

torque motor. The torque applied to the friction-inertia load is pro-

portional to the torque-motor current, which in turn follows the applied






voltage but lags behind because of the torque-motor inductance. The

cost of the control effort is measured in ampere-seconds divided by

the time over which the cost is measured; therefore, the cost is

proportional to the average current. By applying "the optimality

principle, it may be shown that the control strategy must be piece-

wise optimal in the time domain. It has already been stated that the

new strategy requires that the load be moved in a series of steps

rather than continuously. Assuming that the size and frequency of the

steps are determined at a higher level in the control hierarchy, the

local optimization problem is obviously that of moving the load a given

distance with a minimum number of coulombs from the power supply.

Application of Pontryagin's maximum principle indicates that the

motor must be switched rather than operated linearly. This answer is

intuitively obvious to anyone who has designed class A, B, C and X

power amplifiers. Power dissipated in an amplifier is power wasted.

Now that the decision has been made to control the torque-

motor voltage with switches, a new problem arises. Something must

be done to control the inductive voltage across the torque motor when

the switches are opened. One efficient and practical solution is to

connect diodes between the inductive coil and the power supply so that

the inductive energy is returned to the power supply and so that voltage

across the torque motor reverses direction but changes very little in

magnitude when the turn-off transient occurs. Neglecting the voltage

drops across the diodes and switches, only three values of torque-





motor voltage need be considered in the simulation. The normalized

equivalents of these three voltages are: +u_max, zero, and -u_max.

Since it has been established that the control effort has three

discrete values, a simplification becomes possible. The control var-

iable, u(t), can be thought of as a state-variable which is piecewise

constant. Points in time at which u(t) changes from one level to

another can be treated in the same manner as points at which plant

mode changes occur. Now that the only nonzero-valued forcing func-

tion, u(t), has been absorbed into the state vector, the system may be

described by equations which are linear with no forcing functions be-

tween any consecutive switching points. Under these conditions,

state-transition matrices may be used to give the complete solutions.

(Later, when the particular solutions for noise are determined, they

will be added to the homogeneous solutions. In this chapter, noise is

assumed to be equal to zero.)

It is now possible to state the plant equations concisely in

matrix form.

To simplify notation, several new symbols will be defined.

    α = 1/J                                                            (3-1)

    β_3 = 1/τ_3                                                        (3-2)

    β_4 = 1/τ_4                                                        (3-3)

    β_5 = 1/τ_5                                                        (3-4)







    X(t) = [ x1(t)  x2(t)  x3(t)  x4(t)  x5(t)  x6  u(t) ]'            (3-5)

Notice that x7 has been omitted since it is constant, and it affects

X(t) only at the switching times.

Let F_i be defined such that when the plant is in mode i (where i equals 1, 2,
or 3) and u(t) is constant during the time interval which the equation is
written for, then

    Ẋ(t) = F_i X(t)                                                    (3-6)

The elements of the matrices F_1, F_2, and F_3 may be determined by

inspection of figure (2-1).

          | 0    1     0     -1     0     0     0   |
          | 0    0     α      0     α     α     0   |
          | 0    0   -β_3     0     0     0    β_3  |
    F_1 = | 0    0     0    -β_4    0     0     0   |                  (3-7)
          | 0    0     0      0   -β_5    0     0   |
          | 0    0     0      0     0     0     0   |
          | 0    0     0      0     0     0     0   |








          | 0    0     0     -1     0     0     0   |
          | 0    0     0      0     0     0     0   |
          | 0    0   -β_3     0     0     0    β_3  |
    F_2 = | 0    0     0    -β_4    0     0     0   |                  (3-8)
          | 0    0     0      0     0     0     0   |
          | 0    0     0      0     0     0     0   |
          | 0    0     0      0     0     0     0   |

          | 0    1     0     -1     0     0     0   |
          | 0    0     α      0    -α    -α     0   |
          | 0    0   -β_3     0     0     0    β_3  |
    F_3 = | 0    0     0    -β_4    0     0     0   |                  (3-9)
          | 0    0     0      0   -β_5    0     0   |
          | 0    0     0      0     0     0     0   |
          | 0    0     0      0     0     0     0   |


3. 3 Derivation of the State-Transition Matrices


The following equation will be tested to see if it is a solution

to equation (3-6). It is often called the matrizant equation, but it also

has other names (12, Vol. 2, pp. 149-160).


    X(t) = [ I + ∫_{t_0}^{t} F(z_1) dz_1
             + ∫_{t_0}^{t} ∫_{t_0}^{z_1} F(z_1) F(z_2) dz_2 dz_1
             + ∫_{t_0}^{t} ∫_{t_0}^{z_1} ∫_{t_0}^{z_2} F(z_1) F(z_2) F(z_3) dz_3 dz_2 dz_1
             + ... ] X(t_0)                                            (3-10)

Differentiating both sides of equation (3-10) with respect to t yields

    Ẋ(t) = [ 0 + F(t) + F(t) ∫_{t_0}^{t} F(z_2) dz_2
              + F(t) ∫_{t_0}^{t} ∫_{t_0}^{z_2} F(z_2) F(z_3) dz_3 dz_2 + ... ] X(t_0)      (3-11)

Reducing the subscripts of z by one,

    Ẋ(t) = F(t) [ I + ∫_{t_0}^{t} F(z_1) dz_1
                   + ∫_{t_0}^{t} ∫_{t_0}^{z_1} F(z_1) F(z_2) dz_2 dz_1 + ... ] X(t_0)      (3-12)

Substituting equation (3-10) into (3-12),

    Ẋ(t) = F(t) X(t)                                                   (3-13)

Thus, it may be seen that equation (3-10) is a solution to equation (3-6) when
the F matrix is a function of time. During the time between any two switching
points, the F matrix is constant. Let this constant matrix be denoted by F_i
where i equals 1, 2, or 3. Then equation (3-10) can be simplified as follows,
assuming that t_0 is the time of the first switching and that t is less than
or equal to the time of the next switching.

    X(t) = [ I + F_i ∫_{t_0}^{t} dz_1 + F_i F_i ∫_{t_0}^{t} ∫_{t_0}^{z_1} dz_2 dz_1 + ... ] X(t_0)

         = [ I + F_i h + (F_i h)^2/2! + ... + (F_i h)^n/n! + ... ] X(t_0)

    where h = (t - t_0)                                                (3-14)






Define

    e^(F_i h) = Σ_{n=0}^{∞} (F_i h)^n / n!                             (3-15)

then equation (3-14) may be written more concisely as follows

    X(t) = e^(F_i h) X(t_0)                                            (3-16)

For notational convenience, the three desired state-transition matrices will
be represented by the symbols Φ_1(h), Φ_2(h), and Φ_3(h), where the subscript
numbers correspond with the mode numbers.

    Φ_i(h) = e^(F_i h),    where i = 1, 2, or 3                        (3-17)

A time when either the plant-mode-description number, i, or the control-effort
level, u(t), changes is defined as a switching time. Let t_n represent any
arbitrarily chosen switching time, let t_{n+1} represent the next consecutive
switching time, and let i_n represent the plant-mode number between those two
switching times. Then

    X(t_{n+1}) = Φ_{i_n}(t_{n+1} - t_n) X(t_n^+)                       (3-18)

Note also that X(t_n^+) will always be equal to X(t_n) except for u(t), which
is the only element of X(t) that can change by a finite amount in zero time.
Once Φ_1(h), Φ_2(h), and Φ_3(h) have been determined, equation (3-18) can be
used for the plant simulation. The elements of the state-transition matrices
may be determined by computing the first several terms of the series in
equation (3-15) and then finding the closed-form solution by mathematical
induction. Sometimes it is easier to determine the elements by use of Laplace
transforms.






    Φ_i(h) = L^-1 [ (Is - F_i)^-1 ]                                    (3-19)

The series expansion method was used to obtain the results shown in

equations (3-20), (3-21), and (3-22).


             | 1    h    q13    q14    q15    α h^2/2    q17 |
             | 0    1    q23     0     q25    α h        q27 |
             | 0    0    q33     0      0      0         q37 |
    Φ_1(h) = | 0    0     0     q44     0      0          0  |         (3-20)
             | 0    0     0      0     q55     0          0  |
             | 0    0     0      0      0      1          0  |
             | 0    0     0      0      0      0          1  |

where,

    q13 = α β_3^-2 [exp(-β_3 h) - 1 + β_3 h]
    q23 = α β_3^-1 [1 - exp(-β_3 h)]
    q33 = exp(-β_3 h)
    q14 = -β_4^-1 [1 - exp(-β_4 h)]
    q44 = exp(-β_4 h)
    q15 = α β_5^-2 [exp(-β_5 h) - 1 + β_5 h]
    q25 = α β_5^-1 [1 - exp(-β_5 h)]
    q55 = exp(-β_5 h)
    q17 = α β_3^-2 [1 - β_3 h + (1/2) β_3^2 h^2 - exp(-β_3 h)]
    q27 = α β_3^-1 [exp(-β_3 h) - 1 + β_3 h]
    q37 = 1 - exp(-β_3 h)

    Φ_3(h) = Same as Φ_1(h) except that the four terms in rows 1       (3-21)
             and 2 of columns 5 and 6 (q15, q25, α h^2/2, and α h)
             are multiplied by -1.

             | 1    0    0     q14    0    0     0  |
             | 0    0    0      0     0    0     0  |
             | 0    0   q33     0     0    0    q37 |
    Φ_2(h) = | 0    0    0     q44    0    0     0  |                  (3-22)
             | 0    0    0      0     1    0     0  |
             | 0    0    0      0     0    1     0  |
             | 0    0    0      0     0    0     1  |

where the q's are the same as defined previously.
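As a numerical cross-check on the matrices just listed, the sketch below builds F_1
of equation (3-7) and the closed-form Φ_1(h) of equation (3-20) and compares them
through a matrix exponential. It is written in modern Python purely for illustration
(the original simulation was FORTRAN on the SDS 930), and the values taken for τ_4
and τ_5 are assumptions where table (2-1) is not fully legible.

import numpy as np
from scipy.linalg import expm

# Plant constants (table 2-1); tau_4 = tau_5 = 1,000 sec are assumed values.
J, tau3, tau4, tau5 = 0.5, 8e-3, 1e3, 1e3
alpha, b3, b4, b5 = 1.0/J, 1.0/tau3, 1.0/tau4, 1.0/tau5
h = 0.01                                   # sample interval T_s

# Mode-1 coefficient matrix F_1 of equation (3-7); state order (x1..x6, u)
F1 = np.zeros((7, 7))
F1[0, 1], F1[0, 3] = 1.0, -1.0
F1[1, 2] = F1[1, 4] = F1[1, 5] = alpha
F1[2, 2], F1[2, 6] = -b3, b3
F1[3, 3], F1[4, 4] = -b4, -b5

# Closed-form entries of Phi_1(h), equation (3-20)
q33, q44, q55 = np.exp(-b3*h), np.exp(-b4*h), np.exp(-b5*h)
q13 = alpha/b3**2 * (np.exp(-b3*h) - 1.0 + b3*h)
q23 = alpha/b3   * (1.0 - np.exp(-b3*h))
q14 = -(1.0 - np.exp(-b4*h))/b4
q15 = alpha/b5**2 * (np.exp(-b5*h) - 1.0 + b5*h)
q25 = alpha/b5   * (1.0 - np.exp(-b5*h))
q17 = alpha/b3**2 * (0.5*(b3*h)**2 - b3*h + 1.0 - np.exp(-b3*h))
q27 = alpha/b3   * (np.exp(-b3*h) - 1.0 + b3*h)
q37 = 1.0 - np.exp(-b3*h)

Phi1 = np.eye(7)
Phi1[0, 1:] = [h, q13, q14, q15, alpha*h*h/2.0, q17]
Phi1[1, 2:] = [q23, 0.0, q25, alpha*h, q27]
Phi1[2, 2], Phi1[2, 6] = q33, q37
Phi1[3, 3], Phi1[4, 4] = q44, q55

print(np.allclose(Phi1, expm(F1*h)))       # True: series and closed form agree

The same comparison for Φ_2(h) and Φ_3(h) follows by applying the zeroed rows and
sign changes noted above.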


3. 4 Comments Concerning the Technique Implementation

Equations (3-20) and (3-21) are so similar that when the com-

puter program was written, the mode 1 and mode 3 update operations

were performed with the same subroutine. The equations for mode 2

are so simple that no subroutine was written for that mode.

The problem of determining the switching times was compli-

cated by the stochastic noise. The solutions will be stated in section

4. 5. The problem of defining switching boundaries and system "states"

(modes) has been treated with great mathematical detail in a recent

article by H. S. Witsenhausen which is recommended reading for

anyone wishing to pursue the subject further (13).
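The implementation remarks of this section can be pictured as a short driver loop.
The sketch below is a modern Python rendering offered only for orientation (the
original simulation was written in FORTRAN); the callable names, the schedule
format, and the boundary-matching details shown are assumptions, not the
dissertation's actual program.

def simulate_multimode(x, switch_schedule, phi_of_mode, mode_of):
    """Multimode-linearization simulation loop (noise-free).

    x: numpy state vector ordered (x1..x6, u).
    switch_schedule: list of (t_switch, new_u_level) pairs in time order, where
    new_u_level is +u_max, 0, or -u_max, or None if only a plant-mode change
    occurs at that time.  phi_of_mode(i, h) returns Phi_i(h) as a numpy array;
    mode_of(x) returns the current mode number.  All of these are assumed
    callables supplied by the caller.
    """
    for (t0, _), (t1, new_u) in zip(switch_schedule[:-1], switch_schedule[1:]):
        i = mode_of(x)
        x = phi_of_mode(i, t1 - t0) @ x     # eq. (3-18): one jump per interval
        if new_u is not None:
            x[6] = new_u                    # u(t) changes by a finite step at t1
        if mode_of(x) == 2:
            x[1] = 0.0                      # entering the stuck mode clamps x2
    return x

The essential point is that the state is advanced in one jump per interval, so the
cost of the simulation is governed by the number of switchings rather than by a
fixed integration step.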












CHAPTER 4


EXTENSION OF MULTIMODE LINEARIZATION
TO INCLUDE STOCHASTIC NOISE

4. 1 Introduction

The multimode-linearization technique developed in the

preceding chapter is faster and more accurate than numerical inte-

gration for simulating the deterministic portion of the plant model

defined by figure (2-1). In this chapter, the multimode-linearization

technique is extended to include the simulation of the same plant with

the effects of stochastic noise added. An even greater saving of

computer time occurs when numerical integration is replaced by

multimode linearization when stochastic noise is present in the model.

The prime reason for this saving is that fewer noise samples are

needed, and it takes a fairly large number of computer operations to

generate a single normally distributed noise sample.

Since the multimode-linearization technique describes the

plant with equations which are linear between switching points, the

principle of superposition may be applied to solutions of the equations

over the time intervals between switching points. Using this super-

position principle, the effect of the noise will be determined in this

chapter and simply added to the deterministic solutions obtained in

the previous chapter. The noise simulation problem is solved in two






parts. First, the state-vector statistics are computed. Then, vector

noise samples with these statistics are generated. The technique is

sufficiently general that vector noise samples can be generated to fit

any physically realizable covariance matrix. The elements of the

vectors are normally distributed with zero means.


4. 2 Derivation of the Statistics of the Particular
State-Vector Solutions for Stochastic Driving Functions

When the noise vector W is defined as follows, the complete

solution for the state vector may be written in a simple form.

W(t) = [0, w2(t), 0, w4(t), w5(t), 0, 0]'    (4-1)

Adding the noise vector W(t) to equation (3-6) yields

Ẋ(t) = Fi X(t) + W(t)    (4-2)

It may be readily verified by differentiation that the complete solution to equation (4-2) may be obtained by adding a convolution integral to

the solution given in equation (3-18).

X(tn+1) = φ_in(tn+1 - tn) X(tn) + ∫ from tn to tn+1 of φ_in(tn+1 - v) W(v) dv    (4-3)








For brevity, define

N(n) = ∫ from tn to tn+1 of φ_in(tn+1 - v) W(v) dv    (4-4)

Clearly N(n) is the stochastic component which must be added to the

deterministic solution given by equation (3-18) in order to obtain the

complete solution for the state vector. Assume that the outputs of

the separate white-noise sources are uncorrelated, with zero means,

and the finite variances specified in table (2-1).

By taking the expected value of both sides of equation (4-4),

it may be easily shown that the mean of N(n) is equal to zero (all

elements of the vector equal zero).

MEAN [N(n)] = E [N(n)] = 0 (4-5)

For notational compactness, the symbol, Eij, and the term

"ij element matrix" will be used to represent a matrix of appropriate

dimensions in which the element in the ith row and jth column is equal

to unity and all other elements are zeros. The letter, E, without

subscripts represents the statistical expectation operation. The

transpose of a matrix or vector will be denoted by an apostrophe.

For example, the transpose of F will be represented by F'. Then

E[W(t1) W(t2)'] = Q δ(t1 - t2)    (4-6)

where

Q = σ2² E22 + σ4² E44 + σ5² E55    (4-7)

in which σj² denotes the variance of wj(t) specified in table (2-1), and δ( . ) represents the Dirac-delta function.




COV[N(n)] = E[N(n) N(n)']

= E[ ∫∫ φ_in(tn+1 - u) W(u) W(v)' φ_in(tn+1 - v)' du dv ]

= ∫ from 0 to h of φ_in(v) Q φ_in(v)' dv,     where h = tn+1 - tn    (4-8)

Equation (4-8) is the general solution. To evaluate the elements of the covariance matrix for the plant defined by figure (2-1), it is convenient to separate equation (4-8) into three terms.

COV[N(n)] = σ2² ∫ from 0 to h of φ_in(v) E22 φ_in(v)' dv + σ4² ∫ from 0 to h of φ_in(v) E44 φ_in(v)' dv + σ5² ∫ from 0 to h of φ_in(v) E55 φ_in(v)' dv    (4-9)


The covariance matrix may be evaluated to any desired
degree of accuracy by hand, but the digital computer chosen for the

simulation is only accurate to about 9 decimal digits. By neglecting
terms which do not affect the six most significant figures, the follow-

ing approximate (only good to one part per million) solution was ob-

tained for COV[N(n) using the mode 1 state-transition matrix defined

by equation (3-20).







For mode 1,

COV[N(n)] ≈ σ2² [E11 h³/3 + (E12 + E21) h²/2 + E22 h]

    + σ4² [E11 h³/3 + (E14 + E41) h²/2 + E44 h]

    + σ5² [E11 h⁵/20 + (E12 + E21) h⁴/8 + (E15 + E51) h³/6 + E22 h³/3 + (E25 + E52) h²/2 + E55 h]    (4-10)

For mode 2,

COV[N(n)] ≈ σ4² [E11 h³/3 + (E14 + E41) h²/2 + E44 h]    (4-11)

For mode 3,

COV[N(n)] = same as for mode 1, except that the signs of the E15, E51, E25, E52 terms are negative    (4-12)


It is interesting to note that if the compact element-matrix notation

had not been used, equation (4-10) would take up a whole type-

written page. The distributions of the elements of N(n) are normal

because they are assumed to be generated by Wiener processes;

hence, their statistics have been completely determined.
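As a numerical cross-check on closed-form results such as equation (4-10), the covariance integral of equation (4-8) can be evaluated directly by quadrature. The sketch below (Python, illustrative only) does this for a two-state stand-in; the transition matrix and the variance value are assumptions made for the example, not plant parameters from table (2-1).

    import numpy as np

    def noise_covariance(phi_of, Q, h, steps=200):
        # Evaluate COV[N(n)] = integral from 0 to h of phi(v) Q phi(v)' dv
        # (equation (4-8)) with a simple midpoint rule.
        cov = np.zeros_like(Q, dtype=float)
        dv = h / steps
        for k in range(steps):
            p = phi_of((k + 0.5) * dv)
            cov += p @ Q @ p.T * dv
        return cov

    # Illustrative case: white noise of variance sigma2 driving the velocity
    # of a (position, velocity) pair.  The result is approximately
    # sigma2 * [[h^3/3, h^2/2], [h^2/2, h]], the pattern seen in equation (4-10).
    sigma2 = 1.0e-4
    h = 0.1
    Q = np.diag([0.0, sigma2])
    phi_of = lambda v: np.array([[1.0, v], [0.0, 1.0]])
    cov = noise_covariance(phi_of, Q, h)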






4. 3 Generation of Stochastic Vectorial Samples in
Accordance with any Physically Realizable Covariance Matrix

The objective is to generate stochastic vectors N(i) such that

the elements of N(i) are normally distributed with means equal to zero

and that for any prespecified realizable covariance matrix C,

COV[N(i)] = C    (4-13)

Presume the existence of a source of independent normally

distributed samples with mean equal to zero and variance equal to

unity. Construct vectors, Z(i), the elements of which are randomly

chosen from the uncorrelated, zero-mean, unity-variance, normally

distributed population. Find a matrix, A (not necessarily square)

such that when vectors, N(i), are generated as follows, they will

fulfill all of the requirements given in the preceding paragraph.

N(i) = A Z(i) (4-14)

The elements of N(i) fulfill the requirement that they be

normally distributed because they are constructed by performing

linear operations on normally distributed random variables. Taking

the expectation of both sides of equation (4-14)

MEAN[N(i)] = E[N(i)] = A E[Z(i)] = A 0 = 0    (4-15)

Hence, the first two of the three requirements are readily satisfied.

Satisfying the third requirement is not quite so trivial.

COV[N(i)] = E[N(i) N(i)'] = E[A Z(i) Z(i)' A'] = A E[Z(i) Z(i)'] A' = A A'    (4-16)

Now the problem is reduced to that of finding a matrix A, which is not

necessarily square, such that






A A' = C, a specified positive-semi-definite matrix (4-17)

But there is an infinite number of solutions. For example, if A0 is one solution and G is any orthogonal matrix of appropriate order, then A0 G is another solution, which may be proven as follows. By the definition of orthogonality,

G' = G⁻¹    (4-18)

COV[(A0 G) Z(i)] = A0 G E[Z(i) Z(i)'] G' A0' = A0 G G⁻¹ A0' = A0 A0' = C    (4-19)

Therefore, there are at least as many solutions as there are ortho-

gonal matrices of appropriate order, which means that there are

infinitely many solutions. The multiplicity of solutions hinders

rather than helps in the effort to find one general solution. The prob-

lem was to find a set of constraints sufficient to insure that the solution

is always unique, without constraining the problem so much that no

solution exists. Also, the solution should be in a form which mini-

mizes the computer time required for generating the vectors, N(i).

After some study, the following solution was found.

N(i) = R B Z(i) = A Z(i) (4-20)

The constraints are that: R is a rectangular matrix of order n x m

with only zeros and ones for elements which are chosen so that

R' R = I, an m x m identity matrix (4-21)

B is an m x m lower-triangular matrix with elements b_ij

b_ij = 0 for j > i    (4-22)

b_ij > 0 for i = j    (4-23)





The reasons for choosing the constraints this way will become obvious.

The R matrix is used to reduce the n x n, positive-semi-definite

covariance matrix, C, to the m x m positive-definite matrix, D. The

constraint in equation (4-21) simplifies the arithmetic. The constraint

in equation (4-22) plus the reduction of C to D minimizes the number

of terms which must be included in the computer program. Equation

(4-23) is the additional constraint required to make the solution unique.

If equation (4-20) is to satisfy equation (4-13), then

COV [R B Z(i) = C (4-24)

R B B' R' = C (4-25)

R' R BB'R'R = R' CR = D (4-26)

B B' = D, a positive-definite matrix (4-27)

One definition of R which satisfies the above requirements is

R = Σ (i = 1 to m) E_k(i),i    (4-28)

where E_k(i),i is an n x m element matrix, k(i) is the index number of the ith nonzero row of C, and m is chosen equal to the number of nonzero rows in C.

Other trivially different R matrices may be defined by permuting the

columns of the R matrix defined above. Notice that when C is posi-

tive definite, R becomes an identity matrix.






Now, the procedure for computing B will be derived. The

small letters, r and c, will be used as subscripts to denote row and

column index numbers, respectively. Equations (4-27), (4-22), and

(4-23) will be used in that order.

d_rc = Σ (j = 1 to min(r, c)) b_rj b_cj    (4-29)

d_11 = b_11 b_11

b_11 = positive √(d_11)    (4-30)

d_r1 = b_r1 b_11

b_r1 = d_r1 / b_11 ,     r = 2, 3, . . . , m    (4-31)

d_22 = b_21 b_21 + b_22 b_22

b_22 = positive √(d_22 - b_21²)    (4-32)

d_rc = b_rc b_cc + Σ (j = 1 to c-1) b_rj b_cj ,     1 < c < r

b_rc = [d_rc - Σ (j = 1 to c-1) b_rj b_cj] / b_cc    (4-33)

b_cc = positive √(d_cc - Σ (j = 1 to c-1) b_cj²) ,     1 < c = r    (4-34)


Equations (4-29) through (4-34) were derived after studying some

similar equations in a book by V. N. Faddeeva (14, pp. 81-82). Using





these equations, the triangular matrix, B, may be found as

follows.

(1) Set b_rc = 0 wherever c exceeds r.

(2) Use equation (4-30) to find b_11.

(3) Use equation (4-31) to find b_21, b_31, . . . , b_m1.

(4) Use equation (4-32) or (4-34) to find b_22.

(5) Use equation (4-33) to find b_32, b_42, . . . , b_m2.

(6) Use equation (4-34) to find b_33.

(7) Use equation (4-33), and so on, alternating between the two equations until all of the columns of B have been found. A computational sketch of this procedure is given below.


4. 4 Statement of the Specific Solutions

The covariance matrix for N(n) for mode 1 is given in equation

(4-10). Samples with the statistics of N(n) could be generated using

only four scalar, zero-mean, unity-variance samples per vector

sample because the covariance matrix has only four nonzero rows;

for conceptual clarity and algebraic convenience, the terms on the

right-hand side of equation (4-10) were treated separately.

For mode 1, N(n) is constructed as the sum of three terms, each consisting of a lower-triangular matrix (built from the element matrices, the noise variances, and powers of h so that the sum has the covariance given by equation (4-10)) multiplying a vector of independent, zero-mean, unity-variance samples zj(n).    (4-35)

For mode 2, N(n) is a single term of the same form, built from E14, E44, σ4, and powers of h, multiplying the samples z1(n) and z2(n).    (4-36)

For mode 3, N(n) is the same as for mode 1, except that b31 and b32 have negative signs in the third triangular matrix.    (4-37)


4. 5 Comments Concerning Implementation of the Technique

A computer subroutine was written which generates uniform

random numbers and sums twelve of these to construct approximately

normally distributed, zero-mean, unity-variance, random samples.

The particular solutions defined by equations (4-35), (4-36), and (4-37)

are added to the homogeneous solutions defined in Chapter 3.
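A sketch of such a generator follows (Python, illustrative only; the original was a FORTRAN subroutine). Each uniform(0, 1) sample has mean 1/2 and variance 1/12, so the sum of twelve samples, less six, is approximately normal with zero mean and unity variance.

    import random

    def approx_normal():
        # Sum twelve uniform(0, 1) samples and subtract their mean (6) to get
        # an approximately normal, zero-mean, unity-variance sample.
        return sum(random.random() for _ in range(12)) - 6.0

    samples = [approx_normal() for _ in range(10000)]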

The switching time, tn+l, is a function of the state variables,

so that even when it can be stated as an explicit function of X(tn), the

noise N(n) can cause the deterministically predicted value to be in error. The only case in which noise affects the switching time is in

seeking the time at which the velocity reaches zero. Using the

Newton-Raphson method, the original guess, g1, for tn+1, and successive guesses are adjusted essentially as indicated in equation (4-38).

g_(i+1) = g_i - [final velocity(g_i)] / [final acceleration(g_i)]    (4-38)


In order to avoid bias by rejecting samples in a non-random manner,

the values of the stochastic vectors, Z(n), are held constant during

the convergence process. This not only assures that N(n) will have

the precomputed statistics but also makes convergence of the itera-

tive procedure possible. The process is automatically terminated

as soon as the error becomes negligible. The process is usually

terminated at the end of the third iteration. It is necessary to switch

u(t) to zero at the instant that the torque-motor current decays to

zero. The time at which u(t) should be switched to zero can be

stated explicitly and solved deterministically. This switching time

is always computed first and used as the first guess in equation (4-38).

If the first correction computed by equation (4-38) is positive, then

the switching of u(t) to zero occurs before the velocity decreases to

zero. If the first correction is negative, then the velocity decreases

to zero first. In either case, the computer is programmed to make

the appropriate decisions and update the state vector from switching

point to switching point. Finally, when the prespecified sampling






time precedes any future switching times, that sampling time is

treated like a switching point and the plant is updated to that sampling

time.
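A sketch of the switching-time search of equation (4-38) is given below; the velocity and acceleration functions are hypothetical placeholders for the controller's evaluation of the plant model with the noise samples Z(n) held fixed, as described above.

    import math

    def switching_time(g1, velocity, acceleration, tol=1e-9, max_iter=10):
        # Newton-Raphson search for the time at which the velocity reaches
        # zero (equation (4-38)); usually converges by the third iteration.
        g = g1
        for _ in range(max_iter):
            step = velocity(g) / acceleration(g)
            g = g - step
            if abs(step) < tol:
                break
        return g

    # Illustrative use with a simple exponentially decaying velocity model.
    beta = 4.0
    vel = lambda t: math.exp(-beta * t) - 0.5
    acc = lambda t: -beta * math.exp(-beta * t)
    t_zero = switching_time(0.0, vel, acc)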

The multimode-linearization equations are considerably

harder to program for a computer than the numerical-integration

equations; however, the resulting gains in speed and accuracy make

the extra effort worthwhile even for certain linear problems.












CHAPTER 5


SOLUTION OF A MULTIMODE-IDENTIFICATION PROBLEM
BY USING MODE ESTIMATORS, PARTITIONED AND MODIFIED
KALMAN FILTERS, AND BOUNDARY-CONDITION MATCHING

5. 1 Introduction

The purpose of the identification process is to estimate the

values of the elements of the plant state vector with sufficient accur-

acy to enable the controller to adequately predict the response of the

plant to various appropriate control actions. Unknown plant param-

eters such as stiction and coulomb friction are also considered as

plant-state-vector elements in the identification process. There

are many applications of the Kalman-filtering equations to plant-

parameter identification in current literature; however, the plants

are generally assumed to be linear. Detchmendy and Sridhar have

done some work on the estimation of states and parameters in noisy

nonlinear systems; however, the systems which they studied were

linear in the small (15). That is, the equations of the plants they

studied had continuous first and second derivatives. When the

assumption of linearity is made, the Detchmendy-Sridhar equations

can be reduced to the same form as the Kalman equations and equated

term by term; therefore, the Detchmendy-Sridhar equations may be

considered to be a generalization of the Kalman equations for







application to nonlinear systems which are linear in the small. The

plant equations which are being considered here have abrupt discon-

tinuities with infinite derivatives at the discontinuous points. The

only applicable reference to discontinuous-plant equations which was

found in the search of modern control literature was the previously

mentioned paper by Witsenhausen.

In this chapter, the Kalman-filtering technique is applied to

a nonlinear plant with abrupt discontinuities. The same multimode-

linearization techniques that were developed in Chapters 3 and 4 are

applied to the Kalman-filtering equations. The general principles

of the multimode-Kalman-filtering technique are described in the

next section, and the specific application of this technique to the plant

presently being studied is discussed in the remainder of this chapter.


5. 2 General Principles of Multimode-Kalman Filtering

The Kalman-filtering equations were derived under the assump-

tion that the errors in estimating the state vector are linearly related

to expected value of the difference between the predicted value and

the actual value of some scalar or vector observation. For brevity,

the actual observed value minus the predicted observation value is

commonly referred to as the "observation residual. When the

relationships between the state-vector errors and observation resid-

uals are linear and known, Kalman filtering is the optimum estimation

technique (assuming that the noise and errors are normally





independently distributed). When the relationships are mildly non-

linear and the partial derivatives of the observation residuals with

respect to the state vector are available, then Detchmendy and Sridhar

have shown that their filtering process yields estimates which con-

verge to the correct values; however, no claim of optimality was or

could be made. The Detchmendy-Sridhar filter can not be used if

the partial derivatives of the observation residual with respect to the

state vector are unavailable. When multimode linearization is used,

some of the partials are available and some are not. Examples will

be given later. For convenience, the state vector for the system can

be factored into three parts. The first part is deterministic and can

be predicted accurately without any need for filtering. The second

part is linearly related to the observation residuals (for a certain mode

or certain modes) and can be estimated using the standard-Kalman-

filtering equations (for that mode or those modes). The third part is

nonlinearly related to the observation residuals, but theoretical study

indicated and simulation proved that it can be successfully identified

by making the following four modifications to the standard-Kalman-

filtering equations.

1. The observations are predicted by using the nonlinear

plant model in the controller rather than by linear

equations. This is similar to the Detchmendy-Sridhar

technique.






2. The matrix relating the observations to the state

vector is replaced by a matrix of quasipartials

(first-order partial differences). These are ob-

tained by running the plant model in the controller

faster than real time, varying one parameter at

a time, and dividing the change in the predictions

by the change in the parameters.

3. The corrections to the state vector are applied

with full weighting, but the reductions to the co-

variance matrix are multiplied by some empirically

determined weighting factor, such as 0. 5, in order

to compensate for the fact that the corrections are

inaccurate because of the nonlinearities.

4. In certain cases, it is helpful or even necessary

to add certain nonlinear constraints to the correc-

tions because of the nonlinearities in the plant. No

rules have been formulated yet to aid in adding these

constraints. At present, the constraints are deter-

mined by the use of physical insight, logic, and

experience.
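A minimal sketch of the quasipartial computation called for in modification 2 is given below. The predict function is a hypothetical placeholder for running the plant model in the controller faster than real time, and the toy model in the example is an assumption used only to exercise the routine.

    def quasipartials(predict, params, deltas):
        # First-order partial differences of the predicted observation with
        # respect to each parameter: perturb one parameter at a time, re-run
        # the plant model, and divide the change in the prediction by the
        # change in the parameter.
        base = predict(params)
        H = []
        for name, d in deltas.items():
            perturbed = dict(params)
            perturbed[name] = perturbed[name] + d
            H.append((predict(perturbed) - base) / d)
        return H

    # Illustrative toy prediction model (not the plant of figure (2-1)).
    toy_predict = lambda p: p["ks"] ** 2 + 3.0 * p["kc"]
    H = quasipartials(toy_predict, {"ks": 2.0, "kc": 1.0},
                      {"ks": 0.01, "kc": 0.01})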

This modified-Kalman-filtering technique is not optimal, but

it does produce estimates which converge to the true values and, in

the simulation, the estimates converged quite rapidly.






The state-vector estimates and covariance matrices in the

Kalman equations are projected forward in time using the multimode-

linearization techniques developed in Chapters 3 and 4. The system

equations and filtering equations are partitioned into the same modes.

Although the multimode-linearization technique causes the system

equations to become piecewise linear, it does not necessarily line-

arize the filter equations. One source of nonlinearity in the filtering

equations is the fact that the plant mode is generally one of the param-

eters which must be estimated; hence, the filter must decide which

set of linear equations to use at any given time. Another reason for

the nonlinearity is that errors in predicting the observations in one

mode are often nonlinear functions of state-vector-estimation errors

made in some previous mode. Approximate solutions to both of these

problems have been found for the specific system being studied. It

would not be a trivial task to find a general solution or set of solutions.

The estimation of the system mode becomes an exceedingly

complex problem if an optimal solution is sought. If sufficient a

priori knowledge is available concerning the system equations, sta-

tistics, and cost functions, then the use of a Bayesian strategy would

be indicated (16, p. 43). Because of constraints in computer size and

speed and the difficulty of computing probability distribution functions

for nonlinear systems, it is generally not feasible to use a Bayesian

strategy for estimating the plant mode. One of the difficulties in

attempting to determine an optimal strategy is the problem of assigning







numbers to the cost of using an observation with the wrong set of

mode equations in an attempt to improve the estimate of the state

vector. Many observations used with the correct mode equations are

usually required to offset the effect of a single observation which was

improperly used. The situation will vary considerably depending upon

the nature of the nonlinearities, and in some cases the problem will

have a simple solution. One scheme which is applicable to a large

number of cases is to have as many Kalman filters as the system has

modes and to try the observation in all of the filters simultaneously.

The filter with the smallest observation residual or smallest weighted

observation residual would be selected as the "best estimator. The

"best estimator" would be used to update the estimated system-state

vector and then all of the other filters would be reset to agree with

the "best estimator. The strategy which was used for mode estima-

tion in the simulation was to determine the confidence level for each

mode estimate and to ignore observations when the probability of

error in estimating the mode was greater than a preset empirical

constant. The strategy did not require much computer space or time

and gave satisfactory results. Once the starting transients had

decayed, all observations were used because the filter was able to

predict the mode-switching times quite accurately.

A large amount of theoretical work remains to be done in the

development and optimization of multimode-filtering techniques;

however, sufficient knowledge is already available for obtaining





useful results in certain applications. The identification of the values

of the parameters in the plant defined in Chapter 2 is one such appli-

cation. The remainder of this chapter describes the filtering strategy

which was used for that identification process.


5.3 The Specific Multimode Kalman Filter Used in the Simulation

The function of the multimode Kalman filter is to identify

the values of the plant parameters sufficiently well to enable the

controller to adequately predict the response of the plant to various

appropriate control actions. To accomplish this, it is only necessary

to apply corrections to four estimated parameters: x1(t), x4(t), ks, and kc. It is not necessary to adjust the estimate of the output shaft velocity, x2(t), because the velocity estimate is set to zero every

time the controller model of the plant enters the passive mode (mode

2). No observations are available when the velocity is nonzero;

because, during normal operation, the corrective actions are applied

immediately after the receipt of a sampling pulse from the error

sensor, and these actions are completed, with the velocity back at

zero again, before the receipt of the next pulse. For this reason,

no provision is made for using observations taken while the system

is in mode 1 or mode 3. There is no need to adjust the estimates of

torque-motor torque, x3(t), or the control effort, u(t), because the

control effort is directly observable by the controller and x3(t) is a

noise-free deterministic function of u(t). Since the main concern in






the filter study was the starting transients, the simulation runs were

all less than 100 seconds in duration. Over that period of time, the

value of x5 was relatively constant so that it could not be separated from the values of ks and kc by the filter. Since the separation was impossible, it was not attempted. The estimate of ks estimates the sum of ks plus x5. The estimate of kc estimates the sum of kc plus x5. The addition of a separate estimate for x5 in the filtering equations would be trivially simple from a theoretical point of view.

A large saving in computer programming was obtained by

partitioning the four-element system-error-state vector into two parts

of two elements each. When the confidence level for the estimate

that the system is in mode 2 has risen to approximately 0. 9987, then

the sampled-data measurements, y(n), of the system error, x1(t), are accepted and used for improving the estimates of x1(t) and x4(t).

Since x2 is constant and equal to zero in mode 2, it is obvious from

inspection of figure (2-1), that the estimates of x1(t) and x4(t) are

sufficient statistics for predicting the observations, y(n), when the

system is in mode 2. In mode 2, the relationships between predicted

observations and the estimates of xl(t) and x4(t) are linear and the

observations are discrete, so the standard-discrete-Kalman-filtering

equations are used for that portion of the estimation process (17).

The task of improving the estimates of ks and kc is not quite so simple as that of estimating x1 and x4. If the estimates of ks and kc are in error when the controller plant model is in mode 1 or mode 3,




then the controller will predict x1(t) incorrectly and will also switch into mode 2 at the wrong time so that the initial value for x1(t) in mode 2 will be in error by some nonlinear function of the nominal values of all of the parameters in the preceding mode interacting with the errors in estimating ks and kc. Clearly, the errors in estimating ks, kc, and x1(t) are correlated. If the correlations were linear and

known, a four-element error-state vector and conventional Kalman

filtering would solve the estimation problem. In this particular non-

linear problem, the computations are simplified and the nonlinear

constraints are more easily added when the linear and nonlinear

estimation processes are separated. The statistics used for coupling

the two estimation processes are the means and the variances of the

estimated shaft motions, Ac, which occur during the active phases

(mode 1 or mode 3) of the control cycles in response to commanded

corrective actions. If the errors of the estimates were normally

distributed, then the means and variances of the /c estimation errors

would be "sufficient statistics" for the estimation process.

Suppose that the selected corrective action is to apply control

effort, umax, for a period of time, Δtu. By applying this control effort to the plant model in the controller, it is predicted that this corrective action will cause the output shaft to move by an angle, Δcp. The error in predicting Δc is not normally distributed but usually the distribution is close to normal. The error in estimating Δc is estimated with the aid of the quasipartials which were

discussed earlier.








E[(Δcp - Δc)²] ≈ Hb Pb Hb'    (5-1)

Hb = [(quasipartial of Δcp with respect to k̂c), (quasipartial of Δcp with respect to k̂s)], with Δtu held constant    (5-2)

k̂c = the estimate of kc    (5-3)

k̂s = the estimate of ks    (5-4)

Pb = | E[(k̂c - kc)²]              E[(k̂c - kc)(k̂s - ks)] |
     | E[(k̂s - ks)(k̂c - kc)]      E[(k̂s - ks)²]          |    (5-5)

The means and variances of the Δcp predictions are used to reset the estimates of x1(t) and the covariance matrix for estimation error in the mode 2 linear Kalman-filter equations. An additional variance term was computed and added to equation (5-1) to account for the large estimation errors which would occur if the output shaft did not move at all. As the control-effort-application period, Δtu, is gradually decreased, the magnitude of the output motion, Δc, will gradually decrease to a minimum value, Δc,min, and then drop abruptly to zero. When there are stochastic factors present, the value of Δc,min and the corresponding minimum useful control-effort-application period, Δtu,min, are not deterministically predictable. Values of Δtu are never chosen smaller than Δtu,min

by intent; however, during the first few seconds of system operation,

when the estimate of ks is still fairly inaccurate, such errors are

common. The error in estimating Ac is not distributed normally;

however, Detchmendy and Sridhar showed that the convergence of





the filtering process does not depend upon the distributions being normal or even upon having the correct estimates for the variances. The variance approximated by equation (5-1) is the variance of Δcp estimated under the condition that Δc ≠ 0. For convenience, it is referred to simply as "the conditional variance of Δcp." When the conditional restriction is removed, the variance is estimated by the following approximation and called "the total variance of Δcp."

Define

Δcp,cond = the prediction of Δc given that Δc ≠ 0    (5-6)

Δcp,total = Δcp,cond Prob(Δc ≠ 0)    (5-7)

Then,

E[(Δcp,total - Δc)²] ≈ Hb Pb Hb' + (Δcp,cond)² Prob(Δc = 0) Prob(Δc ≠ 0)    (5-8)

E[(Δcp,cond - Δc)²] ≈ Hb Pb Hb' + (Δcp,cond)² Prob(Δc = 0)    (5-9)

E[(Δcp,cond - Δc)² | Δc ≠ 0] ≈ Hb Pb Hb'    (5-10)
If the objective is to minimize the error in estimating x1(t) and x2(t) for all cycles, then equations (5-7) and (5-8) should be used for resetting the mode 2 filter. On the other hand, if the object is to obtain the best estimates possible for correcting k̂s and k̂c, then equations (5-6) and (5-9) should be used because k̂s and k̂c are not updated in any particular controller cycle unless the confidence level for the hypothesis that Δc ≠ 0 is equal to or greater than approximately 0.98. Equations (5-6) and (5-9) were chosen because errors





in estimating ks and kc would degrade the system performance more severely and for longer periods of time than corresponding errors in x̂1(t) and x̂4(t).

The following method is used for estimating the time, tmax,

at which the confidence level becomes 0. 9987 for the assumption that

the system has returned to the passive mode (mode 2).

t_max = t_nominal + 3 √(Htb Pb Htb')    (5-11)

Htb = [(quasipartial of t_nominal with respect to k̂s), (quasipartial of t_nominal with respect to k̂c)], with Δtu held constant    (5-12)

The values of Htb are found as a by-product of determining those of Hb; the only extra computer operations required are two divisions and two storage operations.

The observations in mode 2 are used to improve the estimates of x̂1(t) and x̂4(t). At the end of mode 2, x̂1(t) and x̂4(t) are used to construct an estimate, Δce, which is an indirect measurement of the output motion, Δc, which occurred as a result of the last corrective action. Suppose that x̂1(ta) is the estimate of x1(t) which was made just before the last corrective action and that x̂1(tb) and x̂4(tb) are the estimates made just prior to the next corrective action. Since x4(t) is effectively constant for such short time periods, the shaft motion, Δc, may be estimated as follows.

Δce = x̂1(tb) - x̂1(ta) + x̂4(tb)(tb - ta)    (5-13)

VAR[Δce] ≈ VAR[x̂1(tb)] + VAR[x̂1(ta)] + 2 COV[x̂1(tb), x̂4(tb)](tb - ta) + VAR[x̂4(tb)](tb - ta)²    (5-14)




Using the parameters defined above and continuing to follow the notation of R. C. K. Lee, the modified-Kalman-filtering technique for identifying ks and kc may be outlined using four matrix equations

(17).

P(n+1|n) = P(n|n) + COV[N(n)]    (5-15)

The value of COV[N(n)] is given by equation (4-10) or (4-12).

K(n+1) = P(n+1|n) Hb' [Hb P(n+1|n) Hb' + VAR[Δce]]⁻¹    (5-16)

[k̂c, k̂s]'(n+1) = [k̂c, k̂s]'(n) + K(n+1) [Δce - Δcp,cond]    (5-17)

P(n+1|n+1) = [I - a K(n+1) Hb] P(n+1|n)    (5-18)

The scalar, a, in equation (5-18) would be equal to unity for ordinary-

linear Kalman filtering; however, the nonlinear filtering equations

make use of certain approximations so that the covariance matrix of

estimation error, Pn+1 n+f is not reduced as much by the (n+l)th

correction as it would have been if the correction had been exactly

optimum. Empirically, it was found that when the value of the

coefficient, a, was set equal to 0.5, the filter operated very satisfactorily when started with initial estimates which were in error by factors of two. As the errors become smaller, the filtering

equations become more accurate; therefore, the coefficient, a, should

probably be made to vary as some function of the magnitude of the

observation residuals.
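The four equations above reduce to a very small amount of code. The sketch below is an illustrative Python rendering of equations (5-15) through (5-18) for the two-element estimate (k̂c, k̂s); the numbers in the example call are hypothetical.

    import numpy as np

    def modified_kalman_update(k_hat, P, H, residual, var_obs, cov_n, a=0.5):
        # k_hat is the two-element estimate [kc, ks]; H is the 1 x 2 row of
        # quasipartials; residual is (delta-c-e minus delta-c-p,cond); var_obs
        # is VAR[delta-c-e]; cov_n is the process-noise covariance; a weights
        # the covariance reduction as in equation (5-18).
        P = P + cov_n                                   # (5-15)
        S = H @ P @ H.T + var_obs                       # innovation variance
        K = (P @ H.T) / S                               # (5-16)
        k_hat = k_hat + (K * residual).ravel()          # (5-17)
        P = P - a * (K @ H @ P)                         # (5-18)
        return k_hat, P

    # Illustrative call with hypothetical numbers.
    k_hat = np.array([0.5, 1.0])
    P = np.diag([0.25, 0.25])
    H = np.array([[0.8, -1.2]])
    k_hat, P = modified_kalman_update(k_hat, P, H, residual=0.03,
                                      var_obs=1.0e-4, cov_n=np.diag([1e-6, 1e-6]))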

If no nonlinear constraints are added to the estimation equa-

tions, it is conceivable that ks could be "corrected" to a magnitude






larger than the maximum torque which was applied during the cycle

in which the "correction" occurs. Such a "correction" would be falla-

cious, because it has already been decided that motion occurred but

motion can not occur if the magnitude of ks exceeds that of the maxi-

mum applied torque. If any such fallacious "corrections" occur, they

are detected by an inequality test in the computer, the magnitude of

the quasipartial for ks is reduced by an appropriate amount and the

estimates are recomputed exactly as before except for the adjustment

of Hb. This corrective procedure may be repeated several times.

There are two reasons why readjusting the quasipartial for ks is superior to merely operating directly on the estimate for ks. First, it gives a better correction for ks. Second, it produces a more realistic correction to the covariance matrix. Another nonlinear constraint was added to prevent ks from being reduced below the magnitude of kc.

If large initial errors are expected during the starting period,

constraints should be added to limit the magnitudes of the individual

corrections because the linear approximations are inaccurate for

large increments. Constraints must not be added indiscriminately or

the covariance matrix, P, will become inaccurate. If the covariance

matrix becomes inaccurate, the efficiency of the filter deteriorates.

One way to reduce the gain without invalidating the covariance matrix

is to insert a fictitiously large number into equation (5-16) in the place

of VAR[Δce].




5.4 Conclusions Concerning Multimode Kalman Filtering

Although the techniques for nonlinear-multimode Kalman

filtering are not as simple to apply as those of ordinary Kalman fil-

tering, they are still tractable. When used along with multimode

linearization, the multimode-filtering technique becomes another

method for estimating state vectors and identifying parameters in

nonlinear systems. A vast amount of theoretical exploration of the

technique remains to be done; however, it is already sufficiently well

developed to have immediate practical applications. The nonlinear-

multimode Kalman filter described in the preceding section worked

the first time it was tried. The only thing that was changed thereafter

was the value of the coefficient, a, in equation (5-18).












CHAPTER 6


DEFINING, EVALUATING, AND MINIMIZING
THE COST FUNCTION

6.1 Defining, Evaluating, and Minimizing
the Cost Function Given the Control-Effort Duration, Δtu

It is necessary to define a cost function in order to establish

a basis of comparison between various control strategies. If there

were no constraint on power, the solnic system would make correc-

tions so frequently that the resulting control-effort signals would look

very similar to square-wave dither except that the switching times

would be precisely governed by the controller rather than synchronized

in a relatively inflexible manner to the waveform of an external dither

oscillator. The study of solnic systems under such operating condi-

tions would be of great theoretical interest; however, the application

motivating this study is the design of gimbal-control servomechanisms

for inertial-guidance platforms. For such applications, the expendi-

ture of large amounts of energy is objectionable, not only in that it

wastes energy, but also because it creates thermal gradients which

degrade the performance of the inertial components. In order to

progress toward the original objective, it was necessary to assign a

cost to control effort as well as to control error. The cost function

was defined as the sum of two terms: the first term represents the






cost of positional error and the second represents the cost of control

effort.

The solnic system applies corrections to the plant on a cyclic

basis and changes strategy from cycle to cycle; therefore, it is

desirable to define the cost function on a single-cycle basis. First,

it is necessary to define what is meant by the term "a cycle. In

section 3.2, it was explained that the optimum control strategy for

controlling the plant under consideration here is to switch the control

effort, u(t), to plus or minus the maximum allowable magnitude,

umax, and after some precomputed time switch it back to zero, allow-

ing the arc-suppression diodes to apply reverse voltage until the

torque-motor current decays to zero. The nth cycle will be defined

as the period of time between the instant at which u(t) is switched

from zero to the magnitude of umax for the nth time and the instant

at which it is so switched for the (n+l)th time. For the sake of com-

pleteness, the definition of the cycle period will be extended to include

the former instant and exclude the latter.

During any control cycle, there are just three parameters of

the control variable which the controller may adjust: the switch-on

time, the sign of u(t), and the switch-off time. The sign of the cor-

rective action is obviously chosen so as to drive the value of xl(t)

toward zero. Before a control action is taken, a prediction, /cp,

is made of the amount that the output shaft will be moved by that

corrective action. If a series of identical corrections are to be made,






then in order to minimize the positional errors it is necessary to

control the timing of the corrections so that xl(t) will have an average

value of zero. This may be accomplished by the very simple proce-

dure of waiting until the magnitude of the error becomes half as large

as that of Acp before initiating the corrective motion. When the

system is operated in accordance with this strategy, the peak errors

will be approximately half as large as the magnitude of the step cor-

rections. If the solnic system estimators are inaccurate, the peak

errors will be larger than half the magnitude of the step corrections,

Ac; however, the difference was negligible after the first 20 seconds

for the cases simulated on the computer. Even when identification

errors are significant, the above strategy minimizes the expected

value of the errors. Now that the optimum strategies have been for-

mulated for the direction and timing of the control-effort applications,

the only unspecified parameter remaining is the control-effort dur-

ation, Atu. The optimization of Atu is discussed in section 6. 2.

Before considering the optimization of Atu, the cost function will

be defined in terms of equations which can be evaluated within the

controller using the estimated values of the plant variables and

parameters.

To keep the controller operation optimal in spite of stochastic

variations of the environmental and plant parameters, the controller

computes the cost of various control strategies and changes its strat-

egy accordingly. The best data available to the controller concerning





the state of the system and the environment is the information from

the estimators described in the preceding chapter; therefore, the cost

function is defined in terms of those estimated parameters. Since the

parameter estimates are obtained by filtering over a relatively long

period of time compared with one or a dozen control-cycle periods,

the cost estimates are far more accurate than they would be if the

costs were estimated on the basis of direct measurements taken only

during the period when the system was operated in accord with the

strategy in question.

From previous discussions, it is obvious that when noise is

neglected the best estimate for the magnitude of the peak-to-peak

error is equal to the absolute value of Acp. The exact effect of

various noise sources on the cost of control is difficult to determine

because of the nonlinearity of the problem; however, the following

equation gives a relatively accurate estimate of the peak-to-peak

error cost, CL, which is easy to compute.


C1 = √[(Δcp)² + VAR(Δcp) + VAR(x̂1(t))]    (6-1)

The variances in the above equation were negligible and were neg-

lected for the examples run in the simulation. From log-log plots of

C1 versus the control variable Δtu, it was noted that the magnitude of C1 increases proportionally with the sixth, fifth, fourth, and third powers of Δtu over the range of interest. The larger the magnitude of Δtu became, the smaller the exponent became.





In establishing a cost function for control effort, it was

decided that the average current from the power supply should be

minimized. The average current is estimated by computing the

number of coulombs which flow from the battery during the control-

effort-application period, At,, and dividing this number by the

duration of the control cycle measured in seconds. The first number

may be readily computed as follows since the supply voltage, E, and

the torque-motor resistance, R, are assumed to be known.


Q = (E/R) (1/β3) [exp(-β3 Δtu) - 1 + β3 Δtu]    (6-2)
Assuming that the system is caused to cycle repetitively with the same

strategy, then the duration of the control cycle, T, may be estimated

as follows.


T = Δcp / x̂4    (6-3)

Defining a constant, ac, which is used to express the ratio of posi-

tional error cost to power cost, the cost function for control effort,

C2, and the total cost, J, may be computed as shown in equations

(6-4) and (6-5).


C2 = ac Q / T    (6-4)

J = C1 + C2    (6-5)

The value of the coefficient ac was selected so that, if the solnic

system were able to operate optimally with respect to that cost func-

tion, then it would be operating with peak errors about an order of

magnitude smaller than the classical limit stated in equation (2-6).
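Collecting equations (6-1) through (6-5), the per-cycle cost can be evaluated as in the sketch below. The code follows the forms of equations (6-2) and (6-3) given above, and every numerical value in the example call is an illustrative stand-in rather than a parameter from table (2-1).

    import math

    def cycle_cost(dcp, var_dcp, var_x1, dtu, E, R, beta3, x4, ac):
        c1 = math.sqrt(dcp ** 2 + var_dcp + var_x1)                          # (6-1)
        q = (E / R) * (math.exp(-beta3 * dtu) - 1.0 + beta3 * dtu) / beta3   # (6-2)
        T = dcp / x4                                                         # (6-3)
        c2 = ac * q / T                                                      # (6-4)
        return c1 + c2                                                       # (6-5)

    # Illustrative numbers only.
    J = cycle_cost(dcp=2.0e-6, var_dcp=0.0, var_x1=0.0, dtu=5.0e-3,
                   E=28.0, R=10.0, beta3=40.0, x4=29.4e-6, ac=1.0e-3)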




6.2 Optimizing the Control-Effort Duration, Δtu

If the estimated values of all of the system variables and

parameters were correct, then the optimum operating strategy could

be determined inside of the controller without operating the system.

If, however, the control strategy were optimized internally without

experimentation, then only a single control strategy would be used.

Using only a single strategy, it would be impossible to separate the

errors in estimating ks from those in estimating kc; therefore, these

two errors would become highly correlated and both estimates would

be inaccurate. Without accurate estimates of ks and kc, the cost

functions could not be estimated accurately for alternate strategies;

and therefore, the optimization process would be inaccurate. The

control strategy must be varied sufficiently to avoid singularity in

the identification process or the optimum strategy will not be cor-

rectly identified. Varying the strategy away from the optimum point

in order to improve identification, increases the cost of control. This

is the identification-singularity versus cost-nonoptimality dilemma

mentioned in section 1. 2.

In order to converge quickly toward the optimum operation

point, avoid identification singularities, and follow any changes; the

controller in the computer simulation was programmed to vary strat-

egy and reoptimize frequently. It was estimated a priori that a

two-to-one variation in the size of the output motion, Ac, would be

sufficient to avoid the singularity problem. Provision was made in





the computer program for changing this ratio, but it was never

changed. During normal operation, the controller operates for four

cycles with Δc about equal to 0.707 times the computed optimal size, four cycles at the optimal size, and four cycles at about 1.414 times the optimal size. After each set of twelve cycles, it computes the cost functions for each size and then recomputes the optimal point. All of the optimization computations and corrections are evaluated and applied in terms of Δtu because Δc is not directly controllable. Two terms are used to define the three sizes of Δtu which are used. The computed optimal value of Δtu is represented by the symbol Δtu,opt. The symbol Δtu,inc represents the incremental difference between adjacent sizes of Δtu.

Δtu,small = Δtu,opt - Δtu,inc    (6-6)

Δtu,medium = Δtu,opt    (6-7)

Δtu,large = Δtu,opt + Δtu,inc    (6-8)

The value of Δtu,inc is adjusted every twelve cycles as shown below. The coefficient, a_gain, was set equal to 0.3 and the coefficient, a_ratio, was set equal to two.


Δtu,inc(n + 1) = Δtu,inc(n) [1 - a_gain (Δcp,large / Δcp,small - a_ratio)]    (6-9)

The equation for revising the estimate of tu, opt was derived by

plotting the costs versus the duration of Atu for the three different

sizes, fitting a parabola through the three points, and differentiating






to find the minimum point. Equation (6-10) sets the new value for

tu, opt at the minimum point on the parabola (or the maximum point

if the cost function is convex upward). Nonlinear constraints were

added to limit the change in tu, opt to no larger than tu, inc for

any one correction and to make a maximum-size correction in the

downward direction when the sampled portion of the cost curve is

convex upward. These constraints are needed because the cost curve

is highly nonlinear and does have sections which are convex upward.

Δtu,opt(n + 1) = Δtu,opt(n) + [(J_small - J_large) / (2(J_small + J_large) - 4 J_medium)] Δtu,inc(n)    (6-10)

The subscripts, large, medium, and small, refer to the size of Δtu used in computing the cost functions. These algorithms were tested directly on the deterministic cost function before the rest of the simulation was completed. The magnitude of the coefficient, a_gain, in equation (6-9) was changed from 0.2 to 0.3 to hasten the convergence. No other changes were made. The algorithms converged in all cases as long as the initial value of Δtu,inc was greater than zero and that of Δtu,opt was not negative.
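A sketch of the twelve-cycle reoptimization (equations (6-6) through (6-10)) is given below. The gain and ratio coefficients are the values quoted in the text; the step-limiting logic is a simplified, illustrative rendering of the nonlinear constraints described above.

    def reoptimize(dtu_opt, dtu_inc, J_small, J_medium, J_large,
                   dcp_small, dcp_large, a_gain=0.3, a_ratio=2.0):
        # Adjust the increment toward the desired large/small correction
        # ratio (equation (6-9)), then move dtu_opt toward the minimum of the
        # parabola through the three sampled costs (equation (6-10)), limiting
        # the change to one increment per adjustment.
        dtu_inc = dtu_inc * (1.0 - a_gain * (dcp_large / dcp_small - a_ratio))
        denom = 2.0 * (J_small + J_large) - 4.0 * J_medium
        if denom <= 0.0:
            step = -dtu_inc            # sampled cost curve convex upward
        else:
            step = (J_small - J_large) / denom * dtu_inc
            step = max(-dtu_inc, min(dtu_inc, step))
        return dtu_opt + step, dtu_inc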












CHAPTER 7


THE DIGITAL SIMULATION

7. 1 Introduction

The primary objective of the simulation was to determine

whether or not the solnic strategy would enable the controller to

maintain the positional error of the plant below the minimum error

level obtainable using classical, linear, time-invariant control

strategies. The simulation program was written in FORTRAN using

the multimode-linearization techniques described in Chapters 3 and

4. As a by-product of the prime objective, it was established that

the multimode-linearization technique for simulating nonlinear

stochastic systems is practical, fast, and accurate. It was also

shown that the multimode Kalman filter quickly and accurately iden-

tified the unknown parameters of the stochastic, nonlinear plant.

The solnic system was shown to identify the optimal strategy and

converge to it quickly for a wide range of initial conditions and sev-

eral cost functions. Once it had definitely been established that the

solnic system could consistently maintain the positional error of the

plant an order of magnitude smaller than the minimum error level

obtainable using classical, linear, time-invariant control strategies;

the simulation work was terminated with no effort to find the lowest






obtainable error level. The prime objective and subsidiary

objectives had been reached.


7. 2 Mechanics of the Simulation

The digital computer program for the simulation was written

using a modified version of the FORTRAN II computer language. The

instructions were fed directly into an SDS 930 computer from a card

reader, and the results were typed out by an on-line printer as quickly

as they were computed. The computer program ran faster than the

line printer, so it was necessary to curtail type-out considerably when

all of the pieces of the program were assembled into a working whole.

To facilitate the detection and correction of errors, the program was

divided into six major pieces plus two subroutines and the equivalent

of a third subroutine. The parts of the program were tested and

corrected individually using a large number of print statements to

assure that each of the parts was functioning properly. When the

program was run as a whole, the type-out was limited to two points

and one set of filter statistics per control cycle; even so, the line

printer was slower than the computer. Use of the multimode-

linearization techniques made it possible to simulate the nonlinear,

stochastic plant and the identification and control system for that

plant at a speed greater than the line printer could type and with an

accuracy limited only by the round-off errors of the computer. The

SDS computer words have 23 significant bits plus exponent and sign;






therefore, the round-off noise should be negligible compared with the

noise which is generated and introduced intentionally.

The general operation of the computer program can be under-

stood from a study of the highly simplified computer flowchart shown

in figure (7-1). The procedure for updating the values of the plant

variables is summarized in section 4. 5. The filtering techniques are

explained and the filtering and estimation equations are defined in

section 5. 3. The evaluation of the cost functions and the algorithms

for adjusting Δtu,opt and Δtu,inc are covered in detail in Chapter 6.

One thing which has not been mentioned previously, is that the test to

see whether or not corrective action is required is an unsymmetrical

test.

In section 6. 1, it was stated that in order to minimize the peak

magnitudes of the errors, the corrective motions should be initiated when the slowly increasing, estimated error, x̂1(t), reaches half of the magnitude of the estimated size of the correction, Δcp. That statement defines one limit for the unsymmetrical test to determine whether or not corrective action is required. If the controller predictions are unbiased, then the probability is 0.5 that the magnitude of x1(t) will be greater than half that of Δc at the instant that a ran-

domly chosen corrective motion ceases. It would not be desirable to

correct for such an error unless it were unusually large, because the

effect of x4(t) will usually correct for such errors quite rapidly with

no expenditure of control effort. Such errors were called overshoot errors.

[Figure 7-1. Greatly Simplified and Abbreviated Computer Flowchart (Stressing Logic for Adjusting Δtu and NSIZE). The chart shows the control cycle: update the plant, run the Kalman filter for x1 and x4, and test whether corrective action is required; given Δtu and the estimates of the plant parameters, find Δcp, tmax, the quasipartials, and all variances; command the corrective action and update the plant and all estimates, variances, and covariances by tmax seconds; estimate Δce and VAR(Δce) and, if Δce was significant, run the modified Kalman filter for ks and kc; then adjust NCYCLE, NSIZE, Δtu,opt, and Δtu,inc, and set Δtu for the next cycle.]

The second limit in the unsymmetrical test was set so that overshoot errors are not corrected for unless the magnitude of x1(t) becomes greater than 1.5 times the magnitude of Δcp. The only time that an overshoot error large enough to exceed the second limit occurred was when the system was starting and the initial estimates for ks and kc had been intentionally set at twice the size of the true values.
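The unsymmetrical test therefore reduces to two comparisons, sketched below; the overshoot flag and sign handling are assumptions made only for illustration.

    def corrective_action_required(x1_hat, dcp, overshoot):
        # Start a correction when the estimated error reaches half the
        # predicted correction size; correct an overshoot error only if it
        # exceeds 1.5 times the predicted correction size.
        limit = 1.5 * abs(dcp) if overshoot else 0.5 * abs(dcp)
        return abs(x1_hat) >= limit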

Although only three sizes of corrections are used for the cost-

optimization procedure described in section 6.2, figure (7-1) shows

five sizes: -1, 0, 1, 2, and 3. The last three sizes are the three sizes used for cost optimization. The first two sizes, -1 and 0, are normally used only during starting transients when the estimates of the plant parameters and variables are inaccurate and Δtu,opt and Δtu,inc are not yet properly adjusted. Whenever the magnitude of the

estimated value of Δc is less than twice that of its standard deviation, the controller does not use that estimate for updating k̂s and k̂c; instead, it switches to cycle size -1. When the cycle-size designating symbol, NSIZE, is set to -1, the controller repetitively increases Δtu, applies corrections to the plant, and tests for significant motion. When significant motion is detected, NSIZE is set to size 0 and Δtu is decreased; however, it is decreased less than it was increased. Once the value of Δtu for size 0 has been increased sufficiently to

produce significant motion, the system returns to the normal, three-

size mode of operation described in Chapter 6. As the filters receive




information, the estimates improve and the variances decrease so that smaller values of Δce become significant. The cycle-size incrementing scheme described above is definitely not optimal. However, the scheme was good enough to get the controller started from a wide range of initial conditions. When the starting transients caused by the intentionally erroneous initial estimates decayed, cycle sizes -1 and 0 were no longer used so that the system behaved in the

more nearly optimal manner described in Chapter 6.


7. 3 Description of a Typical Computer Run

The values of the plant parameters specified in table (2-1) were

used for all of the computer simulations. Only the cost ratio, ac, in equation (6-4), and the initial values of k̂s, k̂c, Δtu,opt, and Δtu,inc were changed. The initial estimates of x1(t) and x4(t) were always set to zero. As long as the cost function remained unchanged, the different simulation runs all converged to the same operating conditions.

For one typical run, the initial estimates of ks and kc were set equal to half the magnitudes of the correct values. The value of Δtu,inc was set four times too large in size. The magnitude of Δtu,opt was initialized at zero. The minimum obtainable peak

error for this plant using a classical, linear, time-invariant con-

troller is given by equation (2-6) as about seven micro-radians.

The results for the simulated solnic system are shown in figures






(7-2), (7-3), (7-4), and (7-5). The largest positional error was about

25.7 micro-radians, and it occurred about five seconds after the

controller was started. It took the controller about ten seconds to

get away from cycle sizes -1 and 0 and begin the first set of three

cycle sizes as described in Chapter 6. The first cost-function compu-

tation and tu, inc adjustment occurred at 20 seconds. The controller

was cycling relatively slowly during the first 20 seconds because it

was making large Δc corrections (about 45 micro-radians). The true value of x4(t) at 20 seconds was 29.387 micro-radians per second, and the estimated value was 29.376 micro-radians per second. (The estimate for x4(t) had already converged with less than 1 percent error in the first two seconds.) The estimates for ks and kc

were 71 and 102 percent of their correct values, respectively.

At 54 seconds, the controller computed the cost functions for

the fourteenth time. The estimates for x4(t) and kc were about the

same as they were at 20 seconds, except that their variances had

been reduced. The estimate for ks had improved greatly because the reduction of the size of the positional corrections, Δc, made the effect of stiction more significant. The estimate for ks was equal to about 95 percent of its true value. The controller had identified Δtu,opt to four significant figures, and had adjusted the size ratio for the large and small positional corrections to 2.1. The system was
the large and small positional corrections to 2. 1. The system was

operating consistently with peak errors of 0.94, 1.4, and 2.0 micro-radians for the small, medium, and large Δc's, respectively.

[Figures 7-2 through 7-5. Results of the typical simulated solnic-system run, plotted against time in seconds; the quantities shown include the positional error x1(t) in micro-radians and the friction torque in ounce-inches.]

It is

very significant that the errors in estimating ks and kc had opposite signs. The correlation between the two estimates was -0.65 according to the covariance matrix in the filter. The two errors were cancelling each other out sufficiently well that the predicted and observed values of the positional corrections, Δcp and Δce, were agreeing with each other consistently to three significant figures for

all three sizes. This is a specific illustration of the identification-

singularity versus cost-nonoptimality dilemma which was first

mentioned in section 1. 2.


7.4 Conclusions from the Simulations

The first conclusion reached was that the advantage in simula-

tion speed obtained by using the multimode-linearization techniques

rather than numerical integration justified the increased time spent

in writing the computer program.

As had been anticipated, the modified-Kalman-filtering equations
for estimating the values of ks and kc produced better estimates when
the magnitude of the coefficient a, the gain coefficient for the
adjustments to the covariance matrix in equation (5-18), was reduced
from unity to one half. The cases in which one half was obviously
superior to unity were those in which the initial estimates of ks and
kc were intentionally set in error by factors as large as two. When
the initial estimates were approximately correct, a magnitude of unity
may have been superior, but this was not obvious from the simulation.
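
As a rough illustration of the role of the gain coefficient a, the sketch below shows a generic Kalman-style covariance correction in which the adjustment is scaled by a before it is applied. Equation (5-18) itself is not reproduced here, so the exact form used in the dissertation may differ, and the matrix names are illustrative only.

```python
import numpy as np

def scaled_covariance_adjustment(P, H, R, a=0.5):
    """Generic sketch: apply only a fraction a of the usual Kalman
    covariance correction.  a = 1 gives the standard update; the
    simulations reported above favored a = 0.5 when the initial
    estimates of ks and kc were badly in error."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return P - a * (K @ H @ P)       # scaled adjustment to the covariance matrix
```

Applying only part of the correction keeps the covariance matrix larger, so the filter remains willing to revise poor initial parameter estimates instead of becoming overconfident too quickly.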






From the computer run described in section 7.3, and from

other runs made with different cost functions, it was concluded that

the solnic strategy enabled the controller to consistently identify the

parameters of the nonlinear plant, and that, after a brief learning

period, the controller could maintain the peak errors an order of

magnitude lower than could be maintained using the classical, time-

invariant, linear control strategies. No attempt was made to deter-

mine how much lower the error could be reduced by further changes

in the cost function. The original objective had been accomplished.











CHAPTER 8


CONCLUSIONS, APPLICATIONS, AND TOPICS
FOR FURTHER STUDY

8.1 Conclusions

The plant with the inertial load and nonlinear friction described

in Chapter 2 can be controlled at least ten times more accurately by

use of a solnic strategy than by use of the most accurate, time-

invariant, nonlinear control strategy. At the same time that the

solnic system is performing with less error, it is also operating at

a lower average power level than the linear, time-invariant system.

The ultimate accuracy attainable with the plant defined in Chapter 2

using a solnic strategy has not been determined. The solnic system

was able to identify and operate in conformance with the optimum

control policy for the cost function stated in equation (6-5); however,

in order to avoid singularities in the identification process, nonoptimal

policies were also used. That is, to avoid identification singularities,

a mixed rather than a pure optimal strategy was used.
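
For concreteness, here is a minimal sketch of what such a mixed strategy can look like: instead of always issuing the single cost-optimal correction size, the controller cycles through several sizes so that stiction and coulomb friction are excited differently and remain separately identifiable. The scheduling and the example sizes below are purely illustrative and are not the dissertation's actual decision logic.

```python
from itertools import cycle

def mixed_correction_schedule(small_dc, medium_dc, large_dc):
    """Cycle through three positional-correction sizes.  Using more than
    one size is deliberately non-optimal with respect to the cost
    function, but it keeps the estimates of ks and kc from becoming
    inseparable (the identification-singularity problem)."""
    return cycle((small_dc, medium_dc, large_dc))

# Example use (hypothetical sizes, in micro-radians):
schedule = mixed_correction_schedule(1.0, 2.0, 4.0)
next_correction = next(schedule)
```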

The multimode-linearization technique developed in Chapters 3

and 4 for simulation of stochastic, nonlinear systems saved so much

computer time that it is worthy of consideration even for much simpler

simulations.
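
The following sketch indicates the flavor of the technique, assuming a simple two-mode (stuck and sliding) representation of the friction: within the active mode the plant is linear, so each step can be advanced with an exact matrix-exponential transition instead of many small numerical-integration steps. Mode-switch detection within a step is omitted, and all names are illustrative rather than taken from the dissertation's program.

```python
import numpy as np
from scipy.linalg import expm

def propagate_one_step(x, u, dt, modes, select_mode):
    """Advance the state one step using the linear model of the active
    mode (e.g., 'stuck' or 'sliding').  modes[name] -> (A, B) matrices;
    select_mode(x, u) encodes the stick/slip switching rule."""
    A, B = modes[select_mode(x, u)]
    n, m = A.shape[0], B.shape[1]
    # Exact discretization of x' = A x + B u for u held constant over dt.
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * dt)
    Phi, Gamma = M[:n, :n], M[:n, n:]
    return Phi @ x + Gamma @ u
```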






The multimode Kalman filter used in conjunction with the

multimode-linearization techniques identified the parameters of the

highly nonlinear plant quickly and accurately in the presence of noise.


8.2 Applications

The specific solnic system simulated during this study was

designed to control the plant defined in Chapter 2. As was pointed out

in section 2.4 and elsewhere, real plants are generally far more

complicated than the plant which was chosen for the simulation. The

specific controller developed in this study does not necessarily have

a direct application; however, it did demonstrate that a solnic system

can outperform the best classical, linear, time-invariant controller

by at least an order of magnitude in accuracy. It was also demon-

strated that the solnic system consumed less power. Therefore, the

solnic concept should be considered for immediate application to non-

linear systems which can not be controlled adequately using classical

techniques.

The multimode-linearization technique for the digital simula-

tion of stochastic, highly nonlinear, differential systems was developed

merely as a tool for testing the solnic concept; however, it is directly

applicable to many other problems. The speed and accuracy advan-

tages of this technique make it attractive even for much simpler

applications. The same multimode-linearization techniques can be

used to perform error analyses by applying them to the covariance

matrices of system error.
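
A minimal sketch of that error-analysis use, assuming the active mode's transition matrix Phi has already been obtained from the linearization step:

```python
def propagate_error_covariance(P, Phi, Q):
    """Propagate the covariance matrix of system error through one step
    of the active linear mode: P <- Phi P Phi^T + Q, where Q is the
    process-noise covariance accumulated over the step.  Illustrative only."""
    return Phi @ P @ Phi.T + Q
```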





The multimode-Kalman-filter technique has applications in

many diversified areas such as coding, game theory, and controls.

When used along with the multimode-linearization technique, it

provides another means of identifying the parameters of a highly

nonlinear plant in the presence of measurement and plant noise.
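
One common way to set up such joint identification, shown below as a hedged sketch rather than the dissertation's exact filter, is to append the friction parameters ks and kc to the state vector and let the measurement model's sensitivity to those parameters drive their correction.

```python
import numpy as np

def joint_measurement_update(x_aug, P, z, H, R):
    """Kalman measurement update on a state vector augmented with the
    friction parameters (e.g., x_aug = [plant states, ks, kc]).  The
    columns of H corresponding to ks and kc hold the sensitivities of
    the measurement to those parameters; without them the parameters
    would be unobservable.  Generic sketch with illustrative names."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x_aug + K @ (z - H @ x_aug)
    P_new = (np.eye(len(x_aug)) - K @ H) @ P
    return x_new, P_new
```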


8.3 Topics for Further Study

No attempt has been made to determine how accurately the

specific solnic system described in this study would control the

position of the output shaft if the cost for control effort were reduced

to zero. One reason for this is that the specific solnic system

described here was designed under the assumption that the cost of

control effort would be significant, and therefore certain assumptions

were built into the simulation which would not be valid for the zero-

cost-for-control-effort case. For one example, the last two terms
of equation (6-1) were neglected. For another, the cost-optimization
scheme which is outlined in section 6.2 and figure (7-1) would be
inappropriate if the cost of control effort, ac, in equation (6-4) were
set to zero. It would be an interesting task to determine what the
ultimate accuracy limit would be if the solnic system were modified
in an optimal manner for the case in which ac has a value of zero.

In particular, it would be interesting to determine how the solnic

system would compare with a dithered linear system.

The mode-identification decision makers used in the simula-

tion were not optimal but they produced good results. The question






as to how much better the results would have been if the decision

makers had been optimal remains unanswered. The whole area of

mode estimation is relatively unexplored.

There is a need for theorems and solutions defining the optimal
compromise in the identification-singularity versus cost-nonoptimality
dilemma.

A study which could have tremendous practical potential is

that of extending the solnic concept to apply to real physical plants

which have stochastic torques as well as nonlinear friction at the

output shaft.













