Title Page
 Table of Contents
 Theoretical development
 Application to target tracking
 Summary and recommendations
 Biographical sketch

Permanent Link: http://ufdc.ufl.edu/UF00082439/00001
 Material Information
Title: Autoregressive moving-average (ARMA) model identification for degenerate time series with application to maneuvering target tracking
Physical Description: vi, 90 leaves : ill. ; 28 cm.
Language: English
Creator: Speakman, Norman Owen, 1949-
Publication Date: 1985
Subject: Time-series analysis   ( lcsh )
Autoregression (Statistics)   ( lcsh )
Stochastic processes   ( lcsh )
Tracking radar   ( lcsh )
Electrical Engineering thesis Ph. D
Dissertations, Academic -- Electrical Engineering -- UF
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
Thesis: Thesis (Ph. D.)--University of Florida, 1985.
Bibliography: Bibliography: leaves 87-89.
Statement of Responsibility: by Norman Owen Speakman.
General Note: Typescript.
General Note: Vita.
 Record Information
Bibliographic ID: UF00082439
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: aleph - 000863402
oclc - 14281429
notis - AEG0115

Full Text







This dissertation is dedicated to my wife, Donna, and our children,

Erik, Timothy, and Amy.


The author wishes to express sincere appreciation to the chairman of

his graduate committee, Dr. Thomas E. Bullock, for the guidance,

patience and, above all, the dedication shown during the course of his

doctoral program. Thanks are also extended to Dr. Giuseppi Basile, Dr.

Charles V. Shaffer and Dr. Pramod P. Khargonekar for the help they

willingly and professionally provided.

The author is also indebted to Dr. William P. Albritton and Dr.

Donald C. Daniel at the Air Force Armament Laboratory for the tremendous

encouragement provided during the writing of this dissertation.

The author wishes to offer a special thanks to his wife and children

for the seemingly endless patience and understanding shown during the

more trying times of this endeavor.




ABSTRACT

INTRODUCTION
    General Description of the Problem
    Survey of Past Approaches

THEORETICAL DEVELOPMENT
    Background
    ARMA Model Parameter Identification
    ARMA Model Identification Algorithm
    ARMA Model Parameter Identification (Degenerate Case)
    ARMA Model Identification Algorithm (Degenerate Case)
    Relationship to the Kalman Filter
    The Multichannel Burg Algorithm
    The Multichannel Burg Algorithm (Degenerate Case)
    Numerical Example

APPLICATION TO TARGET TRACKING
    The Problem
    Engagement Geometry
    Trajectory Generation
    Target Dynamic Model Synthesis (Off-line)
    Target Dynamic Model Synthesis (On-line)
    Maneuvering Target Tracking Algorithm
    Numerical Results and Predictor Performance

SUMMARY AND RECOMMENDATIONS
    Summary
    Recommendations for Further Research

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



Norman Owen Speakman

December 1985

Chairman: Dr. Thomas E. Bullock
Major Department: Electrical Engineering

Research was conducted in the general areas of time series analysis

and stochastic realization. Results were then applied to the specific

problem of tracking a highly maneuverable aircraft target.

An algorithm was developed to identify the order and parameters of

the minimum order autoregressive moving-average (ARMA) model of a multi-

variable system given the output autocorrelation sequence. Studies were

also conducted in the area of degenerate time series modeling. It was

found that degeneracy in vector-valued time series is caused by the pre-

sence of one or more deterministic relationships in the time series.

ARMA models for degenerate time series can be identified by finding and

extracting the deterministic relationships from the time series. The


result is a reduced dimension stochastic model of the system. The model

found will have fewer white noise inputs than outputs. An ARMA identification algorithm for use with degenerate time series was developed.

The equivalence between the ARMA model found and the Kalman filter

innovations representation was discussed. A thorough numerical example

demonstrating this equivalence as well as degenerate time series

modeling was presented.

Application of the ARMA model identification procedure to the target

tracking problem was investigated. It was discovered that the time

series formed from target inertial velocity vectors is degenerate. This

fact has physical significance and allows one to determine the orien-

tation of the maneuver plane (the plane containing all the target

motion) in inertial space. The ARMA modeling technique was used to deve-

lop an adaptive modeling and tracking algorithm. Operation of the

algorithm was demonstrated using a target trajectory simulation.


General Description of the Problem

The ability to accurately estimate aircraft target motion is a cru-

cial factor in the accuracy of modern weapon systems. A great deal of

attention has recently been devoted to the problem of tracking aircraft

targets which are highly maneuverable (i.e., capable of achieving and

maintaining relatively large accelerations). The level of attention has

risen primarily because rapidly advancing microcomputer technology per-

mits missile guidance systems to utilize more information about pre-

dicted target motion and secondarily because modern fighter aircraft are

more maneuverable. Sustained acceleration on the order of 9 g's is com-

mon. Effective tracking basically requires that the orientation of an

observer-based coordinate system (usually associated with the centerline

of a seeker) is maintained relative to the line-of-sight vector between

the observer and target as the target moves relative to the observer.

Improved tracking results in enhanced estimates of target kinematic

variables which are critical parameters in the computation of guidance

commands for missiles using advanced guidance laws.

The mathematical science of estimation theory led to the development of optimal (in the minimum mean-square-error sense) estimators which accept noise-corrupted measurements, relate them to the system states and account for the propagation of the states between



measurements. The celebrated Kalman filter is probably the most widely

used product of estimation theory and has been applied to the target

tracking problem for several years. A key element in the design of a

Kalman filter is the construction of a statistical model which accounts

for the behavior of the system between measurements, i.e., a system

dynamic model.

Several major technical difficulties are encountered when for-

mulating a dynamic model for the tracking problem when it is known that

the target is capable of executing abrupt maneuvers. First, there is no

direct measure of target acceleration although acceleration, a state

variable, is observable from measurable parameters. Second, there are

uncertainties in the measurements and in the system dynamic model.

Uncertainties exist in all estimation problems but the uncertainty asso-

ciated with the target dynamic model is somewhat unique. A very agile

and highly maneuverable aircraft can change its direction of flight very

quickly and in an erratic fashion. As a result, tracking filters require

high bandwidths to be effective during maneuvering flight. Lower band-

widths can of course be used when it is known that the target is not

maneuvering. Since the transition from non-maneuvering flight to

maneuvering flight is sudden and, as far as the tracking filter is con-

cerned, occurs at random times, tracking is usually done using high

filter bandwidths. Tracking performance is therefore degraded when the

target is not maneuvering. Finally, measurements (angles and ranges) are

usually best expressed in spherical coordinates whereas the dynamics

(kinematic variables such as position, velocity and acceleration) are

best written in cartesian coordinates. Linearization of the measurement

equations is required which leads to errors.


A class of theoretical problems which has application to statisti-

cal modeling is contained in the area known as stochastic realization

theory. The stochastic realization problem is to determine a Markovian

representation for a rational, stationary time series given the

covariance sequence of the time series. A common technique used to model

a system which has unknown dynamics is to examine the time correlation

of kinematic variables such as velocity and acceleration. Once the

covariance sequence is defined, results from the theory of stochastic

realization can be used to formulate a statistical model of the system

under consideration.

Problems to be solved in this research effort are two-fold. The

first objective is to formulate an innovative dynamic model of an

aircraft target suitable for use in a tracking filter. Second, to

accomplish the first goal, a new development in stochastic modeling will

be explored which extends the theory of stochastic realization. The

latter objective has applications far beyond that of target dynamic

modeling and some interesting theoretical implications will be pursued.

Survey of Past Approaches

Research in the maneuvering target tracking problem can be divided

into three major categories: (1) modeling, (2) tracking schemes and (3)

maneuver detection. Although research efforts can be accomplished in any

one of these discrete areas, they are actually quite closely related.

Some researchers have dealt with more than one of the above categories

simultaneously.


The most commonly used discrete target dynamic model in use today

is credited to Singer [1]. The model is based on the assumption that the

target undergoes random accelerations in each of the three inertial axes

and that the acceleration is exponentially correlated. This results in

the well-known first-order Gauss-Markov model, a zero mean, time-

correlated Gaussian noise process. Fitts [2] and McAuley and Denlinger

[3] adopted the Singer approach to target modeling.

There are certain heuristics associated with the Singer model. The

model requires a priori knowledge of the acceleration time constant and

the acceleration probability density. Singer claims that the simple

model proposed is a good one provided that these parameters are

correctly chosen.

Another method of modeling target motion was proposed by Gholson

and Moose [4] and later extended by Moose, Van Landingham and McCabe

[5]. It basically involves the use of semi-Markov processes to model

major changes in trajectories. Target motion is modeled as being driven

by a time-correlated process which has a randomly varying mean. This is

actually a combination of Singer's time-correlated input and semi-Markov

modeling. The input, chosen randomly from a finite set of possible

discrete levels, is applied for a random time interval after which a new

input level is chosen. The motivation for this model is that it more

closely approximates real-world maneuvers.

Kendrick et al. [6] extended the state-of-the-art in target modeling

by developing a new statistical model for aircraft normal acceleration.

The basic premise is that the dominant acceleration is normal to the

wings. The acceleration magnitude is modeled as an asymmetrically


distributed time-correlated random process. This is done by modeling

acceleration magnitude as a deterministic nonlinear function of a zero-

mean time-correlated random variable and has the advantage of allowing

limits on the acceleration level which cannot be done using the simpler

Singer model. Non-normal acceleration, resolved into components along the

three inertial axes, are modeled by the familiar first-order

Gauss-Markov process with a symmetrically distributed white noise input.

Obviously this method, while providing more latitude in modeling,

requires parameter values to define the nonlinear acceleration function.

Again, as in the Singer model, certain parameters must be specified a

priori even though the parameters are chosen to be constants for a particular class of aircraft.

An innovative approach to modeling target motion was developed by

Bullock and Sangsuk-Iam [7]. In this nonlinear model it is assumed that

the aircraft trajectory is composed of circular arcs in a maneuver plane

and the velocity magnitude and turning rate are constant. A model writ-

ten in polar coordinates follows from this formulation. An interesting

point to note here is that acceleration has been removed from the state

vector and is therefore not modeled explicitly. Using the usual model in

linear cartesian coordinates requires piecewise constant approximations

to the natural sinusoidal variation in inertial acceleration components.

Difficulties encountered in estimating sinusoidal variations have been

removed using the nonlinear model. Marked improvement over the Singer

model has been shown.

Hull, Kite and Speyer [8] derived a new stochastic target model by

assuming that the angle defining the orientation of the acceleration


vector in inertial space is a random process. The model is then written

in spherical rather than cartesian coordinates. They studied two cases:

(1) acceleration magnitude is constant and (2) acceleration magnitude is

a random process. Only the two-dimensional case was considered. Results

indicate that the variable magnitude model exhibits better tracking per-

formance than the constant magnitude model. Under certain conditions it

was observed that the constant model was comparable to the Singer model.

One advantage that the circular model has over the conventional car-

tesian model is that the acceleration magnitude can be limited to a

maximum value. The Singer model, which assumes the inertial components

of acceleration are independent, can result in magnitudes which exceed

maximum achievable acceleration levels.

Several studies have been done to compare various proposed tracking

methods. Vergez and Liefer [9] compared four commonly used models: (1)

first-order Gauss-Markov, (2) second-order Gauss-Markov, (3) zero acce-

leration and (4) constant acceleration. The first-order Gauss-Markov

model was found to perform the best for the conditions considered. Lin

and Shafroth [10] also performed a similar study. They, however, con-

sidered different tracking algorithms: (1) Singer's approach, (2) Input

Estimation, (3) Bayesian and (4) Re-initialization, which employs

maneuver detection. Each tracking method implied a different accelera-

tion model and, as expected, performed well for certain given trajec-

tories. Lin and Shafroth concluded that the tracking algorithm which

performed best during maneuvers was the Bayes tracker.

Many researchers have sought solutions to the tracking problem by

devising new tracking schemes. Most of the algorithms proposed use the


Singer model for target dynamics. Pearson and Stear [11] developed a

radar tracker which models target motion in the rotating coordinate

frame of the tracking system, i.e., the line-of-sight coordinate frame.

Two acceleration components normal to the line-of-sight vector are

modeled as independent first-order Gauss-Markov processes. Two

approaches were derived: angle and range tracking systems. The angle

tracker estimates angular variables such as line-of-sight rate and the

range system provides estimates of linear scalar variables such as range

and range rate. It should be noted that any model written in a rotating

coordinate system is inherently complex due to the necessity of

including non-negligible Coriolis accelerations.

Spingarn and Weidemann [12] applied linear regression techniques to

both the non-maneuvering and maneuvering tracking problems. The non-

maneuvering problem results in an expanding memory filter. When the same

technique is applied to maneuvering targets it becomes necessary to

truncate older data and form what is known as a fading memory filter.

Filtering in both line-of-sight and inertial cartesian coordinates

revealed that filtering should be done in the inertial frame to obtain

smaller prediction errors.

The use of a combination of electro-optical sensor measurements and

pattern recognition algorithms to deduce target aspect angle was pro-

posed by Kendrick et al. [6]. The aspect angle is the angle between the

missile's velocity vector and the target velocity vector. Estimates of

target orientation can be obtained from the aspect angle measurement.

Target motion is then estimated using a well-known relationship between

attitude and kinematic states.


Many tracking schemes involve the use of maneuver detectors.

Maneuvers are usually detected by monitoring the innovations sequence of

a tracking filter which has been designed for a non-maneuvering target.

When a bias is encountered in the residuals which exceeds a predeter-

mined threshold, a maneuver is assumed to have occurred. The strategy

used after detection varies. Thorp [13] used a likelihood ratio test for

maneuver detection and a binary random variable in the target state

equations. The tracking filter combines estimates from two Kalman

filters using coefficients obtained from the likelihood ratio. Bullock

et al. [7] and Chan et al. [14] use the χ² statistic to detect maneuver

occurrence. When a maneuver is detected by performing a statistical

hypothesis test, the tracking filter is reinitialized and the filter

gain is adjusted accordingly. The process continues each time a

measurement is made. Chan et al. [14, 15] use a least squares technique

to estimate the target's input acceleration. At each measurement the

magnitude of the estimate is compared with a threshold. When a detec-

tion is made, a Kalman filter which assumes constant velocity target

motion is updated. The purpose of the update is to remove the bias

caused by the target deviating from straight-line motion. The filter is

thus able to maintain track even in the presence of maneuvers.

Although numerous methods using combinations of modeling, tracking

and maneuver detection techniques have been devised and implemented, the

problem of tracking a highly maneuverable aircraft still remains a key

technological challenge. Maybeck [16, p. xiii] states that "system

modeling is a critical issue and typically the 'weak link' in applying

theory to practice." It is with this same conviction that this research

effort is undertaken.



Time series analysis is a very useful and powerful method used in

applications such as prediction and control. In many of these applica-

tions, a continuous signal is sampled at some regular time interval to

form a discrete-time signal known as a time series. Having obtained a

discrete time history of the signal, one may use the time series to

determine a parametric model of the system under consideration and use

the model to predict future behavior of the system. A general model

which is commonly used in time series analysis is based on the assump-

tion that a signal can be modeled as the output of a system which is

driven by a Gaussian white noise input. In such a model the present out-

put is a linear combination of past outputs and present and past inputs.

This model is usually written as a discrete-time difference equation:

A_0 y_k + A_1 y_{k-1} + ... + A_n y_{k-n} = B_0 e_k + B_1 e_{k-1} + ... + B_m e_{k-m}   (1)

where y_k (k > 0) is a p-dimensional vector sequence of outputs (time series) and e_k (k > 0) is a p-dimensional zero-mean vector sequence of Gaussian white noise. The matrices A_i (0 ≤ i ≤ n) and B_j (0 ≤ j ≤ m) are square. Equation (1) can also be written using z-transform notation,

A(z)Y(z) = B(z)E(z)   (2)



where

A(z) = Σ_{i=0}^{n} A_i z^-i   (3)

B(z) = Σ_{j=0}^{m} B_j z^-j   (4)

and z^-1 is the unit delay operator.

Equation (2) is termed a pole-zero model. There are also two special cases of equation (2) which are useful: the all-pole model, where B_j = 0 (j > 0), known as an autoregressive (AR) model, and the all-zero model, where A_i = 0 (i > 0), called a moving-average (MA) model. The more general pole-zero model is termed an autoregressive moving-average (ARMA) model. In the remainder of this dissertation we will be primarily concerned with ARMA models but may occasionally have need to use AR models.
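As a concrete illustration of equation (1), the sketch below simulates a vector ARMA process driven by Gaussian white noise. This is a minimal modern rendering, not part of the original development; the function name, the NumPy interface, and the identity covariance chosen for e_k are assumptions of the illustration.

```python
import numpy as np

def simulate_arma(A, B, steps, rng=None):
    """Simulate equation (1) with A_0 = B_0 = I:
    y_k = -A_1 y_{k-1} - ... - A_n y_{k-n} + e_k + B_1 e_{k-1} + ... + B_m e_{k-m}.
    A = [A_1, ..., A_n] and B = [B_1, ..., B_m] are lists of p x p arrays."""
    rng = rng or np.random.default_rng(0)
    n, m = len(A), len(B)
    p = (A[0] if n else B[0]).shape[0]
    y, e = [], []
    for k in range(steps):
        e.append(rng.standard_normal(p))                # e_k, zero-mean white noise
        out = e[-1].copy()
        out += sum(B[j] @ e[-2 - j] for j in range(m) if k - j - 1 >= 0)  # MA part
        out -= sum(A[i] @ y[-1 - i] for i in range(n) if k - i - 1 >= 0)  # AR part
        y.append(out)
    return np.array(y)
```

For a stationary choice of A and B, the sample autocorrelations of the simulated y_k approach the covariance sequence C_r used throughout this chapter.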

ARMA Model Parameter Identification

Much effort has been spent devising suitable methods for identifying

ARMA models [17, 18, 19], to name a few. Most of these methods are

restricted to the time domain. We shall also concentrate our attention

on time domain model identification.

Suppose we have a stationary output time series, yk, and we wish to

construct an ARMA model of the time series. It is well-known that an

ARMA model of finite order (n,m) can be written as an infinite order AR

model [17, 20]. This equivalence exists if we assume the ARMA model is

invertible. The condition for invertibility is [20]

det[B(z)] ≠ 0,   |z| > 1.   (5)


If the invertibility condition is met we can write equation (2) as

B^-1(z)A(z)Y(z) = E(z),   (6)

which can also be written as an AR model if we let

γ(z) = B^-1(z)A(z)   (7)

so that

γ(z)Y(z) = E(z).   (8)

A major problem associated with multivariable ARMA model iden-

tification is that of uniqueness of the realization termed iden-

tifiability. That is, given that Yk can be modeled as an ARMA process,

is it possible to determine unique values for n and m and the coefficient matrices A_i (0 ≤ i ≤ n) and B_j (0 ≤ j ≤ m) given the output autocorrelation sequence of y_k? To see the difficulty with identifying a unique ARMA

model from the output autocorrelation sequence, one need only multiply

both sides of the model by the same matrix polynomial in z and observe

that the autocorrelation structure is unaltered. This gives rise to the

notion of "classes" of models for a given autocorrelation sequence. The

problem now becomes one of determining conditions to apply to A(z) and

B(z) so that one ARMA model from the class of models will be identified.

This will insure that, given an autocorrelation sequence, only one model

will result which fits the data and meets the constraints. Hannan [21] discusses this problem and presents three sufficient conditions for identifiability:

(1) A_0 = B_0 = I

(2) A(z) and B(z) have no common left divisors except unimodular ones

(3) det[A(z)] ≠ 0,   |z| > 1

    det[B(z)] ≠ 0,   |z| > 1.



We note that condition (1) can be met without loss of generality as

long as the covariance matrix of e_k, Π, is allowed to be general symmetric non-negative. To show this, consider an ARMA model which has A_0 ≠ I and B_0 ≠ I,

A_0 y_k + A_1 y_{k-1} + ... + A_n y_{k-n} = B_0 e_k + B_1 e_{k-1} + ... + B_m e_{k-m}.   (12)


We can pre-multiply equation (12) by A_0^-1 to put the equation in reduced form, where

(A_i)_new = A_0^-1 A_i,   i = 0,1,2,...,n.   (13)

The lead coefficient on the right hand side can be made identity by a

proper change of input basis. Let

(B_i)_new = A_0^-1 B_i B_0^-1 A_0   (14)

(e_k)_new = A_0^-1 B_0 e_k.   (15)


We now have an equivalent ARMA model in which (A_0)_new = (B_0)_new = I.

One may ask, what happens if det[A_0] = 0 or det[B_0] = 0? We can show that these conditions cannot occur for a stationary, invertible time series. The determinant of A(z) is a polynomial in z^-1,

det[A(z)] = f_0 + f_1 z^-1 + f_2 z^-2 + ... + f_N z^-N.   (16)







Equation (16) can also be written

det[A(z)] = det[A_0 + A_1 z^-1 + ... + A_n z^-n].   (17)

By setting z^-1 = 0 and combining equations (16) and (17) we can solve for f_0:

f_0 = det[A_0].   (18)

If we assume that det[A_0] = 0 and det[A(z)] = 0 we find from equation (16)

z^-1 (f_1 + f_2 z^-1 + ... + f_N z^-(N-1)) = 0.   (19)

One solution to equation (19) is z = ∞. We know that det[A(z)] ≠ 0 for |z| > 1 if the time series is stationary. Therefore, a necessary condition for stationarity is det[A_0] ≠ 0. An analogous argument exists which says that a necessary condition for invertibility is det[B_0] ≠ 0.

We can modify the constraint on B_0 and specify conditions which must be met by Π. It has been shown [20] that requiring B_0 to be lower triangular and Π = I also results in the elimination of redundant parameters and thus allows the unique determination of an ARMA model. We shall choose the former condition, i.e., that B_0 = I, since this gives us latitude in selecting Π. This becomes important later in this development. Condition (2) is imposed to eliminate the possibility that a cancellation may occur when the product B^-1(z)A(z) is formed. The requirement imposed by condition (3) amounts to no more than requiring that the model be both stationary and invertible. Having stated conditions which insure the unique identification of ARMA model parameters, we can proceed by deriving an expression for B^-1(z).

B(z) = Σ_{j=0}^{m} B_j z^-j,   B_0 = I   (20)

B^-1(z) = Σ_{k=0}^{∞} β_k z^-k   (21)

Performing the multiplication B^-1(z)B(z) we find

B^-1(z)B(z) = β_0 + Σ_{k=1}^{∞} f_k(β_i, B_j) z^-k.   (22)

Also we require that

B^-1(z)B(z) = I.   (23)

Equating like powers of z in equations (22) and (23) it is obvious that

β_0 = I   (24)

f_k(β_i, B_j) = 0,   k > 0.   (25)

Each f_k(β_i, B_j) is linear in β_k. We can therefore solve each equation for β_k in terms of β_i (i < k) and B_j (j ≤ k):

β_0 = I   (26)

β_1 = -B_1   (27)

β_2 = -β_1 B_1 - B_2 = B_1² - B_2   (28)

β_3 = -β_2 B_1 - β_1 B_2 - B_3 = B_2 B_1 + B_1 B_2 - B_1³ - B_3.   (29)
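The recursion sketched in equations (26) through (29) lends itself to direct computation. The routine below is an illustration only (the function name and the NumPy interface are assumptions, not the text's); it generates β_0, ..., β_K from B_1, ..., B_m via β_k = -Σ_{j=1}^{min(k,m)} β_{k-j} B_j.

```python
import numpy as np

def inverse_ma_coeffs(B, K):
    """Coefficients beta_0, ..., beta_K of B^-1(z) = sum_k beta_k z^-k,
    given B = [B_1, ..., B_m] with B_0 = I implied.
    Follows beta_0 = I, beta_1 = -B_1, beta_2 = B_1^2 - B_2, ..."""
    m, p = len(B), B[0].shape[0]
    beta = [np.eye(p)]
    for k in range(1, K + 1):
        # coefficient of z^-k in B^-1(z)B(z) must vanish
        beta.append(-sum(beta[k - j] @ B[j - 1] for j in range(1, min(k, m) + 1)))
    return beta
```

For a scalar MA(1) with B_1 = b this reproduces the geometric expansion β_k = (-b)^k.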

Referring to equation (7), we see that by post-multiplying B^-1(z) by A(z) we obtain a matrix polynomial which is the coefficient matrix polynomial for an equivalent AR model. That is,

B^-1(z)A(z) = I + (A_1 - B_1) z^-1 + (B_1² - B_2 + A_2 - B_1 A_1) z^-2 + ...   (30)

γ(z) = B^-1(z)A(z) = I + γ_1 z^-1 + γ_2 z^-2 + ...   (31)

Equating coefficients of like powers of z in equations (30) and (31)

yields equations which relate the AR model coefficients to the ARMA

model coefficients. We can also use these equations to write the MA

part of the ARMA model in terms of the AR part and the coefficients of

the pure AR model. The result is

B_0 = I   (32)

B_1 = A_1 - γ_1   (33)

B_2 = A_2 - γ_2 - B_1 γ_1   (34)

B_3 = A_3 - γ_3 - B_1 γ_2 - B_2 γ_1   (35)

In general,

B_0 = I   (36)

B_N = A_N - Σ_{k=0}^{N-1} B_k γ_{N-k},   N > 0.   (37)
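Equations (36) and (37) translate directly into a short routine. The following sketch recovers the MA coefficients from the AR part A_i and the pure-AR coefficients γ_i; the function name and interface are illustrative assumptions, not part of the text.

```python
import numpy as np

def ma_from_ar(A, gamma, m):
    """B_0 = I and B_N = A_N - sum_{k=0}^{N-1} B_k gamma_{N-k} for N = 1..m.
    A = [A_1, ..., A_n] and gamma = [gamma_1, gamma_2, ...] are p x p arrays."""
    p = gamma[0].shape[0]
    B = [np.eye(p)]
    for N in range(1, m + 1):
        A_N = A[N - 1] if N <= len(A) else np.zeros((p, p))  # A_N = 0 for N > n
        B.append(A_N - sum(B[k] @ gamma[N - k - 1] for k in range(N)))
    return B
```

For a scalar ARMA(1,1) with A_1 = 0.5 and γ_1 = A_1 - B_1 = 0.2, the routine returns B_1 = 0.3, consistent with equation (33).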

The coefficients in the pure AR model can be found by solving the

normal, or Yule-Walker, equations [22, 23, 24, 25, 26]. An efficient

recursive method used to solve the Yule-Walker equations is the Levinson

algorithm [27, 28, 29]. The Levinson problem is one of determining


the coefficients in an AR model such that the one-step-ahead prediction

is optimal in the minimum mean-square-error sense.

That is,

ŷ_{N|N-1} = -Σ_{i=0}^{N-1} γ_{N-i} y_i   (38)

where ŷ_{N|N-1} is the one-step-ahead prediction of y_N and N is the number

of data points in the sequence being modeled. Application of the projec-

tion theory leads to

E[(y_N - ŷ_{N|N-1}) y_j'] = 0,   j = 0,1,2, ..., N-1   (39)


which, when combined with equation (38), results in

E[y_N y_j'] = -Σ_{i=0}^{N-1} γ_{N-i} E[y_i y_j'],   j = 0,1,2, ..., N-1.   (40)

Equation (40) can be written as

C_{N-j} = -Σ_{i=0}^{N-1} γ_{N-i} C_{i-j}   (41)



by letting

C_{i-j} = E[y_i y_j'].   (42)


The minimized mean-square-error of the prediction can be shown to be

Σ_N = C_0 + Σ_{i=0}^{N-1} γ_{N-i} C_{i-N}.   (43)


To find A(z) we post-multiply equation (1) by y'_{k-r} and take expectations to give

C_r = -Σ_{i=1}^{n} A_i C_{r-i} + Σ_{j=0}^{m} B_j D_{r-j}   (44)

where

D_{r-j} = E[e_{k-j} y'_{k-r}].   (45)

Since a future input cannot affect the present output of a causal system we can write

D_{r-j} = 0,   j < r,   (46)

which further leads to

C_r = -Σ_{i=1}^{n} A_i C_{r-i} + Σ_{j=r}^{m} B_j D_{r-j},   1 ≤ r ≤ m,

and

C_r = -Σ_{i=1}^{n} A_i C_{r-i},   r > m.   (49)

We can determine the A_i (1 ≤ i ≤ n) by solving the set of linear equations,

                  [ C_m       C_{m+1}   ...  C_{m+n-1} ]
[A_1 A_2 ... A_n] [ C_{m-1}   C_m       ...  C_{m+n-2} ]  = -[C_{m+1} ... C_{m+n}].   (50)
                  [    .          .               .    ]
                  [ C_{m-n+1} C_{m-n+2} ...  C_m       ]

Equations (50) are commonly referred to as the modified, extended [23],

or shifted Yule-Walker equations.
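As a sketch of how the shifted Yule-Walker system can be solved numerically (the function name, the list-of-arrays interface, and the stationarity convention C_{-r} = C_r' are assumptions of this illustration):

```python
import numpy as np

def shifted_yule_walker(C, n, m):
    """Solve equation (50) for A_1, ..., A_n given autocorrelations
    C[r] = C_r for r >= 0, using C_{-r} = C_r' for a stationary series."""
    p = C[0].shape[0]
    def Cr(r):
        return C[r] if r >= 0 else C[-r].T
    # block matrix with (i, j) block C_{m+1+j-i}, i = 1..n, j = 0..n-1
    M = np.block([[Cr(m + 1 + j - i) for j in range(n)] for i in range(1, n + 1)])
    R = -np.hstack([Cr(m + 1 + j) for j in range(n)])
    X = np.linalg.solve(M.T, R.T).T          # X = [A_1 A_2 ... A_n]
    return [X[:, k * p:(k + 1) * p] for k in range(n)]
```

For a scalar AR(1) with C_r = (0.6)^r the routine returns A_1 = -0.6, i.e., y_k - 0.6 y_{k-1} = e_k in the sign convention of equation (1).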




ARMA Model Identification Algorithm

The purpose of the algorithm is to find the autocorrelation matrix

at the smallest lag which is a linear combination of other autocorrela-

tion matrices. The model order is related to the position of this

matrix in a larger block Toeplitz matrix formed from the autocorrelation

sequence. The matrix is found by searching the block Toeplitz for a

reduction in rank of submatrices. Rank reduction is determined by com-

puting eigenvalues. When the rank of a submatrix is reduced by the

dimension of the output vector, p, we know that p columns (or rows,

since the matrix is Toeplitz) can be written as a linear combination of

the other columns (or rows). We can write

C_i = -A_1 C_{i-1} - A_2 C_{i-2} - ... - A_n C_{i-n}.   (51)

We determine i by keeping track of our position within the large block

Toeplitz. Comparing equation (51) with equation (49) we see that m is

related to the subscript i in equation (51). More precisely,

m = i-1. (52)

Furthermore, we see from equation (50) that n is determined from the

size of the submatrix in which the rank reduction occurred. By per-

forming the search in a systematic fashion, the values computed for m

and n are minimum. Determination of ARMA model coefficients is rather

straightforward using results presented in the previous section once the

order of the model is known.

The algorithm:

1. Construct the Toeplitz matrix of autocorrelations,

     [ C_0     C_1     C_2     ...  C_k     ]
     [ C_{-1}  C_0     C_1     ...  C_{k-1} ]
T =  [ C_{-2}  C_{-1}  C_0     ...  C_{k-2} ]   (53)
     [    .       .       .           .     ]
     [ C_{-k}  C_{1-k} C_{2-k} ...  C_0     ]

Note that this is the same Toeplitz matrix used in the

Levinson algorithm to solve the Yule-Walker equations. We

have therefore not imposed any additional computational burden

(e.g., computing autocorrelations) beyond that necessary to

generate an AR model of the data.

2. Set q=2.

3. Systematically examine submatrices within T. Begin with the

northwest corner qpxqp submatrix and proceed to the right. For

example, when q=2 we will have

[ C_0     C_1 ]   [ C_1  C_2 ]        [ C_{r-1}  C_r     ]
[ C_{-1}  C_0 ],  [ C_0  C_1 ], ...,  [ C_{r-2}  C_{r-1} ].   (54)

Causality requires that n > m. It is therefore not necessary to examine every submatrix within T, but only those which admit causal realizations. The condition

r < 2q - 1   (55)

will insure causality.

will insure causality.


4. Compute the eigenvalues of each submatrix. If p eigenvalues

are found which equal zero, stop the search and go to step 6.

5. If p zero eigenvalues are not found, increment q and go to

step 3.

6. The submatrix with p zero eigenvalues will be

[ C_i      C_{i+1}  ...  C_{2i-j}   ]
[ C_{i-1}  C_i      ...  C_{2i-j-1} ]   (56)
[    .        .              .      ]
[ C_j      C_{j+1}  ...  C_i        ]

with

n = i - j   (57)

and

m = i - 1.   (58)

Having determined the values for m and n, we can find the A_i (1 ≤ i ≤ n) and B_j (1 ≤ j ≤ m) using the results presented in the previous section.
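The six-step search above can be sketched as follows. This is an illustration only: the rank test uses singular values rather than the eigenvalues of step 4 (the submatrices need not be symmetric), and the function name, tolerance, and search bound are assumptions.

```python
import numpy as np

def find_order(C, p, qmax=6, tol=1e-6):
    """Search the block Toeplitz of autocorrelations (equation (53)) for
    the first qp x qp submatrix whose rank drops by p; return (n, m)
    as in equations (57)-(58)."""
    def Cr(r):
        return C[r] if r >= 0 else C[-r].T
    for q in range(2, qmax + 1):
        for s in range(q):                    # slide right; r = s+q-1 < 2q-1
            H = np.block([[Cr(s + b - a) for b in range(q)] for a in range(q)])
            # p (near-)zero singular values <=> rank reduced by p
            if np.sum(np.linalg.svd(H, compute_uv=False) < tol) >= p:
                return q - 1, s - 1           # n = i-j, m = i-1 with i = s
    return None
```

For a scalar AR(1) autocorrelation sequence C_r = (0.6)^r the search stops at the 2x2 submatrix with top-left entry C_1 and returns (n, m) = (1, 0).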

ARMA Model Parameter Identification (Degenerate Case)

The vast majority of literature devoted to the modeling of time

series is concerned with the nondegenerate case. This case requires that

the Toeplitz matrix of autocorrelations is full rank. Some of the

literature dealing with degeneracy of time series is applicable only to the scalar case, but this case is rather trivial since scalar degeneracy implies that the system generating the time series is deterministic. The AR model identified when the singularity occurs is the

deterministic model of the process. The fitting procedure stops because


the next value in the sequence being modeled can be predicted with no

error using the model found.

A more difficult problem is encountered when an attempt is made to

model a degenerate vector time series. Techniques used to make AR models

for vector processes also break down but the reason for failure is not

as obvious. Degeneracy results when the rank of the covariance matrix of

the one-step-ahead prediction error is less than the dimension of the

vector sequence being modeled. The vector Levinson algorithm fails when

this condition is encountered because the inverse of the error

covariance matrix must be computed. A singular error covariance indica-

tes that a deterministic relationship exists between two or more com-

ponents of the vector. Inouye [30] has extended the vector Levinson

algorithm to handle the degenerate case by using the pseudoinverse of

the error covariance when required. With the exception of this modifica-

tion, the algorithm is unaltered. This method does not treat the real

cause of the degeneracy.

In this research we have also developed a procedure which can handle

degenerate time series using the Levinson algorithm. This approach is

more pragmatic in that it takes into account the actual cause of the

degeneracy. This basically involves the identification of the deterministic part of the system, reducing the dimension of the original vector and modeling the reduced dimensional vector sequence in the usual manner.


We begin by writing the AR equation which results after completing N

steps of the Levinson algorithm,

y_k + γ_1 y_{k-1} + ... + γ_N y_{k-N} = e_k.        (59)

From the algorithm we also compute

Π_N = E[e_k e_k'].        (60)


Suppose that det[Π_N] = 0. The time series is therefore degenerate.

Since Π_N is a real symmetric matrix, it can be diagonalized by an orthogonal transformation,

Π_N = M Λ M'        (61)

where Λ = diag( λ_0, λ_1, ..., λ_{p-1} ) and the λ_i (0 &lt;= i &lt;= p-1) are the eigenvalues of Π_N. The columns of M are the normalized eigenvectors of Π_N. Let the eigenvector associated with the zero eigenvalue be denoted by V_0. From equations (60) and (61) we have

V_0' E[e_k e_k'] V_0 = 0        (62)


E[(V_0'e_k)(V_0'e_k)'] = 0  =>  V_0'e_k = 0   for all k.        (63)

Using this result in equation (59) yields

V_0' ( Σ_{i=0..N} γ_i y_{k-i} ) = 0,   γ_0 = I.        (64)


Equation (64) is a deterministic difference equation relating the p com-

ponents of the output. The problem now becomes one of modeling the

remaining stochastic part of the output. This is done by transforming



the original data sequence (or equivalently the original autocorrelation

sequence) using M. Let

M = [ V_1  V_2  ...  V_{p-1} ]        (65)

where V_i (1 &lt;= i &lt;= p-1) are the eigenvectors associated with the nonzero eigenvalues of Π_N. The transformed autocorrelation sequence will be

C~_j = M' C_j M,   j &gt;= 0.        (66)

The C~_j sequence can now be used in the ARMA model identification algorithm described in the previous section. If a singular Π is again

encountered, the process is repeated with a further reduction in dimen-

sion. The important thing to note here is that the dimension of the

stochastic model has been reduced by the identification of a deter-

ministic relationship in the output. After the stochastic model is iden-

tified it is combined with the deterministic model to form a model with

p outputs and less than p inputs.
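The reduction step can be sketched numerically. The fragment below is illustrative only: it uses a synthetic rank-deficient covariance in place of one produced by the Levinson recursion, splits off the null eigenvector V_0 of equation (63), and transforms an autocorrelation matrix as in equation (66).

```python
import numpy as np

# Build a synthetic singular error covariance PI (rank 2, p = 3) to
# stand in for one produced by the Levinson recursion.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
PI = A @ A.T                      # real, symmetric, one zero eigenvalue

lam, vecs = np.linalg.eigh(PI)    # eigenvalues in ascending order
V0 = vecs[:, 0]                   # null eigenvector: V0' e_k = 0, eq. (63)
M = vecs[:, 1:]                   # eigenvectors of the nonzero eigenvalues

# Transform an autocorrelation matrix to the reduced basis, eq. (66).
C1 = rng.standard_normal((3, 3))
C1_reduced = M.T @ C1 @ M

print(abs(lam[0]) < 1e-10)        # the zero eigenvalue
print(C1_reduced.shape)           # (2, 2): dimension reduced by one
```

In the algorithm proper, V_0 carries the deterministic difference equation (64) while the reduced autocorrelations feed back into the stochastic modeling step.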

ARMA Model Identification Algorithm (Degenerate Case)

Before an ARMA model can be constructed using the algorithm pre-

sented earlier, an AR model must be found by solving the normal

equations. If the time series is degenerate, a singular error covariance

matrix will result as discussed in the previous section. The following

algorithm allows one to determine the ARMA model parameters for a dege-

nerate stationary time series.

The algorithm:

1. Set v=p.

2. Use the Levinson algorithm to determine the AR model coefficients and Π_N. Also, at each step compute the eigenvalues and eigenvectors of Π_N.

3. If rank(Π_N) = v, use the nondegenerate ARMA identification algorithm with the values computed for γ_i (1 &lt;= i &lt;= N). Go to step 6.

4. If Π_N is singular, use the eigenvector, V_0, associated with the zero eigenvalue to transform the AR model to the null space of Π_N. Decrement v.

5. Form M from the eigenvectors normal to V_0. Transform the original autocorrelation sequence using M. Go to step 2.

6. The stochastic and deterministic models found are combined

to form an ARMA model with v inputs and p outputs.

Relationship to the Kalman Filter

In this section we discuss the equivalence between the ARMA model

identified and the innovations representation of the Kalman filter.

Given an output autocorrelation sequence, the ARMA model identified has

the same transfer function and driving noise covariance as the innova-

tions model (Kalman filter) of the system which generated the autocorre-

lation sequence.

The innovations model is a Kalman filter which has the zero-mean,

white innovations sequence as input and the measurement sequence as out-

put. The innovations model is sometimes termed an inverted Kalman


filter. The relevant equations and a block diagram are given below:

x_{k+1|k} = F x_{k|k-1} + K ν_k        (67)

y_k = H x_{k|k-1} + ν_k        (68)

Figure 1. Innovations model

where K is the Kalman gain matrix, F and H have the usual meanings, x_{k|k-1} is the one-step-ahead state estimate and ν_k is the innovations sequence. The transfer function of the innovations

model is

H_K(z) = I + H(zI - F)^-1 K.        (69)


The ARMA model given by equation (2) has the transfer function

H_A(z) = A^-1(z) B(z).        (70)

We observe that the ARMA model with transfer function given by equation (70) is equivalent to the inverted Kalman filter with transfer function given by equation (69). Furthermore, E[e_k e_k'] = E[ν_k ν_k']. These

observations are based on the following facts:


1. The innovations model and the AR model both solve the one-

step-ahead prediction problem with minimum error variance.

2. The minimum variance estimate is unique.

3. The ARMA model is equivalent to the AR model.

By using the orthogonality principle, the Levinson solution yields an estimate, y_{k|k-1}, which is the minimum variance estimate of y_k given the sequence y_{k-1}, y_{k-2}, ..., y_0. That is,

y_{k|k-1} = E[ y_k | Y_{k-1} ]        (71)

where

E[ y_k | Y_{k-1} ] = -Σ_{i=1..N} γ_i y_{k-i}        (72)

and Y_{k-1} denotes the sequence y_{k-1}, y_{k-2}, ..., y_0.

The minimized mean-square error of the ARMA model output is

Π = E[ (y_k - y_{k|k-1})(y_k - y_{k|k-1})' ] = E[e_k e_k'].        (73)

The Kalman filter innovations sequence is given by

ν_k = y_k - y_{k|k-1}.        (74)

Since the minimum variance estimate is unique we conclude

E[e_k e_k'] = E[ν_k ν_k'].        (75)

Also,

E[e_k e_k'] = HΣH' + R        (76)

where Σ is the steady-state solution to the discrete-time Riccati equation.

The ARMA model found using the identification method presented

earlier is equivalent to the AR model found using the Levinson

algorithm. The systems described by equations (59) and (67) are linear systems driven by white noise processes. The output sequences of the two

systems with equivalent white noise inputs are identical. We conclude

that HA(z) = HK(z).

This result is significant because, unlike the inverted Kalman

filter, the ARMA model does not require the solution to a Riccati type

equation. All equations necessary to find HA(z) are linear and have ana-

lytic solutions.

The Multichannel Burg Algorithm

In practice, when the Levinson algorithm is used in conjunction with

a finite length time series, estimates of the lagged autocorrelations

must be used. These estimates are frequently obtained from

C_k = (1/N) Σ_i y_{k+i} y_i',   k = 0,1,2,...,N-1        (77)

which yields a biased estimate of the autocorrelation but is attractive in that T &gt;= 0, where T is the block Toeplitz matrix of autocorrelations given by equation (53). The unbiased autocorrelation estimate, obtained by replacing N with N-k in the denominator of equation (77), sometimes results in a T which is not positive semidefinite. This result is, of course, undesirable. Methods which rely on these lagged autocorrelation estimates are referred to as Yule-Walker estimation [31].
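The positive semidefiniteness property can be checked numerically. The sketch below is illustrative (the data and names are invented): it forms both lag estimates for a short scalar series and compares the smallest eigenvalue of the resulting Toeplitz matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(32)                  # scalar series, N = 32
N, K = len(y), 8

def lag_estimate(k, denom):
    # equation (77) with denominator N (biased) or N-k (unbiased)
    return np.dot(y[k:], y[:N - k]) / denom

def toeplitz_min_eig(c):
    T = np.array([[c[abs(r - s)] for s in range(K)] for r in range(K)])
    return np.linalg.eigvalsh(T).min()

biased = [lag_estimate(k, N) for k in range(K)]
unbiased = [lag_estimate(k, N - k) for k in range(K)]

print(toeplitz_min_eig(biased) >= -1e-12)    # biased estimate: T >= 0
print(toeplitz_min_eig(unbiased))            # can go negative for some data
```

The biased sequence always yields T &gt;= 0 because T is then, up to scale, a Gram matrix of shifted zero-padded data vectors; the unbiased sequence carries no such guarantee.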


An autoregressive modeling method which has received considerable

attention in recent years is the Burg reflection coefficient technique

[31, 32, 33, 34]. The original work, which covered the single-channel

complex case, was done by Burg [35, 36]. Strand [31] generalized the

method to include multichannel complex time series. The Burg process

avoids the problems of autocorrelation estimation by determining the AR

coefficients directly from the data. This is a major advantage of the

reflection coefficient method over Yule-Walker estimation. An inter-

mediate step in the procedure involves the calculation of the reflection

coefficients, from which the AR coefficients can be found recursively.

A brief description of the multichannel Burg algorithm is presented

here for convenience.

We begin by writing the N-element forward filter,

y_k + γ_{1,N} y_{k-1} + ... + γ_{N,N} y_{k-N} = e_k        (78)

and likewise the N-element backward filter,

y_k + β_{1,N} y_{k+1} + ... + β_{N,N} y_{k+N} = b_k        (79)

where y_k (k &gt;= 0) is a p-dimensional vector-valued time series, γ_{i,N} and β_{i,N} are the forward and backward p x p matrix coefficients, respectively, and e_k and b_k are the forward and backward zero-mean white noise prediction errors.


Burg derived the following recursion for the forward and backward

filter coefficients:

F_N = [F_{N-1}  0] + R_N [0  B*_{N-1}]        (80)

B_N = R*_N [F_{N-1}  0] + [0  B*_{N-1}]        (81)

where

F_{N-1} = [ I  γ_{1,N-1}  ...  γ_{N-1,N-1} ]        (82)

B*_{N-1} = [ β_{N-1,N-1}  ...  β_{1,N-1}  I ]        (83)

and

F_0 = B*_0 = I.        (84)

In this notation * denotes time reversal. The matrices R_N and R*_N are the forward and backward reflection coefficients, respectively. It should be noted here that γ_{N,N} = R_N and β_{N,N} = R*_N. It is generally the case that the reflection coefficients are chosen which yield optimal (in the minimum mean-square-error sense) forward filters. A derivation of

the RN which minimizes the expected mean-square-error of the forward

filter is presented later in this section.

Using equations (78) through (81) it can be shown that

Π_N = Π_{N-1} - R_N Γ_{N-1} R_N'        (85)

Γ_N = Γ_{N-1} - R*_N Π_{N-1} (R*_N)'        (86)

where Π_N and Γ_N are the forward and backward prediction error covariance matrices, respectively.

If we post-multiply equations (78) and (79) by Y'k-N and y'k,

respectively, and note that for an N-element backward filter,

E[b_k y_k'] = Γ_N        (87)

then we can show, using equations (80) and (81),

C_N + γ_{1,N-1} C_{N-1} + ... + γ_{N-1,N-1} C_1 + R_N Γ_{N-1} = 0        (88)

where CN is defined by equation (45). Likewise we can obtain

C'_N + β_{1,N-1} C'_{N-1} + ... + β_{N-1,N-1} C'_1 + R*_N Π_{N-1} = 0        (89)

by using

E[e_k y_k'] = Π_N        (90)

for an N-element forward filter. By defining

Δ_N = C_N + Σ_{i=1..N-1} γ_{i,N-1} C_{N-i}        (91)

Δ*_N = C'_N + Σ_{i=1..N-1} β_{i,N-1} C'_{N-i}        (92)

and using a result attributed to Burg [35], namely that

Δ_N' = Δ*_N        (93)

we obtain the expression for the backward reflection coefficient,

R*_N = Γ_{N-1} R_N' (Π_{N-1})^-1.        (94)

Define the (N+1)p x 1 vector

y_m(N) = [ y'_{m+N}  y'_{m+N-1}  ...  y'_m ]',   m = 1,2,...,M        (95)

where N_d is the number of data points in the time series and M = N_d - N.

Using equations (78) through (81) we can write the forward filter and

backward filter mth errors as

u_m = [ (F_{N-1}  0) + R_N (0  B*_{N-1}) ] y_m(N)        (96)

v_m = [ R*_N (F_{N-1}  0) + (0  B*_{N-1}) ] y_m(N)        (97)



Equations (96) and (97) can be written

u_m = e_m(N) + R_N b_m(N)        (98)

v_m = b_m(N) + R*_N e_m(N)        (99)

where

e_m(N) = (F_{N-1}  0) y_m(N)        (100)

b_m(N) = (0  B*_{N-1}) y_m(N)        (101)

are the forward and backward residual sequences, respectively.

The objective now is to find the forward reflection coefficient

which minimizes the trace of the forward filter mean-square-error.



The quantity to minimize is

J(R_N) = (1/M) Σ_{m=1..M} u_m' u_m        (102)

which, using equation (98), can be written

J(R_N) = (1/M) Σ_{m=1..M} [e_m(N) + R_N b_m(N)]' [e_m(N) + R_N b_m(N)].        (103)

Using the identities x'y=tr(x'y)=tr(yx'), tr(AB)=tr(BA) and tr(A')=tr(A)

for vectors x and y, square matrices A and B and where tr is the trace

operator, we can write

J(R_N) = tr(E) + 2 tr(R_N G) + tr(R_N B R_N')        (104)

where

E = (1/M) Σ_m e_m(N) e_m'(N)        (105)

G = (1/M) Σ_m b_m(N) e_m'(N)        (106)

and

B = (1/M) Σ_m b_m(N) b_m'(N).        (107)

To find the R_N which minimizes J(R_N), consider any two general real square matrices X and Y. Using equation (104) we can write

J(Y) - J(X) = 2 tr[(Y-X)G] + tr(YBY' - XBX').        (108)

Now note that

tr[(Y-X)B(Y-X)'] = tr(YBY' + XBX' - XBY' - YBX')        (109)

so that, since B is symmetric,

tr(YBY' - XBX') = tr[(Y-X)B(Y-X)'] + 2 tr[(Y-X)BX'].        (110)

Using equations (109) and (110) we can write

J(Y) - J(X) = 2 tr[S(G + BX')] + tr(SBS')        (111)

where

S = Y - X.        (112)


A necessary and sufficient condition for X to minimize J(X) is

G + BX' = 0.        (113)

For sufficiency note that if G + BX' = 0 then, since tr(SBS') &gt;= 0, J(Y) &gt;= J(X) for any choice of Y. Now consider the case G + BX' != 0. We can choose

Y = X - k(G + BX')        (114)


where k is a real positive constant. Substituting this expression for Y into equation (111) results in

J(Y) - J(X) = k^2 tr[ (G+BX') B (G+BX')' ] - 2k tr[ (G+BX')^2 ]        (115)

or

J(Y) - J(X) = k^2 q_1 - k q_2        (116)

where

q_1 = tr[ (G+BX') B (G+BX')' ]        (117)

and

q_2 = 2 tr[ (G+BX')^2 ].        (118)

Since G + BX' != 0 we have q_1 &gt; 0 and q_2 &gt; 0. Therefore, if k is chosen so that 0 &lt; k &lt; q_2/q_1, then

J(Y) - J(X) &lt; 0        (119)

and X does not minimize J(X). Thus, X minimizes J(X) iff G + BX' = 0.

If we substitute R_N for X in equation (113) we obtain

G + B R_N' = 0        (120)

which yields

R_N = -G'B^-1,        (121)

the R_N which minimizes J(R_N).
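This result is easy to check numerically. The fragment below is an illustrative sketch with synthetic residuals (all names are invented): it forms E, G and B as in (105)-(107), takes R_N = -G'B^{-1}, and confirms both the optimality condition (113) and that perturbing R_N raises the criterion (104).

```python
import numpy as np

rng = np.random.default_rng(2)
p, M = 2, 200
e = rng.standard_normal((M, p))        # forward residuals e_m(N)
b = rng.standard_normal((M, p))        # backward residuals b_m(N)

E = e.T @ e / M                        # (105)
G = b.T @ e / M                        # (106): (1/M) sum of b_m e_m'
B = b.T @ b / M                        # (107)

def J(R):                              # criterion (102): mean of u_m' u_m
    u = e + b @ R.T                    # u_m = e_m + R b_m, equation (98)
    return np.sum(u * u) / M

R = -G.T @ np.linalg.inv(B)            # equation (121)
print(np.allclose(G + B @ R.T, 0))     # optimality condition (113)
print(J(R) < J(R + 0.01))              # any perturbation raises J
```

Because B is positive definite for generic data, the minimizer is unique, which is what makes the second check strict.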

The residuals are updated by

e_m(N) = e_{m+1}(N-1) + R_{N-1} b_{m+1}(N-1)        (122)

b_m(N) = b_m(N-1) + R*_{N-1} e_m(N-1),   m = 1,2,...,M.        (123)

Algorithm initialization is accomplished by setting

e_m(1) = y_{m+1},   m = 1,2,...,N_d - 1        (124)

b_m(1) = y_m        (125)

Π_0 = Γ_0 = (1/N_d) Σ_{m=1..N_d} y_m y_m'.        (126)











It is interesting to observe that the Levinson algorithm and the

Burg algorithm produce the same model. When equations (124) and (125)

are combined with equations (106) and (107), we note that B and G' are

the biased estimates of the zeroth lag and first lag autocorrelations,

respectively. For N = 1, equation (121) becomes

R_1 = -C_1 C_0^-1.        (127)

This is also the expression for the first AR coefficient using the

Levinson algorithm. The reflection coefficient found using the Levinson

algorithm is

γ_{k+1,k+1} = -Δ_k (Γ_k)^-1,   k &gt;= 1        (128)

where

Δ_k = E[ (y_{k+1} - y_{k+1|k})(y_0 - y_{0|k})' ]        (129)

and

Γ_k = E[ (y_0 - y_{0|k})(y_0 - y_{0|k})' ].        (130)

Equation (129) is the cross-covariance of the forward and backward

residuals. This is the same quantity as G' (for N>1), where G is given

by equation (106). Likewise, Γ_k corresponds to B (for N &gt; 1) from equation

(107). We conclude that the Burg algorithm produces the same realization

as the Levinson algorithm.
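This equivalence can be illustrated on data. The sketch below is not from the dissertation: it assumes a scalar AR(1) process, computes the order-1 reflection coefficient from the residual initialization (124)-(125) via (121), and compares it with the biased Yule-Walker estimate -C_1 C_0^{-1} of equation (127); the two agree up to end effects in the sums.

```python
import numpy as np

# Generate a scalar AR(1) series y_k = 0.6 y_{k-1} + w_k.
rng = np.random.default_rng(3)
a, Nd = 0.6, 20000
w = rng.standard_normal(Nd)
y = np.zeros(Nd)
for k in range(1, Nd):
    y[k] = a * y[k - 1] + w[k]

# Burg step at N = 1 with the initialization (124)-(125).
e1, b1 = y[1:], y[:-1]                 # e_m(1) = y_{m+1}, b_m(1) = y_m
G = np.mean(b1 * e1)                   # scalar version of (106)
B = np.mean(b1 * b1)                   # scalar version of (107)
R1_burg = -G / B                       # equation (121)

# Levinson first coefficient from biased autocorrelation estimates, (127).
C0 = np.dot(y, y) / Nd
C1 = np.dot(y[1:], y[:-1]) / Nd
R1_levinson = -C1 / C0

print(abs(R1_burg - R1_levinson) < 0.01)   # same realization, up to end effects
print(round(R1_burg, 1))                   # near -a = -0.6
```

The numerators of the two ratios are identical sums; only the last data point of the series is missing from B's denominator, which is why the agreement tightens as N_d grows.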

The Multichannel Burg Algorithm (Degenerate Case)

The multichannel Burg algorithm is subject to the same problems

exhibited by other identification techniques when degenerate time series


modeling is attempted. As before, we find that det[Π_N] = 0 and the

algorithm fails. The solution for the Burg algorithm is the same as

discussed previously; identify the deterministic relationship in the

data and reduce the dimension of the output vector. This requires a

modification to the algorithm to handle the possibility that a dege-

nerate time series is encountered. The modified Burg algorithm is pre-

sented below:

1. Set N=l.

2. Initialize using equations (124),(125) and (126).

3. Compute E, B and G using equations (105), (106) and (107).

4. Solve equation (121) for R_N and compute R*_N from equation (94).

5. Compute Π_N and Γ_N using equations (85) and (86).

6. Compute the eigenvalues and associated normalized eigenvectors of Π_N. If det[Π_N] = 0, go to step 11.

7. Update FN and BN using equations (80) and (81).

8. Increment N.

9. Update em(N) and b,(N) using equations (122) and (123).

10. Go to step 3.

11. Transform the Nth order AR model using V_0, the eigenvector associated with the zero eigenvalue of Π_N. The result is

V_0' ( y_k + γ_{1,N} y_{k-1} + ... + γ_{N,N} y_{k-N} ) = 0.        (131)

This is the deterministic relationship in the output.

12. Transform the output vector to a new basis using M. As before, the columns of M are the eigenvectors of Π_N normal to V_0. The reduced dimension output vector is

y_m(new) = M' y_m,   m = 1,2,...,N_d.        (132)


Of course, all subsequent output data obtained are also

transformed to the new basis.

13. Using ym(new), go to step 1.

When the stochastic portion of the time series has been identified,

it is combined with the deterministic part to form a complete model of

the system.

Numerical Example

In this section a numerical example is presented which illustrates

the major results discussed in previous sections.

Consider a dynamical system described by the vector difference equation

x_{k+1} = F x_k + w_k        (133)

with measurements given by

y_k = H x_k + v_k        (134)

where x_k is a q x 1 state vector, y_k is a p x 1 output vector, w_k and v_k are vector sequences of Gaussian white noise and F and H are matrices of appropriate dimension. The output autocorrelation sequence is given by

C_i = H F^i P H' + R δ_{0i},   i &gt;= 0        (135)

where P is the steady-state solution to the discrete-time Lyapunov equation

P_{k+1} = F P_k F' + Q        (136)

and

E[w_j w_k'] = Q δ_{jk}        (137)

E[v_j v_k'] = R δ_{jk}.        (138)

For this example let

Q = diag( 10.0  ...  0.0  0.0 ),

R = null matrix.

The steady-state solution to equation (136) is P.

The resulting output autocorrelation matrices at the zeroth and first lags are C_0 and C_1 (equations (144) and (145)); their leading entries are 448.902 and 451.145, respectively. Initial estimates of γ_1 and Π_1, the error covariance matrix, are given by

γ_1 = -C_1 C_0^-1        (146)

Π_1 = C_0 - C_1 C_0^-1 C_1'.        (147)


These values are used to initialize the Levinson recursion. Substituting equations (144) and (145) into equations (146) and (147) yields γ_1 (equation (148), one entry of which is -4.6693) and Π_1 (equation (149)), which has eigenvalues

e_0 = 0.0        (150)

e_1 = 9.3802        (151)

e_2 = 90.670.        (152)

Since Π_1 is singular we have a degenerate time series. The normalized eigenvector of Π_1 associated with e_0 is

V_0 = ( 0.7071  0.0  -0.7071 )'        (153)

and the matrix M⊥ is

       | -0.3563  -0.6108 |
M⊥ =   |  0.8638  -0.5038 |        (154)
       | -0.3563  -0.6108 |

We are now ready to identify the deterministic relationship in the output and transform the output to a new basis. Referring to equation (64) we can write

V_0' ( y_k + γ_1 y_{k-1} ) = 0        (155)

from which we obtain

y_k(3) = y_k(1) + y_{k-1}(1).        (156)

Equation (156) describes the deterministic relationship which resulted in the singularity of Π_1. We obtain the reduced dimension autocorrelation sequence by the following transformation:

C~_j = M⊥' C_j M⊥,   j &gt;= 0.        (157)


We find that the C~_j (j &gt;= 0) sequence is generated by a non-degenerate time series. Using the algorithm for non-degenerate ARMA models, we find that the stochastic portion of the system can be described by an ARMA(2,1) model. The coefficients found are the 2 x 2 matrices A_1, A_2 and B_1 (equations (158)-(160)), with leading entries -2.5635, -0.2265 and -2.8504, respectively.

The error covariance matrix found is

Π = |  8.8080   6.5283 |
    |  6.5283  16.1920 |        (161)

The ARMA model transfer function is

H_A(z) = A^-1(z) B(z)        (162)

where

A(z) = I + A_1 z^-1 + A_2 z^-2        (163)

B(z) = I + B_1 z^-1.        (164)

Carrying out the operations called for in equation (162) yields

H_A(z) = (1/d(z)) | h11(z)  h12(z) |
                  | h21(z)  h22(z) |        (165)

where

d(z) = 1.0 - 1.5z^-1 + 0.99z^-2 - 0.345z^-3 + 0.05z^-4

h11(z) = 1.0 - 2.9327z^-1 + 5.5983z^-2 + 0.4347z^-3        (166)

h12(z) = -1.899z^-1 + 0.4123z^-2 + 0.0381z^-3        (167)

h21(z) = -0.2869z^-1 - 4.4101z^-2 + 0.7452z^-3        (168)

h22(z) = 1.0 + 0.9327z^-1 - 0.5780z^-2 + 0.0653z^-3        (169)

The poles of the system are found to be

z_1 = 0.3 + j0.4        (170)

z_2 = 0.3 - j0.4        (171)

z_3 = 0.4        (172)

z_4 = 0.5.        (173)

The ARMA model identified has two inputs and three outputs. The intermediate vector is u(z) = M⊥' y(z). A complete description of the system is displayed in figure 2.







Figure 2. Block diagram of system in numerical example.

The equivalent innovations model for the system can be found which

has the transfer function,

H_K(z) = I + H(zI - F)^-1 K.        (174)

To find H_K(z) we must compute K, the steady-state Kalman gain matrix. This matrix is found from

K_k = F Σ_{k|k-1} H' (H Σ_{k|k-1} H' + R)^-1        (175)

where Σ_{k|k-1} is found by solving the discrete-time Riccati equation,

Σ_{k+1|k} = F[ Σ_{k|k-1} - Σ_{k|k-1} H' (H Σ_{k|k-1} H' + R)^-1 H Σ_{k|k-1} ] F' + Q.        (176)

Initializing equation (176) with

Σ_{0|-1} = P        (177)

results in

det[ H Σ_{1|0} H' + R ] = 0.        (178)

This result is expected since Π_1, from equation (147), is equivalent to H Σ_{1|0} H' + R. To see this set k = 0 and Σ_{0|-1} = P in equation (176).

Rewriting equation (176) we obtain

Σ_{1|0} = FPF' + Q - FPH'(HPH' + R)^-1 HPF'.        (179)

Recognizing that in steady-state

P = FPF' + Q        (180)

we have

Σ_{1|0} = P - FPH'(HPH' + R)^-1 HPF'        (181)

so that

H Σ_{1|0} H' + R = HPH' + R - HFPH'(HPH' + R)^-1 HPF'H'.        (182)

Using equation (135) in conjunction with equation (182) we find

H Σ_{1|0} H' + R = C_0 - C_1 C_0^-1 C_1'        (183)

and from equation (147),

H Σ_{1|0} H' + R = Π_1.        (184)

Since we have defined R to be the null matrix we have

H Σ_{1|0} H' = Π_1.        (185)


Of course the eigenvalues and eigenvectors of H Σ_{1|0} H' are the same as those of Π_1. If we make the same transformation that we did in the ARMA modeling procedure, i.e., transform to the null space of H Σ_{1|0} H', we obtain

V_0' [ H Σ_{1|0} H' ] V_0 = 0        (186)

which is equivalent to

E[ (V_0' ν_1)(V_0' ν_1)' ] = 0.        (187)

Equation (187) implies

V_0' ν_1 = 0.        (188)


The Kalman filter equations can be manipulated in a way such that

ν_{k+1} = y_{k+1} - H K_k y_k - H(F - K_k H) x_{k|k-1}.        (189)

If we set k = 0 and x_{0|-1} = x_0 = 0 in equation (189) we obtain

ν_1 = y_1 - H K_0 y_0.        (190)

Combining equations (188) and (190) we can write

V_0' ( y_1 - H K_0 y_0 ) = 0.        (191)

This result is identical to equation (155) if

γ_1 = -H K_0.        (192)

This equivalence is easily obtained from equations (135), (146), (175) and (177).
For this specific example we find

         | -1.4393   0.2609  -0.2225 |
H K_0 =  | -4.6693   0.7609  -0.2725 |        (193)
         | -0.4393   0.2609  -0.2225 |

which checks with equation (148). Substituting equation (193) into equation (191) and using V_0 as defined in equation (153) yields

y_1(3) = y_1(1) + y_0(1)        (194)

which agrees with equation (156) for k = 1.

To complete the transformation of the output to a new basis we write

y_new = M⊥' y        (195)

or

y_new = M⊥' H x.        (196)

By defining

H_new = M⊥' H        (197)

and using H_new to obtain the steady-state solution to equation (176), we find

Σ = diag( 10.0  5.0  0.0  0.0 ).        (198)

From equations (197) and (198) we compute

H_new Σ H_new' = |  8.8080   6.5283 |
                 |  6.5283  16.1920 |        (199)

and note that, using equation (161),

Π = H_new Σ H_new'.        (200)

We can now use Σ to compute the steady-state Kalman gain matrix and substitute H_new for H in equation (174) to find the transfer function for the innovations model. Doing so we find that

H_K(z) = H_A(z)        (201)


where HA(z) is given by equations (165)-(169).


Through this numerical example the following major results have been demonstrated:

(1) the modeling of degenerate time series,

(2) ARMA model identification for vector-valued time series, and

(3) the equivalence between the ARMA model found and the innovations model of the system.


The Problem

In most missile-target engagements involving very agile targets, the missile is by necessity steered or guided to the target with the aid of a tracking system. A target tracker located on the missile is called a seeker. In many surface-to-air defense systems, the tracker is located on the ground and missile and target inertial position are maintained by the tracker. In either case, the function of the tracker is to provide

relative position information (angles, and in some cases, range and

range rate) to the missile guidance system so steering commands can be

issued to the control system. The control system then commands aerodyna-

mic control surface deflections to maneuver the missile onto a collision

course with the target.

Many guidance schemes developed using modern optimization techniques

rely on estimates of future target states to be effective [37]. This

naturally leads to the utilization of a tracking filter. At present,

tracking filters use target dynamic models which have been devised

assuming a priori statistical knowledge of the target's behavior. These

models work well on the average but show severe deficiencies in certain

specific cases. The research presented here involves the use of sta-

tistical modeling techniques developed in the preceding chapter to

synthesize a target dynamic model. This idea is novel in that the model



can be made to adapt to changes in the target's trajectory by utilizing

a time history of past measured kinematic variables. The measurements

come, quite naturally, from the tracker.

Engagement Geometry

The definition of the coordinate frame in which the missile-target

engagement occurs is a rather important aspect of the problem

formulation. The coordinate frame chosen dictates the kinematic

variables which describe the target's motion and also defines the frame

of reference for the measurements made by the tracker.

The target trajectory is described by a time-varying range vector in

three-space. The range magnitude is defined relative to the origin of

an inertial reference frame. Inertial, in this case, means neither

translating nor rotating with respect to a point fixed on the earth's

surface. Three mutually perpendicular unit vectors, i,j and k, which are

colinear with the inertial x-axis, y-axis and z-axis, respectively, are

used to establish the coordinate system. Henceforth, underlined quan-

tities are vectors. The x-y plane is tangent to the earth's surface at

the origin of the inertial frame. The z-axis is directed away from the

earth's surface and completes the right-handed triad.

Orientation of the range vector is determined by two angles: azimuth

and elevation. Azimuth is the angle between the inertial x-axis and the

projection of the range vector onto the x-y plane. Elevation is the

angle between the range vector and the range vector projected onto the

x-y plane.


Define

R_t = x_t i + y_t j + z_t k        (202)

as the target range vector in inertial space. Then we can write

R_txy = ( x_t^2 + y_t^2 )^1/2        (203)

a_t = tan^-1 ( y_t / x_t )        (204)

and

ε_t = tan^-1 ( z_t / R_txy )        (205)

where R_txy is the magnitude of the projection of R_t onto the inertial x-y plane, a_t is the azimuth angle and ε_t is the elevation angle. These

quantities are shown in figure 3.




Figure 3. Target trajectory
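Equations (202)-(205) are easy to exercise numerically. The short sketch below is illustrative only; it uses the initial target position quoted later in equation (207) and recovers the familiar 30 and 60 degree angles.

```python
import math

# Target inertial position, metres (the initial condition of eq. (207)).
xt, yt, zt = 1299.0, 750.0, 2598.0

Rtxy = math.hypot(xt, yt)                       # equation (203)
azimuth = math.degrees(math.atan2(yt, xt))      # equation (204)
elevation = math.degrees(math.atan2(zt, Rtxy))  # equation (205)

print(round(Rtxy), round(azimuth), round(elevation))   # -> 1500 30 60
```

Using atan2 rather than a bare arctangent keeps the azimuth in the correct quadrant when x_t is negative.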


Trajectory Generation

To illustrate the use of the modeling technique in the tracking

problem, a maneuvering target trajectory was generated using elementary

kinematic equations. For simplicity we assume a point-mass target and

therefore model only the motion of the center-of-gravity.

Aircraft flight can be categorized as (1) non-accelerating or (2)

accelerating. The resulting trajectory in each case is quite different.

In non-accelerating flight we obtain a trajectory which is a straight

line in inertial space. If the target is accelerating, as in the case of

a target performing evasive maneuvers, two effects are observed. The

component of acceleration along the velocity vector increases the target

speed (magnitude of velocity) and the component normal to the velocity

vector rotates the velocity vector relative to the inertial frame, thus

changing the flight path. The result is a circular arc in inertial space

for constant speed targets. The turning rate is a function of speed and

acceleration magnitude and can be calculated from

ω = a_t / V_t        (206)

where a_t is the magnitude of the acceleration component normal to the velocity vector, V_t is the target's speed and ω is the turning rate of the velocity vector relative to the inertial frame of reference. At this

point we assume that the target acceleration vector is normal to the

velocity vector. This is a good assumption for aircraft targets and also

simplifies the kinematics.


For this illustration, consider a trajectory composed of segments of

non-accelerating and accelerating flight. Furthermore, assume that the

non-accelerating portions are at constant altitude (z component of posi-

tion in the inertial frame). The accelerating portion is in a maneuver

plane defined by the velocity and acceleration vectors. In general the

maneuver plane is skewed to the principal inertial planes. The flight

segments are defined as

non-accelerating     0.0 &lt;= t &lt; 10.0
accelerating        10.0 &lt;= t &lt; 31.5
non-accelerating    31.5 &lt;= t

where t is time in seconds. Transition from one flight segment to the

next occurs instantaneously. Target initial position and velocity are

(R_t)_0 = ( 1299 i + 750 j + 2598 k ) m.        (207)

and

(V_t)_0 = ( 150 i + 259.8 j ) m/sec.        (208)

At t = 10.0 seconds the aircraft executes a turn in a maneuver plane inclined 30 degrees to the inertial x-y plane. The aircraft completes the 360 degree turn in 21.5 seconds. This corresponds to an acceleration of 87.672 m/sec^2 (8.9 g's). The trajectory is shown in figures 4 through 7.
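The quoted acceleration can be reproduced from equation (206). The following check is illustrative code, using the initial speed implied by (208) and the 21.5-second full turn.

```python
import math

Vt = math.hypot(150.0, 259.8)     # target speed from (208), ~300 m/s
omega = 2 * math.pi / 21.5        # turn rate: 360 degrees in 21.5 s
at = omega * Vt                   # equation (206) solved for a_t

print(round(Vt))                  # -> 300
print(round(at, 1))               # -> 87.7, matching 87.672 m/s^2
print(round(at / 9.81, 1))        # -> 8.9 g's
```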

Target Dynamic Model Synthesis (Off-line)

A statistical model of the target trajectory presented in the pre-

ceding section will be developed. We begin by noting that the measure-

ments made by the tracker can be used to create a time history of target


inertial position. The transformation is relatively simple for trackers

with fixed inertial position. However, for missile-borne seekers the

transformation is more complicated since missile attitude and position

relative to the inertial frame must be taken into account. These quan-

tities are measured by an inertial navigation system (INS) onboard the

missile. For the present example let us assume that we have available a

time series of target inertial position. The position vector is three-

dimensional with components xt, Yt and zt. Let us further assume that we

have the entire time series available at the outset. This will allow us






Figure 4. Target trajectory in the inertial x-y plane


to develop a dynamic model off-line. Of course, off-line modeling is not

practical for real-time tracking but is useful when investigating model

characteristics such as model order, error covariances and coefficient

values. Later in this chapter an on-line modeling technique will be

demonstrated. Finally, for this example we assume that the measurements

are deterministic. This means that the errors in prediction are due to

modeling errors and not measurement errors. The error covariance iden-

tified will therefore approximate the process noise covariance.



Figure 5. Target position versus time (x-axis)


Since the target motion is planar we can describe the trajectory in

a new basis in which one component of the position vector is constant.

That is, one axis of the new coordinate frame will be normal to the

maneuver plane. If we denote the position vector components in the new basis with a *, for example x_t*, and let the z-axis of the new basis be normal to the maneuver plane, then, since (z_t*)_k is constant for all k &gt;= 0, we can write

(1 - z^-1) z_t*(z) = 0.        (209)






Figure 6. Target position versus time (y-axis)




But (z_t*)_k can also be written as a linear combination of the position vector components in the original basis,

z_t*(z) = V_0' R_t(z).        (210)

Combining equations (209) and (210) yields

(1 - z^-1) V_0' R_t(z) = 0.        (211)

Therefore, we anticipate finding that the position data forms a dege-

nerate time series and furthermore the degeneracy occurs in the first

difference of the transformed data. Interpreted physically this means








Figure 7. Target position versus time (z-axis)


that the velocity component normal to the maneuver plane is zero.

Equation (211) can be found using the ARMA modeling algorithm for dege-

nerate time series if we assume we can precisely, or at least with

insignificant error, identify the model for a constant.

Let us consider the problem of modeling a constant scalar using the

autocorrelation method with biased estimates of the autocorrelation

sequence. We find that

C_0 = K^2        (212)

C_1 = (N-1) K^2 / N        (213)

and

γ_1 = -(N-1)/N        (214)

where K is the constant we wish to model. Furthermore, the error covariance is given by

Π = (2N-1) K^2 / N^2.        (215)

For the trajectory under consideration here K = 3 km (z component of

position in the new basis) and N = 161 for a sample rate of 4 Hz. Using

these values in equations (214) and (215) yields

γ1 = -0.99379 (216)

H = 1.11454 × 10^5 m^2. (217)

Ideally we should get γ1 = -1.0 and H = 0. Note that the expressions for γ1 and H given by equations (214) and (215) converge to the true values as N → ∞. However, the error associated with using a finite length data sequence is excessive.
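The numbers in equations (216) and (217) can be checked directly from equations (212) through (215). A minimal sketch, using the values given in the text:

```python
# Check of equations (212)-(217): biased autocorrelation estimates for a
# constant scalar sequence of value K observed for N samples.
K = 3000.0   # the constant, in meters (3 km)
N = 161      # samples (40 seconds at 4 Hz, counting the initial point)

c0 = K**2                  # equation (212)
c1 = (N - 1) * K**2 / N    # equation (213)
gamma1 = -c1 / c0          # equation (214): -(N-1)/N, about -0.993789
H = c0 + gamma1 * c1       # equation (215): (2N-1)K^2/N^2, about 111454.03 m^2
```

As N grows, gamma1 tends to -1 and H to 0, matching the limiting behavior noted above.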

Box and Jenkins [26] suggest a technique to handle nonstationary situations such as this. We place a pole at z=1 by differencing the original data sequence and then model the difference data in the usual fashion. Thus, we obtain the new sequence,

Δ(Rt)k = (Rt)k - (Rt)k-1, k>0. (218)

The model of the original data can easily be constructed from the model

of the difference data.

The time series now consists of 160 vectors sampled at 4 Hz. The

biased sample autocorrelation matrices for the zeroth through nineteenth lag were computed. We find that C0 is singular. This means that a linear combination of the components of Δ(Rt)k is zero. Since the time series used to compute C0 is the first difference of the original time series, we can write

(1 - z^-1) V0' Rt(z) = 0 (219)

where V0 is the eigenvector associated with the zero eigenvalue of C0.

Comparing equations (211) and (219) we see that we have found the deter-

ministic relationship that we expected to discover earlier.

The components of V0 are the direction cosines of the unit vector

normal to the maneuver plane. This normal unit vector uniquely defines

the plane of the maneuver in the original coordinate frame. The fact

that we can determine the maneuver plane by processing position measure-

ments in the fashion described here is significant.
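The computation just described is easy to sketch. The example below builds a synthetic planar trajectory whose plane normal is the V0 of equation (220) (the radius, center and turn rate are illustrative values, not the thesis data), differences it, and recovers the normal as the null eigenvector of C0:

```python
import numpy as np

# Sketch: recover the maneuver-plane normal as the null eigenvector of C0,
# the zero-lag autocorrelation matrix of the differenced position data.
n = np.array([0.4330, -0.2500, 0.8660])   # plane normal, the V0 of eq (220)
n /= np.linalg.norm(n)
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n, u)                        # u, v span the maneuver plane

T, omega, r = 0.25, 0.5, 3000.0           # 4 Hz sampling, turn rate, radius
k = np.arange(161)
center = np.array([10000.0, 5000.0, 3000.0])
pos = (center
       + r * np.outer(np.cos(omega * k * T), u)
       + r * np.outer(np.sin(omega * k * T), v))

d = np.diff(pos, axis=0)                  # first difference, equation (218)
C0 = d.T @ d / len(d)                     # biased zero-lag autocorrelation
evals, evecs = np.linalg.eigh(C0)         # eigenvalues in ascending order
V0 = evecs[:, 0]                          # direction with zero velocity component
```

Because the difference vectors lie exactly in the maneuver plane, the smallest eigenvalue is zero to machine precision and its eigenvector reproduces the plane normal (up to sign).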


Having found the deterministic relationship in the data, we can proceed with the modeling process by transforming the autocorrelation sequence to the new basis using M1. For this example we find

V0' = ( 0.4330 -0.2500 0.8660 ) (220)

M1' = [ -0.5000 -0.8660  0.0000 ] (221)
      [  0.7500 -0.4330 -0.5000 ]



When the transformed autocorrelation sequence is used in the Levinson algorithm, we find that the reduced dimensional time series is also degenerate. The eigenvalues of H1, corresponding to an AR(1) model, are

e0 = 4.9907 (222)

e1 = 69.9572. (223)

However, the eigenvalues of H2 are

e0 = 0.0186 (224)

e1 = 69.8288. (225)

The eigenvector associated with e0 is

V0' = ( 0.0000 1.0000 ) (226)

and the deterministic relationship is found from

V0' ( I + γ1 z^-1 + γ2 z^-2 ) [ Δxt*(z) ] = 0 (227)
                              [ Δyt*(z) ]

where γ1 and γ2 are the AR(2) coefficient matrices, equations (228) and (229), and the transformed data are

[ Δxt*(z) ] = M1' ΔRt(z). (230)
[ Δyt*(z) ]

Substituting equations (226), (228) and (229) into equation (227) yields the second deterministic relationship,

( 1 - 1.97598 z^-1 + 0.9812 z^-2 ) Δyt*(z) = 0. (231)


The problem has now been reduced to one of modeling a scalar. The new transformation matrix is

M2 = ( 1.0000 0.0000 )'. (232)

The remaining variable to model is (Δxt*)k. Performing the transformation on the autocorrelation sequence -- which in this case amounts to no more than picking out the (1,1) element -- we find

( 1 - 0.99407 z^-1 ) Δxt*(z) = w(z) (233)

where

η = E[wk^2] = 78.0 m^2. (234)




The AR(1) coefficient in equation (233) is approximately -1.0, indicating that the time series is possibly nonstationary. Therefore, it may be prudent to again difference the data and model the resulting time series. In this case the data sequence to difference is (Δxt*)k. The new sequence is

Δ^2(xt*)k = (Δxt*)k - (Δxt*)k-1, k>1. (235)

We now find that a deterministic AR(2) model fits the Δ^2(xt*)k sequence. The model found is

( 1 - 1.97595 z^-1 + 0.9812 z^-2 ) Δ^2 xt*(z) = 0. (236)

This is the final equation we need to complete the model.
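The deterministic AR(2) models found here behave like annihilating polynomials of sampled sinusoids, which is what the coordinates of a constant-rate circular maneuver are. The sketch below (all numeric values are illustrative) shows that the ideal coefficient pair (-2 cos(wT), 1), which the estimated pairs (-1.97598, 0.9812) and (-1.97595, 0.9812) approximate, annihilates both a sampled sinusoid and its second difference:

```python
import numpy as np

# Any sampled sinusoid s_k = A cos(w k T + phi) satisfies exactly
#   s_k - 2 cos(w T) s_{k-1} + s_{k-2} = 0.
w, T, A, phi = 0.5, 0.25, 1000.0, 0.3   # illustrative values
k = np.arange(200)
s = A * np.cos(w * k * T + phi)

a1, a2 = -2.0 * np.cos(w * T), 1.0            # ideal AR(2) coefficients
resid = s[2:] + a1 * s[1:-1] + a2 * s[:-2]    # (1 + a1 z^-1 + a2 z^-2) s

# The second difference of s is a sinusoid at the same frequency, so the
# same polynomial annihilates it as well (compare equation (236)).
d2 = np.diff(s, n=2)
resid2 = d2[2:] + a1 * d2[1:-1] + a2 * d2[:-2]
```

Both residual sequences are zero to machine precision; the slight shrinkage of the estimated second coefficient (0.9812 rather than 1) reflects finite-length estimation rather than the underlying motion.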

Using equations (221) and (230) we can write the following discrete-

time difference equations:

(Δxt*)k = -0.5 (Δxt)k - 0.866 (Δyt)k (237)

(Δyt*)k = 0.75 (Δxt)k - 0.433 (Δyt)k - 0.5 (Δzt)k (238)

From equation (209) we have

(Δzt*)k = 0. (239)

We can solve equations (237) through (239) simultaneously to obtain

(Δxt)k = 0.75 (Δyt*)k - 0.500 (Δxt*)k (240)

(Δyt)k = -0.433 (Δyt*)k - 0.866 (Δxt*)k (241)

(Δzt)k = -0.500 (Δyt*)k. (242)



From equations (231) and (236) we have

(Δyt*)k = 1.97598 (Δyt*)k-1 - 0.9812 (Δyt*)k-2 (243)

(Δxt*)k = 2.97595 (Δxt*)k-1 - 2.95715 (Δxt*)k-2 + 0.9812 (Δxt*)k-3. (244)

Estimates of (Δxt)k, (Δyt)k and (Δzt)k are obtained from equations

(240) through (242) by placing hats on the variables on the right hand

side of these equations.

Our objective is to model xt, yt and zt but the above equations involve Δxt, Δyt and Δzt. Using equation (218) we can write

(R̂t)k = (Rt)k-1 + (ΔR̂t)k (245)

since (ΔRt)k is conditioned on (Rt)k-1.

Combining equations (237) through (245) we obtain

(ΔRt)k = Σ_{i=1}^{4} Fi (ΔRt)k-i (246)

where the Fi are 3×3 coefficient matrices, given by equations (247) through (250); in particular,

F3 = [  1.5365  1.3867 -0.3680 ]
     [  1.3867  3.1375  0.2124 ] (249)
     [ -0.3680  0.2124  0.2453 ]

F4 = [ -0.2453 -0.4249  0.0000 ]
     [ -0.4249 -0.7358  0.0000 ] (250)
     [  0.0000  0.0000  0.0000 ]

It should be emphasized that equation (246) is a deterministic equation. Therefore, for this particular example, we can predict (ΔRt)k from measurements of (Rt)k-1, ..., (Rt)0 with no error.

Several points concerning this model should be mentioned. First, the

trajectory modeled is very idealistic. We have assumed that the aircraft

can accelerate instantaneously and that constant speed and acceleration

are maintained in the turn. These assumptions lead to a perfectly cir-

cular trajectory when the aircraft is accelerating. Second, we have

assumed that the maneuver occurs in a plane. This is actually a rather good assumption for piloted aircraft executing a sustained high acceleration maneuver. Also, deterministic measurements were used to build the model. This, of course, helps significantly in determining the presence of degeneracy in the time series and the correct model order. Finally, the computations were done on a Cyber 176, a 60-bit wordlength digital computer. Computational precision is extremely high. With these

facts clearly in mind, we should not be surprised that the model found

is deterministic.


Target Dynamic Model Synthesis (On-line)

In actual practice the measurements made by a radar tracking system

are, of course, noisy. We assume that the measurements are corrupted by

additive, white, zero-mean Gaussian noise. The quality of the measure-

ments is a function of many factors such as transmitted/received power,

pulse repetition frequency and the sizes of the range and angle cells,

to mention only a few. Our goal here is not to model or analyze a par-

ticular radar system but rather to illustrate a tracking concept. We

shall therefore choose noise characteristics which approximately repre-

sent state-of-the-art radar performance. The standard deviation of the

noise in the range measurement is typically on the order of 10 meters.

Angular measurement noise has a standard deviation on the order of a few

tenths of a degree. For this example we choose

σr = 10 m (251)

σa = σe = 5 mrad ≈ 0.3° (252)

where σr is the range noise standard deviation and σa and σe are the azimuth and elevation noise standard deviations, respectively. These

statistics are assumed to remain constant during the entire tracking

process. The noisy measurements are presented in figures 8 through 10.

In order to make the proposed tracking scheme practical for real-

time use, we must first convert the measurements of range, azimuth and

elevation to inertial position. To do this let us assume that the

tracker is located at the origin of the inertial frame. This is by no

means restrictive since a simple coordinate frame translation can be

made to put the origin at the tracker.


Once we have the range and angle measurements we can obtain inertial

position using the inverse transformation,

xt = Rt cos(et)cos(at) (253)

Yt = Rt cos(et)sin(at) (254)


zt = Rt sin(et). (255)

These equations can easily be obtained from figure 3.
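A minimal sketch of this conversion, with the tracker at the inertial origin as assumed above (the range and angle values are illustrative):

```python
import math

# Measurement conversion, equations (253)-(255): range, azimuth and
# elevation to inertial Cartesian position.
def to_cartesian(R, az, el):
    x = R * math.cos(el) * math.cos(az)   # equation (253)
    y = R * math.cos(el) * math.sin(az)   # equation (254)
    z = R * math.sin(el)                  # equation (255)
    return x, y, z

# Round trip: recover range, azimuth and elevation from the position.
x, y, z = to_cartesian(12000.0, 0.35, 0.25)
R = math.sqrt(x * x + y * y + z * z)
az = math.atan2(y, x)
el = math.asin(z / R)
```

The round trip confirms that the forward and inverse transformations are consistent for ranges and angles in the regime considered here.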

Now that we have a time series of position vector components we can

create a dynamic model of the trajectory in the same manner that we did








Figure 8. Range measurement versus time.


in the off-line case. The difference now is that the output sequence is corrupted with measurement noise rather than being deterministic.

Since the measurement noise is relatively small we might expect to

observe the same degenerate time series phenomenon that we did in the

deterministic case. Before proceeding any further with real-time model

identification, let us model the entire noisy output time series to see

if we can identify the same deterministic relationship that we found in

the deterministic model. As before, we can check for linear dependence

in the difference data by computing the eigenvalues of C0. Using noisy








Figure 9. Azimuth angle measurement versus time.


measurements we know C0 > 0 and, therefore, no eigenvalue will be identically zero. What we must do in this case is compare the relative magnitudes of the normalized eigenvalues of C0. The normalized eigenvalues are defined as

ē0 = e0/e2 (256)

ē1 = e1/e2 (257)

where e0 ≤ e1 ≤ e2 are the eigenvalues of C0.








Figure 10. Elevation angle measurement versus time.



If at least one of the normalized eigenvalues is significantly less than

unity we conclude that one or more deterministic relationships exist in

the data. Displayed in figure 11 are the normalized eigenvalues versus

time for the noisy trajectory data. We see that prior to the initiation of the maneuver (t<10 seconds) both ē0<<1 and ē1<<1, indicating that there are two directions in which the velocity components are zero. We readily understand this result since the trajectory is rectilinear for t<10 seconds. We also note from figure 11 that ē1 increases rapidly for t>10 seconds while ē0 remains


Figure 11. Normalized eigenvalues of C0 versus time.



relatively small. This means that for t>10 seconds only one direction

can be identified in which the velocity component is zero. This direc-

tion is obviously normal to the maneuver plane. The direction cosines of

the unit vector normal to the maneuver plane are the components of V0,

the eigenvector associated with ē0. Plots of the components of V0 are

shown in figures 12 through 14. The true values shown in figures 12

through 14 are the deterministic values given by equation (220).
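The normalized-eigenvalue test can be sketched as a simple function. The numeric threshold below is an illustrative assumption (the thesis compares relative magnitudes by inspecting figure 11), and the synthetic difference data model rectilinear motion, for which two deterministic directions should be reported:

```python
import numpy as np

# Degeneracy test for noisy data: normalize the eigenvalues of C0 by the
# largest one (equations (256)-(257)) and count how many fall below a
# threshold; each such eigenvalue marks a deterministic relationship.
def count_deterministic(C0, thresh=1e-2):
    e = np.linalg.eigvalsh(C0)     # ascending: e0 <= e1 <= e2
    ebar = e / e[-1]               # normalized eigenvalues
    return int(np.sum(ebar[:-1] < thresh))

# Rectilinear motion: the difference vectors are nearly constant, so two
# normalized eigenvalues are tiny.
rng = np.random.default_rng(0)
d = np.outer(np.ones(100), [3.0, 4.0, 0.0])   # constant velocity steps
d += 1e-4 * rng.standard_normal(d.shape)      # small measurement noise
C0 = d.T @ d / len(d)
```

For isotropic data (C0 proportional to the identity) the same function reports no deterministic relationships, as expected.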





Figure 12. First component of V0 versus time.



To do the model identification and tracking in real-time we must

update the sample autocorrelation sequence with the addition of each

position measurement. The autocorrelation sequence is updated using the

following recursion:

Cj(k) = (1/k)[ (k-1) Cj(k-1) + (ΔRt)k (ΔRt)'k-j ], j=0,1,...,m (259)

where

(ΔRt)k = (Rt)k - (Rt)k-1, k>0 (260)

and m is the number of lags required to model the time series.
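A sketch of the recursion in equation (259), checked against the equivalent batch (biased) average; random vectors stand in for the difference data:

```python
import numpy as np

# Recursive autocorrelation update of equation (259). Each lag-j matrix
# is refreshed in place as the newest difference vector arrives.
def update_C(C, hist, j_max):
    k = len(hist)                  # number of difference vectors seen so far
    for j in range(min(j_max + 1, k)):
        outer = np.outer(hist[-1], hist[-1 - j])
        C[j] = ((k - 1) * C[j] + outer) / k
    return C

rng = np.random.default_rng(1)
data = rng.standard_normal((50, 3))     # stand-ins for the (delta R)_k vectors
C = [np.zeros((3, 3)) for _ in range(3)]
hist = []
for x in data:
    hist.append(x)
    update_C(C, hist, 2)
```

After all samples are absorbed, each C[j] equals the batch biased estimate (1/N) Σ (ΔR)_k (ΔR)'_{k-j}, so the recursion lets the tracker avoid reprocessing the whole history at every measurement.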









Figure 13. Second component of V0 versus time.


Once we have the updated autocorrelation sequence we can compute ē0, ē1 and M. If the time series is degenerate we transform the autocorrelation sequence to the new basis using M1, thus reducing the dimension of the time series to be modeled. Performing the transformation we obtain

Cj* = M1' Cj M1, j=0,1,...,m. (261)

The model found by using the transformed autocorrelation sequence will

be valid for the time series written in the new basis. We must now

transform the AR model to the original basis because the measurements



Figure 14. Third component of V0 versus time.


are in this coordinate frame. The transformation matrix relating the two

coordinate frames is M. Therefore we can write

(ΔRt*)k = M' (ΔRt)k, k>0 (262)

and since M is orthogonal

(ΔRt)k = M (ΔRt*)k, k>0. (263)

By using the Cj* sequence in the modeling process, we obtain a model for the time series

( Δxt* Δyt* )'k = M1' (ΔRt)k, k>0. (264)

The finite order AR(n) model found can be written

( Δx̂t* Δŷt* )'k = Σ_{i=1}^{n} γi M1' (ΔRt)k-i, k>n. (265)

Using equation (263) and the fact that (Δzt*)k = 0 (k>0) we can write

(ΔR̂t)k = M [ Σ_{i=1}^{n} γi M1' (ΔRt)k-i ; 0 ], k>n. (266)

Estimates of (Rt)k are obtained from equation (245).

Maneuvering Target Tracking Algorithm

Using the equations developed in the preceding sections we can for-

mulate the following target tracking algorithm:

1. Measure azimuth, elevation and range at discrete times k=0,

k=l and k=2.


2. Convert measurements to (Rt)k. Use the inverse transformation given by equations (253) through (255).

3. Compute (ΔRt)k = (Rt)k - (Rt)k-1.

4. Update Cj(k). Use equation (259).

5. Compute ē0 and ē1 using equations (256) and (257) and compute M. Since we have defined the zt*-axis as being normal to the maneuver plane we must arrange the columns of M so that

M = [ V2 V1 V0 ].

6. If ē0 << 1 and ē1 << 1, two deterministic relationships exist and M1 = V2.

7. If only ē0 << 1, one deterministic relationship exists and M1 = [ V2 V1 ].

8. Compute Cj*(k) = M1' Cj(k) M1.

9. Compute γi (i=1,2,...,n) using the Levinson algorithm and the Cj*(k) sequence. Alternatively we can find the AR(n) model coefficients using the Burg algorithm and the data sequence (ΔRt*)k = M1' (ΔRt)k.

10. Compute (ΔR̂t)k and (R̂t)k+1. Use equations (266) and (245).

11. Increment k.

12. Measure azimuth, elevation and range at time k.

13. Go to step 2.
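The loop above can be sketched end to end. The sketch below is an illustration of the structure, not the thesis implementation: it substitutes a batch autocorrelation for the recursion of equation (259), a least-squares AR(2) fit for the Levinson/Burg step, and runs on a noise-free synthetic planar trajectory with illustrative values:

```python
import numpy as np

# Runnable sketch of steps 3-10 of the tracking algorithm.
def track(pos, thresh=1e-6):
    preds = {}
    d = np.diff(pos, axis=0)                      # step 3
    for k in range(6, len(d) - 1):
        C0 = d[:k + 1].T @ d[:k + 1] / (k + 1)    # step 4 (batch form)
        e, V = np.linalg.eigh(C0)                 # ascending eigenvalues
        M = V[:, ::-1]                            # step 5: columns [V2 V1 V0]
        m = 1 if e[1] / e[2] < thresh else 2      # steps 6-7: stochastic dimension
        M1 = M[:, :m]
        s = d[:k + 1] @ M1                        # step 8: reduced series
        X = np.hstack([s[1:-1], s[:-2]])          # step 9: AR(2) regressors
        G, *_ = np.linalg.lstsq(X, s[2:], rcond=None)
        d_hat = M1 @ (s[-1] @ G[:m] + s[-2] @ G[m:])  # step 10, equation (266)
        preds[k + 2] = pos[k + 1] + d_hat             # equation (245)
    return preds, V[:, 0]

# Synthetic planar circular trajectory (illustrative values).
n = np.array([0.4330, -0.2500, 0.8660]); n /= np.linalg.norm(n)
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n, u)
t = np.arange(60)
pos = 3000.0 * (np.outer(np.cos(0.125 * t), u) + np.outer(np.sin(0.125 * t), v))

preds, V0 = track(pos)
err = max(np.linalg.norm(pos[i] - p) for i, p in preds.items())
```

For the noise-free circle the predictions are essentially exact and V0 reproduces the plane normal; with noisy measurements the smallest eigenvalue is merely small rather than zero, which is why the normalized test of equations (256)-(257) is needed.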

Numerical Results and Predictor Performance

Using the algorithm just presented with the noisy measurements

displayed in figures 8 through 10, we can adaptively change the tracker

model and thereby more accurately predict target inertial position.

Models computed for a few discrete-time points along the trajectory


and the prediction errors resulting from use of the tracker are pre-

sented in this section.

When the autocorrelation sequence found after each measurement

update is used in the Levinson algorithm, we find that an AR(2) model

fits the data well. This determination is made by using the ARMA model

identification algorithm presented earlier. Therefore, we use n=2 in

equation (266) and in step 9 of the tracking algorithm.

Referring again to figure 11, we note that for k<40 (t<10 seconds) both ē0<<1 and ē1<<1. Therefore, two deterministic relationships exist in the data and the remaining stochastic variable to model is a scalar.

From the tracking algorithm we find for k=40

γ1 = 0.60829 (267)

γ2 = 0.34687 (268)

M1' = V2' = ( -0.50420 0.86358 -0.00054 ) (269)

[ V1 V0 ] = [ -0.14210 +0.85181 ]
            [ +0.08235 -0.49743 ] (270)
            [ +0.98642 +0.16423 ]

For k>50 only ē0<<1 and the remaining stochastic variable to model is a two-dimensional vector. The model parameters for k=80 are

γ1 = [ +0.39993 -0.02337 ] (271)
     [ -0.04890 +0.23703 ]

γ2 = [ +0.56281 -0.00535 ]
     [ +0.11486 +0.63050 ]

V0' = ( +0.47109 -0.25219 +0.84527 ).

Figure 15. Diagonal elements of γ1.




For k=120 the parameters γ1 and γ2 are essentially unchanged and

V0' = ( +0.46786 -0.25138 +0.84730 ).

Figure 16. Diagonal elements of γ2.

Finally, for k=160 the parameters are again nearly the same (the leading element of γ1, for example, is +0.39404) and

V0' = ( +0.47461 -0.24765 +0.84464 ).

We see from figures 15 and 16 and from equations (271) through (282)

that the model changes very little in the last 20 to 25 seconds of the

trajectory. The transient phase which starts with maneuver initiation is

relatively short in duration, lasting approximately 5 seconds. At a 4 Hz sample rate this represents 20 samples. Model parameters are quite dynamic during the transient period.

Plotted in figures 17 through 19 are the inertial position predic-

tion errors versus time. The errors are found from

(R̃t)k = (Rt)k - (R̂t)k (283)

where (Rt)k is the position vector derived from the angle and range







measurements. The first 10 measurements were used only to compute the

autocorrelations to get the model identification started. No tracking

was done during this period. The tracking algorithm was started at k=10

(t=2.5 seconds). We observe from the error plots that the tracker is

working well until maneuver onset. At that time the errors begin to

increase and there is a noticeable bias in the error sequence, particularly in the x-axis and y-axis. As the tracker adapts the model

parameters in response to the maneuver, the errors once again decrease

and the mean is once again near zero.







Figure 17. Error in predicted position versus time (x-axis)


The error sequence bias which began at maneuver initiation results

from the same phenomenon that Bullock and Sangsuk-Iam [7] used to detect

the occurrence of maneuvers. The non-maneuvering target model used to do

the tracking prior to maneuver onset is no longer valid after the

maneuver starts and we observe a bias in the innovations sequence. The

difference is, of course, that here we are not applying any statistical

tests to determine if a maneuver has occurred.

The important point to note here is that the tracker adjusted the

model parameters in response to an unanticipated abrupt maneuver. There








Figure 18. Error in predicted position versus time (y-axis).


is no explicit maneuver detection scheme required for the algorithm.

However, by using information already needed to do the tracking (for

example, the noticeable increase in ē1 and the accompanying change in V0)

or by monitoring the mean of the error sequence, we should be able to

detect maneuvers with a reasonable degree of confidence. Of course, the

objective of this research was not maneuver detection but the close

relationship between detecting maneuvers and tracking highly agile eva-

sive targets makes the maneuver detection function an attractive poten-

tial by-product.
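A residual-bias monitor of the kind suggested here can be sketched as follows; the window length, noise standard deviation and detection gate are illustrative assumptions:

```python
import numpy as np

# Maneuver detection from information the tracker already produces:
# monitor the windowed mean of a prediction-residual component and flag
# a maneuver when it exceeds a gate.
def detect_bias(residuals, window=10, sigma=20.0, gate=4.0):
    flags = []
    for k in range(window, len(residuals) + 1):
        mean = np.mean(residuals[k - window:k])
        # under the no-maneuver hypothesis the windowed mean is roughly
        # zero-mean with standard deviation sigma / sqrt(window)
        flags.append(abs(mean) > gate * sigma / np.sqrt(window))
    return flags

rng = np.random.default_rng(2)
resid = rng.normal(0.0, 20.0, 80)   # zero-mean residuals while tracking well
resid[40:] += 60.0                  # a bias appears at maneuver onset
flags = detect_bias(resid)
```

Once the window is fully inside the biased region the detector fires reliably; this is the statistical-test refinement suggested above, layered on top of an already adaptive tracker.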






Figure 19. Error in predicted position versus time (z-axis).


It is reasonable to assume that maneuver detection will enhance

tracking performance. This is because we can reinitialize the tracker

and include only the measurements made subsequent to the detected

maneuver. We therefore remove the premaneuver measurements and totally

delete the influence of these data from the model. Implementation is

rather straightforward. An auxiliary, constant length data file of posi-

tion vectors is maintained to perform the reinitialization. Prior to

maneuver detection the oldest position vector in the auxiliary file is

deleted each time a new measurement is added. This is typically referred

to as the "sliding window" concept. The length of the window depends on

the number of points required to detect the maneuver after the maneuver

is actually initiated. Subsequent to maneuver detection the tracking

algorithm is reinitialized using only the auxiliary file and then con-

tinues to operate as described earlier.

We can use the maneuver detection/reinitialization idea on the

example trajectory considered here and determine if the tracker perfor-

mance is actually improved. To do so let us assume that 10 postmaneuver

measurements are required to detect the maneuver. The auxiliary file is

therefore 10 measurements long. Since the maneuver began at k=40, the

auxiliary file contains data for k=40 to k=49.
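The auxiliary file is naturally a fixed-length first-in, first-out buffer. A sketch in which integer indices stand in for the position vectors (Rt)k, with onset at k=40 and a 10-sample window as in the example above:

```python
from collections import deque

# Sliding-window auxiliary file: a fixed-length buffer of the most recent
# position vectors, used to reinitialize the tracker after detection.
window = deque(maxlen=10)
history = None

for k in range(60):
    window.append(k)            # oldest entry is discarded automatically
    if k == 49:                 # maneuver detected 10 samples after onset
        history = list(window)  # reinitialize from these points only
```

At detection the buffer holds exactly the postmaneuver measurements k=40 through k=49, so reinitialization deletes the influence of the premaneuver data from the model.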

The performance of the two methods is compared using the sample mean

and sample standard deviation of the position errors in each of the

three inertial axes. For the non-maneuver detection case (refer to

figures 17 through 19), we find for k=50 to k=160 (time= 12.5 seconds to

40.0 seconds)

[ μx μy μz ] = [ -1.1621 1.0116 0.5674 ] m (284)

[ σx σy σz ] = [ 20.724 18.092 20.358 ] m (285)

where μ and σ are the sample mean and sample standard deviation, respectively. For the maneuver detection/reinitialization case we find

[ μx μy μz ] = [ -3.5220 1.4499 2.4523 ] m (286)

[ σx σy σz ] = [ 24.200 20.632 22.700 ] m. (287)

The performance of the non-maneuver detection tracker is slightly

better than the maneuver detection/reinitialization tracker. This result

is counterintuitive but can be attributed to the small number of samples

used to reinitialize the tracker. Ten postmaneuver points are not enough

data to obtain a good sample autocorrelation estimate. All of the dif-

ferences in the models and in the error sequences for the two cases were

in the first few seconds following maneuver detection. By the end of the

trajectory the model parameters (AR coefficients and M) and prediction

errors for both cases were essentially indistinguishable. This is, of

course, because the first 40 measurements (deleted in the maneuver

detection/reinitialization case) became an ever diminishing percentage

of the total number of points. The effect of discarding the premaneuver

points soon became negligible. There is apparently a trade-off which

can be made between the length of the sliding window and tracker perfor-

mance. Too many premaneuver points in the window will lead to a sluggish

tracker and too few will not be enough to provide a good estimate of the

autocorrelation sequence.



Summary

In this research several theoretical issues have been investigated.

A major result is the development of an algorithm for finding the coef-

ficients and order of the minimum order ARMA model of a multivariable

system from the output autocorrelation sequence. The model order is

found by examining the ranks of submatrices within a block Toeplitz matrix of output autocorrelation matrices. Once the order is known, the AR coefficients are determined in the usual way by solving the modified, or

extended, Yule-Walker equations. The MA coefficients are then found from

a set of linear equations involving the coefficients of the AR model

equivalent to the ARMA model and the coefficients of the AR portion of

the ARMA model. The algorithm is a new result.

The ARMA model identified using the new algorithm is a linear mini-

mum variance one-step-ahead predictor. This result follows from applica-

tion of the projection theorem. The ARMA model found is equivalent to

the Kalman filter innovations representation of the system modeled. This

fact was discussed and demonstrated through a numerical example. The

equivalence is significant because all equations necessary to realize

the system using the ARMA model are linear and can therefore be solved

analytically. There is no Riccati-type equation solution required in the

ARMA model but there is in the innovations representation.



It has been shown that degenerate vector-valued time series can be

modeled by identifying deterministic relationships in the data, thereby

reducing the dimension of the stochastic model. This results in system

models with fewer white noise inputs than outputs. The deterministic

relationships are found by transforming the data to the null space of

the error covariance matrix. The AR model which fits the data at the lag at which the degeneracy is encountered is used to find the deterministic

relationships. Not only does this technique identify the presence of deterministic processes in the data, but it also provides the equations

describing the deterministic relationships. Other solutions to the dege-

nerate time series modeling problem have been reported in the literature

but none suggest the identification of deterministic relationships and

dimension reduction of the stochastic model as a solution.

It has been shown that the theory is applicable to the specific

problem of tracking a highly maneuverable aircraft target. Measurement

vectors consisting of range, azimuth angle and elevation angle, typi-

cally available in radar tracking systems, are used to construct a time

series of target position vectors in an inertial reference frame. The

position data are nonstationary and the first difference must be formed.

The difference data is a time series of target inertial velocity com-

ponents multiplied by the measurement update time. This time series is

degenerate. This is a particularly interesting result since it has a

physical interpretation. It means that a linear combination of the com-

ponents of the velocity vector is zero. Therefore, along some direction

in inertial space, the velocity component is zero. This direction is

normal to the plane which contains all the motion, that is, the maneuver

plane. The direction cosines of the vector normal to the maneuver plane


are found from the eigenvectors and eigenvalues of the time series

autocorrelation. This is a new and significant development in target

tracking technology.

The ARMA modeling technique can also be used for adaptive modeling

and tracking. This is accomplished by updating the model after each new

measurement is made. The model thus contains the influence of the latest

measurement. In this research an algorithm has been developed which

adaptively models the dynamics of a maneuvering target in response to

abrupt, unanticipated motion. Traditional tracking filters are usually

constructed under the assumption that the target dynamic model

characteristics are constant. Severe performance degradations can occur

if insufficient bandwidth is provided. The adaptive method developed in

this work circumvents this problem by changing the bandwidth when

required. The bandwidth variation is a natural result of the modeling

technique and is done automatically without the aid of any special

detection techniques. However, the adaptive nature of the tracker also

makes it useful as a device to detect maneuvers. Drastic changes in

target dynamic behavior can be detected by monitoring the prediction

residuals. When residual biases are detected it is assumed that the

model being used for prediction is no longer valid due to some unan-

ticipated change in dynamics. Detection is not a requisite for adaptive

tracking -- the tracker adapts to maneuvers automatically -- but the

maneuver detection function can perhaps be used to enhance tracking

performance. This is also a new development in the tracking area.

Recommendations for Further Research

It is recommended that further research be done to refine the adap-

tive tracking procedure. The sliding window concept should be investi-

gated further to determine the minimum amount of data required for

accurate identification and adequate response. Concepts along the lines

of tracker reinitialization should be pursued in connection with adap-

tive modeling to enhance performance. The adaptive tracking method pro-

posed here should be considered as a maneuver detection device and com-

bined with a rigorous statistical test to improve maneuver detection.


References

1. Robert A. Singer, "Estimating Optimal Tracking Filter Performance for
Manned Maneuvering Targets," IEEE Transactions on Aerospace and
Electronic Systems, vol. AES-6, no. 4, pp. 473-482, July 1970.

2. John M. Fitts, "Aided Tracking as Applied to High Accuracy Pointing
Systems," IEEE Transactions on Aerospace and Electronic Systems,
vol. AES-9, no. 3, pp. 350-368, May 1973.

3. R.J. McAuley and E. Denlinger, "A Decision-Directed Adaptive Tracker,"
IEEE Transactions on Aerospace and Electronic Systems, vol. AES-9,
no. 2, pp. 229-236, March 1973.

4. Norman H. Gholson and Richard L. Moose, "Maneuvering Target Tracking
Using Adaptive State Estimation," IEEE Transactions on Aerospace and
Electronic Systems, vol. AES-13, no. 3, pp. 310-317, May 1977.

5. R.L. Moose, H.F. Vanlandingham and D.H. McCabe, "Modeling and
Estimation for Tracking Maneuvering Targets," IEEE Transactions on
Aerospace and Electronic Systems, vol. AES-15, no. 3, pp. 448-456,
May 1979.

6. J.D. Kendrick, P.S. Maybeck and J.G. Reid, "Estimation of Aircraft
Target Motion Using Orientation Measurements," IEEE Transactions on
Aerospace and Electronic Systems, vol. AES-17, no. 2, pp. 254-259,
March 1981.

7. Thomas E. Bullock and S. Sangsuk-Iam, "Maneuver Detection and
Tracking with a Nonlinear Target Model," Proceedings of the 23rd IEEE
Conference on Decision and Control, Las Vegas, Nevada, December 1984.

8. D.G. Hull, P.C. Kite and J.L. Speyer, "New Target Models for Homing
Missile Guidance," Proceedings of the AIAA Guidance and Control
Conference, Gatlinburg, Tennessee, August 1983.

9. P.L. Vergez and R.K. Liefer, "Target Acceleration Modeling for
Tactical Missile Guidance," Journal of Guidance, Control and Dynamics,
vol. 7, pp. 315-321, May-June 1984.

10. C.F. Lin and Marc W. Shafroth, "A Comparative Evaluation of Some
Maneuvering Target Tracking Algorithms," Proceedings of the AIAA
Guidance and Control Conference, Gatlinburg, Tennessee, August 1983.

11. John B. Pearson and Edwin B. Stear, "Kalman Filter Applications In
Airborne Radar Tracking," IEEE Transactions on Aerospace and
Electronic Systems, vol. AES-10, no. 3, pp. 319-329, May 1974.

12. K. Spingarn and H.L. Weidemann, "Linear Regression Filtering and
Prediction for Tracking Maneuvering Aircraft Targets," IEEE
Transactions on Aerospace and Electronic Systems, vol. AES-8, no. 6,
pp. 800-810, November 1972.

13. James S. Thorp, "Optimal Tracking of Maneuvering Targets," IEEE
Transactions on Aerospace and Electronic Systems, vol. AES-9, no. 4,
pp. 512-518, July 1973.

14. Y.T. Chan, J.B. Plant and J.R.T. Bottomley, "A Kalman Tracker with a
Simple Input Estimator," IEEE Transactions on Aerospace and
Electronic Systems, vol. AES-18, no. 2, pp. 235-240, March 1982.

15. Y.T. Chan, A.G.C. Hu and J.B. Plant, "A Kalman Filter Based Tracking
Scheme with Input Estimation," IEEE Transactions on Aerospace and
Electronic Systems, vol. AES-15, no. 2, pp. 237-244, March 1979.

16. Peter S. Maybeck, Stochastic Models, Estimation and Control, Vol. 1,
Academic Press, Orlando, Florida, 1979.

17. D. Graupe, D.J. Krause and J.B. Moore, "Identification of
Autoregressive Moving-Average Parameters in Time Series," IEEE
Transactions on Automatic Control, vol. AC-20, no. 1, pp. 104-107,
February 1975.

18. Joseph C. Chow, "On Estimating the Orders of an Autoregressive
Moving-Average Process with Uncertain Observations," IEEE
Transactions on Automatic Control, vol. AC-17, no. 5, pp. 707-709,
October 1972.

19. Will Gersch, "Estimation of the Autoregressive Parameters of a Mixed
Autoregressive Moving-Average Time Series," IEEE Transactions on
Automatic Control, vol. AC-15, no. 5, pp. 583-588, October 1970.

20. M.B. Priestley, Spectral Analysis and Time Series, Academic Press,
London, 1981.

21. E.J. Hannan, "The Identification of Vector Mixed Autoregressive
Moving-Average Systems," Biometrika, 56, pp. 223-225, 1969.

22. John Makhoul, "Linear Prediction: A Tutorial Review," Proceedings of
the IEEE, vol. 63, no. 4, pp. 561-580, April 1975.

23. Steven M. Kay and Stanley L. Marple, "Spectrum Analysis -- A Modern
Perspective," Proceedings of the IEEE, vol. 69, pp. 1380-1419,
November 1981.

24. Martin Morf, Augusto Vieira, Daniel Lee and Thomas Kailath,
"Recursive Multichannel Maximum Entropy Spectral Estimation," IEEE
Transactions on Geoscience Electronics, vol. GE-16, pp. 85-94,
April 1978.

25. Raman K. Mehra, "On-line Identification of Linear Dynamic Systems
with Applications to Kalman Filtering," IEEE Transactions on
Automatic Control, vol. AC-16, no. 1, pp. 12-21, February 1971.

26. G.E.P. Box and G.M. Jenkins, Time Series Analysis: Forecasting and
Control, Holden-Day, San Francisco, 1976.

27. B.D.O. Anderson and J.B. Moore, Optimal Filtering, Prentice-Hall,
Englewood Cliffs, New Jersey, 1979.

28. Ralph A. Wiggins and Enders A. Robinson, "Recursive Solution to the
Multichannel Filtering Problem," Journal of Geophysical Research,
vol. 70, 1965.

29. Norbert Levinson, "The Wiener rms (root-mean-square) Error Criterion
in Filter Design and Prediction," Journal of Mathematics and Physics,
vol. 25, pp. 261-278, January 1947.

30. Yujiro Inouye, "Autoregressive Model Fitting and Levinson Algorithm
in the Multichannel Degenerate Case," IEEE Transactions on Automatic
Control, vol. AC-28, no. 1, pp. 94-95, January 1983.

31. Otto N. Strand, "Multichannel Complex Maximum Entropy
(Autoregressive) Spectral Analysis," IEEE Transactions on Automatic
Control, vol. AC-22, no. 4, pp. 634-640, August 1977.

32. John Makhoul, "Stable and Efficient Lattice Methods for Linear
Prediction," IEEE Transactions on Acoustics, Speech and Signal
Processing, vol. ASSP-25, no. 5, pp. 423-428, October 1977.

33. Tad J. Ulrych and Thomas N. Bishop, "Maximum Entropy Spectral
Analysis and Autoregressive Decomposition," in Modern Spectrum
Analysis, edited by Donald G. Childers, pp. 54-71, IEEE Press,
New York, 1978.

34. M.D. Srinath and M.M. Viswanathan, "Sequential Algorithm for
Identification of Parameters of an Autoregressive Process," IEEE
Transactions on Automatic Control, vol. AC-20, no. 4, pp. 542-546,
August 1975.

35. John P. Burg, "Maximum Entropy Spectral Analysis," in Modern
Spectrum Analysis, edited by Donald G. Childers, pp. 34-41, IEEE
Press, New York, 1978.

36. John P. Burg, "Maximum Entropy Spectral Analysis," Ph.D.
Dissertation, Department of Geophysics, Stanford University,
Stanford, California, May 1975.

37. G.J. Nazaroff, "An Optimal Terminal Guidance Law," IEEE Transactions
on Automatic Control, vol. AC-21, no. 3, pp. 365-377, June 1976.


Norman O. Speakman, son of Paul O. Speakman and Norma D. (Pittman)
Speakman, was born in Cullman, Alabama, on February 10, 1949. He attended
the public schools of Birmingham, Alabama, and graduated from West End
High School in 1967. He entered the University of Alabama in Birmingham
in September, 1967, where he completed one year of pre-engineering.

In September, 1968, he transferred to Auburn University, where he
participated in the Cooperative Education Program and was employed by
Arnold Research Organization in Tullahoma, Tennessee. He earned the
degrees of Bachelor of Aerospace Engineering and Master of Science in
Aerospace Engineering in June, 1972, and June, 1973, respectively.

In June, 1973, he was commissioned a second lieutenant in the U.S.
Air Force. He served a four-year tour of active duty at Eglin Air Force
Base, Florida, from 1973 to 1977. Upon leaving the Air Force, he accepted
an aerospace engineering position at the Air Force Armament Laboratory.

In August, 1981, he entered the University of Florida and began
doctoral work in electrical engineering. After completing residency
requirements in 1982, he returned to Eglin Air Force Base. He is
currently chief of the Advanced Guidance Concepts Section at the
Armament Laboratory.

He married Donna Marie, daughter of Don and Marie (Deerman) Caldwell,
in December, 1970. They have three children, Erik, Timothy and Amy. He
is a member of Sigma Gamma Tau, Tau Beta Pi, Phi Kappa Phi and the
American Institute of Aeronautics and Astronautics.


I certify that I have read this study and that in my opinion it con-
forms to acceptable standards of scholarly presentation and is fully
adequate, in scope and quality, as a dissertation for the degree of
Doctor of Philosophy.

Thomas E. Bullock, Chairman
Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it con-
forms to acceptable standards of scholarly presentation and is fully
adequate, in scope and quality, as a dissertation for the degree of
Doctor of Philosophy.

Charles V. Shaffer
Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it con-
forms to acceptable standards of scholarly presentation and is fully
adequate, in scope and quality, as a dissertation for the degree of
Doctor of Philosophy.

Giuseppe Basile
Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it con-
forms to acceptable standards of scholarly presentation and is fully
adequate, in scope and quality, as a dissertation for the degree of
Doctor of Philosophy.

Edward W. Kamen
Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it con-
forms to acceptable standards of scholarly presentation and is fully
adequate, in scope and quality, as a dissertation for the degree of
Doctor of Philosophy.

P.V. Rao
Professor of Statistics

This dissertation was submitted to the Graduate Faculty of the
College of Engineering and to the Graduate School and was accepted as
partial fulfillment of the requirements for the degree of Doctor of
Philosophy.

December 1985

Dean, College of Engineering

Dean, Graduate School

Internet Distribution Consent Agreement

In reference to the following dissertation:

AUTHOR: Speakman, Norman
TITLE: Autoregressive Moving-Average (ARMA) Model Identification for
Degenerate Time Series with Application to Maneuvering Target Tracking

I, Norman Owen Speakman as copyright holder for
the aforementioned dissertation, hereby grant specific and limited archive and
distribution rights to the Board of Trustees of the University of Florida and its agents. I
authorize the University of Florida to digitize and distribute the dissertation described
above for nonprofit, educational purposes via the Internet or successive technologies.

This is a non-exclusive grant of permissions for specific off-line and on-line uses for an
indefinite term. Off-line uses shall be limited to those specifically allowed by "Fair Use"
as prescribed by the terms of United States copyright legislation (cf. Title 17, U.S. Code)
as well as to the maintenance and preservation of a digital archive copy. Digitization
allows the University of Florida to generate image- and text-based versions as appropriate
and to provide and enhance access using search software.

This grant of permissions prohibits use of the digitized versions for commercial use or
profit.

Signature of Copyright Holder

Norman Owen Speakman
Printed or Typed Name of Copyright Holder/Licensee

23 April 2008
Date of Signature
