
Citation 
 Permanent Link:
 http://ufdc.ufl.edu/AA00040880/00001
Material Information
 Title:
Linear time-varying systems: representation and control via transfer function matrices
 Creator:
Poolla, Kameshwar Rao, 1960-
 Publication Date:
 1984
 Language:
 English
 Physical Description:
 vii, 153 leaves : ; 28 cm.
Subjects
 Subjects / Keywords:
 Algebra ( jstor )
Automatic control ( jstor )
Factorization ( jstor )
Feedback control ( jstor )
Input output ( jstor )
Integers ( jstor )
Matrices ( jstor )
Polynomials ( jstor )
Subrings ( jstor )
Transfer functions ( jstor )
Discrete-time systems ( lcsh )
Dissertations, Academic -- Electrical Engineering -- UF
Electrical Engineering thesis Ph. D
System analysis ( lcsh )
Transfer functions ( lcsh )
 Genre:
 bibliography ( marcgt )
nonfiction ( marcgt )
Notes
 Thesis:
 Thesis (Ph. D.)--University of Florida, 1984.
 Bibliography:
 Bibliography: leaves 147-152.
 General Note:
 Typescript.
 General Note:
 Vita.
 Statement of Responsibility:
 by Kameshwar Rao Poolla.
Record Information
 Source Institution:
 University of Florida
 Holding Location:
 University of Florida
 Resource Identifier:
 030502097 ( ALEPH )
11699319 ( OCLC )

LINEAR TIME-VARYING SYSTEMS: REPRESENTATION AND CONTROL VIA
TRANSFER FUNCTION MATRICES
By
KAMESHWAR RAO POOLLA
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1984
To my parents
ACKNOWLEDGMENTS
I wish to express my sincere gratitude to all those who contributed towards making this work possible.
Professor Edward Kamen, the chairman of my dissertation committee, has over the past three years been a constant source of encouragement for me. He has, through hours of invaluable discussions, been
instrumental in advising me through all phases of this project. It has been a privilege and a pleasure to have been his student. Without the financial support he has arranged for me, this work would not have been possible. To him I would like to express my deepest gratitude.
I cannot in words express my thanks to my dissertation committee cochairman and kind friend, Professor Pramod Khargonekar. He has over
the last decade been my mentor and my source of inspiration, and has always been there when I needed him. I will hold the fondest memories of our association forever.
I shall long cherish my association with Professor Allen Tannenbaum. Indeed, his tireless enthusiasm and optimism have left an indelible mark on me. Adieu, Allen.
I would also like to express my most sincere appreciation to the other members of my supervisory committee, Professors T. E. Bullock,
D. Drake, and R. L. Long for their guidance and help during the course of my studies.
I am especially grateful to Ms. Carole Boone for her excellent typing and patience through illegible manuscripts and many revisions.
Thanks go also to Radhika (for everything), to Seema (for life-sustaining khana), and to my friends Amitava, Corrine, and Prakash (for moral support).
This work was supported in part by the National Science Foundation under Grant No. ECS8200607.
TABLE OF CONTENTS
PAGE
ACKNOWLEDGMENTS...................................................iii
ABSTRACT..........................................................vii
CHAPTER
ONE INTRODUCTION.............................................1
TWO PRELIMINARY DEFINITIONS AND RESULTS......................7
THREE THE TRANSFER-FUNCTION FRAMEWORK.........................18
FOUR POLYNOMIAL REALIZATION THEORY...........................28
FIVE POLYNOMIAL FACTORIZATIONS OF TRANSFER
FUNCTION MATRICES.......................................47
SIX APPLICATIONS OF THE POLYNOMIAL THEORY
TO FEEDBACK CONTROL.....................................60
SEVEN STABILITY...............................................70
EIGHT STABILIZABILITY AND ASYCONTROLLABILITY..................77
NINE STABLE-PROPER FACTORIZATIONS............................85
TEN FEEDBACK CONTROL........................................95
ELEVEN CONCLUDING REMARKS.....................................103
APPENDICES PAGE
A PROOF OF PROPOSITION (4.1).............................106
B PROOF OF THEOREMS (5.2) AND (5.11)...................116
C PROOF OF THEOREM (8.5).................................131
REFERENCES .......................................................147
BIOGRAPHICAL SKETCH ..............................................153
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
LINEAR TIME-VARYING SYSTEMS:
REPRESENTATION AND CONTROL VIA TRANSFER FUNCTION MATRICES
By
KAMESHWAR RAO POOLLA
August 1984
Chairman: Prof. E. W. Kamen
Co-Chairman: Prof. P. P. Khargonekar
Major Department: Electrical Engineering
In this dissertation we have developed a "transfer-function" type theory for linear time-varying discrete-time systems. Using this framework, in the first part of the dissertation we have been able to generalize much of the existing polynomial model theory. Specifically, we have treated polynomial realization theory (FUHRMANN), polynomial factorization theory, and applications to feedback control. In the second half of the dissertation, we have treated the problems of stabilization and of existence of stable-proper factorizations, and have taken a cursory look at the tracking problem for time-varying systems. One of our most significant results is the equivalence of dynamic and memoryless state feedback as far as the problem of stabilization is concerned.
CHAPTER ONE
INTRODUCTION
This dissertation is concerned with the study of linear discrete-time time-varying systems. Time-varying systems arise frequently in practical applications. For example, time variation could result from change of the mass and center of gravity of an aircraft due to fuel burn, from aging or slag buildup in chemical reactors, from linearization of nonlinear systems about a time-varying nominal trajectory, etc. The broad objective of this dissertation is to develop a systematic theory for the analysis of linear time-varying systems based on matrix-fraction representations and also to apply this theory to feedback control system design problems.
The earliest approaches to studying feedback control problems for linear time-varying systems were based on linear quadratic optimal control theory and pioneered by Kalman. In particular, KALMAN [1960] showed that a uniformly reachable continuous-time time-varying system can be stabilized by a state feedback control law of the form u(t) = L(t)x(t). Here u(t) is the input to the system and x(t) is the state of the system. The gain matrix L(t) may be computed by solving a time-varying Riccati differential equation. The reader may
consult the books by JAZWINSKI [1970] and KWAKERNAAK and SIVAN [1972]
for further results on optimal control/filtering of linear time-varying systems.
CHENG [1979] has obtained different stabilizing memoryless feedback laws (based on controllability grammians), as have KWON and PEARSON [1977, 1978] (based on receding horizon optimal control). Both Cheng and Kwon and Pearson work with reachable time-varying systems. Recently, ANDERSON and MOORE [1981a] have been able to define the weaker notion of stabilizability and have obtained memoryless stabilizing feedback laws for stabilizable time-varying systems. Their work is based on an optimal control/filtering approach and specified in terms of Riccati difference equations.
In contrast to the above-mentioned work, which focuses on obtaining stabilizing feedback laws based on grammians/Riccati equations, this dissertation is an attempt to develop a more algebraic theory for the study of linear time-varying systems. An algebraic theory would provide more insight in the study of feedback control problems and would, perhaps, yield control laws that are easier to compute. In the past, there has been some effort directed at obtaining an algebraic/geometric theory for time-varying systems (see for example WOLOVICH [1968], MORSE and SILVERMAN [1972], KAMEN and HAFEZ [1979]). This work, however, treats very restricted classes of linear time-varying systems such as index-invariant systems or cyclizable systems. This dissertation attempts the study of linear time-varying systems in complete generality.
In Chapters Two and Three of this dissertation, we outline a "transfer-function" type theory for linear time-varying systems. Attempts have been made in the past to develop such a theory (notably the system function of ZADEH [1950]), but these have not met with much success. Our framework is specified in terms of skew (noncommutative) rings of polynomials, formal power series, and formal Laurent series, all with coefficients in the ring of time functions. These skew rings have, in previous work, found application to the study of linear time-varying networks and systems; see for example articles by NEWCOMB [1970], SALOVAARA and BLOMBERG [1973], YLINEN [1975], KAMEN and HAFEZ [1979]. The rudiments of the transfer-function approach we develop here may be found in an unpublished paper of KAMEN [1974]; however, a complete development of this approach is not attempted in that paper. KAMEN and KHARGONEKAR [1982] have pursued this approach and have developed much of the framework for this transfer-function theory for linear time-varying systems. Indeed, this dissertation is a natural extension of their work.
During the past decade, significant progress has been made in the study of both linear time-invariant systems and systems over commutative rings using polynomial matrix-fraction methods (see for example the books by ROSENBROCK [1970], WOLOVICH [1974], KAMEN and ROUCHALEAU [1984]). This approach has proven to be useful in tackling many system and control theoretic problems such as realization, dynamic compensation, regulation in the presence of
disturbances, etc. For details on this work, the reader may consult the work of FUHRMANN [1976], ROSENBROCK and HAYTON [1978], CHENG and PEARSON [1978], ANTSAKLIS [1979], KHARGONEKAR [1982], to mention a few.
Given the power and success of polynomial matrix-fraction methods in linear time-invariant system theory, it seems natural to attempt to generalize this approach to encompass linear time-varying systems. We do just this in the first half of the dissertation. This generalization to time-varying systems is entirely nontrivial because our framework is specified in terms of a noncommutative (skew) ring structure to incorporate the time-variance of our systems. Consequently, in many instances we are compelled to use proof techniques that are novel and entirely different from those employed in the study of linear time-invariant systems over both fields and commutative rings.
In particular, we obtain in Chapter Four a "natural" state-space representation derived from a polynomial matrix-fraction representation of the transfer-function matrix. This realization is a time-varying analog of the FUHRMANN [1976] realization in the time-invariant case. We then investigate the relationship between system-theoretic properties of this realization and algebraic properties of the associated polynomial matrix representation. We also examine the problem of strict system equivalence and derive results similar to those obtained by FUHRMANN [1976, 1977] for linear time-invariant systems and by KHARGONEKAR [1982] for systems over commutative rings.
Many frequency-domain methods used to design controllers for linear time-invariant systems begin with left and/or right Bezout (coprime) factorizations for the transfer-function matrix of the plant. As is well known, any linear time-invariant system admits such a factorization. This, however, is not the case for linear time-varying systems. In Chapter Five, we derive necessary and sufficient conditions for the existence of Bezout polynomial factorizations for a linear time-varying system. Moreover, our results are constructive, and we present a systematic procedure for obtaining these factorizations.
Following this, in Chapter Six, we use these Bezout polynomial factorizations, together with the polynomial realization theory of Chapter Four, to study feedback control problems. In particular, we derive an assignability result (which corresponds to being able to assign the closed-loop system dynamics) for canonical linear time-varying systems. We illustrate our constructive techniques by designing a "deadbeat" controller for an armature-controlled DC motor with a time-varying motor torque "constant" (the time-variation being due to loading and heating effects).
Chapter Seven is concerned with reviewing some basic concepts dealing with the stability of linear time-varying systems, and with translating these concepts into our framework. In Chapter Eight we examine in detail the problem of stabilizing a linear time-varying system Σ. In particular we introduce the notion of asycontrollability, which is equivalent to being able to stabilize Σ
via dynamic state feedback. ANDERSON and MOORE [1981a] have defined a notion of stabilizability for linear time-varying systems which is equivalent to being able to stabilize Σ via memoryless state feedback. One of the most significant results in this dissertation is the equivalence of stabilizability and asycontrollability. This result, in particular, implies that dynamics in state feedback buy nothing extra as far as the problem of stabilization is concerned.
Of late, the use of stable-proper factorizations in the analysis and design of linear time-invariant control systems has become increasingly popular, one advantage being that properness of controllers is automatic. See for example the work of DESOER et al. [1980], VIDYASAGAR [1978], and SAEKS and MURRAY [1981]. In Chapter Nine we examine in detail stable-proper factorizations for time-varying systems. Following this, in Chapter Ten we investigate the role of these factorizations in feedback control problems. In particular we formulate the problem of Tracking with Internal Stability (TIS) for time-varying systems, and show using stable-proper factorizations that the TIS problem can be solved if and only if a particular linear matrix equation over a skew ring admits a solution.
Finally, in Chapter Eleven, we make some concluding remarks and discuss some open problems in the area of linear time-varying systems.
CHAPTER TWO
PRELIMINARY DEFINITIONS AND RESULTS
In this chapter, we first establish some notation and state some
preliminary definitions and results on linear time-varying systems. We also prove a proposition on the existence of a "deadbeat" control law for reachable time-varying systems.
With Z = the set of integers and R = the field of real numbers, let A denote the R-linear space of all functions from Z into R. With the operations of pointwise addition and multiplication, it is easy to verify that A forms a commutative ring with identity 1, where 1(k) = 1 for all k in Z. Of central importance in this entire theory is the right-shift operator σ defined by

(σa)(k) = a(k-1), for all k in Z.

With the shift operator σ (which is a ring automorphism on A), the ring A is called a difference ring. A subring B ⊂ A is called a difference subring of A if σ(B) = B.
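As a quick computational sketch of the shift operator (the function names and the sample time functions below are ours, not the dissertation's), elements of A can be modeled as Python functions on the integers, and one can check that σ respects pointwise addition and multiplication and is invertible, i.e., that it is a ring automorphism:

```python
# Elements of A are modeled as Python functions from the integers Z into R.
def sigma(a):
    # right shift: (sigma a)(k) = a(k - 1)
    return lambda k: a(k - 1)

def sigma_inv(a):
    # inverse shift: (sigma^{-1} a)(k) = a(k + 1)
    return lambda k: a(k + 1)

a = lambda k: k % 3        # an arbitrary time function in A
b = lambda k: k * k - 1    # another arbitrary time function

for k in range(-5, 6):
    # sigma is a ring homomorphism: it respects pointwise + and *
    assert sigma(lambda t: a(t) + b(t))(k) == sigma(a)(k) + sigma(b)(k)
    assert sigma(lambda t: a(t) * b(t))(k) == sigma(a)(k) * sigma(b)(k)
    # and it is invertible, hence an automorphism of A
    assert sigma_inv(sigma(a))(k) == a(k)
```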
(2.1) EXAMPLE. Some examples of difference subrings are given below.
PER(N) := {a in A : a is periodic with period N}.

PER := ∪_{i=1}^∞ PER(i) = {set of all periodic time functions}.

ℓ∞(Z) := {set of all bounded time functions}.

R[k] := {set of all time functions that are evaluations of polynomials in time}. □
Let A₊ denote the difference subring of A consisting of all functions with support bounded on the left, i.e., for any a in A₊ there exists an integer k_a such that a(k) = 0 for k < k_a.
(2.2) DEFINITION. Let m and p be positive integers. An m-input p-output linear causal time-varying input/output map f is an R-linear map

f : A₊^m → A₊^p

such that if u(k) = 0 for k < k_u for some u in A₊^m, then f(u)(k) = 0 for k < k_u. □
It is well known that for any input/output map f, there exists a p x m matrix function W_f(i,j) such that for any u in A₊^m,

f(u)(i) = Σ_{j=-∞}^{i} W_f(i,j)u(j).
The matrix function W_f is called the unit-pulse response function associated with the input/output map f. Note that by causality,
Wf(i,j) is not defined for i < j.
Our next concept is the notion of a system.
(2.3) DEFINITION. Let B be a fixed difference subring of A containing 1. An m-input p-output n-dimensional linear time-varying system over B is a quadruple Σ = (F,G,H,J) of matrices over B, where F is n x n, G is n x m, H is p x n, and J is p x m. □
For any matrix M over B, define its conjugate M⁺ by M⁺(k) = M′(-k), where ′ denotes the transpose. The dual system Σ⁺ of Σ = (F,G,H,J) is defined by Σ⁺ = (F⁺,H⁺,G⁺,J⁺). This time-reversal is an essential part of the natural notion of duality for time-varying systems.
With a system Σ = (F,G,H,J), we shall associate the dynamical equations

x(j+1) = F(j)x(j) + G(j)u(j),

y(j) = H(j)x(j) + J(j)u(j),
where x(j), y(j), and u(j) have the usual interpretation. In the
above definition of a system, it is important to observe that by selecting the difference subring B, we can restrict our attention to
a particular class of systems. For example, we can study the class of linear time-varying systems with bounded coefficients by choosing B = ℓ∞(Z) = the set of all bounded functions from Z into R.
Let Σ = (F,G,H,J) and Σ̄ = (F̄,Ḡ,H̄,J̄) be two m-input, p-output, n-dimensional systems over B ⊂ A. Then, Σ and Σ̄ are said to be (algebraically) B-isomorphic if and only if there exists an n x n matrix T over B, with inverse T⁻¹ over B, such that

F̄(k) = T(k+1)⁻¹ F(k) T(k),   H̄(k) = H(k) T(k),

Ḡ(k) = T(k+1)⁻¹ G(k),        J̄(k) = J(k).

As is well known, two isomorphic systems Σ and Σ̄ are related via a coordinate transformation x(k) = T(k)x̄(k) of their states.
Also, the unit-pulse response function W_Σ associated with the system Σ = (F,G,H,J) is given by

W_Σ(i,j) = H(i)F(i-1)F(i-2) ... F(j+1)G(j),  i > j
         = J(j),                             i = j
         = not defined,                      i < j.

The input/output behaviour of Σ is described by its input/output map f_Σ, where

f_Σ(u)(i) = Σ_{j=-∞}^{i} W_Σ(i,j)u(j).
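The product formula for the unit-pulse response can be checked by direct simulation: the output due to a unit pulse applied at time j reproduces H(i)F(i-1)⋯F(j+1)G(j). The scalar system below is our own illustrative choice, not one from the text.

```python
# An arbitrary scalar time-varying system (illustrative choice of F, G, H, J).
F = lambda k: 0.5 + 0.1 * (k % 3)
G = lambda k: 1.0 + (k % 2)
H = lambda k: 2.0
J = lambda k: 0.0

def W(i, j):
    # unit-pulse response: H(i)F(i-1)...F(j+1)G(j) for i > j, J(j) for i = j
    if i == j:
        return J(j)
    prod = 1.0
    for t in range(j + 1, i):
        prod *= F(t)
    return H(i) * prod * G(j)

def simulate(u, k0, steps):
    # x(k+1) = F(k)x(k) + G(k)u(k), y(k) = H(k)x(k) + J(k)u(k), x(k0) = 0
    x, ys = 0.0, {}
    for k in range(k0, k0 + steps):
        ys[k] = H(k) * x + J(k) * u.get(k, 0.0)
        x = F(k) * x + G(k) * u.get(k, 0.0)
    return ys

ys = simulate({2: 1.0}, 0, 8)   # unit pulse applied at time j = 2
assert all(abs(ys[i] - W(i, 2)) < 1e-12 for i in range(2, 8))
```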
Given an input/output map f : A₊^m → A₊^p, a system Σ = (F,G,H,J) is said to be a realization of f if and only if f = f_Σ. For further results on realizability, we refer the reader to WEISS [1972], EVANS [1972], and FERRER and KAMEN [1984].
A system Σ = (F,G,H,J), or the pair (F,G), over A is said to be reachable in N steps if and only if there exists a positive integer N such that for any j in Z and any x in Rⁿ, there exists an input sequence u(j-N), u(j-N+1), ..., u(j-1) which drives Σ from x(j-N) = 0 to x(j) = x. The dual notion of observability in N steps has the obvious system-theoretic interpretation.
Also, a system Σ is observable in N steps if and only if its dual Σ⁺ is reachable in N steps. A system Σ is said to be canonical if and only if it is both reachable and observable in N steps.
Let R_N denote the N-step reachability matrix

R_N := [G   F(σG)   F(σF)(σ²G)   ...   F(σF) ⋯ (σ^{N-2}F)(σ^{N-1}G)].
WEISS [1972] has obtained the following characterization of reachability.
(2.4) LEMMA. The pair (F,G) over A is reachable in N steps at all times if and only if

rank R_N(j) = n, for all j in Z,

i.e., if and only if R_N is right-invertible over A. □
(2.5) REMARK. For time-varying discrete-time systems, it can happen that a pair (F,G) is reachable in N > n steps (see Example (2.6)). This is due to the lack of a "Cayley-Hamilton" type theorem in this setting. □
(2.6) EXAMPLE. Consider the pair (F,G) over A where F(k) = 1 for all k in Z, and G(k) = 1 if k is even and G(k) = 0 otherwise. The pair (F,G) is easily seen to be reachable in two steps but not in one step. □
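Example (2.6) can be checked numerically, using the convention R_N(j) = [G(j)  F(j)G(j-1)  ...] for the N-step reachability matrix evaluated at time j (this indexing is our reading of the definition): R₁(j) loses rank at half the times, while R₂(j) has full rank everywhere.

```python
import numpy as np

F = lambda k: np.array([[1.0]])
G = lambda k: np.array([[1.0 if k % 2 == 0 else 0.0]])

def reach(j, N):
    # N-step reachability matrix at time j: [G(j), F(j)G(j-1), F(j)F(j-1)G(j-2), ...]
    cols, Phi = [], np.eye(1)
    for i in range(N):
        cols.append(Phi @ G(j - i))
        Phi = Phi @ F(j - i)
    return np.hstack(cols)

ranks1 = [np.linalg.matrix_rank(reach(j, 1)) for j in range(-4, 5)]
ranks2 = [np.linalg.matrix_rank(reach(j, 2)) for j in range(-4, 5)]
assert 0 in ranks1                    # rank drops at half the times: not reachable in one step
assert all(r == 1 for r in ranks2)    # full rank at every time: reachable in two steps
```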
In some instances, one may be interested in a slightly different notion of reachability. Let Σ = (F,G,H,J) be a system over a difference subring B of A. Then, Σ is said to be B-reachable in N steps if and only if R_N is right-invertible over B. For example, if B = ℓ∞(Z), this notion of reachability is equivalent to requiring uniform boundedness (with respect to j) of the inputs u(j-N), u(j-N+1), ..., u(j-1) in the definition of reachability given earlier. In fact we have the following result.
(2.7) PROPOSITION. Let Σ = (F,G,H) be a time-varying system over ℓ∞(Z). Then, Σ is ℓ∞(Z)-reachable if and only if there exists a real number ε such that

det(R_N R_N′)(k) ≥ ε > 0, for all k in Z.

PROOF. Suppose Σ is ℓ∞(Z)-reachable. Then, R_N is right-invertible with inverse U over ℓ∞(Z). Since U is over ℓ∞(Z), this implies that there exists a real number M such that all the n x n minors u_i of U are bounded by M. Let r_i denote the n x n minors of R_N. Then, by the Cauchy-Binet formula (see GANTMACHER [1959]) it follows that for any time k

det(R_N R_N′)(k) = Σ_i r_i²(k),

det(R_N U)(k) = 1 = Σ_i r_i(k)u_i(k) ≤ M Σ_i |r_i(k)|.

The last inequality in particular implies that there exists an ε > 0 such that Σ_i r_i²(k) ≥ ε, which in turn implies that det(R_N R_N′)(k) ≥ ε.

Conversely, suppose det(R_N R_N′)(k) ≥ ε > 0 for all time k. This implies that R_N R_N′ is right-invertible with right inverse

V = (R_N R_N′)^adj × 1/det(R_N R_N′).

It is now clear that V must be bounded since det(R_N R_N′) ≥ ε. This completes the proof. □
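For the (bounded) pair of Example (2.6), the quantity det(R_N R_N′) in Proposition (2.7) can be tabulated directly: det(R₂R₂′)(j) = G(j)² + G(j-1)² = 1 for every j, so ε = 1 works. This is our own numerical sketch of the criterion.

```python
import numpy as np

F = lambda k: np.array([[1.0]])
G = lambda k: np.array([[1.0 if k % 2 == 0 else 0.0]])

def reach(j, N):
    # N-step reachability matrix at time j: [G(j), F(j)G(j-1), ...]
    cols, Phi = [], np.eye(1)
    for i in range(N):
        cols.append(Phi @ G(j - i))
        Phi = Phi @ F(j - i)
    return np.hstack(cols)

# det(R_2 R_2') is bounded below uniformly in j, as Proposition (2.7) requires
dets = [float(np.linalg.det(reach(j, 2) @ reach(j, 2).T)) for j in range(-6, 7)]
assert min(dets) >= 1.0 - 1e-12
```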
We shall also need the following result on reachability.
(2.8) PROPOSITION. Let Σ = (F,G,H,J) over A be reachable in N steps at all times. Then, there exists an m x n matrix L over A such that (F - GL) is σ-nilpotent, i.e., for some integer q,

(F - GL)·σ(F - GL) ⋯ σ^q(F - GL) = 0.
PROOF. Consider the linear time-varying system described by

x(k+1) = F(k)x(k) + G(k)u(k).

Since the pair (F,G) is reachable in N steps, it is also controllable to the origin in N steps, i.e., any initial state x(0) in Rⁿ can be driven to zero final state in N steps. Therefore, there exist controls u(0), u(1), ..., u(N-1) in Rᵐ such that

x(1) = F(0)x(0) + G(0)u(0)
x(2) = F(1)x(1) + G(1)u(1)
...
x(N) = F(N-1)x(N-1) + G(N-1)u(N-1) = 0.

Let V_i ⊂ Rⁿ be the subspace consisting of all initial states x(0) in Rⁿ that can be driven to zero final state in i steps or less. Clearly

V₁ ⊂ V₂ ⊂ ... ⊂ V_N = Rⁿ.

We now select a linearly independent set of vectors b₁, b₂, ..., b_{n₁} that span V₁. We then extend this set to form a basis for V₂. In this manner, we find a basis B = {b₁, ..., b_n} for V_N = Rⁿ. Suppose some b_i in B is in V_t but not in V_{t-1}. For the initial state x(0) = b_i, we can therefore find a control sequence u^i(0), u^i(1), ..., u^i(N-1), where u^i(k) = 0 for k ≥ t, which drives x(0) to the final state x(N) = 0. In this manner, we determine

{u^i(k) : k = 0, 1, ..., N-1 ; i = 1, 2, ..., n}.
Let B be the n x n invertible matrix over R whose ith column is b_i. Let U₀, U₁, ..., U_{N-1} be m x n matrices over R where the ith column of U_k is u^i(k). Also, recursively define the matrices X₀, X₁, ..., X_N by

X₀ = B,
X_{k+1} = F(k)X_k + G(k)U_k.

It then follows that X_N = 0.
Suppose ξ is in Ker(X_i). This means that the initial state x(0) = Bξ can be driven to zero final state in i steps. Therefore, Bξ is in V_i, and then by our judicious choice of controls, it follows that

U_k ξ = 0, for k ≥ i.

In particular, U_i ξ = 0, i.e., Ker(X_i) ⊂ Ker(U_i). Hence we can solve the linear equations

U_i = -L_i X_i,   i = 0, 1, ..., N-1

for L_i. Observe now that

X_{k+1} = F(k)X_k + G(k)U_k = (F(k) - G(k)L_k)X_k.

It then follows that

0 = X_N = (F(N-1) - G(N-1)L_{N-1}) ... (F(0) - G(0)L₀)X₀.

However, X₀ = B is invertible. Consequently,

(F(N-1) - G(N-1)L_{N-1})(F(N-2) - G(N-2)L_{N-2}) ... (F(0) - G(0)L₀) = 0.

Since (F,G) is reachable in N steps at all times, we can repeat the above argument and find matrices L_N, L_{N+1}, ..., L_{2N-1} such that

(F(2N-1) - G(2N-1)L_{2N-1}) ... (F(N) - G(N)L_N) = 0.
In this fashion, we find matrices L_k for each integer k. Define the m x n matrix L over A by L(k) := L_k. It is then apparent that

(F - GL)·σ(F - GL) ⋯ σ^{2N}(F - GL) = 0,

i.e., (F - GL) is σ-nilpotent. □
(2.9) REMARK. It is interesting to note that the above proposition implies the existence of a "deadbeat" control law for linear time-varying systems that are reachable in N steps. More precisely, with the matrix L as described in Proposition (2.8), it follows that starting from any initial state in Rⁿ at any initial time k₀, the state trajectory x(k) of

x(k+1) = (F - GL)(k)x(k)

is zero for k > k₀ + q. □
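For the pair of Example (2.6), the deadbeat gain can be written down explicitly; the choice L(k) = G(k) below is ours. The closed-loop coefficient (F - GL)(k) then vanishes at every even k, so the product of any two consecutive factors is zero ((F - GL) is σ-nilpotent with q = 1) and every trajectory dies within two steps:

```python
F = lambda k: 1.0
G = lambda k: 1.0 if k % 2 == 0 else 0.0
L = lambda k: G(k)                 # our deadbeat gain for this example: u(k) = -L(k)x(k)

A = lambda k: F(k) - G(k) * L(k)   # closed-loop coefficient (F - GL)(k)

# sigma-nilpotency with q = 1: (F - GL)(k) * (F - GL)(k-1) = 0 for all k
assert all(A(k) * A(k - 1) == 0.0 for k in range(-10, 11))

# every trajectory reaches zero in at most two steps, from any start time k0
for k0 in range(-3, 4):
    x = 7.5                        # arbitrary initial state
    for k in range(k0, k0 + 2):
        x = A(k) * x
    assert x == 0.0
```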
CHAPTER THREE
THE TRANSFER-FUNCTION FRAMEWORK
In this chapter, we describe in detail the elements of a "transfer-function" type approach to linear time-varying systems introduced by KAMEN and KHARGONEKAR [1982]. Much of the remainder of this dissertation is based on this theory.
The commutative rings of polynomials, power series, and formal Laurent series, all with coefficients in the reals R, play a central role in the transfer-function theory of linear time-invariant systems. For time-varying systems the analogous objects are skew (noncommutative) rings with coefficients in the ring of time functions.
More precisely, with z equal to an indeterminate, let A((z⁻¹)) denote the set of all formal Laurent series of the form

Σ_{r=-N}^{∞} z^{-r} a_r,   a_r in A.
Note that the coefficients above are written on the right. This is
because we will now impose a noncommutative ring structure on
A((z⁻¹)). With the usual addition, and with multiplication defined by
z^r z^t = z^{r+t},

(3.1)   az = z(σa),   a in A,

where (σa)(k) = a(k-1), A((z⁻¹)) is a noncommutative ring with identity, called the skew ring of formal Laurent series over A. There are two important subrings of A((z⁻¹)): the skew ring of polynomials A[z] and the skew ring of formal power series A[[z⁻¹]]. These have the obvious definitions.
The noncommutative multiplication in A((z⁻¹)) defined above captures in a very natural way the time-variance of our systems, and thus plays a central role in this entire theory. This is illustrated by the following example and will become more evident in the remainder of this chapter.
(3.1) EXAMPLE. Consider the following single-input single-output time-varying difference equation:

a(k+1)y(k+1) = u(k).

The indeterminate z⁻¹ will, as in the time-invariant theory, represent a delay operator. The above equation can be written (we show this more formally later) as

z(ay) = u.

Assuming a(k) ≠ 0 for all k, we can also write the above difference equation as

y(k+1) = β(k)u(k),

where β = σ⁻¹(a⁻¹), and this can be represented in the "frequency domain" by zy = βu, or, multiplying on the left by β⁻¹, β⁻¹zy = u. Comparing this with the previous representation, we obtain za = (σ⁻¹a)z, which is precisely the manner in which we have defined our noncommutative multiplication in A((z⁻¹)). □
Define a projection map

π : A((z⁻¹)) → A((z⁻¹)) : Σ_r z^{-r}a_r ↦ Σ_{r=1}^{∞} z^{-r}a_r.

For any a in A((z⁻¹)), let (a)₊ := a - π(a) = the polynomial part of a. By (a)₀ we shall mean the constant coefficient of a, and a is said to be strictly proper if and only if π(a) = a.
Given an r x r skew polynomial matrix (i.e., a matrix with entries in A[z])

Q = Σ_{i=0}^{q} z^i Q_i,

the degree of Q, written deg(Q), is the largest integer q such that Q_q ≠ 0. Further, Q is said to be monic if Q_q = I (the r x r identity matrix), and Q is said to be right-invertible if and only if there exists an r x r matrix ψ over A((z⁻¹)) such that Qψ = I. We shall need the following result on invertibility.
(3.2) PROPOSITION. Let Q be an r x r polynomial matrix. Then, Q is right-invertible if and only if there exists an r x r polynomial matrix T such that QT is monic. Further, T can be chosen such that deg(Q) = deg(QT).

PROOF. Suppose Q is right-invertible, i.e., there exists a ψ in A^{rxr}((z⁻¹)) such that Qψ = I. Let deg Q = d. We can now write

z^d I = Qψz^d = Q(ψz^d)₊ + Qπ(ψz^d).

Notice that deg(Qπ(ψz^d)) < deg Q = d. Therefore the highest degree term of Q(ψz^d)₊ is z^d I. Choosing T = (ψz^d)₊, which is polynomial, proves the necessity. Notice that deg QT = deg Q = d.

Now assume that there exists a T in A^{rxr}[z] such that QT is a monic r x r polynomial matrix. We can then do right division of I by QT and find ψ in A^{rxr}((z⁻¹)) such that QTψ = I, which implies that Q is right-invertible. □
22
The degree condition deg(Q) = deg(QT) is merely a technical fact which will be useful in proving several later results. An analogous result holds for left-invertibility of Q. In general, left-invertibility is not equivalent to right-invertibility, the pathology being due to the skew nature of our rings (see Example (3.3)). We shall almost always deal with polynomial matrices that are both left- and right-invertible, in which case we shall call them invertible to avoid use of cumbersome prefixes.
The following examples illustrate the skew multiplication (3.1) in our rings and contrast this with multiplication in R[z].
(3.3) EXAMPLE. Define a₁, a₀, μ in A by

a₁(k) := 0 for k < 0, and a₁(k) := 1 for k ≥ 0;   a₀ := σ(1-a₁);   μ := a₀a₁.

Also define q(z), h₁(z), h₂(z) by

q(z) = a₁z + a₀,   h₁(z) = a₀z + σ²(a₁),   h₂(z) = μz - σ(μ).

Using multiplication in the skew ring A[z] defined by (3.1), it is an easy computation to show that

q(z)h₁(z) = z,   q(z)h₂(z) = 0.

Thus, q(z) is right-invertible but not left-invertible, a phenomenon that does not occur in R[z]. □
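These identities can be verified mechanically. The sketch below (all names ours) stores a skew polynomial Σ aᵢzⁱ as a dict from powers to coefficient functions and multiplies via the rule zⁱb = (σ⁻ⁱb)zⁱ, i.e., (a zⁱ)(b zʲ) = a·σ⁻ⁱ(b) z^{i+j}; note that in a skew ring the order of the factors matters, and it is the products qh₁ and qh₂ that come out as z and 0 respectively.

```python
def skew_mul(p, q):
    # p, q: dicts {power: coefficient function}, polynomial = sum a_i z^i;
    # rule: (a z^i)(b z^j) = a * sigma^{-i}(b) z^{i+j}, where (sigma^{-i} b)(k) = b(k + i)
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            def term(k, a=a, b=b, i=i):
                return a(k) * b(k + i)
            if i + j in out:
                def summed(k, f=out[i + j], g=term):
                    return f(k) + g(k)
                out[i + j] = summed
            else:
                out[i + j] = term
    return out

a1 = lambda k: 1 if k >= 0 else 0     # unit step
a0 = lambda k: 1 - a1(k - 1)          # a0 = sigma(1 - a1)
mu = lambda k: a0(k) * a1(k)          # the unit pulse at the origin

q  = {1: a1, 0: a0}                   # q(z)  = a1 z + a0
h1 = {1: a0, 0: lambda k: a1(k - 2)}  # h1(z) = a0 z + sigma^2(a1)
h2 = {1: mu, 0: lambda k: -mu(k - 1)} # h2(z) = mu z - sigma(mu)

qh1, qh2 = skew_mul(q, h1), skew_mul(q, h2)
for k in range(-4, 5):
    assert qh1[1](k) == 1 and qh1[0](k) == 0 and qh1[2](k) == 0   # q h1 = z
    assert all(c(k) == 0 for c in qh2.values())                   # q h2 = 0
```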
(3.4) EXAMPLE. Define a in A by

a(k) = 1 if k is even, and a(k) = 0 if k is odd.

Consider the scalar polynomial q(z) := az + 1. It is easy to verify that

q(z)h(z) = h(z)q(z) = 1,

where h(z) = -az + 1. Thus, q(z) is invertible and its inverse q⁻¹(z) = h(z) is polynomial. This situation also is peculiar to the skew ring A[z]. □
We now describe a transfer-function approach to time-varying systems based on the skew rings defined earlier. All proofs are omitted; they can be found in KAMEN, KHARGONEKAR and POOLLA [in press].
Again, let A₊ denote the subring of A consisting of all functions a : Z → R with support bounded on the left. Let Δ denote the unit pulse at the origin, i.e.,

Δ(k) = 1 if k = 0, and Δ(k) = 0 otherwise.

Given any u in A₊, the (generalized) z-transform of u, written U(z), is defined to be the skew Laurent series

(3.5)   U(z) = Σ_r z^{-r} u(r)Δ.

This generalized z-transform is simply the usual z-transform (imbedding R((z⁻¹)) in A((z⁻¹))) multiplied by Δ.
Let f be an input/output map, and let W_f denote the unit-pulse response function associated with f. For each integer r ≥ 0, define a p x m matrix W_r over A by

W_r(k) = W_f(r+k, k),   k in Z.

(3.6) DEFINITION. The (formal) transfer-function matrix W_f(z) associated with the input/output map f is the p x m matrix over A[[z⁻¹]] defined by

W_f(z) = Σ_{r=0}^{∞} z^{-r} W_r. □
This definition of transfer function for time-varying systems will be seen, by the following propositions, to be the natural definition to capture the time-varying behaviour of our systems.
(3.7) PROPOSITION. Let f be an input/output map. Let y be the output resulting from the input u in A₊^m. Let Y(z) and U(z) denote the (generalized) z-transforms of y and u respectively. Then

(3.8)   Y(z) = W_f(z)U(z). □
Note the close resemblance of (3.8) to the time-invariant transfer-function theory. Proposition (3.7) is a result one would desire of any definition of transfer function. This analogy to time-invariant systems is further illustrated by the following.

(3.9) PROPOSITION. Let Σ = (F,G,H,J) be a linear time-varying system with input/output map f_Σ. Then, the transfer-function matrix W_Σ associated with f_Σ is given by

(3.10)   W_Σ(z) = H(zI - F)⁻¹G + J. □
Despite the close resemblance these two results bear to the time-invariant theory, it must be emphasized that (3.8) and (3.10) are computed via the skew (noncommutative) multiplication defined earlier. For instance, (zI - F)^-1 is determined by the formula

    (zI - F)^-1 = z^-1 I + (σF)z^-2 + (σF)(σ²F)z^-3 + ... .
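The skew multiplication can be made concrete with a small numerical sketch. The following snippet is an illustration, not the dissertation's machinery: it assumes the commutation rule a·z = z·σ(a) with (σa)(k) = a(k+1), stores a skew Laurent series in canonical right-coefficient form Σ z^i a_i, and checks that the truncated series above really is a right inverse of (z - F) up to the truncation order.

```python
# Sketch (assumed conventions): skew Laurent series over A = functions Z -> R,
# with commutation rule a*z = z*sigma(a), (sigma a)(k) = a(k+1).  A series is a
# dict {power: coefficient}, each coefficient a callable k -> float, held in
# canonical right-coefficient form sum_i z^i a_i.

def skew_mul(A, B):
    """(z^i a)(z^j b) = z^(i+j) sigma^j(a) b, i.e. a is shifted by j steps."""
    out = {}
    for i, a in A.items():
        for j, b in B.items():
            t = lambda k, a=a, b=b, j=j: a(k + j) * b(k)
            prev = out.get(i + j)
            out[i + j] = t if prev is None else (lambda k, p=prev, t=t: p(k) + t(k))
    return out

# Truncated inverse of (z - F): in right-coefficient form the recursion reads
# c_1 = 1, c_{n+1}(k) = F(k - n) c_n(k), which is exactly the series
# z^-1 + (sigma F) z^-2 + (sigma F)(sigma^2 F) z^-3 + ... of the text.
F = lambda k: 0.3 + 0.1 * (k % 4)      # an arbitrary time-varying coefficient
N = 8
inv = {-1: (lambda k: 1.0)}
for n in range(1, N):
    c = inv[-n]
    inv[-(n + 1)] = lambda k, c=c, n=n: F(k - n) * c(k)

prod = skew_mul({1: (lambda k: 1.0), 0: (lambda k: -F(k))}, inv)
# prod = (z - F) * inv should be 1 + O(z^-N): 1 at z^0, zero down to z^-(N-1).
assert all(abs(prod[0](k) - 1.0) < 1e-12 for k in range(-6, 7))
assert all(abs(prod[p](k)) < 1e-12 for p in range(-(N - 1), 0) for k in range(-6, 7))
```

The residual term sits at z^-N, as expected of a truncation; the shift direction of σ is an assumption of this snippet and only the internal consistency of the rule matters for the check.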
We conclude this chapter by relating polynomial factorizations of transfer-function matrices to collections of input/output difference equations. Consider the collection of input/output difference equations with time-varying coefficients described by

(3.11)    Σ_{i=0}^{q} Qi(k+i)x(k+i) = Σ_{i=0}^{r} Ri(k+i)u(k+i) ,

          y(k) = Σ_{i=0}^{t} Pi(k+i)x(k+i) .

Here, u in A+^m, x in A+^n, y in A+^p, and the Qi, Pi, and Ri are matrices of appropriate dimension over A. Define the polynomial matrices

    Q(z) = Σ_{i=0}^{q} z^i Qi ,  R(z) = Σ_{i=0}^{r} z^i Ri ,  P(z) = Σ_{i=0}^{t} z^i Pi ,

and assume that Q(z) is invertible. Then, it is an easy computation to verify that the transfer-function matrix associated with the input/output map defined by (3.11) is given by

(3.12)    Wf(z) = P(z)Q^-1(z)R(z) .
Conversely, given an input/output map f whose associated transfer-function matrix Wf(z) admits a polynomial factorization (as in (3.12)), one can readily derive a collection of input/output difference equations (as in (3.11)) that correspond in a natural way to the particular factorization (3.12). For linear time-invariant systems this correspondence was first observed by ROSENBROCK [1970]. Polynomial factorizations of transfer-function matrices are investigated extensively in the next three chapters.
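The correspondence between (3.11) and (3.12) can be checked numerically in the time-invariant scalar special case, where constant coefficients make the skew product reduce to ordinary multiplication. The sketch below (made-up coefficients, not from the dissertation) simulates the pair of difference equations for q = 2, r = t = 1 with a unit-pulse input and compares the response with the Laurent expansion of p(z)r(z)/q(z):

```python
# Time-invariant scalar sanity check of (3.11) <-> (3.12).
p1, p0 = 1.0, 2.0
q1, q0 = 0.5, -0.25
r1, r0 = 3.0, 1.0
N, off = 12, 2

u = [0.0] * (N + off + 2); u[off] = 1.0      # unit pulse at k = 0
x = [0.0] * (N + off + 2)
# x(k+2) + q1 x(k+1) + q0 x(k) = r1 u(k+1) + r0 u(k), support bounded on left
for k in range(-off, N):
    x[k + 2 + off] = (-q1 * x[k + 1 + off] - q0 * x[k + off]
                      + r1 * u[k + 1 + off] + r0 * u[k + off])
y = [p1 * x[k + 1 + off] + p0 * x[k + off] for k in range(N)]

# Laurent coefficients of p(z)r(z)/q(z) by long division (q monic, degree 2):
num = [p1 * r1, p1 * r0 + p0 * r1, p0 * r0]
den = [1.0, q1, q0]
work = num + [0.0] * N
coef = []
for k in range(N):
    coef.append(work[k] / den[0])
    for j in range(3):
        work[k + j] -= coef[-1] * den[j]

# pulse response = z^{-k} coefficients of W_f(z) = p q^{-1} r
assert all(abs(y[k] - coef[k]) < 1e-9 for k in range(N))
```

Note that the recursion must be started two steps before the pulse so that the support-bounded-on-the-left solution x = Q^-1 Ru is obtained.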
CHAPTER FOUR
POLYNOMIAL REALIZATION THEORY
We will now consider polynomial matrix fraction representations of a given transfer function. Let P, Q, R, and S be polynomial matrices such that Q is invertible. Let f be an input/output map with the associated transfer matrix

    Wf(z) = PQ^-1 R + S .

For time-invariant systems over fields, FUHRMANN [1976] gave a natural realization for f in terms of the polynomial matrices P, Q, R, and S. For time-invariant systems over arbitrary commutative rings, KHARGONEKAR [1982] has obtained corresponding results. We now proceed to derive results for time-varying systems analogous to those of Fuhrmann and of Khargonekar. We first develop the machinery with which we can obtain these natural realizations for time-varying systems.
Let Q be an rxr (skew) polynomial matrix, invertible over A((z^-1)). Define a right A-module

    XQ := {x in A^r[z] : Q^-1 x is strictly proper} .

Define a right A-linear projection map

    π_Q : A^r[z] -> XQ : x -> Qπ(Q^-1 x) ,

where π(Q^-1 x) is the strictly proper part of Q^-1 x. Clearly, π_Q is surjective. The map π_Q corresponds to viewing polynomials in A^r[z] modulo Q.
(4.1) PROPOSITION. XQ is a finitely generated, free right A-module (i.e., XQ is isomorphic as a right A-module to A^n for some integer n).

PROOF. The proof of this technical fact is rather long, and we have therefore put it in Appendix A.    □
(4.2) EXAMPLE. Let us consider the monic rxr polynomial matrix

    Q(z) = z^n + z^{n-1}Q_{n-1} + ... + zQ1 + Q0 .

Let ej denote the jth column of the rxr identity matrix. Notice that for i = 0, 1, ..., n-1 and j = 1, 2, ..., r, z^i ej is in XQ since Q^-1 z^i ej is strictly proper. In fact, the set

    {z^i ej : i = 0, 1, ..., n-1 ; j = 1, 2, ..., r}

forms a basis for XQ, and XQ ≅ A^{nr}.
Suppose Q(z) is a (not necessarily monic) rxr polynomial matrix of degree n. It follows from the proof of Proposition (4.1) that the set

(4.3)    {π_Q(z^i ej) : i = 0, 1, ..., n-1; j = 1, 2, ..., r} ,

where ej is the jth column of the rxr identity matrix, generates XQ. The difficulty, however, is to extract a basis for XQ, which we shall require to obtain our natural realizations (see Remark (4.5) and Theorem (4.6)). The following example illustrates what can happen if Q(z) is not monic.
(4.4) EXAMPLE. Let q(z) = βz³ + z², where β(k) = 1 if k is even, and β(k) = 0 otherwise. One verifies that q is invertible over A((z^-1)) and that q^-1(z) is strictly proper. We compute a set of generators for Xq using (4.3):

    π_q(1) = qπ(q^-1) = qq^-1 = 1 ,
    π_q(z) = qπ(q^-1 z) = βz² + z ,
    π_q(z²) = 0 .

Thus, Xq is generated by {1, βz² + z}, which also happens to be a basis for Xq, and Xq ≅ A².
(4.5) REMARK. The right A-module XQ will serve as the state space for our natural realizations. Proposition (4.1) tells us that for any polynomial matrix Q, our state space XQ ≅ A^n for some integer n. This means that the dimension of our natural state space (see Theorem (4.6)) does not vary with time: a comforting fact. It is shown in the proof of this proposition that the dimension of XQ is given by the rank of a certain matrix which can be computed for any given Q.    □
We can also view XQ as a right A[z]-module by defining right-multiplication by z in XQ as

    x · z = π_Q(xz) ,  for any x in XQ .

This right A[z]-module structure is fundamental for obtaining the natural realizations alluded to at the beginning of this chapter.
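In the time-invariant (commutative) special case the projection and the z-action can be sketched concretely. The snippet below (constant coefficients assumed; not the dissertation's machinery) realizes π_q as a polynomial remainder — for monic scalar q, the strictly proper part of q^-1 x times q is exactly x mod q — and exhibits the action x·z = π_q(xz) on the basis {1, z}:

```python
# Commutative sketch of pi_Q for a monic scalar q(z) = z^2 + q1 z + q0.
def polymod(x, q):
    """Remainder of x modulo monic q (coefficient lists, highest power first)."""
    x = list(x)
    while len(x) >= len(q):
        lead = x[0]
        for j in range(len(q)):
            x[j] -= lead * q[j]
        x = x[1:]              # leading coefficient is now zero; drop it
    return x

q1, q0 = 0.5, -0.25
q = [1.0, q1, q0]

# The right action x.z = pi_q(x z) on the basis {1, z} of X_q:
assert polymod([1.0, 0.0], q) == [1.0, 0.0]          # 1 . z = z
assert polymod([1.0, 0.0, 0.0], q) == [-q1, -q0]     # z . z = -q1 z - q0
```

Reading off coordinates in the basis {1, z} gives the companion matrix [0 -q0; 1 -q1], the time-invariant shadow of the realization constructed in Theorem (4.6) below.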
We are now in a position to state the following result.
(4.6) THEOREM. Let P, Q, R, and S be pxr, rxr, rxm, and pxm polynomial matrices such that Q is invertible. Let f be an input/output map with the associated transfer matrix

    Wf(z) = PQ^-1 R + S .

Define the maps φ, Γ, and ψ by

    φ : XQ -> XQ : x -> π_Q(xz)
    Γ : A^m -> XQ : u -> π_Q(Ru)
    ψ : XQ -> A^p : x -> (PQ^-1 xz)0 .

Let F, G, and H be the matrix representations of φ, Γ, and ψ respectively with reference to some (fixed) basis of XQ, and let J := (PQ^-1 R + S)0. Then Σ(P, Q, R, S) := (F, G, H, J) is a realization of f.
PROOF. Let {b1, b2, ..., bn} be a basis for XQ. Any element x in XQ has the unique representation x = Σ_{i=1}^{n} bi ai , ai in A, since XQ is a free right A-module (see Proposition 4.1). We will write this as

    x = B [a1; a2; ...; an] ,

where B = [b1 b2 ... bn]; using this notation, the matrices F, G, and H with respect to the basis {b1, b2, ..., bn} are determined by

(4.7)    BF = φ(B) ,  BG = Γ(Im) ,  H = ψ(B) ,

where Im is the mxm identity matrix (over A). Notice that the maps φ and ψ are A-semilinear, i.e.,

    φ(xa) = φ(x)σ(a) ,  ψ(xa) = ψ(x)σ(a)

for any a in A. The map Γ is A-linear. Let ∘ denote composition of maps. We now compute using (4.7):

    Ak := H(σF)(σ²F) ... (σ^{k-1}F)(σ^k G)
        = ψ(B)(σF) ... (σ^{k-1}F)(σ^k G)
        = ψ ∘ [BF(σF) ... (σ^{k-2}F)(σ^{k-1}G)] .

The last equality above follows from the A-semilinearity of ψ. Continuing our computation, we see that

    Ak = ψ ∘ [φ(B)(σF) ... (σ^{k-2}F)(σ^{k-1}G)]
       = (ψ ∘ φ)[BF(σF) ... (σ^{k-3}F)(σ^{k-2}G)]
       = ...
       = (ψ ∘ φ^{k-1})(BG) = (ψ ∘ φ^{k-1} ∘ Γ)(Im) ,

where each step was deduced from the A-semilinearity of φ. Notice now that for any a in A[z], and any x in A^r[z],

    π_Q(π_Q(x)a) = π_Q(xa) .

Therefore,

    Ak = (ψ ∘ φ^{k-1} ∘ Γ)(Im) = (ψ ∘ φ^{k-1})(π_Q(R)) = (ψ ∘ φ^{k-2})(π_Q(Rz)) = ...
       = ψ(π_Q(Rz^{k-1})) = (PQ^-1 π_Q(Rz^{k-1})z)0 = (PQ^-1 R z^k)0 .

From Proposition (3.9) it follows that the transfer matrix WΣ(z) of the input/output map fΣ, where Σ = (F, G, H, J), is given by

    WΣ(z) = H(zI - F)^-1 G + J
          = Σ_{k=1}^{∞} Ak z^-k + J
          = Σ_{k=1}^{∞} (PQ^-1 R z^k)0 z^-k + (PQ^-1 R + S)0
          = PQ^-1 R + S ,

since (PQ^-1 R + S) is proper. Hence, WΣ(z) = Wf(z), i.e., Σ(P, Q, R, S) = (F, G, H, J) is a realization of f.    □
(4.8) REMARK. We could have chosen a different basis {b̄1, b̄2, ..., b̄n} for XQ and obtained a different realization Σ̄ = (F̄, Ḡ, H̄, J̄) for the input/output map f. In this event, Σ̄ and Σ = (F, G, H, J) are A-isomorphic. We shall call any realization of f obtained as in Theorem (4.6) the Fuhrmann realization associated with the polynomial matrix fraction representation Wf(z) = PQ^-1 R + S.
We now illustrate Theorem (4.6) with two examples.
(4.9) EXAMPLE. Let f be an input/output map described by the following collection of input/output difference equations:

    ξ(k+2) + q1(k+1)ξ(k+1) + q0(k)ξ(k) = r1(k+1)u(k+1) + r0(k)u(k) ,
    y(k) = p1(k+1)ξ(k+1) + p0(k)ξ(k) .

As shown in Chapter Three (see equations (3.11) and (3.12)), the transfer-function Wf(z) associated with f is given by the polynomial representation

    Wf(z) = PQ^-1 R = (p̂1 z + p0)(z² + zq1 + q0)^-1 (zr1 + r0) ,

where p̂1 = σ^-1(p1).

It follows from Example (4.2) that the set {1, z} forms a basis for XQ. Let us represent any x = a0 + za1 in XQ by the vector [a0 a1]'. We first compute the action of the maps φ, Γ, and ψ (as in Theorem (4.6)) on this basis of XQ:

    φ(1) = π_Q(1 · z) = z ,
    φ(z) = π_Q(z · z) = Qπ(Q^-1 z²) = z² - Q(z) = -zq1 - q0 ,
    Γ(1) = π_Q(R · 1) = π_Q(zr1 + r0) = zr1 + r0 ,
    ψ(1) = (PQ^-1 z)0 = {(p̂1 z + p0)(z^-1 + ...)}0 = p̂1 ,
    ψ(z) = (PQ^-1 z · z)0 = {(p̂1 z + p0)(1 - z^-1 σ²(q1) + ...)}0 = p0 - p̂1 σ²(q1) .

Also, J = (PQ^-1 R)0 = p̂1 r1. Thus, the Fuhrmann realization Σ(P,Q,R) = (F, G, H, J) of f (with respect to the basis chosen) is given by

    F = [ 0  -q0        G = [ r0        H = [ p̂1   p0 - p̂1 σ²(q1) ] ,    J = p̂1 r1 .
          1  -q1 ] ,          r1 ] ,
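This computation has a time-invariant shadow that can be checked numerically: with constant coefficients every σ-shift is the identity, so p̂1 = p1 and the realization reduces to the familiar companion form. The sketch below (made-up numbers, not from the dissertation) builds (F, G, H, J) for W_f = (p1 z + p0)(z² + q1 z + q0)^-1(r1 z + r0) and verifies its Markov parameters against long division:

```python
import numpy as np

p1, p0 = 1.0, 2.0
q1, q0 = 3.0, 5.0
r1, r0 = 7.0, 11.0

# Fuhrmann realization on the basis {1, z}, time-invariant special case:
F = np.array([[0.0, -q0], [1.0, -q1]])
G = np.array([r0, r1])
H = np.array([p1, p0 - p1 * q1])
J = p1 * r1

# Laurent coefficients of W_f(z) = p(z) r(z) / q(z) by long division:
num = [p1 * r1, p1 * r0 + p0 * r1, p0 * r0]
den = [1.0, q1, q0]
K = 9
work = num + [0.0] * K
coef = []
for k in range(K):
    coef.append(work[k] / den[0])
    for j in range(3):
        work[k + j] -= coef[-1] * den[j]

assert abs(J - coef[0]) < 1e-9
for k in range(1, K):
    Ak = H @ np.linalg.matrix_power(F, k - 1) @ G    # Markov parameter A_k
    assert abs(Ak - coef[k]) < 1e-9
```

With constant coefficients the skew product is ordinary multiplication, so this exercises only the module structure of the construction, not the σ-twists.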
(4.10) EXAMPLE. Let f be an input/output map whose associated transfer function Wf(z) admits the polynomial representation (see Example (4.4))

    Wf(z) = Q^-1 R = (βz³ + z²)^-1 (zr1 + r0) .

Observing that

    φ(1) = π_Q(1 · z) = βz² + z ,
    φ(βz² + z) = π_Q(βz³ + z²) = 0 ,
    Γ(1) = π_Q(zr1 + r0) = (βz² + z)r1 + r0 ,
    ψ(1) = (Q^-1 z)0 = (σβ) ,
    ψ(βz² + z) = (Q^-1(βz³ + z²))0 = 1 ,

we can immediately write down the Fuhrmann realization Σ(I,Q,R) = (F,G,H,J) of f as

    F = [ 0  0        G = [ r0        H = [ (σβ)  1 ] ,    J = (σβ)r1 .    □
          1  0 ] ,          r1 ] ,

In the case where Q is monic, one can readily obtain the general form of the Fuhrmann realization. Let

    P(z) = Σ_{i=0}^{t} z^i Pi ,  Q(z) = Σ_{i=0}^{q} z^i Qi ,  R(z) = Σ_{i=0}^{s} z^i Ri ,

and assume that Qq = I. Let f be an input/output map whose associated transfer-function matrix Wf(z) admits the polynomial representation

    Wf(z) = P(z)Q^-1(z)R(z) .
Then, the Fuhrmann realization Σ(P,Q,R) = (F,G,H,J) of f is given by
(4.11)

    F = [ 0  0  ...  0    -Q0
          I  0  ...  0    -Q1
          0  I  ...  0    -Q2
          .  .       .     .
          0  0  ...  I  -Q(q-1) ] ,        G = [ R0
                                                 R1
                                                 .
                                                 Rs
                                                 0
                                                 .
                                                 0 ] ,

    H = [P0  P1  ...  Pt  0  ...  0] T ,        J = (PQ^-1 R)0 ,

where T is the upper-triangular matrix with identity blocks on the diagonal whose first row is [I  σ²(B2)  σ³(B3)  ...  σ^q(Bq)], the Bi being the coefficients of

    Q^-1(z) = z^-q I + z^-q-1 B2 + z^-q-2 B3 + ... .
The Fuhrmann realization is "natural" in the sense that control-theoretic properties of the realization can be characterized in terms of algebraic properties of the associated factorization. We have, for instance, the following result characterizing reachability in N steps.
(4.12) THEOREM. Let P, Q, and R be pxr, rxr, and rxm polynomial matrices over A such that Q is invertible over A((z^-1)). Let f be an input/output map with associated transfer matrix

    Wf(z) = PQ^-1 R .

Then, the Fuhrmann realization Σ(P, Q, R) = (F, G, H) is reachable in N steps for some positive integer N if and only if there exist polynomial matrices Y1 and Y2 such that

    QY1 + RY2 = I .
PROOF. Let Σ(P, Q, R) = (F, G, H) be the Fuhrmann realization of Wf(z) = PQ^-1 R relative to some (fixed) basis {b1, b2, ..., bn} of XQ. We continue using the notation of the proof of Theorem (4.6): any x in XQ has a unique representation x = Σ_{i=1}^{n} bi ai , {ai} in A, and we shall write this as

    x = [b1 b2 ... bn] [a1; a2; ...; an] .

Suppose Σ = (F, G, H) is reachable in N steps for some integer N. This means that there exist matrices C0, C1, ..., C(N-1) over A such that

    [G | F(σG) | ... | F(σF) ... (σ^{N-2}F)(σ^{N-1}G)] [C0; C1; ...; C(N-1)] = I .

Let π_Q(I) = BT, where T is a matrix over A. It then follows that

    π_Q(I) = B(T)
           = B [G | F(σG) | ... | F(σF) ... (σ^{N-2}F)(σ^{N-1}G)] [C0 T; C1 T; ...; C(N-1) T]
           = [Γ(I) | (φ∘Γ)(I) | ... | (φ^{N-1}∘Γ)(I)] [C0 T; C1 T; ...; C(N-1) T]
           = π_Q(RY2) ,

where Y2 := Σ_{i=0}^{N-1} z^i Ci T. Hence,

    π_Q(I - RY2) = 0 .

This implies that there exists an rxr polynomial matrix Y1 over A[z] such that

    QY1 + RY2 = I .
Now suppose that there exist polynomial matrices Y1 and Y2 such that QY1 + RY2 = I. Then, for any x in XQ,

    RY2 x = x - QY1 x .

Hence, π_Q(RY2 x) = x. We can, therefore, write

    B = π_Q(RY2 B) .

Let Y2 B = Σ_{i=0}^{N-1} z^i Di. It then follows that

    B = π_Q(R Σ_{i=0}^{N-1} z^i Di)
      = Γ(I) D0 + (φ∘Γ)(I) D1 + ... + (φ^{N-1}∘Γ)(I) D(N-1)
      = B [G | F(σG) | ... | F(σF) ... (σ^{N-2}F)(σ^{N-1}G)] [D0; D1; ...; D(N-1)] .

Since {b1, b2, ..., bn} is a basis for XQ, this implies that

    [G | F(σG) | ... | F(σF) ... (σ^{N-2}F)(σ^{N-1}G)] [D0; D1; ...; D(N-1)] = I .

Consequently, the system Σ = (F, G, H) is reachable in N steps.    □
(4.13) REMARK. A dual version of Theorem (4.12) characterizes observability in N steps. We state this without proof. The Fuhrmann realization Σ(P, Q, R) = (F, G, H) is observable in N steps if and only if there exist polynomial matrices Y3 and Y4 such that

    Y3 Q + Y4 P = I .
It is a simple calculation to verify that the Fuhrmann realization associated with the polynomial matrix representation Wf(z) = H(zI - F)^-1 G is itself A-isomorphic to Σ = (F, G, H). Consequently, from Theorem (4.12) we immediately have the following analog of the familiar time-invariant theory result:

(4.14) COROLLARY. The pair (F, G) is reachable in N steps if and only if there exist polynomial matrices Y1 and Y2 such that

    (zI - F)Y1 + GY2 = I .    □
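Corollary (4.14) can be exercised on a small constant-coefficient instance (a sketch with hypothetical matrices, not from the text): for F = [0 0; 1 0], G = [1; 0], the pair is reachable in two steps, and one checks directly that Y1(z) = [0 -1; 0 0], Y2(z) = [1  z] satisfy the Bezout identity.

```python
import numpy as np

# Polynomial matrices as {power: coefficient matrix}; constant coefficients,
# so ordinary matrix multiplication applies.
def pm_mul(A, B):
    out = {}
    for i, Ai in A.items():
        for j, Bj in B.items():
            out[i + j] = out.get(i + j, 0) + Ai @ Bj
    return out

def pm_add(A, B):
    out = dict(A)
    for j, Bj in B.items():
        out[j] = out.get(j, 0) + Bj
    return out

F = np.array([[0.0, 0.0], [1.0, 0.0]])
G = np.array([[1.0], [0.0]])
zI_minus_F = {1: np.eye(2), 0: -F}
Y1 = {0: np.array([[0.0, -1.0], [0.0, 0.0]])}
Y2 = {0: np.array([[1.0, 0.0]]), 1: np.array([[0.0, 1.0]])}   # [1  z]

ident = pm_add(pm_mul(zI_minus_F, Y1), pm_mul({0: G}, Y2))
assert np.allclose(ident[0], np.eye(2))     # constant term is I
assert np.allclose(ident[1], 0)             # z term cancels
```

In the genuinely time-varying case the products acquire σ-shifts, but the shape of the computation is the same.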
Let Q and Q̄ be any rxr and ℓxℓ invertible polynomial matrices. We will now give necessary and sufficient conditions for XQ and XQ̄ to be A[z]-module isomorphic. For the case where A is a field, so that A[z] is the usual ring of polynomials, this issue was resolved by FUHRMANN [1976, Thm. 4.7], and for the case where A is an arbitrary commutative ring, by KHARGONEKAR [1982, Thm. 3.6]. The results we have obtained are natural extensions of those of Fuhrmann and Khargonekar.
(4.15) PROPOSITION. Let Q and Q̄ be rxr and ℓxℓ invertible polynomial matrices. A map ψ : XQ -> XQ̄ is a right A[z]-module isomorphism if and only if there exist polynomial matrices C, D, Y1, Y2, Y3, and Y4 such that for any x in XQ

(4.16)    ψ(x) = π_Q̄(Cx) ,

and such that

(4.17)    CQ = Q̄D ,
(4.18)    CY1 + Q̄Y2 = I ,
(4.19)    Y3 D + Y4 Q = I .    □
We omit the proof of this proposition since it closely resembles that of KHARGONEKAR [1982, Thm. 3.6]. The above proposition is used later in the dissertation to prove further results.
Let f be an input/output map. In general, there will exist several polynomial matrix fraction representations for Wf(z). We now address the following problem: If

    Wf = PQ^-1 R + S = P̄Q̄^-1 R̄ + S̄

are two polynomial matrix representations for Wf, what conditions must P, Q, R, S, P̄, Q̄, R̄, and S̄ satisfy in order that the Fuhrmann realizations Σ(P, Q, R, S) and Σ(P̄, Q̄, R̄, S̄) be A-isomorphic? This problem is closely related to the problem of strict system equivalence (see ROSENBROCK [1970] and FUHRMANN [1977] for strict system equivalence for systems over fields). Our results closely resemble the results obtained by FUHRMANN [1977, Thm. 4.1] and by KHARGONEKAR [1982, Thm. 4.3] (for systems over rings). We state without proof the following theorem.
(4.20) THEOREM. Let f be an input/output map whose associated transfer matrix

    Wf(z) = PQ^-1 R = P̄Q̄^-1 R̄ .

Then, the Fuhrmann realizations Σ(P, Q, R) and Σ(P̄, Q̄, R̄) are A-isomorphic if and only if there exist polynomial matrices C, D, Y1, ..., Y6 such that

(4.21)    [ C   0     [  Q  R       [  Q̄  R̄     [ D  Y6
            Y5  I ]     -P  0 ]  =    -P̄  0 ]     0  I  ] ,

          CY1 + Q̄Y2 = I ,  Y3 D + Y4 Q = I .    □

Essentially, equations (4.21) of the above theorem state that the system matrices (in the sense of ROSENBROCK [1970]) must be polynomially equivalent for their respective natural realizations to be A-isomorphic.
In the next chapter we continue developing the polynomial model theory and investigate Bezout polynomial factorizations of transfer-function matrices.
CHAPTER FIVE
POLYNOMIAL FACTORIZATIONS OF
TRANSFERFUNCTION MATRICES
In this chapter, we shall consider the issues of existence and computation of polynomial matrix fraction representations. Chapter Six explores applications of the theory developed here to the design of feedback control systems. The controllers designed will be specified in terms of a polynomial matrix fraction representation of the transfer-function matrix, and can be implemented using the polynomial realization theory described in the previous chapter. We would like to emphasize that all of our results are constructive.
Many frequency-domain methods used to design controllers for time-invariant systems begin with polynomial factorizations of the plant transfer-function matrix G(s) of the form

(5.1)    G(s) = Q^-1(s)R(s) ,  with  Q(s)Y1(s) + R(s)Y2(s) = I .

See for example the books of ROSENBROCK [1970] and WOLOVICH [1974]. Here, Q, R, Y1, and Y2 are polynomial matrices, and (5.1) is referred
to as a left-Bezout polynomial factorization. For time-invariant systems, the existence of these factorizations is not an issue; any plant transfer-function matrix always admits left- or right-Bezout polynomial factorizations. This, however, is not the case with time-varying systems (see Example (5.9)). We therefore need to understand what class of time-varying systems admit such factorizations before we use them to design control systems. Theorem (5.2) answers just this issue, and essentially states that a transfer-function matrix admits Bezout factorizations if and only if the associated input/output map admits a canonical realization.
(5.2) THEOREM. Let f be an input/output map, and let Wf(z) be the transfer-function matrix associated with f. Then, the following statements are equivalent.

(a) There exist polynomial matrices Q, R, Y1, and Y2 over A[z] with Q invertible and such that

        Wf(z) = Q^-1 R ,  QY1 + RY2 = I ,

    i.e., Wf(z) admits a right-Bezout polynomial factorization.

(b) There exist polynomial matrices P, Q, Y3, and Y4 with Q invertible and such that

        Wf(z) = PQ^-1 ,  Y3 P + Y4 Q = I ,

    i.e., Wf(z) admits a left-Bezout polynomial factorization.

(c) f admits a canonical realization.

PROOF. The proof of this theorem is extremely long and rather intricate, and may be found in Appendix B. We would like to emphasize that this proof is constructive.    □
We now present a systematic procedure for obtaining the polynomial factorizations (i.e., the matrices Q, R, Y1, and Y2) of part (a) of this theorem. An almost identical technique can be used to obtain the polynomial factorizations of part (b). Let Σ = (F, G, H) be a (given) canonical realization of the input/output map f.
STEP I. Since the given system Σ is canonical, it is reachable in N steps for some integer N. Let RN := [G F(σG) ... F(σF) ... (σ^{N-2}F)(σ^{N-1}G)] be the N-step reachability matrix for the pair (F, G). Compute a right-inverse U for RN over A. Compute the polynomial matrix

    X2 = [I  zI  ...  z^{N-1}I] U

(here each of the identity matrices is an mxm block), and let X1 := (zI - F)^-1(I - GX2), which turns out to be a polynomial matrix. It will happen that, with X1 and X2 defined as above,

    (zI - F)X1 + GX2 = I .
STEP II. Find matrices X and Y over A such that

    M0 := [ F  X
            H  Y ]

is invertible over A. Different choices for the matrices X and Y will result in unimodularly related Bezout polynomial factorizations (see Appendix B).

STEP III. Compute N0 := M0^-1 and partition it as

    N0 = [ A  C1
           B  C2 ] ,

where A is an nxn matrix. It will happen, because of the observability of the pair (F, H) (see Proposition (8.1)), that
the pair (A', B') is reachable in K steps for some integer K. Determine (using Proposition (2.6)) an nxp matrix S over A such that Ā := A' + B'S' is anilpotent, i.e., for some integer L,

    Ā(σĀ)(σ²Ā) ... (σ^{L-1}Ā) = 0 .
STEP IV. Define the matrix

    W := [ I  σ^-1(S)
           0     0    ]

and compute the polynomial matrix

    V = ( Σ_{i=0}^{L} (N0 W z)^i ) N0 .

Partition V as

(5.3)    V = [ *  *
               D  Q ] ,

where Q is a pxp matrix (the upper blocks of V are not needed in the sequel).

STEP V. Define the polynomial matrices

    R := DG ,  Y1 := HX1(zS + X) - Y ,  Y2 := X2(zS + X) .

Then R, Y1, Y2, and Q (as defined in (5.3)) determine the desired polynomial factorization of Wf(z) as in Theorem (5.2).
We now illustrate this procedure using two examples.
(5.4) EXAMPLE. Consider the linear time-varying system Σ = (F, g, h) where F(k) = 1 for all k in Z, and g = h = a, where a(k) = 1 if k is even and a(k) = 0 otherwise (see Example (2.5)). We systematically employ the procedure described earlier to obtain a right-Bezout polynomial factorization for Wf(z) = h(zI - F)^-1 g.

STEP I. The pair (F, g) is easily seen to be reachable in two steps, and R2 = [g  F(σg)] = [a  1-a]. A right-inverse for R2 is simply U = [1  1]'. Then,

    X1 = a - 1 ,  X2 = [1  z]U = z + 1 .

STEP II. With X = 0 and Y = 1, the matrix

    M0 := [ 1  0
            a  1 ]

is invertible over A.

STEP III.

    N0 := M0^-1 = [  1  0
                    -a  1 ] .

We need to find an S in A such that (1 - aS) is anilpotent. By inspection, it is clear that S = 1 will suffice.

STEP IV. Defining

    W := [ 1  σ^-1(S)     [ 1  1
           0     0    ] =   0  0 ] ,

we compute the polynomial matrix

    V = ( Σ_{i=0}^{2} (N0 W z)^i ) N0 ,

whose bottom blocks give D (with Dg = a) and Q = az² + az - 1.

STEP V. The desired factorization is then

    Wf(z) = Q^-1 R ,  QY1 + RY2 = 1 ,

where

    Q = az² + az - 1 ,  R = Dg = a ,
    Y1 = hX1(zS + X) - Y = -1 ,  Y2 = X2(zS + X) = z² + z .
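The identities of this example can be verified numerically. The following sketch assumes the commutation rule a·z = z·σ(a) with (σa)(k) = a(k+1) (for the parity function a both shift directions give σ(a) = 1 - a), converts Q = az² + az - 1 to canonical right-coefficient form z²a + zσ(a) - 1, and checks the Bezout identity QY1 + RY2 = 1 over a window of times:

```python
# Numerical check of Example (5.4)'s Bezout identity (conventions assumed).
a = lambda k: 1.0 if k % 2 == 0 else 0.0      # a = indicator of even times

def skew_mul(A, B):
    """Canonical right-coefficient form: (z^i a)(z^j b) = z^(i+j) a(.+j) b(.)."""
    out = {}
    for i, ca in A.items():
        for j, cb in B.items():
            t = lambda k, ca=ca, cb=cb, j=j: ca(k + j) * cb(k)
            prev = out.get(i + j)
            out[i + j] = t if prev is None else (lambda k, p=prev, t=t: p(k) + t(k))
    return out

def skew_add(A, B):
    out = dict(A)
    for j, cb in B.items():
        prev = out.get(j)
        out[j] = cb if prev is None else (lambda k, p=prev, t=cb: p(k) + t(k))
    return out

one = lambda k: 1.0
sa = lambda k: a(k + 1)                        # sigma(a) = 1 - a
Q = {2: a, 1: sa, 0: (lambda k: -1.0)}         # az^2 + az - 1, right-coeff form
R = {0: a}
Y1 = {0: (lambda k: -1.0)}
Y2 = {2: one, 1: one}                          # z^2 + z

ident = skew_add(skew_mul(Q, Y1), skew_mul(R, Y2))
assert all(abs(ident[0](k) - 1.0) < 1e-12 for k in range(-6, 7))
assert all(abs(ident[p](k)) < 1e-12 for p in ident if p != 0 for k in range(-6, 7))
```

The same skew_mul routine, applied to Q and the truncated series Wf = z^-2 a + z^-4 a + ..., reproduces R = a up to the truncation tail.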
(5.5) EXAMPLE. Let us consider an armature-controlled dc motor described by the input/output differential equation

(5.6)    d²θ(t)/dt² + α(t) dθ(t)/dt = β(t)u(t) .

Here, the input u(t) is the applied armature voltage, the output θ(t) is the angular position of the motor shaft, and the time-varying coefficients α(t) and β(t) are given by

    α(t) = 0.74 + 0.3k(t) ,  β(t) = 5.56k(t) ,

where k(t) is the (normalized) effective motor torque "constant". The nominal value of k(t) is 1; however, during operation k(t) may vary significantly due to motor shaft loading and heating effects. We assume that the time-variation of k(t) is known a priori; for example, the motor may be part of a machining operation and k(t) could be identified off-line, say by measurements taken during test runs.
By sampling the continuous-time system (5.6) with a suitably small sampling period T we obtain the following time-varying sampled-data state model Σ = (F, G, H) that approximates the motor characteristics:

(5.7)    [ θ(kT+T)     [ 1    T      [ θ(kT)     [ βT²/2
           ω(kT+T) ] =   0  1-αT ]     ω(kT) ] +   βT    ] u(kT) ,

         θ(kT) = [1  0] [ θ(kT)
                          ω(kT) ] .

Here ω(kT) is the angular velocity of the motor shaft. It can easily be verified that Σ = (F, G, H) is canonical. We wish to obtain a right-Bezout polynomial factorization for the transfer-function matrix WΣ(z) = H(zI - F)^-1 G. We shall later use this factorization (see Example (6.7)) to design a deadbeat controller for the motor. We systematically employ the procedure described earlier.
STEP I. It can easily be shown that

    (zI - F)X1 + GX2 = I

for the polynomial matrices X1 and X2 computed, as in the previous example, from a right-inverse of the two-step reachability matrix of (F, G).

STEP II. With X' = [0  1/T] and Y = 0, the matrix

    M0 := [ F  X     [ 1    T     0
            H  Y ] =   0  1-αT   1/T
                       1    0     0  ]

is invertible over A.

STEP III.

    N0 := M0^-1 = [   0     0     1
                     1/T    0   -1/T
                    αT-1    T   1-αT ] .

Notice that the matrix A := the upper 2x2 block of N0 is already anilpotent. Hence we may choose S = [0  0]'.

STEP IV. Defining

    W := [ I  σ^-1(S)     [ 1  0  0
           0     0    ] =   0  1  0
                            0  0  0 ] ,

we compute

    V = ( Σ_{i=0}^{2} (N0 W z)^i ) N0 = [    0       0             1
                                            1/T      0        (1/T)(z-1)
                                          z+αT-1     T    z²+(αT-2)z+(1-αT) ] .

STEP V. The desired factorization is then

    WΣ(z) = Q^-1 R ,  QY1 + RY2 = 1 ,

where

    Q = z² + (αT - 2)z + (1 - αT) ,  R = DG = z(βT²/2) + (βT²/2)(αT + 1) ,

and where Y1 = HX1(zS + X) - Y and Y2 = X2(zS + X) are computed from the X1 and X2 of Step I (here S = 0 and Y = 0).
We would like to remark that not all time-varying systems admit canonical realizations. This is demonstrated by the following example.

(5.9) EXAMPLE. Consider the linear time-varying system Σ = (F, g, h) where F(k) = h(k) = 1 for all k in Z, and g = Δ, the unit pulse concentrated at the origin. Let fΣ be the input/output map associated with Σ. We will show that fΣ does not admit a canonical realization and, therefore, by Theorem (5.2), Wf(z) does not admit left- or right-Bezout polynomial factorizations.
Suppose fΣ did admit a canonical realization. Then, by Theorem (5.2), there exist polynomial matrices P and Q with Q invertible and such that

(5.10)    h(zI - F)^-1 g = (z - 1)^-1 Δ = PQ^-1 .

Let Q = Σ_{i=0}^{n} z^i Qi. From (5.10) it follows that (z - 1)^-1 ΔQ = P is a polynomial. The coefficient of z^-1 above must be zero:

    0 = ΔQ0 + (σΔ)Q1 + ... + (σ^n Δ)Qn .

Multiplying this equation on the left by (σ^i Δ) we conclude that (σ^i Δ)Qi = 0 for i = 0, 1, ..., n. Thus ΔQ(z) = 0, which is impossible since Q is assumed to be invertible.    □
The central theorem of this chapter, Theorem (5.2), is stated for canonical linear time-varying systems. Under the weaker hypothesis of reachability we can obtain the following theorem (which is actually an intermediate result used in proving Theorem (5.2)), the proof of which may be found in Appendix B.

(5.11) THEOREM. Let Σ = (F, G, H) be a linear time-varying system that is reachable in N steps. Then, there exist polynomial matrices P and Q with Q invertible such that

    WΣ(z) = H(zI - F)^-1 G = PQ^-1 ,

and the Fuhrmann realization Σ(P, Q, I) is A-isomorphic to Σ = (F, G, H).
We would like to remark in closing that the systematic procedure given in this section for computing Bezout polynomial factorizations for timevarying systems carries through with minor modifications for systems over a principal ideal domain. The interested reader may
refer to POOLLA and KHARGONEKAR [1983] for more details.
In the next chapter, we apply the theory developed here and in Chapter 4 to the design of feedback control systems.
CHAPTER SIX
APPLICATIONS OF THE POLYNOMIAL THEORY TO FEEDBACK CONTROL
We will now explore applications of the theory developed in earlier chapters to the design of feedback control systems. In particular, we will show that via dynamic output feedback it is possible to "coefficient assign" canonical systems. We would like to stress that our design techniques are constructive. The controllers we obtain will be specified in terms of a polynomial matrix fraction representation of their transfer-functions, and can then be implemented (realized) using the polynomial realization theory of Chapter Four.
Let us consider a canonical linear time-varying system Σ = (F, G, H) over A. From Theorem (5.2) it follows that WΣ(z) = H(zI - F)^-1 G admits a right-Bezout polynomial factorization:

(6.1)    WΣ(z) = Q^-1 R ,  QY1 + RY2 = I .

Consider now the feedback control system shown below.
[Figure: unity-feedback configuration -- the external input V(z) and the fed-back output Y(z) form the error E(z), which drives the controller Σc; the controller output U(z) drives the plant Σ, producing Y(z).]
Here Σc is the controller (to be designed). Suppose Σc is specified in terms of the following collection of input/output difference equations:

(6.2)    U(z) = Pc Ξ(z) ,
         Qc Ξ(z) = Rc E(z) .

This is the so-called Rosenbrock representation (see the discussion at the end of Chapter Three), and here Pc, Qc, and Rc are polynomial matrices over A[z] and Qc is assumed to be invertible. Then the controller transfer-function matrix has the form WΣc(z) = Pc Qc^-1 Rc.
We can then represent the closed-loop system shown above by the equations

(6.3)    [ Q   -RPc     [ Y(z)      [ 0
           Rc   Qc  ]     Ξ(z) ]  =   Rc ] V(z) .
Thus, we see that the closed-loop system dynamics are determined by the inverse of the matrix

    π(z) = [ Q   -RPc
             Rc   Qc  ] .

In particular, the closed-loop system is internally uniformly asymptotically stable if and only if π^-1(z) is a stable Laurent series (see Chapter Seven).
We now address the following question: to what extent can the matrix π(z) be "assigned" by selecting Pc, Qc, and Rc, i.e., by designing the controller? Recall that we must constrain Qc to be invertible and WΣc(z) = Pc Qc^-1 Rc to be proper in order to realize the controller. In the time-invariant case this problem has been extensively studied. It has been shown (see for example EMRE and KHARGONEKAR [1982]) that with Rc = I, it is possible to arbitrarily assign the coefficients of the polynomial

    det π(z) = det(QQc + RPc) .

For the time-invariant case, since the roots of the equation det π(z) = 0 are the closed-loop system poles, this assignability result implies that it is possible to arbitrarily alter the closed-loop system dynamics by appropriately designing the controller (i.e., selecting Pc and Qc).
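In the time-invariant scalar case this assignment is a classical Diophantine equation in qc and pc. A minimal numerical illustration (hypothetical plant and target, not from the text): for the unstable plant q(z) = z - 2, r(z) = 1 and the deadbeat target φ(z) = z², the controller polynomials qc(z) = z + 2, pc(z) = 4 achieve (z - 2)(z + 2) + 4 = z².

```python
import numpy as np

q = np.array([1.0, -2.0])       # plant denominator z - 2 (pole at z = 2)
r = np.array([1.0])             # plant numerator
qc = np.array([1.0, 2.0])       # controller denominator z + 2
pc = np.array([4.0])            # controller numerator
closed = np.polyadd(np.polymul(q, qc), np.polymul(r, pc))
assert np.allclose(closed, [1.0, 0.0, 0.0])     # closed loop = z^2
```

All closed-loop poles land at z = 0, i.e. the deadbeat behaviour discussed in Remark (6.6) below; in the time-varying case the same role is played by the unimodular reduction of π(z) in Theorem (6.4).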
In the time-varying case, the matrix π(z) is defined over the noncommutative ring A[z], and there is no known definition for the determinant of a matrix over a noncommutative ring. Thus, we cannot speak of assigning the coefficients of det π(z). However, we can still consider the problem posed earlier: to what extent can π(z) be assigned by choosing WΣc(z)? In this regard, we have the following result:
(6.4) THEOREM. Let Σ = (F, G, H) be a canonical linear time-varying m-input p-output system over A, and let

    WΣ(z) = H(zI - F)^-1 G = Q^-1 R ,  QY1 + RY2 = I

be any right-Bezout polynomial factorization of WΣ(z). Let φ(z) be any pxp monic polynomial matrix with deg(φ) ≥ 2 deg(Q) + max{deg(Y1), deg(Y2) - 1}. Then, there exists a controller Σc with proper transfer-function matrix WΣc(z) = Pc Qc^-1 Rc, where Pc, Qc, and Rc are polynomial matrices, and there exist unimodular polynomial matrices T1(z) and T2(z) such that

    π(z) = T1(z) [ I  0
                   0  φ ] T2(z) .
PROOF. Since Q is an invertible polynomial matrix, it follows from Proposition (3.2) that there exists a polynomial matrix T such that M := TQ is monic and deg(M) = deg(Q). We now divide φ on the left by TQ to obtain

(6.5)    φ = M1 TQ + N ,

where deg(N) < deg(Q). Define the polynomial matrices Pc, Qc, and Rc by

(6.6)    Pc := Y2 ,  Qc := M1 T + NY1 ,  Rc := N .

We shall show that the desired controller Σc has transfer-function matrix WΣc(z) = Pc Qc^-1 Rc where Pc, Qc, and Rc are defined by (6.6). Before we do this, we must first ensure that (i) Qc is invertible, and that (ii) Pc Qc^-1 Rc is proper.

To prove (i), notice that

    deg(T^-1) = deg(QM^-1) = deg(Q) - deg(M) = 0 ,
    deg(M1) = deg(φ) - deg(M) ≥ deg(Q) + deg(Y1) .

It then follows that

    Qc = M1 T(I + T^-1 M1^-1 NY1) ,

and that

    deg(T^-1 M1^-1 NY1) ≤ deg(T^-1) - deg(M1) + deg(N) + deg(Y1)
                        ≤ 0 - deg(Q) - deg(Y1) + deg(N) + deg(Y1)
                        ≤ deg(N) - deg(Q)
                        < 0 .

Thus, I + T^-1 M1^-1 NY1 is monic, and hence Qc is invertible.

We now prove (ii). Notice that we can write

    Pc Qc^-1 Rc = Y2(I + T^-1 M1^-1 NY1)^-1 T^-1 M1^-1 N .

However, since deg(T^-1) = deg((I + T^-1 M1^-1 NY1)^-1) = 0, it follows that

    deg(Pc Qc^-1 Rc) ≤ deg(Y2) + deg(N) - deg(M1) ≤ 0 .

Thus, the controller transfer-function matrix WΣc(z) = Pc Qc^-1 Rc is well defined.

With the definitions (6.6) for Pc, Qc, and Rc, it can now be mechanically verified that

    [ Q   -RPc     [  I    0     [ I  0     [ Q  -I     [ I  Y1
      Rc   Qc  ] =   -M1T  I ]     0  φ ]     I   0 ]     0  I  ]

                 =: T1(z) [ I  0 ; 0  φ ] T2(z) .

The above equation, together with the observation that T1 and the two right-hand factors (and hence their product T2) are unimodular polynomial matrices, completes the proof.    □
Our technique in the above proof closely resembles that of KHARGONEKAR and OZGULER [in press].
(6.6) REMARK. We could, for instance, choose φ(z) = z^r I with r a sufficiently large integer to satisfy the degree constraint on φ. In this event, the controller designed as in the above theorem will result in a "deadbeat" closed-loop system response. In other words, with this controller, the closed-loop system response resulting from any initial state will become zero after a finite number of steps (assuming the external input V(z) = 0).    □
We now present a systematic procedure for designing a controller Σc = (Fc, Gc, Hc) to yield a desired closed-loop system response (specified by the polynomial matrix φ). This procedure is essentially distilled from the proof of the previous theorem and is exhibited explicitly for the reader's convenience.
STEP I. Given a canonical linear time-varying system Σ = (F, G, H), first obtain a right-Bezout polynomial factorization of the transfer-function matrix as

    WΣ(z) = Q^-1 R ,  QY1 + RY2 = I .
This can be done by systematically employing the procedure described in the previous chapter.
STEP II. Let φ(z) be a (given) monic pxp polynomial matrix which represents the desired closed-loop system response as in the statement of Theorem (6.4). Recall that there is a constraint on φ: deg(φ) ≥ 2 deg(Q) + max{deg(Y1), deg(Y2) - 1}. Left-divide φ by Q to obtain

    φ = XQ + N ,

with deg(N) < deg(Q).
STEP III. Compute the polynomial matrices

    Pc := Y2 ,  Qc := (X + NY1) ,  Rc := N .

Then the controller transfer-function matrix is given by WΣc(z) = Pc Qc^-1 Rc.
STEP IV. Realize the controller by a state model Σc = (Fc, Gc, Hc, Jc). This can be done using the polynomial realization theory of Chapter Four. If Q is monic, it will happen that Qc is also monic, in which case explicit formulae for the state model Σc = (Fc, Gc, Hc, Jc) are given by (4.11).
We conclude this chapter with an example illustrating the design procedure outlined above.
(6.7) EXAMPLE. Consider again the armature-controlled dc motor described in Example (5.5), with a time-varying motor torque "constant". The time-variation could be due to motor-shaft loading or heating effects. We shall use the systematic procedure described above to design a "deadbeat" controller for the motor.
STEP I. The time-varying sampled-data state model Σ = (F, G, H) of the motor is specified by equations (5.7). We have already, in Example (5.5), computed a right-Bezout polynomial factorization for WΣ(z), and obtained

    WΣ(z) = Q^-1 R ,  QY1 + RY2 = 1 ,

where

    Q = z² + (αT - 2)z + (1 - αT) ,  R = z(βT²/2) + (βT²/2)(αT + 1) ,

and where Y1 and Y2 are as computed there.
STEP II. Since we wish to design a "deadbeat" controller for Σ, we choose φ(z) = z^4 [see Remark (6.6)]. Notice that φ satisfies the degree constraint deg(φ) ≥ 2 deg(Q) + max{deg Y1, deg Y2 - 1}. Left-dividing φ by Q as φ = XQ + N, we obtain

    X(z) = z² - zσ^-1(αT - 2) + c ,
    N(z) = (σ^-2(αT - 2)σ^-1(1 - αT) - (αT - 2)c)z - (1 - αT)c ,
    c = σ^-2(αT - 2)σ^-1(αT - 2) - σ^-2(1 - αT) .
STEP III. The controller transfer-function matrix is then given by

    WΣc(z) = Y2(X + NY1)^-1 N ,

and can readily be realized using the formulae (4.11) since, in this case, Qc(z) is monic. We leave the details to the interested reader.    □
In this first part of the dissertation we have examined in detail a polynomial theory for linear time-varying systems. For the remainder of this dissertation we turn our attention to the study of linear time-varying systems based on the ring of stable-proper rational functions in ℓ∞(Z)[[z^-1]].
CHAPTER SEVEN
STABILITY
In this chapter, we briefly review some important well-known concepts dealing with the stability of linear time-varying systems. We also relate some of these concepts to the transfer-function theory based on skew rings developed in this dissertation. Some of the
material in this chapter is condensed from Section 4 of KAMEN, KHARGONEKAR and POOLLA [1984]. The reader is referred to this paper for further details.
In the past, a good deal of research on the stability of both continuous- and discrete-time linear time-varying systems has been done. For example, the interested reader may consult articles by WILLEMS [1970], ANDERSON and MOORE [1969, 1981a] (for Liapunov stability theory), CESARI [1963], STARZINSKII [1955] (for stability of special classes of time-varying systems such as periodic systems), FREEDMAN and ZAMES [1968] (for stability of slowly-varying systems), etc.
For any vector x in Rⁿ, let ‖x‖ denote the Euclidean norm of x. For an m×n matrix M over R, let ‖M‖ := supₓ ‖Mx‖/‖x‖. For an n×m matrix N over A, let the norm of N be defined by ‖N‖ := supₜ ‖N(t)‖.
Let f : Aᵐ → Aᵖ be an input/output map. We shall say that f is bounded-input bounded-output (BIBO) stable if and only if for any bounded input sequence, i.e., for any u over ℓ∞(Z)₊, the corresponding output sequence y = f(u) is bounded, i.e., y is over ℓ∞(Z)₊. Let E = (F, G, H, J) be any (fixed) realization of f. Consider the free behaviour of the system E described by the vector difference equation
(7.1) x(k + 1) = F(k)x(k)
The system E is said to be internally uniformly asymptotically
stable (u.a.s.) if and only if for every real number ε > 0, there exists a positive integer N such that for any initial time t₀ in Z and any initial state x(t₀) with ‖x(t₀)‖ < 1, we have that ‖x(t₀ + i)‖ < ε for all i > N. Here, x(t₀ + i) is the solution of (7.1) at time t₀ + i starting from initial state x(t₀).
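As a concrete check of this definition, the sketch below (a hypothetical scalar system of our own choosing, with F(k) = 0.5 + 0.3 sin k) verifies empirically that a single step count N works uniformly over all initial times t₀:

```python
import numpy as np

def transition(F, t0, i):
    """Phi(t0 + i, t0) = F(t0+i-1)···F(t0) for the free system x(k+1) = F(k)x(k)."""
    phi = 1.0
    for k in range(t0, t0 + i):
        phi = F(k) * phi
    return phi

# A bounded time-varying coefficient with |F(k)| <= 0.8 < 1 for all k,
# so the decay is uniform in the initial time t0 (hypothetical example).
F = lambda k: 0.5 + 0.3 * np.sin(k)

# Uniformity check: the same N works for every initial time t0.
eps, N = 1e-3, 50
worst = max(abs(transition(F, t0, N)) for t0 in range(-20, 20))
print(worst < eps)  # True: |x(t0 + N)| < eps for every unit initial state
```

The same script with an F that is stable for each fixed t₀ but not uniformly so would fail this test, which is exactly the content of the "uniform" qualifier.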
Let P(z) be an n×m matrix over the skew ring A((z⁻¹)), i.e., P(z) is a matrix Laurent series of the form

P(z) = Σ_{i=−N}^{∞} Pᵢz⁻ⁱ .

Then, P(z) is said to be stable if and only if ‖Pᵢ‖aⁱ → 0 as i → ∞ for some real number a > 1.
In terms of this notion, we can characterize internal stability as follows (see GREEN and KAMEN [1983] and KAMEN, KHARGONEKAR and POOLLA [1984, Prop. (4.4)]).
(7.2) PROPOSITION. Let E = (F, G, H) be a linear time-varying system over A. Then, E is internally u.a.s. if and only if (zI − F)⁻¹ is a stable matrix power series. □
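For intuition, the coefficients of the series (zI − F)⁻¹ are built from products of shifted values of F; the sketch below (scalar case, with an indexing convention that is our assumption rather than taken from the text) computes sup-norms of these coefficients for a hypothetical periodic F and observes the geometric decay that stability requires:

```python
def coeff_norm(F, i, times):
    """sup over t of |P_i(t)|, where P_i(t) = F(t-1)F(t-2)...F(t-i+1) is taken
    as the i-th coefficient of (zI - F)^{-1} = sum_i P_i z^{-i} (scalar case;
    this indexing convention is our assumption)."""
    def P(t):
        prod = 1.0
        for j in range(1, i):
            prod *= F(t - j)
        return prod
    return max(abs(P(t)) for t in times)

F = lambda k: 0.9 if k % 2 == 0 else 0.4   # hypothetical periodic system
times = range(-30, 30)
norms = [coeff_norm(F, i, times) for i in (1, 5, 10, 20)]
print(all(n2 < n1 for n1, n2 in zip(norms, norms[1:])))  # True: geometric decay
```

Replacing F by a function taking values of magnitude ≥ 1 makes the norms stop decaying, matching the proposition's "only if" direction.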
Note the similarity this result bears to the time-invariant theory. Let f be an input/output map, and let E = (F, G, H, J) over A be any (fixed) realization of f. In contrast with the time-invariant case, internal stability of E does not in general imply BIBO stability of f. However, for the class of bounded linear time-varying systems, we have the following result (the proof is omitted on account of its relative ease).
(7.3) PROPOSITION. Let f be an input/output map and let E = (F, G, H, J) over ℓ∞(Z) be any (fixed) realization of f. Suppose E is internally u.a.s. Then f is BIBO stable.
The following examples illustrate phenomena peculiar to time-varying systems with unbounded coefficients.
(7.4) EXAMPLE. Consider the time-varying system E = (F, G, H) over A where F = 1/2, G = 1, and

H(k) = 0 for k ≤ 0 , H(k) = k2ᵏ for k > 0 .
From Proposition (7.2) it is evident that E is internally u.a.s. This system, however, is not BIBO stable because the output y(k) resulting from the input u = Δ = unit impulse at the origin grows without bound. Notice that Proposition (7.3) does not apply here because H is not over ℓ∞(Z).
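A quick simulation illustrates the point; we read the example's garbled output coefficient as H(k) = k·2ᵏ for k > 0 (an unbounded choice consistent with the example's claims), which makes the impulse response y(k) = 2k:

```python
# Internally u.a.s. but not BIBO stable: F = 1/2 gives x(k) = (1/2)^{k-1},
# yet the unbounded output map H(k) = k*2^k (our reading of the example's H)
# makes the impulse response y(k) = 2k grow without bound.
def impulse_response(K):
    x, ys = 0.0, []
    for k in range(K):
        u = 1.0 if k == 0 else 0.0          # unit impulse at the origin
        H = k * 2.0**k if k > 0 else 0.0    # unbounded coefficient
        ys.append(H * x)
        x = 0.5 * x + u                     # x(k+1) = F x(k) + G u(k)
    return ys

y = impulse_response(12)
print(y[5], y[10])  # 10.0 20.0 -- output keeps growing linearly
```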
(7.5) EXAMPLE. Consider the linear time-varying system E = (F, G, H) over A defined by G = H = 1, and

F(k) = 0 for k ≤ 0 , F(k) = 1 for k > 0 .

Define a time-function T over A by

T(k) = 1 for k ≤ 0 , T(k) = k for k > 0 .
Notice that T is not an element of ℓ∞(Z) and that T has an inverse T⁻¹ over A. Consider the system Ē = (F̄, Ḡ, H̄) over A where

F̄ = (σ⁻¹T⁻¹)FT , Ḡ = (σ⁻¹T⁻¹)G , H̄ = HT .
The systems E and Ē are algebraically A-isomorphic (see Chapter 2). However, E is not internally u.a.s. while Ē is internally u.a.s. This is because the transformation T is not Liapunov (recall that an n×n matrix T is a Liapunov transformation if and only if T and T⁻¹ are over ℓ∞(Z)).
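The sketch below replays this example in the scalar case: the original state never decays, while the transformed state x̄(k) = T(k)⁻¹x(k) does, even though the two systems are algebraically isomorphic:

```python
# The states of the A-isomorphic systems are related by xbar(k) = T(k)^{-1} x(k).
# With F(k) = 1 for k > 0, x is constant, while xbar(k) = x(t0) T(t0)/T(k) -> 0:
# stability is not preserved because T is not a Liapunov transformation.
T = lambda k: float(k) if k > 0 else 1.0
Fbar = lambda k: T(k + 1) ** -1 * 1.0 * T(k)   # (sigma^{-1} T^{-1}) F T, scalar case

x, xbar, t0 = 1.0, 1.0, 1
for k in range(t0, t0 + 200):
    x = 1.0 * x            # original system: no decay at all
    xbar = Fbar(k) * xbar  # transformed system: xbar(k) = t0/k -> 0

print(x == 1.0, xbar < 0.01)  # True True
```

Note that the decay of x̄ from a fixed initial time does not by itself settle the uniformity question; the point of the example is only that stability properties are not invariant under non-Liapunov isomorphisms.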
Let f be an input/output map with associated transfer-function matrix Wf(z). Suppose there exist polynomial matrices P, Q, and R of appropriate sizes with Q monic and such that Wf(z) admits the polynomial representation

(7.6) Wf(z) = PQ⁻¹R .
In Chapter Four, we have described in detail a technique to obtain the (natural) Fuhrmann realization E(P, Q, R) = (F, G, H, J) of f associated with the particular polynomial representation (7.6) (the explicit formulae for F, G, H and J are given in equation (4.11)). The following theorem characterizes internal stability of the Fuhrmann realization in terms of the polynomial representation (7.6).
(7.7) THEOREM. Let f be an input/output map whose associated transfer-function matrix Wf(z) admits the polynomial representation Wf(z) = PQ⁻¹R. Further assume that P, Q, and R are polynomial matrices over ℓ∞(Z)[z], with Q monic. Then, the Fuhrmann realization E(P, Q, R) = (F, G, H, J) is internally u.a.s. if and only if Q⁻¹(z) is a stable matrix Laurent series.
PROOF. Since the Fuhrmann realization associated with the polynomial factorization Wf(z) = H(zI − F)⁻¹G + J is itself E = (F, G, H, J), it follows in particular that X_Q and X_{zI−F} must be isomorphic as right ℓ∞(Z)[z]-modules. This, by Proposition (4.15), implies that there exist polynomial matrices C, D, Y1 and Y2 over ℓ∞(Z)[z] such that

CQ = (zI − F)D , Y1D + Y2Q = I .

From these equations, we can write

Q⁻¹ = Y1DQ⁻¹ + Y2
    = Y1(zI − F)⁻¹C + Y2 .

It is now clear from the last equality that if (zI − F)⁻¹ is stable then Q⁻¹ is stable. The converse follows from an identical argument. □
In subsequent chapters, we shall deal only with linear time-varying systems with bounded coefficients, i.e., defined over the difference subring ℓ∞(Z) ⊂ A. We shall also require that any controller we design be over ℓ∞(Z). These are physically reasonable constraints, since most time-varying plants that arise in practice have bounded time variation, and implementation of controllers with unbounded coefficients would be numerically ill-conditioned. (Technically, these constraints substantially complicate proofs and make results harder to obtain.)
CHAPTER EIGHT
STABILIZABILITY AND ASYCONTROLLABILITY
In this chapter, we introduce the key notion of asycontrollability, which is closely related to being able to stabilize a linear time-varying system by dynamic state feedback. ANDERSON and MOORE [1981a] have defined the notion of stabilizability, which is equivalent to being able to stabilize a linear time-varying system by non-dynamic (i.e., memoryless) state feedback. One of our main results in this chapter, Theorem (8.5), shows the equivalence of stabilizability and asycontrollability. A striking conclusion of this theorem is that dynamics in state feedback buy nothing extra as far as the problem of stabilization is concerned.
Let us consider a linear time-varying system E = (F, G, H, J) defined over the difference subring ℓ∞(Z) ⊂ A. Recall that by Corollary (4.14), E is reachable in N steps if and only if there exist polynomial matrices Y1 and Y2 such that

(8.1) (zI − F)Y1 + GY2 = I .
Motivated by equation (8.1) we have the following key definition:
(8.2) DEFINITION. The system E is said to be asycontrollable if and only if there exist stable matrix Laurent series Y1 and Y2 such that

(zI − F)Y1 + GY2 = I .
This technical definition has a rather complex precise system-theoretic interpretation (see Appendix C and also KHARGONEKAR and POOLLA [1984b]) in terms of stabilizing E via an open-loop control law. Roughly speaking, a system is asycontrollable if and only if it can be driven to zero "final" state asymptotically, using uniformly bounded input sequences along uniformly bounded state trajectories. In particular, this implies that if E can be stabilized using a dynamic state-feedback controller, then E is asycontrollable. The phrase asycontrollability (from asymptotically controllable) is borrowed from KHARGONEKAR and SONTAG [1982] for time-invariant systems over rings.
We shall also need the following definition. A system E = (F, G, H, J) over ℓ∞(Z) is called right-rational asycontrollable if and only if there exist polynomial matrices A, B, and S over ℓ∞(Z)[z] with S invertible and S⁻¹ a stable Laurent series and such that

(zI − F)AS⁻¹ + GBS⁻¹ = I .
We now relate right-rational asycontrollability to the existence of a stabilizing controller with the following result.
(8.3) THEOREM. Let E = (F, G, H) over ℓ∞(Z) be a linear time-varying system that is right-rational asycontrollable. Then, there exists a controller Ec = (Fc, Gc, Hc) over ℓ∞(Z) such that the closed-loop system defined by

[ x(k + 1)  ]   [ F   GHc ] [ x(k)  ]   [ G ]
[ xc(k + 1) ] = [ Gc  Fc  ] [ xc(k) ] + [ 0 ] u(k)

is internally uniformly asymptotically stable (u.a.s.).
SKETCH OF PROOF. We would like to remark that the proof of this result is constructive, enabling one to explicitly compute the controller Ec using operations in the skew ring A[z]. This proof closely resembles the proof of Theorem (6.4). We outline the basic elements of the construction.

(i) Define K := max{deg(A), deg(B) − 1}. Pick any monic polynomial matrix S1 over ℓ∞(Z)[z] with deg(S1) > K + 2 and such that S1⁻¹ is a stable matrix Laurent series.
(ii) Divide S1 on the left by (zI − F) to obtain

S1 = M(zI − F) + N

with deg(N) = 0.
(iii) Define the polynomial matrices

Pc := B , Qc := MS + NA , Rc := N .

Then, the controller transfer function is given by WEc(z) = PcQc⁻¹Rc.

(iv) Implement the controller using the polynomial realization theory of Chapter 4. Explicit formulae for the controller in terms of the coefficient matrices of Pc, Qc, and Rc are given (for the case when Qc is monic) by equations (4.11). □
Let E = (F, G, H) be a linear time-varying system over ℓ∞(Z). ANDERSON and MOORE [1981a] have defined a notion of stabilizability which intuitively corresponds to requiring that unstable modes be controllable. The authors then show that this notion is equivalent to the existence of a stabilizing memoryless state feedback law u(k) = L(k)x(k). We shall (perversely) take this to be the definition of stabilizability. In terms of our skew-ring framework, we phrase this as follows:
(8.4) DEFINITION. Let E = (F, G, H) over ℓ∞(Z) be a linear time-varying system. Then, E is said to be stabilizable if and only if there exists an m×n matrix L over ℓ∞(Z) such that (zI − F + GL)⁻¹ is a stable matrix power series.
One can similarly define the dual notion of detectability.
Let E = (F, G, H) over ℓ∞(Z) be a stabilizable linear time-varying system. Let L be an m×n feedback matrix as in Definition (8.4). Notice that we can write

(zI − F)Y1 + GY2 = I

where Y1 = (zI − F + GL)⁻¹ and Y2 = LY1. Since Y1 and Y2 are stable Laurent series, it follows from Definition (8.2) that E is asycontrollable. Thus, stabilizability implies asycontrollability. The much more difficult converse is also true.
(8.5) THEOREM. Let E = (F, G, H, J) be a linear time-varying system over ℓ∞(Z). Then, E is asycontrollable if and only if E is stabilizable.
PROOF. The proof of this theorem is extremely long and may be found in Appendix C. We would like to remark that the essential technical difficulty in the proof is ensuring that the feedback matrix L is over ℓ∞(Z). □
We also immediately obtain the following:
(8.6) COROLLARY. A system E is right-rational asycontrollable if and only if it is asycontrollable.
PROOF. We have the following sequence of implications proving our claim:

stabilizability
  ⇒ (a) right-rational asycontrollability
  ⇒ (b) asycontrollability
  ⇒ (c) stabilizability.

Here, (a) follows from the discussion preceding Theorem (8.5), (b) follows trivially from the definition of right-rational asycontrollability, and (c) is a restatement of Theorem (8.5). □
Recall (from the discussion following Definition (8.2)) that if a linear time-varying system E can be stabilized by a dynamic state-feedback controller, then E is asycontrollable. This observation, together with the Definition (8.4) of stabilizability, immediately offers the following surprising conclusion.
(8.7) COROLLARY. If a linear time-varying system E can be stabilized using dynamic state-feedback, then E can also be stabilized using memoryless state-feedback. □
(8.8) REMARK. Indeed, the above corollary is not (necessarily) expected, because there are classes of systems (for example delay systems; see KAMEN [1982, Ex. 3, p. 371]) for which it is not true. For time-invariant systems over a field, it is a well-known fact that dynamic state feedback is equivalent to memoryless state feedback as far as the problem of stabilization is concerned. The proof of this fact (see any modern text on control theory) relies heavily on the Kalman canonical decomposition. No such decomposition exists for time-varying systems because the "dimension" of the reachable space could depend on time. □
Since stabilizability and asycontrollability are equivalent notions, we shall henceforth only speak of stabilizability.
It is important to find "nice" necessary and sufficient tests for stabilizability. This problem appears to be quite formidable unless one specializes to particular classes (e.g., periodic) of time-varying systems. We do have, however, the following sufficient condition.
(8.9) THEOREM. Let E = (F, G, H, J) over ℓ∞(Z) be a linear time-varying system. Suppose that one can find integers N and M and a real number ε > 0 with the following property: for any integer t₀, there exists some t₁ with t₀ ≤ t₁ ≤ t₀ + M such that

(8.10) det[R_N R_N'](t₁) ≥ ε > 0

where R_N is the N-step reachability matrix

R_N := [G  FσG  ⋯  FσF⋯σ^{N−2}Fσ^{N−1}G] .

Then, E is stabilizable.
PROOF. Essentially, condition (8.10) corresponds to the system being ℓ∞(Z)-reachable in N steps, but not at all times.

Given any initial state ξ in Rⁿ and any initial time t₀, we construct an open-loop stabilizing control law as follows: we apply zero control (i.e., u(t) = 0) for the times t₀ ≤ t < t₁. Then, we drive the system to zero state in N steps. This can be done because, from (8.10), E is reachable at time t₁. Moreover, the input sequences applied are uniformly (in t₁) bounded in norm. Having brought the system to zero state, we apply no further inputs. We can thus stabilize E by an open-loop control law. This implies that E is asycontrollable, which by Theorem (8.5) implies that E is stabilizable. □
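The dead-beat step in this proof can be sketched numerically. The code below builds an N-step reachability matrix for a hypothetical bounded time-varying system and solves for the open-loop input sequence driving the state to zero; the column-ordering convention is ours:

```python
import numpy as np

def reach_matrix(F, G, t1, N):
    """N-step reachability matrix at time t1 for x(k+1) = F(k)x(k) + G(k)u(k):
    columns Phi(t1+N, j+1) G(j) for j = t1+N-1, ..., t1 (a standard construction;
    the ordering convention is our own). Also returns Phi(t1+N, t1)."""
    n = F(0).shape[0]
    cols, Phi = [], np.eye(n)
    for j in range(t1 + N - 1, t1 - 1, -1):
        cols.append(Phi @ G(j))
        Phi = Phi @ F(j)
    return np.column_stack(cols), Phi

# Hypothetical bounded time-varying system, reachable in N = 2 steps.
F = lambda k: np.array([[0.0, 1.0], [0.2 + 0.1 * np.cos(k), 0.5]])
G = lambda k: np.array([[0.0], [1.0]])

t1, N, x0 = 0, 2, np.array([1.0, -2.0])
R, Phi = reach_matrix(F, G, t1, N)
u = np.linalg.solve(R, -Phi @ x0)       # open-loop dead-beat input sequence

# Simulate, applying the inputs in time order u(t1), ..., u(t1+N-1).
x = x0
for k in range(t1, t1 + N):
    x = F(k) @ x + G(k).flatten() * u[t1 + N - 1 - k]
print(np.allclose(x, 0))  # True: driven to the zero state in N steps
```

Since the coefficients here are bounded and R is uniformly well-conditioned, the computed inputs are uniformly bounded in t₁, which is the content of condition (8.10).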
CHAPTER NINE
STABLE-PROPER FACTORIZATIONS
Of late, the use of stable-proper factorizations for control system design has become increasingly popular, one advantage being that properness of controllers is automatic. See, for example, VIDYASAGAR [1978], DESOER et al. [1980], SAEKS and MURRAY [1981], KHARGONEKAR and SONTAG [1982], etc. In this chapter we investigate in detail stable-proper factorizations for time-varying systems.
Throughout this chapter, we shall treat only bounded time-varying systems, i.e., systems defined over the difference subring ℓ∞(Z). This is a reasonable physical constraint, as discussed at the end of the previous chapter. The results we obtain in this chapter extend to some classes of time-varying systems defined over a difference subring B ⊂ ℓ∞(Z), such as periodic systems.
Define the set RP to be

RP := {φ in ℓ∞(Z)[[z⁻¹]] : φ = α(zI − β)⁻¹γ + δ}

where α, β, γ, and δ are matrices of compatible dimensions over ℓ∞(Z). Here, RP is a mnemonic for rational (or realizable) and proper. It is easy to verify that RP forms a ring with the usual
skew-multiplication and addition defined in ℓ∞(Z)[[z⁻¹]]. Also, define the subring RPs by

RPs := {φ = α(zI − β)⁻¹γ + δ in RP : (zI − β)⁻¹ is stable} .

We shall call RPs the ring of stable, proper, rational functions (the subscript s denotes stable). We shall abandon rigorous nomenclature and loosely refer to elements in RPs or matrices over RPs as being stable-proper.
Let D(z) be an n×n matrix over RP. We shall say that D(z) is bicausal if and only if D has an inverse D⁻¹(z) also over RP. We have the following simple lemma.
(9.1) LEMMA. Let D(z) = M1(zI − M2)⁻¹M3 + M4 be an n×n matrix over RP. Then, D is bicausal if and only if M4 is invertible over ℓ∞(Z), and, in this event, D⁻¹(z) is given by the formula

(9.2) D⁻¹(z) = M4⁻¹ − M4⁻¹M1(zI − M2 + M3M4⁻¹M1)⁻¹M3M4⁻¹ .
PROOF. Suppose M4 is invertible over ℓ∞(Z). Then, it can be verified by direct multiplication that D⁻¹(z) is given by (9.2). To prove the converse, suppose D(z) is invertible over RP. Let

D⁻¹(z) = N0 + N1z⁻¹ + N2z⁻² + ⋯ .
Notice that DD⁻¹ = I. Thus,

(M4 + terms in z⁻¹, z⁻², ⋯)(N0 + N1z⁻¹ + ⋯) = I .

Equating coefficients of z⁰ we obtain M4N0 = I. Notice that N0 is bounded (i.e., over ℓ∞(Z)). This proves our claim. □
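Formula (9.2) is a matrix-inversion-lemma (Woodbury-type) identity; it can be spot-checked numerically in the constant-coefficient case by evaluating both sides at a sample value of z (the matrices below are arbitrary random data):

```python
import numpy as np

# Numerical check of (9.2) in the time-invariant (constant-matrix) case:
# D(z) = M1 (zI - M2)^{-1} M3 + M4 and the claimed inverse agree pointwise.
rng = np.random.default_rng(0)
n = 3
M1, M2, M3 = (rng.standard_normal((n, n)) for _ in range(3))
M4 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible constant term

def D(z):
    return M1 @ np.linalg.inv(z * np.eye(n) - M2) @ M3 + M4

def Dinv(z):
    M4i = np.linalg.inv(M4)
    core = np.linalg.inv(z * np.eye(n) - M2 + M3 @ M4i @ M1)
    return M4i - M4i @ M1 @ core @ M3 @ M4i

z = 2.7
print(np.allclose(D(z) @ Dinv(z), np.eye(n)))  # True
```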
The previous lemma is merely a generalization of the familiar time-invariant theory result that a proper power series D in R[[z⁻¹]] is invertible with proper inverse if and only if the constant coefficient of D is invertible.
Let f be an input/output map and let Wf(z) be its associated transfer-function matrix. Then, Wf(z) is said to admit a right-Bezout stable-proper factorization if and only if there exist stable-proper matrices (i.e., over RPs) N, D, X, and Y with D bicausal and such that

(9.3) Wf(z) = ND⁻¹ , XN + YD = I .
One can also similarly define left-Bezout stable-proper factorizations. Such factorizations have been found to be extremely useful in tackling many control-theoretic problems (for example, optimal controller design; see ZAMES and FRANCIS [1983]). As with the polynomial factorization theory of Chapter Five, we begin by characterizing input/output maps whose transfer functions admit left- and/or right-Bezout stable-proper factorizations. We have the following central result.
(9.4) THEOREM. Let f be an input/output map and let Wf(z) be its p×m associated transfer-function matrix over ℓ∞(Z)[[z⁻¹]]. Then, the following are equivalent:

(a) Wf(z) admits a right-Bezout stable-proper factorization.
(b) Wf(z) admits a left-Bezout stable-proper factorization.
(c) f admits a stabilizable and detectable realization.
PROOF. (c) ⇒ (a). Let E = (F, G, H, J) be a stabilizable and detectable realization of f. Consequently, by definition, there exist matrices L and K over ℓ∞(Z) such that (zI − F + GL)⁻¹ and (zI − F + KH)⁻¹ are stable matrix power series. Define the stable-proper (i.e., over RPs) matrices N, D, X, and Y by

(9.5)
N = (H − JL)(zI − F + GL)⁻¹G + J ,
D = I − L(zI − F + GL)⁻¹G ,
X = L(zI − F + KH)⁻¹K ,
Y = I + L(zI − F + KH)⁻¹(G − KJ) .
Notice by Lemma (9.1) that D is bicausal. It can be mechanically verified that with these definitions

Wf(z) := H(zI − F)⁻¹G + J = ND⁻¹ , XN + YD = I .

Thus, Wf(z) admits a right-Bezout stable-proper factorization.
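The "mechanical verification" can itself be spot-checked numerically in the constant-coefficient case by evaluating the formulas (9.5) at a sample value of z; the gains L and K below are arbitrary random matrices, since the algebraic identities do not need stability:

```python
import numpy as np

# Sanity check of the factor formulas (9.5), constant-coefficient case,
# evaluated pointwise in z: Wf = N D^{-1} and X N + Y D = I.
rng = np.random.default_rng(1)
n, m, p = 3, 2, 2
F, G = rng.standard_normal((n, n)), rng.standard_normal((n, m))
H, J = rng.standard_normal((p, n)), rng.standard_normal((p, m))
L, K = rng.standard_normal((m, n)), rng.standard_normal((n, p))

def at(z):
    AL = np.linalg.inv(z * np.eye(n) - F + G @ L)
    AK = np.linalg.inv(z * np.eye(n) - F + K @ H)
    N = (H - J @ L) @ AL @ G + J
    D = np.eye(m) - L @ AL @ G
    X = L @ AK @ K
    Y = np.eye(m) + L @ AK @ (G - K @ J)
    Wf = H @ np.linalg.inv(z * np.eye(n) - F) @ G + J
    return Wf, N, D, X, Y

Wf, N, D, X, Y = at(3.1)
print(np.allclose(Wf, N @ np.linalg.inv(D)),
      np.allclose(X @ N + Y @ D, np.eye(m)))  # True True
```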
(a) ⇒ (c). Now suppose that Wf(z) admits a right-Bezout stable-proper factorization, i.e., there exist stable-proper matrices N, D, X, and Y with D bicausal and such that

Wf(z) = ND⁻¹ , XN + YD = I .
From the definition of a stable-proper function (i.e., of the ring RPs), it follows that we can write

N = N1(zI − N2)⁻¹N3 + N4 ,
D = D1(zI − D2)⁻¹D3 + D4 ,

where N1, ..., N4, D1, ..., D4 are matrices of appropriate sizes over ℓ∞(Z), with (zI − N2)⁻¹ and (zI − D2)⁻¹ stable matrix power series. Since D is bicausal, it follows from Lemma (9.1) that, without loss of generality, D4 = I. Define the linear time-varying system (over ℓ∞(Z)) E = (F, G, H, J) by
(9.6)
F = [ N2   −N3D1     ]        G = [ N3 ]
    [ 0    D2 − D3D1 ]            [ D3 ]

H = [ N1   −N4D1 ] ,          J = N4 .
The system E is stabilizable because, with L = [0  −D1] (which is over ℓ∞(Z)),

(zI − F + GL)⁻¹ = [ (zI − N2)⁻¹       0           ]
                  [      0            (zI − D2)⁻¹ ]

which is stable. We now show that E, defined by (9.6), is detectable.
Define the stable-proper matrix Q by

Q = [ (zI − N2)⁻¹N3 ]
    [ (zI − D2)⁻¹D3 ] .
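In the constant-coefficient case, the two identities asserted next — (zI − F)Q = GD and HQ + JD = N — can be checked numerically for the realization (9.6) and this Q:

```python
import numpy as np

# Pointwise check (constant coefficients, D4 = I) of (zI - F)Q = GD and
# HQ + JD = N for the realization (9.6) and the Q defined above.
rng = np.random.default_rng(2)
n1, n2, m = 3, 3, 2
N1, N2, N3, N4 = (rng.standard_normal(s) for s in [(m, n1), (n1, n1), (n1, m), (m, m)])
D1, D2, D3 = (rng.standard_normal(s) for s in [(m, n2), (n2, n2), (n2, m)])

F = np.block([[N2, -N3 @ D1], [np.zeros((n2, n1)), D2 - D3 @ D1]])
G = np.vstack([N3, D3])
H = np.hstack([N1, -N4 @ D1])
J = N4

z = 2.3
Ninv = np.linalg.inv(z * np.eye(n1) - N2)
Dinv = np.linalg.inv(z * np.eye(n2) - D2)
Nz = N1 @ Ninv @ N3 + N4            # N evaluated at z
Dz = D1 @ Dinv @ D3 + np.eye(m)     # D evaluated at z (D4 = I)
Q = np.vstack([Ninv @ N3, Dinv @ D3])

lhs = (z * np.eye(n1 + n2) - F) @ Q
print(np.allclose(lhs, G @ Dz), np.allclose(H @ Q + J @ Dz, Nz))  # True True
```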
It is easy to verify that (zI − F)Q − GD = 0 and HQ + JD = N. Combining these equations with XN + YD = I, we can write

(9.7)  [ zI − F    −G     ] [ (zI − F + GL)⁻¹     Q ]   =  [ I  0 ]
       [ XH       Y + XJ  ] [ −L(zI − F + GL)⁻¹   D ]      [ V  I ]

where V is some stable-proper matrix whose exact formula is not critical to our needs. Define the matrix

Φ := [ zI − F    −G     ]
     [ XH       Y + XJ  ] .
It is clear from (9.7) that Φ is right-invertible with Φ_R⁻¹ (the subscript R denotes right) a stable-proper matrix. We now show that Φ is also left-invertible (and thus its inverse Φ⁻¹ = Φ_R⁻¹ is unique). Notice that

(9.8)  Φ = [ zI − F   0 ] [ I     −(zI − F)⁻¹G ]  =: Φ1Φ2 .
           [ 0        I ] [ XH    Y + XJ       ]

The matrix Φ1 is clearly both left- and right-invertible. Also, (Y + XJ) must be of the form

(Y + XJ) = I + terms in z⁻¹, z⁻², ⋯
since XN + YD = I. Thus, the leading (z⁰) coefficient of Φ2 is of the form

[ I  0 ]
[ *  I ] .

Consequently, Φ2 is both left- and right-invertible. Thus, from (9.8), Φ must also be both left- and right-invertible, proving our claim. Recalling that Φ⁻¹ = Φ_R⁻¹ is a stable-proper matrix, we can write
[ W1  W2 ] [ zI − F    −G     ]
[ *   *  ] [ XH       Y + XJ  ]  =  I .

One component of the above equality is

W1(zI − F) + (W2X)H = I .

Thus, the dual of the pair (F, H) is asycontrollable, which by Theorem (8.5) implies that E is detectable. This completes the proof of (a) ⇒ (c).
(b) ⇒ (c) and (c) ⇒ (b) follow from a dual argument. □
(9.9) REMARK. In proving the above result, we have made critical use of Theorem (8.5), the fundamental result of the previous chapter. Recall that Theorem (8.5) states that asycontrollability is equivalent to stabilizability; the rather difficult proof of this result is in Appendix C. There appears to be no way to circumvent Theorem (8.5) in studying the existence of stable-proper factorizations. We would further like to remark that equations (9.5) enable one to compute stable-proper factorizations once the stabilizing feedback matrices L and K are determined. □
Theorem (9.4) essentially states that a time-varying system admits stable-proper factorizations if and only if it admits a stabilizable and detectable realization. This characterization is nontrivial because (in glaring contrast with time-invariant systems) not all time-varying systems admit stabilizable realizations (and therefore, by Theorem (9.4), stable-proper factorizations). We illustrate this with the following example.
(9.10) EXAMPLE. (Also see Example (5.9).) Consider the linear time-varying system E = (F, G, H) over ℓ∞(Z) where F(k) = H(k) = 1 for all k in Z and G = Δ = the unit pulse concentrated at the origin. Let fE be the input/output map associated with E. Suppose fE admits a stabilizable realization Ē = (F̄, Ḡ, H̄). This would imply that for any initial state ξ at time t = 1, there exists an open-loop control law that drives Ē from x(1) = ξ to zero final state (and
LINEAR TIME-VARYING SYSTEMS: REPRESENTATION AND CONTROL VIA TRANSFER FUNCTION MATRICES

By

KAMESHWAR RAO POOLLA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
1984
To my parents
ACKNOWLEDGMENTS

I wish to express my sincere gratitude to all those who contributed towards making this work possible.

Professor Edward Kamen, the chairman of my dissertation committee, has over the past three years been a constant source of encouragement for me. He has, through hours of invaluable discussions, been instrumental in advising me through all phases of this project. It has been a privilege and a pleasure to have been his student. Without the financial support he has arranged for me, this work would not be possible. To him I would like to express my deepest gratitude.

I cannot in words express my thanks to my dissertation committee co-chairman and kind friend, Professor Pramod Khargonekar. He has over the last decade been my mentor and my source of inspiration, and has always been there when I needed him. I will hold the fondest memories of our association forever.

I shall long cherish my association with Professor Allen Tannenbaum. Indeed, his tireless enthusiasm and optimism have left an indelible mark on me. Adieu, Allen.

I would also like to express my most sincere appreciation to the other members of my supervisory committee, Professors T. E. Bullock,
D. Drake, and R. L. Long, for their guidance and help during the course of my studies.

I am especially grateful to Ms. Carole Boone for her excellent typing and patience through illegible manuscripts and many revisions.

Thanks go also to Radhika (for everything), to Seema (for life-sustaining khana), and to my friends Amitava, Corrine, and Prakash (for moral support).

This work was supported in part by the National Science Foundation under Grant No. ECS-8200607.
TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTER                                                            PAGE

ONE      INTRODUCTION
TWO      PRELIMINARY DEFINITIONS AND RESULTS                          7
THREE    THE TRANSFER-FUNCTION FRAMEWORK                             18
FOUR     POLYNOMIAL REALIZATION THEORY                               28
FIVE     POLYNOMIAL FACTORIZATIONS OF TRANSFER FUNCTION MATRICES
SIX      APPLICATIONS OF THE POLYNOMIAL THEORY TO FEEDBACK CONTROL   60
SEVEN    STABILITY
EIGHT    STABILIZABILITY AND ASYCONTROLLABILITY                      77
NINE     STABLE-PROPER FACTORIZATIONS                                85
TEN      FEEDBACK CONTROL                                            95
ELEVEN   CONCLUDING REMARKS                                         103
APPENDICES                                                         PAGE

A   PROOF OF PROPOSITION (4.1)                                      106
B   PROOF OF THEOREMS (5.2) AND (5.11)                              116
C   PROOF OF THEOREM (8.5)                                          131

REFERENCES

BIOGRAPHICAL SKETCH
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

LINEAR TIME-VARYING SYSTEMS: REPRESENTATION AND CONTROL VIA TRANSFER FUNCTION MATRICES

By

KAMESHWAR RAO POOLLA

August 1984

Chairman: Prof. E. W. Kamen
Co-Chairman: Prof. P. P. Khargonekar
Major Department: Electrical Engineering

In this dissertation we have developed a "transfer-function" type theory for linear time-varying discrete-time systems. Using this framework, in the first part of the dissertation we have been able to generalize much of the existing polynomial model theory. Specifically, we have treated polynomial realization theory (FUHRMANN), polynomial factorization theory, and applications to feedback control. In the second half of the dissertation, we have treated the problems of stabilization and of the existence of stable-proper factorizations, and have taken a cursory look at the tracking problem for time-varying systems. One of our most significant results is the equivalence of dynamic and memoryless state feedback as far as the problem of stabilization is concerned.
CHAPTER ONE
INTRODUCTION

This dissertation is concerned with the study of linear discrete-time time-varying systems. Time-varying systems arise frequently in practical applications. For example, time variation could result from change of the mass and center of gravity of an aircraft due to fuel burn, aging or slag buildup in chemical reactors, linearization of nonlinear systems about a time-varying nominal trajectory, etc. The broad objective of this dissertation is to develop a systematic theory for the analysis of linear time-varying systems based on matrix-fraction representations and also to apply this theory to feedback control system design problems.

The earliest approaches to studying feedback control problems for linear time-varying systems were based on linear quadratic optimal control theory and pioneered by Kalman. In particular, KALMAN [1960] showed that a uniformly reachable continuous-time time-varying system can be stabilized by a state feedback control law of the form u(t) = L(t)x(t). Here u(t) is the input to the system and x(t) is the state of the system. The gain matrix L(t) may be computed by solving a time-varying Riccati differential equation. The reader may consult the books by JAZWINSKI [1970] and KWAKERNAAK and SIVAN [1972]
for further results on optimal control/filtering of linear time-varying systems. CHENG [1979] has obtained different stabilizing memoryless feedback laws (based on controllability grammians), as have KWON and PEARSON [1977, 1978] (based on receding horizon optimal control). Both Cheng and Kwon and Pearson work with reachable time-varying systems. Recently, ANDERSON and MOORE [1981a] have been able to define the weaker notion of stabilizability and have obtained memoryless stabilizing feedback laws for stabilizable time-varying systems. Their work is based on an optimal control/filtering approach and specified in terms of Riccati difference equations.

In contrast to the above-mentioned work, which focusses on obtaining stabilizing feedback laws based on grammians/Riccati equations, this dissertation is an attempt to develop a more algebraic theory for the study of linear time-varying systems. An algebraic theory would provide more insight in the study of feedback control problems and would, perhaps, yield control laws that are easier to compute. In the past, there has been some effort directed at obtaining an algebraic/geometric theory for time-varying systems (see for example WOLOVICH [1968], MORSE and SILVERMAN [1972], KAMEN and HAFEZ [1979]). This work, however, treats very restricted classes of linear time-varying systems such as index-invariant systems or cyclizable systems. This dissertation attempts the study of linear time-varying systems in complete generality.
In Chapters Two and Three of this dissertation, we outline a "transfer-function" type theory for linear time-varying systems. Attempts have been made in the past to develop such a theory (notably the system function of ZADEH [1950]), but these have not met with much success. Our framework is specified in terms of skew (noncommutative) rings of polynomials, formal power series, and formal Laurent series, all with coefficients in the ring of time functions. These skew rings have, in previous work, found application to the study of linear time-varying networks and systems; see for example articles by NEWCOMB [1970], SALOVAARA and BLOMBERG [1973], YLINEN [1975], KAMEN and HAFEZ [1979]. The rudiments of the transfer-function approach we develop here may be found in an unpublished paper of KAMEN [1974]; however, a complete development of this approach is not attempted in that paper. KAMEN and KHARGONEKAR [1982] have pursued this approach and have developed much of the framework for this transfer-function theory for linear time-varying systems. Indeed, this dissertation is a natural extension of their work.

During the past decade, significant progress has been made in the study of both linear time-invariant systems and systems over commutative rings using polynomial matrix-fraction methods (see for example the books by ROSENBROCK [1970], WOLOVICH [1974], KAMEN and ROUCHALEAU [1984]). This approach has proven to be useful in tackling many system- and control-theoretic problems such as realization, dynamic compensation, regulation in the presence of
disturbances, etc. For details on this work, the reader may consult the work of FUHRMANN [1976], ROSENBROCK and HAYTON [1978], CHENG and PEARSON [1978], ANTSAKLIS [1979], KHARGONEKAR [1982], to mention a few.

Given the power and success of polynomial matrix-fraction methods in linear time-invariant system theory, it seems natural to attempt to generalize this approach to encompass linear time-varying systems. We do just this in the first half of the dissertation. This generalization to time-varying systems is entirely nontrivial because our framework is specified in terms of a noncommutative (skew) ring structure to incorporate the time-variance of our systems. Consequently, in many instances we are compelled to use proof techniques that are novel and entirely different from those employed in the study of linear time-invariant systems over both fields and commutative rings. In particular, we obtain in Chapter Four a "natural" state-space representation derived from a polynomial matrix-fraction representation of the transfer-function matrix. This realization is a time-varying analog of the FUHRMANN [1976] realization in the time-invariant case. We then investigate the relationship between system-theoretic properties of this realization and algebraic properties of the associated polynomial matrix representation. We also examine the problem of strict system equivalence and derive results similar to those obtained by FUHRMANN [1976, 1977] for linear time-invariant systems and by KHARGONEKAR [1982] for systems over commutative rings.
Many frequency-domain methods used to design controllers for linear time-invariant systems begin with left- and/or right-Bezout (coprime) factorizations for the transfer-function matrix of the plant. As is well known, any linear time-invariant system admits such a factorization. This, however, is not the case for linear time-varying systems. In Chapter Five, we derive necessary and sufficient conditions for the existence of Bezout polynomial factorizations for a linear time-varying system. Moreover, our results are constructive, and we present a systematic procedure for obtaining these factorizations. Following this, in Chapter Six, we use these Bezout polynomial factorizations, together with the polynomial realization theory of Chapter Four, to study feedback control problems. In particular, we derive an assignability result (which corresponds to being able to assign the closed-loop system dynamics) for canonical linear time-varying systems. We illustrate our constructive techniques by designing a "deadbeat" controller for an armature-controlled dc motor with a time-varying motor torque "constant" (the time-variation being due to loading and heating effects).

Chapter Seven is concerned with reviewing some basic concepts dealing with the stability of linear time-varying systems, and with translating these concepts into our framework. In Chapter Eight we examine in detail the problem of stabilizing a linear time-varying system E. In particular, we introduce the notion of asycontrollability, which is equivalent to being able to stabilize E
via dynamic state feedback. ANDERSON and MOORE [1981a] have defined a notion of stabilizability for linear time-varying systems which is equivalent to being able to stabilize Σ via memoryless state feedback. One of the most significant results in this dissertation is the equivalence of stabilizability and asycontrollability. This result, in particular, implies that dynamics in state feedback buy nothing extra as far as the problem of stabilization is concerned. Of late, the use of stable-proper factorizations in the analysis and design of linear time-invariant control systems has become increasingly popular, one advantage being that properness of controllers is automatic. See, for example, the work of DESOER et al. [1980], VIDYASAGAR [1978], and SAEKS and MURRAY [1981]. In Chapter Nine we examine in detail stable-proper factorizations for time-varying systems. Following this, in Chapter Ten we investigate the role of these factorizations in feedback control problems. In particular, we formulate the problem of Tracking with Internal Stability (TIS) for time-varying systems, and show, using stable-proper factorizations, that the TIS problem can be solved if and only if a particular linear matrix equation over a skew ring admits a solution. Finally, in Chapter Eleven, we make some concluding remarks and discuss some open problems in the area of linear time-varying systems.
CHAPTER TWO
PRELIMINARY DEFINITIONS AND RESULTS

In this chapter, we first establish some notation and state some preliminary definitions and results on linear time-varying systems. We also prove a proposition on the existence of a "deadbeat" control law for reachable time-varying systems. With Z = the set of integers and R = the field of real numbers, let A denote the R-linear space of all functions from Z into R. With the operations of pointwise addition and multiplication, it is easy to verify that A forms a commutative ring with identity 1, where 1(k) = 1 for all k in Z. Of central importance in this entire theory is the right shift operator σ defined by

(σa)(k) = a(k-1), for all k in Z.

With the shift operator σ (which is a ring automorphism on A), the ring A is called a difference ring. A subring B ⊂ A is called a difference subring of A if σ(B) = B.

(2.1) EXAMPLE. Some examples of difference subrings are given below.
PER(N) := {a in A : a is periodic with period N}.

PER := ∪_{i=1}^∞ PER(i) = {set of all periodic time functions}.

ℓ^∞(Z) := {set of all bounded time functions}.

R[k] := {set of all time functions that are evaluations of polynomials in time}. □

Let A⁺ denote the difference subring of A consisting of all functions with support bounded on the left, i.e., for any a in A⁺, there exists an integer k₀ such that a(k) = 0 for k < k₀.

(2.2) DEFINITION. Let m and p be positive integers. An m-input p-output linear causal time-varying input/output map f is an R-linear map f : (A⁺)^m → (A⁺)^p such that if u(k) = 0 for k < k₀, for some u in (A⁺)^m, then f(u)(k) = 0 for k < k₀. □

It is well known that for any input/output map f, there exists a p x m matrix function W_f(i,j) such that for any u in (A⁺)^m,

f(u)(i) = Σ_{j≤i} W_f(i,j)u(j).
The matrix function W_f is called the unit-pulse response function associated with the input/output map f. Note that by causality, W_f(i,j) is not defined for i < j. Our next concept is the notion of a system.

(2.3) DEFINITION. Let B be a fixed difference subring of A containing 1. An m-input p-output n-dimensional linear time-varying system over B is a quadruple Σ = (F,G,H,J) of matrices over B where F is n x n, G is n x m, H is p x n, and J is p x m. □

For any matrix M over B, define its conjugate M⁺ by M⁺(k) = M'(-k), where ' denotes the transpose. The dual system Σ⁺ of Σ = (F,G,H,J) is defined by Σ⁺ = (F⁺, H⁺, G⁺, J⁺). This time-reversal is an essential part of the natural notion of duality for time-varying systems. With a system Σ = (F,G,H,J), we shall associate the dynamical equations

x(j+1) = F(j)x(j) + G(j)u(j),
y(j) = H(j)x(j) + J(j)u(j),

where x(j), y(j), and u(j) have the usual interpretation. In the above definition of a system, it is important to observe that by selecting the difference subring B, we can restrict our attention to
a particular class of systems. For example, we can study the class of linear time-varying systems with bounded coefficients by choosing B = ℓ^∞(Z) = the set of all bounded functions from Z into R. Let Σ = (F,G,H,J) and Σ̂ = (F̂,Ĝ,Ĥ,Ĵ) be two m-input, p-output, n-dimensional systems over B ⊂ A. Then, Σ and Σ̂ are said to be (algebraically) B-isomorphic if and only if there exists an n x n matrix T over B, with inverse T⁻¹ over B, such that

F̂ = (σ⁻¹T⁻¹)FT, Ĝ = (σ⁻¹T⁻¹)G, Ĥ = HT, Ĵ = J.

As is well known, two isomorphic systems Σ and Σ̂ are related via a coordinate transformation x(k) = T(k)x̂(k) of their states. Also, the unit-pulse response function W_Σ associated with the system Σ = (F,G,H,J) is given by

W_Σ(i,j) = J(j), i = j,
W_Σ(i,j) = H(i)F(i-1)F(i-2) ··· F(j+1)G(j), i > j,

and is not defined for i < j. The input/output behaviour of Σ is described by its input/output map f_Σ.
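The kernel formula above can be checked against a direct simulation of the state equations. The following sketch (a scalar illustration; the particular coefficient functions are invented for this purpose and are not from the text) computes W_Σ(i,j) both ways:

```python
def pulse_response(F, G, H, J, i, j):
    # W(i, j) per the formula above: J(j) on the diagonal,
    # H(i) F(i-1) ... F(j+1) G(j) for i > j.
    if i == j:
        return J(j)
    prod = G(j)
    for k in range(j + 1, i):
        prod = F(k) * prod
    return H(i) * prod

def simulate(F, G, H, J, u, k0, n):
    # x(k+1) = F(k)x(k) + G(k)u(k),  y(k) = H(k)x(k) + J(k)u(k)
    x, y = 0.0, []
    for k in range(k0, k0 + n):
        y.append(H(k) * x + J(k) * u(k))
        x = F(k) * x + G(k) * u(k)
    return y

# hypothetical scalar time-varying coefficients
F = lambda k: 0.5 + (k % 2)
G = lambda k: 1.0 if k % 2 == 0 else 0.0
H = lambda k: 1.0 + 0.1 * (k % 3)
J = lambda k: 2.0

j = 4                                  # unit pulse applied at time j
u = lambda k: 1.0 if k == j else 0.0
y = simulate(F, G, H, J, u, j, 6)      # y(4), ..., y(9)
for step, yi in enumerate(y):
    assert abs(yi - pulse_response(F, G, H, J, j + step, j)) < 1e-12
```

The agreement of the simulated output with the kernel confirms that, with the output equation y(k) = H(k)x(k) + J(k)u(k), the diagonal entry of the kernel is exactly J(j).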
Given an input/output map f : (A⁺)^m → (A⁺)^p, a system Σ = (F,G,H,J) is said to be a realization of f if and only if f = f_Σ. For further results on realizability, we refer the reader to WEISS [1972], EVANS [1972], and FERRER and KAMEN [1984]. A system Σ = (F,G,H,J), or the pair (F,G), over A is said to be reachable in N steps if and only if there exists a positive integer N such that for any j in Z and any x in R^n, there exists an input sequence u(j-N), u(j-N+1), ..., u(j-1) which drives Σ from x(j-N) = 0 to x(j) = x. The dual notion of observability in N steps has the obvious system-theoretic interpretation. Also, a system Σ is observable in N steps if and only if its dual Σ⁺ is reachable in N steps. A system Σ is said to be canonical if and only if it is both reachable and observable in N steps. Let R_N denote the N-step reachability matrix

R_N := [G  F(σG)  F(σF)(σ²G)  ···  F(σF)···(σ^{N-2}F)(σ^{N-1}G)].

WEISS [1972] has obtained the following characterization of reachability.

(2.4) LEMMA. The pair (F,G) over A is reachable in N steps at all times if and only if

rank R_N(j) = n for all j in Z,
i.e., if and only if R_N is right-invertible over A. □

(2.5) REMARK. For time-varying discrete-time systems, it can happen that a pair (F,G) is reachable in N > n steps but not in n steps (see Example (2.6)). This is due to the lack of a "Cayley-Hamilton" type theorem in this setting. □

(2.6) EXAMPLE. Consider the pair (F,G) over A where F(k) = 1 for all k in Z, and G(k) = 1 if k is even and G(k) = 0 otherwise. The pair (F,G) is easily seen to be reachable in two steps but not in one step. □

In some instances, one may be interested in a slightly different notion of reachability. Let Σ = (F,G,H,J) be a system over a difference subring B of A. Then, Σ is said to be B-reachable in N steps if and only if R_N is right-invertible over B. For example, if B = ℓ^∞(Z), this notion of reachability is equivalent to requiring uniform boundedness (with respect to j) of the inputs u(j-N), u(j-N+1), ..., u(j-1) in the definition of reachability given earlier. In fact we have the following result.

(2.7) PROPOSITION. Let Σ = (F,G,H) be a time-varying system over ℓ^∞(Z). Then, Σ is ℓ^∞(Z)-reachable if and only if there exists a real number ε such that
det(R_N R_N')(k) ≥ ε > 0 for all k in Z.

PROOF. Suppose Σ is ℓ^∞(Z)-reachable. Then, R_N is right-invertible with right inverse U over ℓ^∞(Z). Since U is over ℓ^∞(Z), there exists a real number M such that all the n x n minors u_i of U are bounded by M. Let r_i denote the n x n minors of R_N. Then, by the Cauchy-Binet formula (see GANTMACHER [1959]), it follows that for any time k

1 = det(R_N U)(k) = Σ_i r_i(k)u_i(k) ≤ M Σ_i |r_i(k)|.

The last inequality in particular implies that there exists an ε > 0 such that Σ_i r_i²(k) ≥ ε, which in turn implies that

det(R_N R_N')(k) = Σ_i r_i²(k) ≥ ε.

Conversely, suppose det(R_N R_N')(k) ≥ ε > 0 for all time k. This implies that R_N is right-invertible with right inverse V := R_N'(R_N R_N')⁻¹. It is now clear that V must be bounded since det(R_N R_N') ≥ ε. This completes the proof. □

We shall also need the following result on reachability.
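The rank criterion of Lemma (2.4) is easy to test numerically. A minimal sketch for the scalar pair of Example (2.6), reading the columns of R_N at time j as G(j), F(j)G(j-1), F(j)F(j-1)G(j-2), and so on (our evaluation of the shift notation):

```python
import numpy as np

def reach_matrix(F, G, j, N):
    # R_N(j) = [G(j), F(j)G(j-1), F(j)F(j-1)G(j-2), ...]  (n = 1 here)
    cols, prod = [], 1.0
    for i in range(N):
        cols.append(prod * G(j - i))
        prod *= F(j - i)
    return np.array([cols])            # a 1 x N row

F = lambda k: 1.0
G = lambda k: 1.0 if k % 2 == 0 else 0.0   # the pair of Example (2.6)

# not reachable in one step: rank drops at odd times
assert np.linalg.matrix_rank(reach_matrix(F, G, 3, 1)) == 0
# reachable in two steps: full rank n = 1 at every time
assert all(np.linalg.matrix_rank(reach_matrix(F, G, j, 2)) == 1
           for j in range(-6, 6))
```

At odd times the single column G(j) vanishes, so one step never suffices, while any two consecutive times contain an even one, exactly the behaviour claimed in Example (2.6).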
(2.8) PROPOSITION. Let Σ = (F,G,H,J) over A be reachable in N steps at all times. Then, there exists an m x n matrix L over A such that (F - GL) is anilpotent, i.e., for some integer q,

(F - GL)σ(F - GL) ··· σ^q(F - GL) = 0.

PROOF. Consider the linear time-varying system described by

x(k+1) = F(k)x(k) + G(k)u(k).

Since the pair (F,G) is reachable in N steps, it is also controllable to the origin in N steps, i.e., any initial state x(0) in R^n can be driven to the zero final state in N steps. Therefore, there exist controls u(0), u(1), ..., u(N-1) such that

x(1) = F(0)x(0) + G(0)u(0),
x(2) = F(1)x(1) + G(1)u(1),
...
x(N) = F(N-1)x(N-1) + G(N-1)u(N-1) = 0.

Let V_i be the subspace consisting of all initial states x(0) in R^n that can be driven to the zero final state in i steps or less. Clearly

V₁ ⊂ V₂ ⊂ ··· ⊂ V_N = R^n.
We now select a linearly independent set of vectors b₁, b₂, ..., b_{n₁} that span V₁. We then extend this set to form a basis for V₂. Continuing in this manner, we find a basis B = {b₁, ..., b_n} for V_N = R^n. Suppose some b_i in B is in V_t but not in V_{t-1}. For the initial state x(0) = b_i, we can therefore find a control sequence u^i(0), u^i(1), ..., u^i(N-1), with u^i(k) = 0 for k ≥ t, which drives x(0) to the final state x(N) = 0. In this manner, we determine

{u^i(k) : k = 0, 1, ..., N-1 ; i = 1, 2, ..., n}.

Let B be the n x n invertible matrix over R whose ith column is b_i. Let U₀, U₁, ..., U_{N-1} be m x n matrices over R where the ith column of U_k is u^i(k). Also, recursively define the matrices X₀, X₁, ..., X_N by

X₀ = B,  X_{k+1} = F(k)X_k + G(k)U_k.

It then follows that X_N = 0. Suppose ξ is in Ker(X_i). This means that the initial state x(0) = Bξ can be driven to the zero final state in i steps. Therefore, Bξ is in V_i, and then by our judicious choice of controls, it follows that
U_kξ = 0, for k ≥ i.

That is, Ker(X_i) ⊂ Ker(U_i). Hence we can solve the N linear equations

U_i = -L_iX_i, i = 0, 1, ..., N-1

for L_i. Observe now that

X_{k+1} = F(k)X_k + G(k)U_k = (F(k) - G(k)L_k)X_k.

It then follows that

0 = X_N = (F(N-1) - G(N-1)L_{N-1}) ··· (F(0) - G(0)L₀)X₀.

However, X₀ = B is invertible. Consequently,

(F(N-1) - G(N-1)L_{N-1})(F(N-2) - G(N-2)L_{N-2}) ··· (F(0) - G(0)L₀) = 0.

Since (F, G) is reachable in N steps at all times, we can repeat the above argument and find matrices L_N, ..., L_{2N-1} such that

(F(2N-1) - G(2N-1)L_{2N-1}) ··· (F(N) - G(N)L_N) = 0.
In this fashion, we find matrices L_k for each integer k. Define the m x n matrix L over A by L(k) := L_k. It is then apparent that

(F - GL)σ(F - GL) ··· σ^q(F - GL) = 0,

i.e., (F - GL) is anilpotent. □

(2.9) REMARK. It is interesting to note that the above proposition implies the existence of a "deadbeat" control law for linear time-varying systems that are reachable in N steps. More precisely, with the matrix L as described in Proposition (2.8), it follows that starting from any initial state ξ in R^n at any initial time k₀, the state trajectory x(k) of

x(k+1) = (F - GL)(k)x(k)

is zero for k ≥ k₀ + q.
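Remark (2.9) can be illustrated on the pair of Example (2.6). The gain below is one admissible choice (not the construction of the proof): taking L(k) = 1 at even times makes (F - GL)(k) vanish at every even k, so any product of two consecutive closed-loop factors is zero, i.e., q = 2 here.

```python
F = lambda k: 1.0
G = lambda k: 1.0 if k % 2 == 0 else 0.0   # Example (2.6)
L = lambda k: 1.0 if k % 2 == 0 else 0.0   # our deadbeat gain

def closed_loop(x0, k0, steps):
    # trajectory of x(k+1) = (F - GL)(k) x(k)
    x, traj = x0, [x0]
    for k in range(k0, k0 + steps):
        x = (F(k) - G(k) * L(k)) * x
        traj.append(x)
    return traj

for k0 in (0, 1, -3):
    traj = closed_loop(5.0, k0, 4)
    assert all(x == 0.0 for x in traj[2:])   # dead after q = 2 steps
```

Starting at an even time the state dies in one step; starting at an odd time it survives one step and dies at the next even time, so two steps always suffice.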
CHAPTER THREE
THE TRANSFER-FUNCTION FRAMEWORK

In this chapter, we describe in detail the elements of a "transfer-function" type approach to linear time-varying systems introduced by KAMEN and KHARGONEKAR [1982]. Much of the remainder of this dissertation is based on this theory. The commutative rings of polynomials, power series, and formal Laurent series, all with coefficients in the reals R, play a central role in the transfer-function theory of linear time-invariant systems. For time-varying systems the analogous objects are skew (noncommutative) rings with coefficients in the ring A of time functions. More precisely, with z an indeterminate, let A((z⁻¹)) denote the set of all formal Laurent series of the form

Σ_{r=-N}^{∞} z^{-r}a_r,  a_r in A.

Note that the coefficients above are written on the right. This is because we will now impose a noncommutative ring structure on A((z⁻¹)). With the usual addition, and with multiplication defined by
(3.1) az = z(σa), a in A,

where (σa)(k) = a(k-1), A((z⁻¹)) is a noncommutative ring with identity, called the skew ring of formal Laurent series over A. There are two important subrings of A((z⁻¹)): the skew ring of polynomials A[z] and the skew ring of formal power series A[[z⁻¹]]. These have the obvious definitions. The noncommutative multiplication in A((z⁻¹)) defined above captures in a very natural way the time-variance of our systems, and thus plays a central role in this entire theory. This is illustrated by the following example and will become more evident in the remainder of this chapter.

(3.1) EXAMPLE. Consider the following single-input single-output time-varying difference equation:

a(k+1)y(k+1) = u(k).

The indeterminate z⁻¹ will, as in the time-invariant theory, represent a delay operator. The above equation can be written (we show this more formally later) as
z(ay) = u.

We can also write the above difference equation as

β(k)y(k+1) = u(k),

where β = (σ⁻¹a), and this can be represented in the "frequency-domain" by (βz)y = u. Comparing this with the previous representation, we obtain

za = βz = (σ⁻¹a)z,

which is precisely the manner in which we have defined our noncommutative multiplication in A((z⁻¹)). □

Define a projection map

π : A((z⁻¹)) → A((z⁻¹)) : Σ_r z^{-r}a_r ↦ Σ_{r≥1} z^{-r}a_r.

For any a in A((z⁻¹)), let (a)₊ := a - π(a) = the polynomial part of a. By (a)₀, we shall mean the constant coefficient of a, and a is said to be strictly proper if and only if π(a) = a. Given an r x r skew polynomial matrix (i.e., a matrix with entries in A[z]),

Q = Σ_{i=0}^{q} z^i Q_i,
the degree of Q, written deg(Q), is the largest integer q such that Q_q ≠ 0. Further, Q is said to be monic if Q_q = I (the r x r identity matrix), and Q is said to be right-invertible if and only if there exists an r x r matrix Ψ over A((z⁻¹)) such that QΨ = I. We shall need the following result on invertibility.

(3.2) PROPOSITION. Let Q be an r x r polynomial matrix. Then, Q is right-invertible if and only if there exists an r x r polynomial matrix T such that QT is monic. Further, T can be chosen such that deg(Q) = deg(QT).

PROOF. Suppose Q is right-invertible, i.e., there exists a Ψ in A^{rxr}((z⁻¹)) such that QΨ = I. Let deg Q = d. We can now write

z^d I = QΨz^d = Q(Ψz^d)₊ + Qπ(Ψz^d).

Notice that deg(Qπ(Ψz^d)) < deg Q = d. Therefore the highest degree term of Q(Ψz^d)₊ is z^d I. Choosing T = (Ψz^d)₊, which is polynomial, proves the necessity. Notice that deg QT = deg Q = d. Now assume that there exists a T in A^{rxr}[z] such that QT is a monic r x r polynomial matrix. We can then do right division of I by QT to obtain a right inverse of QT over A((z⁻¹)); hence Q itself is right-invertible. □
The degree condition deg(Q) = deg(QT) is merely a technical fact which will be useful in proving several later results. An analogous result holds for left-invertibility of Q. In general, left-invertibility is not equivalent to right-invertibility, the pathology being due to the skew nature of our rings (see Example (3.3)). We shall almost always deal with polynomial matrices that are both left- and right-invertible, in which case we shall call them invertible to avoid the use of cumbersome prefixes. The following examples illustrate the skew multiplication (3.1) in our rings and contrast this with multiplication in R[z].

(3.3) EXAMPLE. Define a₁, a₂, μ in A by

a₁(k) := 1 for k ≥ 0 and a₁(k) := 0 otherwise,  a₂ := σ(1 - a₁),  μ := a₂a₁.

Also define q(z), h₁(z), h₂(z) by

q(z) = a₁z + a₂,  h₁(z) = (1 - a₁)z + σ(a₁),  h₂(z) = (σμ)(z² - z + 1).

Using the multiplication in the skew ring A[z] defined by (3.1), it is an easy computation to show that

q(z)h₁(z) = z,  h₂(z)q(z) = 0.
Thus, q(z) is right-invertible but not left-invertible, a phenomenon that does not occur in R[z]. □

(3.4) EXAMPLE. Define β in A by

β(k) = 1 if k is even, β(k) = 0 if k is odd.

Consider the scalar polynomial q(z) := βz + 1. It is easy to verify that

q(z)h(z) = h(z)q(z) = 1,

where h(z) = -βz + 1. Thus, q(z) is invertible and its inverse q⁻¹(z) = h(z) is polynomial. This situation also is peculiar to the skew ring A[z]. □

We now describe a transfer-function approach to time-varying systems based on the skew rings defined earlier. All proofs are omitted; they can be found in KAMEN, KHARGONEKAR and POOLLA [in press]. Again, let A⁺ denote the subring of A consisting of all functions a : Z → R with support bounded on the left. Let Δ denote the unit pulse at the origin, i.e.,
Δ(k) = 1 if k = 0, and Δ(k) = 0 otherwise.

Given any u in A⁺, the (generalized) z-transform of u, written U(z), is defined to be the skew Laurent series

(3.5) U(z) = Σ_r z^{-r}u(r)Δ.

This generalized z-transform is simply the usual z-transform (imbedding R((z⁻¹)) in A((z⁻¹))) multiplied by Δ. Let f be an input/output map, and let W_f denote the unit-pulse response function associated with f. For each integer r ≥ 0, define a p x m matrix W_r over A by

W_r(k) = W_f(r + k, k), k in Z.

(3.6) DEFINITION. The (formal) transfer-function matrix Ŵ_f(z) associated with the input/output map f is the p x m matrix over A[[z⁻¹]] defined by

Ŵ_f(z) = Σ_{r=0}^∞ z^{-r}W_r. □
This definition of transfer-function for time-varying systems will be seen, by the following propositions, to be the natural definition to capture the time-varying behaviour of our systems.

(3.7) PROPOSITION. Let f be an input/output map. Let y be the output resulting from the input u in (A⁺)^m. Let Y(z) and U(z) denote the (generalized) z-transforms of y and u respectively. Then

(3.8) Y(z) = Ŵ_f(z)U(z). □

Note the close resemblance of (3.8) to the time-invariant transfer-function theory. Proposition (3.7) is a result one would desire of any definition of transfer-function. This analogy to time-invariant systems is further illustrated by the following proposition.

(3.9) PROPOSITION. Let Σ = (F,G,H,J) be a linear time-varying system with input/output map f_Σ. Then, the transfer-function matrix Ŵ_Σ associated with f_Σ is given by

(3.10) Ŵ_Σ(z) = H(zI - F)⁻¹G + J. □

Despite the close resemblance these two results bear to the time-invariant theory, it must be emphasized that (3.8) and (3.10) are computed via the skew (noncommutative) multiplication defined earlier. For instance, (zI - F)⁻¹ is determined by the formula
(zI - F)⁻¹ = z⁻¹ + (σF)z⁻² + (σF)(σ²F)z⁻³ + ··· .

We conclude this chapter by relating polynomial factorizations of transfer-function matrices to collections of input/output difference equations. Consider the collection of input/output difference equations with time-varying coefficients described by

(3.11) Σ_{i=0}^{q} Q_i(k+i)x(k+i) = Σ_{i=0}^{r} R_i(k+i)u(k+i),
       y(k) = Σ_{i=0}^{t} P_i(k+i)x(k+i).

Here, u ∈ A^m, x ∈ A^r, y ∈ A^p, and the Q_i, P_i, and R_i are matrices of appropriate dimension over A. Define the polynomial matrices

Q(z) = Σ_{i=0}^{q} z^i Q_i,  R(z) = Σ_{i=0}^{r} z^i R_i,  P(z) = Σ_{i=0}^{t} z^i P_i,

and assume that Q(z) is invertible. Then, it is an easy computation to verify that the transfer-function matrix associated with the input/output map f defined by (3.11) is given by

(3.12) Ŵ_f(z) = P(z)Q⁻¹(z)R(z).
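The skew multiplication rule az = z(σa) and the expansion of (zI - F)⁻¹ can be exercised numerically. In the sketch below, a scalar skew series is stored as a map from powers of z to coefficient functions written on the right of the power, so the left-coefficient term (σF)z⁻² is stored as z⁻²(σ⁻¹F); the coefficient function F is invented for illustration, and the product is only checked down to the truncation order:

```python
# Minimal scalar skew-series arithmetic: an element is {power: coeff},
# each coeff a function Z -> R written to the RIGHT of the power of z,
# with a z = z (sigma a), (sigma a)(k) = a(k - 1).

def shift(a, n):                        # sigma^n a
    return lambda k, a=a, n=n: a(k - n)

def mul(A, B, low):
    # (z^m a)(z^n b) = z^(m+n) (sigma^n a) b, truncated below z^low
    C = {}
    for m, a in A.items():
        for n, b in B.items():
            if m + n < low:
                continue
            sa = shift(a, n)
            prev = C.get(m + n, lambda k: 0.0)
            C[m + n] = lambda k, p=prev, sa=sa, b=b: p(k) + sa(k) * b(k)
    return C

F = lambda k: 1.0 + 0.5 * (k % 3)       # an arbitrary time-varying F
one, zero = (lambda k: 1.0), (lambda k: 0.0)

zIF = {1: one, 0: lambda k: -F(k)}      # z - F
inv = {-1: one,                         # truncation of (z - F)^(-1):
       -2: shift(F, -1),                # (sigma F) z^-2 = z^-2 (sigma^-1 F)
       -3: lambda k: F(k + 2) * F(k + 1)}

prod = mul(zIF, inv, low=-3)
for k in range(-5, 6):
    assert abs(prod.get(0, zero)(k) - 1.0) < 1e-12
    for p in (-1, -2):
        assert abs(prod.get(p, zero)(k)) < 1e-12
```

Only the powers 0, -1, -2 of the product are asserted, since the z⁻³ coefficient is corrupted by the truncation of the inverse.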
Conversely, given an input/output map f whose associated transfer-function matrix Ŵ_f(z) admits a polynomial factorization (as in (3.12)), one can readily derive a collection of input/output difference equations (as in (3.11)) that correspond in a natural way to the particular factorization (3.12). For linear time-invariant systems this correspondence was first observed by ROSENBROCK [1970]. Polynomial factorizations of transfer-function matrices are investigated extensively in the next three chapters.
CHAPTER FOUR
POLYNOMIAL REALIZATION THEORY

We will now consider polynomial matrix-fraction representations of a given transfer function. Let P, Q, R, and S be polynomial matrices such that Q is invertible. Let f be an input/output map with the associated transfer matrix

Ŵ_f(z) = PQ⁻¹R + S.

For time-invariant systems over fields, FUHRMANN [1976] gave a realization for f in terms of the polynomial matrices P, Q, R, and S. For time-invariant systems over arbitrary commutative rings, KHARGONEKAR [1982] has obtained corresponding results. We now proceed to derive results for time-varying systems analogous to those of Fuhrmann and of Khargonekar. We first develop the machinery with which we can obtain these natural realizations for time-varying systems. Let Q be an r x r (skew) polynomial matrix, invertible over A((z⁻¹)). Define a right A-module

X_Q := {x in A^r[z] : Q⁻¹x is strictly proper}.
Define a right A-linear projection map

π_Q : A^r[z] → X_Q : x ↦ Qπ(Q⁻¹x),

where π(Q⁻¹x) is the strictly proper part of Q⁻¹x. Clearly, π_Q is surjective. The map π_Q corresponds to viewing polynomials in A^r[z] modulo Q.

(4.1) PROPOSITION. X_Q is a finitely generated, free right A-module (i.e., X_Q is isomorphic as a right A-module to A^n for some integer n).

PROOF. The proof of this technical fact is rather long, and we have therefore put it in Appendix A. □

(4.2) EXAMPLE. Let us consider the monic r x r polynomial matrix

Q(z) = z^n I + z^{n-1}Q_{n-1} + ··· + zQ₁ + Q₀.

Let e_j denote the jth column of the r x r identity matrix. Notice that for i = 0, 1, ..., n-1; j = 1, 2, ..., r, the element z^i e_j is in X_Q since Q⁻¹z^i e_j is strictly proper. In fact the set

{z^i e_j : i = 0, 1, ..., n-1 ; j = 1, 2, ..., r}

forms a basis for X_Q, and X_Q ≅ A^{nr}. □
Suppose Q(z) is a (not necessarily monic) r x r polynomial matrix of degree n. It follows from the proof of Proposition (4.1) that the set

(4.3) {π_Q(z^i e_j) : i = 0, 1, ..., n-1 ; j = 1, 2, ..., r},

where e_j is the jth column of the r x r identity matrix, generates X_Q. The difficulty, however, is to extract a basis for X_Q, which we shall require to obtain our natural realizations (see Remark (4.5) and Theorem (4.6)). The following example illustrates what can happen if Q(z) is not monic.

(4.4) EXAMPLE. Let q(z) = βz³ + z², where β(k) = 1 if k is even, and β(k) = 0 otherwise. Notice that q⁻¹(z) = z⁻² - z⁻¹(σβ). We compute a set of generators for X_q using (4.3):

π_q(1) = qπ(q⁻¹) = qq⁻¹ = 1,
π_q(z) = qπ(q⁻¹z) = qz⁻¹ = βz² + z,
π_q(z²) = 0.

Thus, X_q is generated by {1, βz² + z}, which also happens to be a basis for X_q, and X_q ≅ A². □
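The computations of Example (4.4) can be verified mechanically with the same kind of skew-series arithmetic. A sketch (our floating-point encoding; series are stored as {power: right coefficient}, so the left-coefficient term βz³ is stored as z³(σ³β) = z³(σβ), using the period-2 identity σ³β = σβ):

```python
beta = lambda k: 1.0 if k % 2 == 0 else 0.0
sbeta = lambda k: beta(k - 1)                  # sigma beta

def mul(A, B):
    # (z^m a)(z^n b) = z^(m+n) (sigma^n a) b, coefficients on the right
    C = {}
    for m, a in A.items():
        for n, b in B.items():
            sa = lambda k, a=a, n=n: a(k - n)  # sigma^n a
            prev = C.get(m + n, lambda k: 0.0)
            C[m + n] = lambda k, p=prev, sa=sa, b=b: p(k) + sa(k) * b(k)
    return C

one = lambda k: 1.0
q    = {3: sbeta, 2: one}                      # beta z^3 + z^2
qinv = {-2: one, -1: lambda k: -sbeta(k)}      # z^-2 - z^-1 (sigma beta)

# q qinv = qinv q = 1: the cross terms cancel pointwise because
# beta(k) beta(k-1) = 0 and beta has period 2.
for prod in (mul(q, qinv), mul(qinv, q)):
    for k in range(-4, 5):
        for p, c in prod.items():
            assert abs(c(k) - (1.0 if p == 0 else 0.0)) < 1e-12
```

This confirms that the non-monic q of Example (4.4) is two-sided invertible with the polynomial-free inverse displayed there.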
(4.5) REMARK. The right A-module X_Q will serve as the state space for our natural realizations. Proposition (4.1) tells us that for any polynomial matrix Q, our state space X_Q ≅ A^n for some integer n. This means that the dimension of our natural state space (see Theorem (4.6)) does not vary with time: a comforting fact. It is shown in the proof of this proposition that the dimension of X_Q is given by the rank of a certain matrix which can be computed for any given Q. □

We can also view X_Q as a right A[z]-module by defining right multiplication by z in X_Q as

x * z = π_Q(xz), for any x in X_Q.

This right A[z]-module structure is fundamental for obtaining the natural realizations alluded to at the beginning of this chapter. We are now in a position to state the following result.

(4.6) THEOREM. Let P, Q, R, and S be p x r, r x r, r x m, and p x m polynomial matrices such that Q is invertible. Let f be an input/output map with the associated transfer matrix

Ŵ_f(z) = PQ⁻¹R + S.

Define the maps φ, Γ, and ψ by
φ : X_Q → X_Q : x ↦ π_Q(xz),
Γ : A^m → X_Q : u ↦ π_Q(Ru),
ψ : X_Q → A^p : x ↦ (PQ⁻¹xz)₀.

Let F, G, and H be the matrix representations of φ, Γ, and ψ, respectively, with reference to some (fixed) basis of X_Q, and let J := (PQ⁻¹R + S)₀. Then Σ(P, Q, R, S) := (F, G, H, J) is a realization of f.

PROOF. Let {b₁, b₂, ..., b_n} be a basis for X_Q. Any element x in X_Q has the unique representation x = Σ_{i=1}^{n} b_i a_i, a_i in A, since X_Q is a free right A-module (see Proposition (4.1)). We will write this as x = Bx̄, where B := [b₁ b₂ ... b_n] and x̄ := (a₁, ..., a_n)'. Using this notation, the matrices F, G, and H with respect to the basis {b₁, b₂, ..., b_n} are determined by
BF = φ(B),
(4.7) BG = Γ(I_m),
H = ψ(B),

where I_m is the m x m identity matrix (over A). Notice that the maps φ and ψ are A-semilinear, i.e.,

φ(xa) = φ(x)σ(a),  ψ(xa) = ψ(x)σ(a)

for any a in A. The map Γ is A-linear. Let ∘ denote composition of maps. We now compute, using (4.7),

A_k := H(σF)(σ²F) ··· (σ^{k-1}F)(σ^kG)
     = ψ(B)(σF) ··· (σ^{k-1}F)(σ^kG)
     = ψ ∘ [BF(σF) ··· (σ^{k-2}F)(σ^{k-1}G)].

The last equality above follows from the A-semilinearity of ψ. Continuing our computation, we see that
ψ ∘ [BF(σF) ··· (σ^{k-2}F)(σ^{k-1}G)] = ψ ∘ [φ(B)(σF) ··· (σ^{k-2}F)(σ^{k-1}G)]
  = (ψ ∘ φ)[BF(σF) ··· (σ^{k-3}F)(σ^{k-2}G)],

where the last equality was deduced from the A-semilinearity of φ. Iterating this argument, we obtain

A_k = (ψ ∘ φ^{k-1} ∘ Γ)(I_m) = (ψ ∘ φ^{k-1})(π_Q(R)) = (ψ ∘ φ^{k-2})(π_Q(Rz)) = ··· = ψ(π_Q(Rz^{k-1})) = (PQ⁻¹π_Q(Rz^{k-1})z)₀
= (PQ⁻¹Rz^k)₀.

From Proposition (3.9) it follows that the transfer matrix Ŵ_Σ(z) of the input/output map f_Σ, where Σ = (F, G, H, J), is given by

Ŵ_Σ(z) = H(zI - F)⁻¹G + J
       = Σ_{k=1}^{∞} A_k z^{-k} + J
       = Σ_{k=1}^{∞} (PQ⁻¹Rz^k)₀ z^{-k} + (PQ⁻¹R + S)₀
       = PQ⁻¹R + S,

since (PQ⁻¹R + S) is proper. Hence, Ŵ_Σ(z) = Ŵ_f(z), i.e., Σ(P, Q, R, S) = (F, G, H, J) is a realization of f. □

(4.8) REMARK. We could have chosen a different basis {b̂₁, b̂₂, ..., b̂_n} for X_Q and obtained a different realization Σ̂ = (F̂, Ĝ, Ĥ, Ĵ) for the input/output map f. In this event, Σ and Σ̂ are A-isomorphic. We shall call any realization of f obtained as in Theorem (4.6) the Fuhrmann realization associated with the polynomial matrix-fraction representation Ŵ_f(z) = PQ⁻¹R + S. □
We now illustrate Theorem (4.6) with two examples.

(4.9) EXAMPLE. Let f be an input/output map described by the following collection of input/output difference equations:

ξ(k+2) + q₁ξ(k+1) + q₀ξ(k) = r₁u(k+1) + r₀u(k),
y(k) = p₁ξ(k+1) + p₀ξ(k).

As shown in Chapter Three (see equations (3.11) and (3.12)), the transfer-function Ŵ_f(z) associated with f is given by the polynomial representation

Ŵ_f(z) = PQ⁻¹R = (p₁z + p₀)(z² + zq₁ + q₀)⁻¹(zr₁ + r₀).

It follows from Example (4.2) that the set {1, z} forms a basis for X_Q. Let us represent any x = α₀ + zα₁ in X_Q by the vector (α₀, α₁)'. We first compute the action of the maps φ, Γ, and ψ (as in Theorem (4.6)) on this basis of X_Q:

φ(1) = π_Q(1·z) = z,
φ(z) = π_Q(z·z) = Qπ(Q⁻¹z²) = z² - Q(Q⁻¹z²)₊ = z² - Q(z) = -zq₁ - q₀,
Γ(1) = π_Q(R·1) = π_Q(zr₁ + r₀) = zr₁ + r₀,
ψ(1) = (PQ⁻¹·1·z)₀ = {(p₁z + p₀)(z⁻¹ + ···)}₀ = p₁,
ψ(z) = (PQ⁻¹z·z)₀ = {(p₁z + p₀)(1 - z⁻¹q₁ + ···)}₀ = p₀ - p₁q₁.

Also, J = (PQ⁻¹R)₀ = p₁r₁. Thus, the Fuhrmann realization Σ(P,Q,R) = (F, G, H, J) of f (with respect to the basis chosen) is given by

F = [ 0  -q₀ ]      G = [ r₀ ]
    [ 1  -q₁ ],         [ r₁ ],      H = [ p₁   p₀ - p₁q₁ ],   J = p₁r₁. □

(4.10) EXAMPLE. Let f be an input/output map whose associated transfer function Ŵ_f(z) admits the polynomial representation (see Example (4.4))

Ŵ_f(z) = Q⁻¹R = (βz³ + z²)⁻¹(zr₁ + r₀).

Observing that

φ(1) = π_q(1·z) = βz² + z,
φ(βz² + z) = π_q(βz³ + z²) = 0,
Γ(1) = π_q(zr₁ + r₀) = (βz² + z)r₁ + r₀,
ψ(1) = (q⁻¹·1·z)₀ = (z⁻¹ - β)₀ = -β,
ψ(βz² + z) = (q⁻¹(βz³ + z²))₀ = 1,

we can immediately write down the Fuhrmann realization Σ(I,Q,R) = (F,G,H,J) of f as

F = [ 0  0 ]      G = [ r₀ ]
    [ 1  0 ],         [ r₁ ],      H = [ -β   1 ],   J = -βr₁. □

In the case where Q is monic, one can readily obtain the general form of the Fuhrmann realization. Let

P(z) = Σ_{i=0}^{t} z^i P_i,  Q(z) = Σ_{i=0}^{q} z^i Q_i,  R(z) = Σ_{i=0}^{r} z^i R_i,

and assume that Q_q = I. Let f be an input/output map whose associated transfer-function matrix Ŵ_f(z) admits the polynomial representation
Ŵ_f(z) = P(z)Q⁻¹(z)R(z).

Then, the Fuhrmann realization Σ(P,Q,R) = (F,G,H,J) of f is given by

(4.11)
F = [ 0  0  ···  0  -Q₀      ]      G = [ R₀      ]
    [ I  0  ···  0  -Q₁      ]          [ R₁      ]
    [ 0  I  ···  0  -Q₂      ]          [ ⋮       ]
    [ ⋮             ⋮        ]          [ R_{q-1} ],
    [ 0  0  ···  I  -Q_{q-1} ],

H = [ P₀  P₁  ···  P_t  0  ···  0 ] [ I  σ⁻¹(B₂)  ···  σ⁻¹(B_q)     ]
                                    [ 0     I     ···  σ⁻¹(B_{q-1}) ]
                                    [ ⋮                 ⋮           ]
                                    [ 0     0     ···      I        ],

J = (PQ⁻¹R)₀,

where Q⁻¹(z) = z^{-q}I + z^{-q-1}B₂ + z^{-q-2}B₃ + ··· .

The Fuhrmann realization is "natural" in the sense that control-theoretic properties of the realization can be characterized in terms of algebraic properties of the associated factorization. We have, for instance, the following result characterizing reachability in N steps.
(4.12) THEOREM. Let P, Q, and R be p x r, r x r, and r x m polynomial matrices over A such that Q is invertible over A((z⁻¹)). Let f be an input/output map with associated transfer matrix Ŵ_f(z) = PQ⁻¹R. Then, the Fuhrmann realization Σ(P, Q, R) = (F, G, H) is reachable in N steps for some positive integer N if and only if there exist polynomial matrices Y₁ and Y₂ such that

QY₁ + RY₂ = I.

PROOF. Let Σ(P, Q, R) = (F, G, H) be the Fuhrmann realization of Ŵ_f(z) = PQ⁻¹R relative to some (fixed) basis {b₁, b₂, ..., b_n} of X_Q. We continue using the notation of the proof of Theorem (4.6): any x in X_Q has a unique representation x = Σ_{i=1}^{n} b_i a_i, {a_i} in A, and we shall write this as x = Bx̄ with B = [b₁ b₂ ... b_n]. Suppose Σ = (F, G, H) is reachable in N steps for some integer N. This means that there exist matrices C₀, C₁, ..., C_{N-1} over A such
that

[G  F(σG)  ···  F(σF)···(σ^{N-2}F)(σ^{N-1}G)] col(C₀, C₁, ..., C_{N-1}) = I.

Let π_Q(I) = BT, where T is an n x r matrix over A. It then follows that

π_Q(I) = BT = B[G  F(σG)  ···  F(σF)···(σ^{N-2}F)(σ^{N-1}G)] col(C₀T, C₁T, ..., C_{N-1}T)
       = [Γ(I)  (φ∘Γ)(I)  ···  (φ^{N-1}∘Γ)(I)] col(C₀T, ..., C_{N-1}T)
       = π_Q(RY₂),
where Y₂ := Σ_{i=0}^{N-1} z^i C_i T. Hence, π_Q(I - RY₂) = 0. This implies that there exists an r x r polynomial matrix Y₁ over A[z] such that QY₁ + RY₂ = I.

Now suppose that there exist polynomial matrices Y₁ and Y₂ such that QY₁ + RY₂ = I. Then, for any x in X_Q,

RY₂x = x - QY₁x.

Hence, π_Q(RY₂x) = π_Q(x) = x for every x in X_Q; in particular, we may write

B = π_Q(RY₂B),

where B is as before. Let Y₂B = Σ_{i=0}^{N-1} z^i D_i. It then follows that

B = π_Q(R Σ_{i=0}^{N-1} z^i D_i)
  = [Γ(I)  (φ∘Γ)(I)  ···  (φ^{N-1}∘Γ)(I)] col(D̃₀, ..., D̃_{N-1})
  = B[G  F(σG)  ···  F(σF)···(σ^{N-2}F)(σ^{N-1}G)] col(D̃₀, ..., D̃_{N-1}),

where the matrices D̃_i over A are appropriate shifts of the D_i. Since {b₁, b₂, ..., b_n} is a basis for X_Q, this implies that
the N-step reachability matrix [G  F(σG)  ···  F(σF)···(σ^{N-2}F)(σ^{N-1}G)] is right-invertible over A. Consequently, the system Σ = (F, G, H) is reachable in N steps. □

(4.13) REMARK. A dual version of Theorem (4.12) characterizes observability in N steps. We state this without proof. The Fuhrmann realization Σ(P, Q, R) = (F, G, H) is observable in N steps if and only if there exist polynomial matrices Y₃ and Y₄ such that

Y₃Q + Y₄P = I. □

It is a simple calculation to verify that the Fuhrmann realization associated with the polynomial matrix representation Ŵ_Σ(z) = H(zI - F)⁻¹G is itself A-isomorphic to Σ = (F, G, H). Consequently, from Theorem (4.12) we immediately have the following analog of the familiar time-invariant theory result:

(4.14) COROLLARY. The pair (F, G) is reachable in N steps if and only if there exist polynomial matrices Y₁ and Y₂ such that

(zI - F)Y₁ + GY₂ = I. □
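Corollary (4.14) can be illustrated on the pair of Example (2.6): for F = 1 and G = β (the even-time indicator), one polynomial solution of (zI - F)Y₁ + GY₂ = I is Y₁ = β - 1 and Y₂ = z + 1, the cancellation resting on the identity β + σβ = 1. A numerical sketch (our choice of Y₁, Y₂, not from the text; series stored as {power: right coefficient} with az = z(σa)):

```python
beta = lambda k: 1.0 if k % 2 == 0 else 0.0

def mul(A, B):
    # (z^m a)(z^n b) = z^(m+n) (sigma^n a) b
    C = {}
    for m, a in A.items():
        for n, b in B.items():
            sa = lambda k, a=a, n=n: a(k - n)   # sigma^n a
            prev = C.get(m + n, lambda k: 0.0)
            C[m + n] = lambda k, p=prev, sa=sa, b=b: p(k) + sa(k) * b(k)
    return C

def add(A, B):
    C = dict(A)
    for n, b in B.items():
        prev = C.get(n, lambda k: 0.0)
        C[n] = lambda k, p=prev, b=b: p(k) + b(k)
    return C

one = lambda k: 1.0
zF = {1: one, 0: lambda k: -1.0}                # z - F, with F = 1
G  = {0: beta}
Y1 = {0: lambda k: beta(k) - 1.0}               # beta - 1
Y2 = {1: one, 0: one}                           # z + 1

total = add(mul(zF, Y1), mul(G, Y2))
for k in range(-4, 5):
    for p, c in total.items():
        assert abs(c(k) - (1.0 if p == 0 else 0.0)) < 1e-12
```

The z-coefficient of the sum is (β - 1) + σβ, which vanishes identically because β and its shift partition the time axis; the constant coefficient is 1.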
Let Q and Q̂ be any r x r and r̂ x r̂ invertible polynomial matrices. We will now give necessary and sufficient conditions for X_Q and X_Q̂ to be A[z]-module isomorphic. For the case A = a field, A[z] is the usual ring of polynomials, and this issue was resolved by FUHRMANN [1976, Thm. 4.7]; for the case A = an arbitrary commutative ring, by KHARGONEKAR [1982, Thm. 3.6]. The results we have obtained are natural extensions of those of Fuhrmann and Khargonekar.

(4.15) PROPOSITION. Let Q and Q̂ be r x r and r̂ x r̂ invertible polynomial matrices. A map Λ : X_Q → X_Q̂ is a right A[z]-module isomorphism if and only if there exist polynomial matrices C, D, Y₁, Y₂, Y₃, and Y₄ such that for any x in X_Q

(4.16) Λ(x) = π_Q̂(Cx),

and such that

(4.17) CQ = Q̂D,
(4.18) CY₁ + Q̂Y₂ = I,
(4.19) Y₃D + Y₄Q = I.
We omit the proof of this proposition since it closely resembles that of KHARGONEKAR [1982, Thm. 3.6]. This proposition is used later in the dissertation to prove further results. Let f be an input/output map. In general, there will exist several polynomial matrix-fraction representations for Ŵ_f(z). We now address the following problem: if

Ŵ_f = PQ⁻¹R + S = P̂Q̂⁻¹R̂ + Ŝ

are two polynomial matrix representations for Ŵ_f, what conditions must P, Q, R, S, P̂, Q̂, R̂, and Ŝ satisfy in order that the Fuhrmann realizations Σ(P, Q, R, S) and Σ(P̂, Q̂, R̂, Ŝ) be A-isomorphic? This problem is closely related to the problem of strict system equivalence (see ROSENBROCK [1970] and FUHRMANN [1977] for strict system equivalence for systems over fields). Our results closely resemble those obtained by FUHRMANN [1977, Thm. 4.1] and by KHARGONEKAR [1982, Thm. 4.3] (for systems over rings). We state without proof the following theorem.

(4.20) THEOREM. Let f be an input/output map whose associated transfer matrix

Ŵ_f(z) = PQ⁻¹R = P̂Q̂⁻¹R̂.
Then, the Fuhrmann realizations Σ(P, Q, R) and Σ(P̂, Q̂, R̂) are A-isomorphic if and only if there exist polynomial matrices C, D, Y₁, Y₂, Y₃, Y₄ such that

(4.21) CQ = Q̂D,  CY₁ + Q̂Y₂ = I,  Y₃D + Y₄Q = I.

Essentially, equations (4.21) of the above theorem state that the system matrices (in the sense of ROSENBROCK [1970]) must be polynomially equivalent for their respective natural realizations to be A-isomorphic. In the next chapter we continue developing the polynomial model theory and investigate Bezout polynomial factorizations of transfer-function matrices.
PAGE 54
CHAPTER FIVE
POLYNOMIAL FACTORIZATIONS OF TRANSFER-FUNCTION MATRICES

In this chapter, we consider the issues of existence and computation of polynomial matrix fraction representations. Chapter Six explores applications of the theory developed here to the design of feedback control systems. The controllers designed will be specified in terms of a polynomial matrix fraction representation of the transfer-function matrix, and can be implemented using the polynomial realization theory described in the previous chapter. We would like to emphasize that all of our results are constructive.

Many frequency-domain methods used to design controllers for time-invariant systems begin with polynomial factorizations of the plant transfer-function matrix G(s) of the form G(s) = Q⁻¹(s)R(s), with

(5.1)  Q(s)Y₁(s) + R(s)Y₂(s) = I .

See, for example, the books of ROSENBROCK [1970] and WOLOVICH [1974]. Here, Q, R, Y₁, and Y₂ are polynomial matrices, and (5.1) is referred
to as a right-Bezout polynomial factorization. For time-invariant systems, the existence of these factorizations is not an issue: any plant transfer-function matrix admits both left- and right-Bezout polynomial factorizations. This, however, is not the case for time-varying systems (see Example (5.9)). We therefore need to understand which class of time-varying systems admits such factorizations before we use them to design control systems. Theorem (5.2) answers just this question; it essentially states that a transfer-function matrix admits Bezout factorizations if and only if the associated input/output map admits a canonical realization.

(5.2) THEOREM. Let f be an input/output map, and let W_f(z) be the transfer-function matrix associated with f. Then the following statements are equivalent.

(a) There exist polynomial matrices Q, R, Y₁, and Y₂ over A[z] with Q invertible and such that

W_f(z) = Q⁻¹R ,  QY₁ + RY₂ = I ,

i.e., W_f(z) admits a right-Bezout polynomial factorization.

(b) There exist polynomial matrices P, Q̂, Y₃, and Y₄ with Q̂ invertible and such that

W_f(z) = PQ̂⁻¹ ,  Y₃P + Y₄Q̂ = I ,
i.e., W_f(z) admits a left-Bezout polynomial factorization.

(c) f admits a canonical realization.

PROOF. The proof of this theorem is extremely long and rather intricate, and may be found in Appendix B. We would like to emphasize that the proof is constructive. □

We now present a systematic procedure for obtaining the polynomial factorization (i.e., the matrices Q, R, Y₁, and Y₂) of part (a) of this theorem. An almost identical technique yields the polynomial factorization of part (b).

Let Σ = (F, G, H) be a (given) canonical realization of the input/output map f.

STEP I. Since the given system Σ is canonical, it is reachable in N steps for some integer N. Let

R_N := [G  F(σG)  F(σF)(σ²G)  ...  F(σF)⋯(σ^(N−2)F)(σ^(N−1)G)]

be the N-step reachability matrix for the pair (F, G). Compute a right-inverse U for R_N over A. Compute the polynomial matrices X₁ (of degree N − 2, whose coefficients are obtained from U and shifts of F and G by matching like powers of z in the identity displayed below) and
X₂ = [I  zI  ...  z^(N−1)I] U .

Here, each of the identity matrices is an n×n block. It will happen that, with X₁ and X₂ defined as above,

(zI − F)X₁ + GX₂ = I .

STEP II. Find matrices X and Y over A such that

M_Q := [ F  X ]
       [ H  Y ]

is invertible over A. Different choices for the matrices X and Y will result in unimodularly related Bezout polynomial factorizations (see Appendix B).

STEP III. Compute N_Q := M_Q⁻¹ and partition it as

N_Q = [ A'  * ]
      [ B'  * ] ,

where A' is an n×n matrix and B' is a p×n matrix. It will happen, because of the observability of the pair (F, H) (see Proposition (B.1)), that
the pair (A', B') is reachable in K steps for some integer K. Determine (using Proposition (2.6)) an n×p matrix S over A such that Â := A' + SB' is σ-nilpotent, i.e., for some integer L,

Â(σÂ)(σ²Â) ⋯ (σ^(L−1)Â) = 0 .

STEP IV. Define the matrix

W := [ I  (σ^K S) ]
     [ 0     0    ]

and compute the polynomial matrix

V = ( Σ_{i≥0} (N_Q W z)^i ) N_Q ,

the sum being finite because N_Q W z is nilpotent. Partition V as

(5.3)  V = [ *  * ]
           [ D  Q ] ,

where Q is a p×p matrix.

STEP V. Define the polynomial matrices

R := DG ,  Y₁ := HX₁(zS + X) − Y ,
Y₂ := X₂(zS + X) .

Then R, Y₁, Y₂, and Q (as defined in (5.3)) determine the desired polynomial factorization of W_f(z) as in Theorem (5.2).

We now illustrate this procedure with two examples.

(5.4) EXAMPLE. Consider the linear time-varying system Σ = (F, g, h) where F(k) = 1 for all k in Z, and g = h = a, where a(k) = 1 if k is even and a(k) = 0 otherwise (see Example (2.5)). We systematically employ the procedure described earlier to obtain a right-Bezout polynomial factorization for W_f(z) = h(zI − F)⁻¹g.

STEP I. The pair (F, g) is easily seen to be reachable in two steps, and R₂ = [g  F(σg)] = [a  1−a]. A right-inverse for R₂ is simply U = [1  1]'. Then

X₁ = a − 1 ,  X₂ = [1  z] U = z + 1 .

STEP II. With X = 0 and Y = 1, the matrix

M_Q := [ F  X ] = [ 1  0 ]
       [ h  Y ]   [ a  1 ]
is invertible over A.

STEP III.

N_Q := M_Q⁻¹ = [  1  0 ]
               [ −a  1 ] .

We need to find an S in A such that Â = 1 − Sa is σ-nilpotent. By inspection, S = 1 will suffice: Â = 1 − a satisfies (1 − a)·σ(1 − a) = (1 − a)a = 0.

STEP IV. Defining

W := [ 1  1 ]
     [ 0  0 ] ,

we compute the polynomial matrix V:

V = ( Σ_{i=0}^{2} (N_Q W z)^i ) N_Q = [ −1 − az     −z − az²    ]
                                      [  a + az   az² + az − 1  ] .

STEP V. The desired factorization is then

W_f(z) = Q⁻¹R ,  QY₁ + RY₂ = 1 ,
where

Q = az² + az − 1 ,  R = a ,  Y₁ = −1 ,  Y₂ = z² + z .

(5.5) EXAMPLE. Let us consider an armature-controlled dc motor described by the input/output differential equation

(5.6)  θ̈(t) + α(t)θ̇(t) = β(t)u(t) .

Here, the input u(t) is the applied armature voltage, the output θ(t) is the angular position of the motor shaft, and the time-varying coefficients α(t) and β(t) are given by

α(t) = 0.74 + 0.3k(t) ,  β(t) = 5.56k(t) ,

where k(t) is the (normalized) effective motor torque "constant". The nominal value of k(t) is 1; however, during operation k(t) may vary significantly due to motor-shaft loading and heating effects. We assume that the time-variation of k(t) is known a priori; for example, the motor may be part of a machining operation, and k(t) could be identified off-line, say from measurements taken during test runs.
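The sampled-data model (5.7) given below can be cross-checked numerically: one forward-Euler step of the motor equation, with the second-order βT²/2 input term retained in the position row, reproduces it exactly. In this Python sketch the torque profile k(t) is a made-up stand-in (only its boundedness matters), and the sample point is arbitrary:

```python
import numpy as np

T = 0.01
def k_t(t):   return 1.0 + 0.2 * np.sin(0.5 * t)   # assumed torque-"constant" profile
def alpha(t): return 0.74 + 0.3 * k_t(t)
def beta(t):  return 5.56 * k_t(t)

def F(t): return np.array([[1.0, T], [0.0, 1.0 - alpha(t) * T]])
def G(t): return np.array([beta(t) * T**2 / 2, beta(t) * T])
H = np.array([1.0, 0.0])

# one Euler step of theta'' + alpha theta' = beta u, keeping the second-order
# beta T^2/2 input term in the position row, equals the state model (5.7)
x, u, t = np.array([0.3, -0.1]), 2.0, 1.0
euler = np.array([x[0] + T * x[1] + beta(t) * T**2 / 2 * u,
                  x[1] + T * (-alpha(t) * x[1] + beta(t) * u)])
assert np.allclose(F(t) @ x + G(t) * u, euler)
```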
By sampling the continuous-time system (5.6) with a suitably small sampling period T, we obtain the following time-varying sampled-data state model Σ = (F, G, H) that approximates the motor characteristics:

(5.7)  [θ(kT+T)]   [ 1    T   ] [θ(kT)]   [βT²/2]
       [ω(kT+T)] = [ 0  1−αT  ] [ω(kT)] + [ βT  ] u(kT) ,

       θ(kT) = [1  0] [θ(kT)]
                      [ω(kT)] .

Here, ω(kT) is the angular velocity of the motor shaft. It can easily be verified that Σ = (F, G, H) is canonical. We wish to obtain a right-Bezout polynomial factorization for the transfer-function matrix W_Σ(z) = H(zI − F)⁻¹G. We shall later use this factorization (see Example (6.7)) to design a deadbeat controller for the motor.

We systematically employ the procedure described earlier.

STEP I. It can easily be shown that

(zI − F)X₁ + GX₂ = I ,

where
X₁ and X₂ are, respectively, a constant matrix and a first-degree polynomial matrix computed from a right-inverse of the two-step reachability matrix R₂ = [G  F(σG)].

STEP II. With X = [0  1/T]' and Y = 0, the matrix

M_Q := [ F  X ] = [ 1     T     0  ]
       [ H  Y ]   [ 0   1−αT   1/T ]
                  [ 1     0     0  ]

is invertible over A.

STEP III.

N_Q := M_Q⁻¹ = [    0       0      1    ]
               [   1/T      0    −1/T   ]
               [ −(1−αT)    T    1−αT   ] .

Notice that the matrix A' := the upper-left 2×2 block of N_Q is already σ-nilpotent. Hence we may choose S = [0  0]'.
STEP IV. Defining

W := [ I  S ]   [ 1  0  0 ]
     [ 0  0 ] = [ 0  1  0 ]
                [ 0  0  0 ] ,

we compute

V = ( Σ_{i=0}^{2} (N_Q W z)^i ) N_Q = [     0         0             1             ]
                                      [    1/T        0         (z − 1)/T         ]
                                      [ z − 1 + αT    T   z² + (αT − 2)z + 1 − αT ] .

STEP V. The desired factorization is then

W_Σ(z) = Q⁻¹R ,  QY₁ + RY₂ = 1 ,

where

(5.8)  Q = z² + (αT − 2)z + (1 − αT) ,  R = (βT²/2)z + (βT²/2)(αT + 1) ,

and, from Step V (here zS + X = X since S = 0),

Y₁ = HX₁X ,  Y₂ = X₂X .
We would like to remark that not all time-varying systems admit canonical realizations. This is demonstrated by the following example.

(5.9) EXAMPLE. Consider the linear time-varying system Σ = (F, g, h) where F(k) = h(k) = 1 for all k in Z, and g = Δ := the unit pulse concentrated at the origin. Let f_Σ be the input/output map associated with Σ. We will show that f_Σ does not admit a canonical realization, and that therefore, by Theorem (5.2), W_Σ(z) does not admit left- or right-Bezout polynomial factorizations.

Suppose f_Σ did admit a canonical realization. Then, by Theorem (5.2), there exist polynomial matrices P and Q with Q invertible and such that

(5.10)  h(zI − F)⁻¹g = (z − 1)⁻¹Δ = PQ⁻¹ .

Let Q = Σ_{i=0}^{n} z^i Q_i. From (5.10) it follows that (z − 1)⁻¹ΔQ = P is a polynomial. The coefficient of z⁻¹ above must therefore be zero:

0 = ΔQ₀ + (σΔ)Q₁ + ... + (σⁿΔ)Qₙ .

Multiplying this equation on the left by (σ^iΔ), we conclude that (σ^iΔ)Q_i = 0 for i = 0, 1, ..., n. Thus ΔQ(z) = 0, which is impossible since Q is assumed to be invertible. □
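The obstruction in Example (5.9) can also be seen by direct simulation: the input reaches the state only through its value at k = 0, so the system fails to be reachable at every other time. A minimal sketch (the window size is an arbitrary choice of ours):

```python
def simulate(u, N):
    # x(k+1) = x(k) + delta(k) u(k), y(k) = x(k); delta = unit pulse at k = 0
    x, y = 0.0, []
    for k in range(-N, N):
        y.append(x)
        x = x + (u(k) if k == 0 else 0.0)
    return y

# the input influences the state only through u(0): zero there means zero forever
assert simulate(lambda k: 0.0, 4) == [0.0] * 8
assert simulate(lambda k: 5.0, 4) == [0.0] * 5 + [5.0] * 3
```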
The central theorem of this chapter, Theorem (5.2), is stated for canonical linear time-varying systems. Under the weaker hypothesis of reachability we can obtain the following theorem (which is actually an intermediate result used in proving Theorem (5.2)); its proof may be found in Appendix B.

(5.11) THEOREM. Let Σ = (F, G, H) be a linear time-varying system that is reachable in N steps. Then there exist polynomial matrices P and Q with Q invertible such that

W_Σ(z) = H(zI − F)⁻¹G = PQ⁻¹

and the Fuhrmann realization Σ(P, Q, I) is A-isomorphic to Σ = (F, G, H). □

We would like to remark in closing that the systematic procedure given in this section for computing Bezout polynomial factorizations for time-varying systems carries through, with minor modifications, for systems over a principal ideal domain. The interested reader may refer to POOLLA and KHARGONEKAR [1983] for more details.

In the next chapter, we apply the theory developed here and in Chapter Four to the design of feedback control systems.
CHAPTER SIX
APPLICATIONS OF THE POLYNOMIAL THEORY TO FEEDBACK CONTROL

We will now explore applications of the theory developed in earlier chapters to the design of feedback control systems. In particular, we will show that via dynamic output feedback it is possible to "coefficient assign" canonical systems. We would like to stress that our design techniques are constructive. The controllers we obtain will be specified in terms of a polynomial matrix fraction representation of their transfer functions, and can then be implemented (realized) using the polynomial realization theory of Chapter Four.

Let us consider a canonical linear time-varying system Σ = (F, G, H) over A. From Theorem (5.2) it follows that W_Σ(z) = H(zI − F)⁻¹G admits a right-Bezout polynomial factorization:

(6.1)  W_Σ(z) = Q⁻¹R ,  QY₁ + RY₂ = I .

Consider now the feedback control system shown below.
V(z) → ○ → E(z) → [Σ_c] → U(z) → [Σ] → Y(z) ,

with Y(z) fed back to the summing junction ○. Here Σ_c is the controller (to be designed). Suppose Σ_c is specified in terms of the following collection of input/output difference equations:

(6.2)  Q_c ξ(z) = R_c E(z) ,
       U(z) = P_c ξ(z) .

This is the so-called Rosenbrock representation (see the discussion at the end of Chapter Three); here P_c, Q_c, and R_c are polynomial matrices over A[z], and Q_c is assumed to be invertible. Then the controller transfer-function matrix has the form W_Σc(z) = P_c Q_c⁻¹ R_c. We can then represent the closed-loop system shown above by the equations

(6.3)  [  Q   −RP_c ] [ Y(z) ]   [  0  ]
       [ −R_c   Q_c ] [ ξ(z) ] = [ R_c ] V(z) .
Thus, we see that the closed-loop system dynamics are determined by the inverse of the matrix

π(z) = [  Q   −RP_c ]
       [ −R_c   Q_c ] .

In particular, the closed-loop system is internally uniformly asymptotically stable if and only if π⁻¹(z) is a stable Laurent series (see Chapter Seven). We now address the following question: to what extent can the matrix π⁻¹(z) be "assigned" by selecting P_c, Q_c, and R_c, i.e., by designing the controller? Recall that we must constrain Q_c to be invertible and W_Σc(z) = P_cQ_c⁻¹R_c to be proper in order to be able to realize the controller.

In the time-invariant case this problem has been studied extensively. It has been shown (see, for example, EMRE and KHARGONEKAR [1982]) that with R_c = I it is possible to arbitrarily assign the coefficients of the polynomial

det π(z) = det(QQ_c − RP_c) .

For the time-invariant case, since the roots of the equation det π(z) = 0 are the closed-loop system poles, this assignability result implies that it is possible to arbitrarily alter the closed-loop system dynamics by appropriately designing the controller (i.e., selecting P_c and Q_c).
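For a scalar time-invariant illustration of this coefficient-assignment result (the plant numbers are invented, and we take R_c = I as in the discussion above): with q(z) = z − 2 and r = 1, the controller fraction p_c/q_c below makes det π(z) = qq_c − rp_c = z², a deadbeat characteristic polynomial:

```python
import numpy as np

# plant W = q^{-1} r with q(z) = z - 2 (unstable pole), r(z) = 1; R_c = I
q, r = np.array([1.0, -2.0]), np.array([1.0])
# choose the controller fraction p_c / q_c so that det pi = q q_c - r p_c = z^2
qc, pc = np.array([1.0, 2.0]), np.array([-4.0])
char_poly = np.polysub(np.polymul(q, qc), np.polymul(r, pc))
assert np.allclose(char_poly, [1.0, 0.0, 0.0])   # deadbeat: all poles at z = 0
```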
In the time-varying case, the matrix π(z) is defined over the noncommutative ring A[z], and there is no known definition for the determinant of a matrix over a noncommutative ring. Thus, we cannot speak of assigning the coefficients of det π(z). However, we can still consider the problem posed earlier: to what extent can π⁻¹(z) be assigned by choosing W_Σc(z)? In this regard, we have the following result.

(6.4) THEOREM. Let Σ = (F, G, H) be a canonical linear time-varying m-input, p-output system over A, and let

W_Σ(z) = H(zI − F)⁻¹G = Q⁻¹R ,  QY₁ + RY₂ = I .

Let φ be any p×p monic polynomial matrix with

deg(φ) ≥ 2 deg(Q) + max{deg(Y₁), deg(Y₂) − 1} .

Then there exists a controller Σ_c with proper transfer-function matrix W_Σc(z) = P_cQ_c⁻¹R_c, where P_c, Q_c, and R_c are polynomial matrices, and there exist unimodular polynomial matrices T₁(z) and T₂(z) such that

π(z) = T₁(z) [ I  0 ] T₂(z) .
             [ 0  φ ]
PROOF. Since Q is an invertible polynomial matrix, it follows from Proposition (3.2) that there exists a polynomial matrix T such that M := TQ is monic and deg(M) = deg(Q). We now divide φ on the left by TQ to obtain

(6.5)  φ = M₁TQ + N ,

where deg(N) < deg(Q). Define the polynomial matrices P_c, Q_c, and R_c by

(6.6)  P_c := Y₂ ,  Q_c := M₁T + NY₁ ,  R_c := N .

We shall show that the desired controller has transfer-function matrix W_Σc(z) = P_cQ_c⁻¹R_c, where P_c, Q_c, and R_c are defined by (6.6). Before we do this, we must first ensure (i) that Q_c is invertible, and (ii) that P_cQ_c⁻¹R_c is proper. To prove (i), notice that

deg(T⁻¹) = deg(QM⁻¹) = deg(Q) − deg(M) = 0 ,
deg(M₁) = deg(φ) − deg(M) ≥ deg(Q) + deg(Y₁) .

It then follows that

Q_c = M₁T(I + T⁻¹M₁⁻¹NY₁) ,

and that
deg(T⁻¹M₁⁻¹NY₁) ≤ deg(T⁻¹) − deg(M₁) + deg(N) + deg(Y₁)
              ≤ 0 − deg(Q) − deg(Y₁) + deg(N) + deg(Y₁)
              = deg(N) − deg(Q) < 0 .

Thus I + T⁻¹M₁⁻¹NY₁ is monic, and hence Q_c is invertible. We now prove (ii). Notice that we can write

P_cQ_c⁻¹R_c = Y₂(I + T⁻¹M₁⁻¹NY₁)⁻¹T⁻¹M₁⁻¹N .

However, since deg(T⁻¹) = deg(I + T⁻¹M₁⁻¹NY₁)⁻¹ = 0, it follows that

deg(P_cQ_c⁻¹R_c) ≤ deg(Y₂) + deg(N) − deg(M₁) < 0 .

Thus the controller transfer-function matrix W_Σc(z) = P_cQ_c⁻¹R_c is well defined. With the definitions (6.6) for P_c, Q_c, and R_c, it can now be mechanically verified that

π(z) =: T₁(z) [ I  0 ] T₂(z)T₃(z) .
              [ 0  φ ]
The above equation, together with the observation that T₁, T₂, and T₃ are unimodular polynomial matrices, completes the proof. □

Our technique in the above proof closely resembles that of KHARGONEKAR and OZGULER [in press].

(6.6) REMARK. We could, for instance, choose φ(z) = z^r I with r a sufficiently large integer to satisfy the degree constraint on φ. In this event, the controller designed as in the above theorem will result in a "deadbeat" closed-loop system response. In other words, with this controller, the closed-loop system response resulting from any initial state will become zero after a finite number of steps (assuming the external input V(z) = 0). □

We now present a systematic procedure for designing a controller Σ_c = (F_c, G_c, H_c) to yield a desired closed-loop system response (specified by the polynomial matrix φ). This procedure is essentially distilled from the proof of the previous theorem and is exhibited explicitly for the reader's convenience.

STEP I. Given a canonical linear time-varying system Σ = (F, G, H), first obtain a right-Bezout polynomial factorization for the transfer-function matrix as

W_Σ(z) = Q⁻¹R ,  QY₁ + RY₂ = I .
This can be done by systematically employing the procedure described in the previous chapter.

STEP II. Let φ(z) be a (given) monic p×p polynomial matrix which represents the desired closed-loop system response as in the statement of Theorem (6.4). Recall that there is a constraint on φ:

deg(φ) ≥ 2 deg(Q) + max{deg(Y₁), deg(Y₂) − 1} .

Left-divide φ by Q to obtain

φ = XQ + N ,  with deg(N) < deg(Q) .

STEP III. Compute the polynomial matrices

P_c := Y₂ ,  Q_c := X + NY₁ ,  R_c := N .

Then the controller transfer-function matrix is given by W_Σc(z) = P_cQ_c⁻¹R_c.

STEP IV. Realize the controller by a state model Σ_c = (F_c, G_c, H_c). This can be done using the polynomial realization theory of Chapter Four. If Q is monic, it will happen that Q_c is also monic, in which case explicit formulae for the state model Σ_c = (F_c, G_c, H_c, J_c) are given by (4.11).
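Steps II-III can be mimicked in the scalar time-invariant case, where left division is ordinary polynomial division. In the toy sketch below the plant numbers are invented, and Y₁ = 0, Y₂ = 1 is a valid Bezout pair because R = 1; the closed-loop characteristic polynomial then equals the chosen φ:

```python
import numpy as np

# plant: Q = z - a, R = 1; Bezout: Q*Y1 + R*Y2 = 1 with Y1 = 0, Y2 = 1
a = 0.9
Q, R = np.array([1.0, -a]), np.array([1.0])
Y1, Y2 = np.array([0.0]), np.array([1.0])

# desired phi(z) = z^2 (deadbeat); Step II: left-divide phi = X*Q + N
phi = np.array([1.0, 0.0, 0.0])
X, N = np.polydiv(phi, Q)                      # X = z + a, N = a^2

# Step III: controller matrices
Pc, Qc, Rc = Y2, np.polyadd(X, np.polymul(N, Y1)), N

# closed-loop characteristic polynomial (with these sign conventions) is phi
char = np.polyadd(np.polymul(Qc, Q), np.polymul(np.polymul(Rc, R), Pc))
assert np.allclose(char, phi)
```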
We conclude this chapter with an example illustrating the design procedure outlined above.

(6.7) EXAMPLE. Consider again the armature-controlled dc motor described in Example (5.5), with a time-varying motor torque "constant"; the time-variation could be due to motor-shaft loading or heating effects. We shall use the systematic procedure described above to design a "deadbeat" controller for the motor.

STEP I. The time-varying sampled-data state model Σ = (F, G, H) of the motor is specified by equations (5.7). We have already, in Example (5.5), computed a right-Bezout polynomial factorization for W_Σ(z):

W_Σ(z) = Q⁻¹R ,  QY₁ + RY₂ = 1 ,

where

Q = z² + (αT − 2)z + (1 − αT) ,  R = (βT²/2)z + (βT²/2)(αT + 1) ,

and Y₁ and Y₂ are as in (5.8).

STEP II. Since we wish to design a "deadbeat" controller for Σ, we choose φ(z) = z⁴ [see Remark (6.6)]. Notice that
the degree constraint deg(φ) ≥ 2 deg(Q) + max{deg Y₁, deg Y₂ − 1} is satisfied. Left-dividing φ by Q as φ = XQ + N, we obtain

X(z) = z² − z σ⁻¹(αT − 2) + c ,
N(z) = [σ⁻²(αT − 2)·σ⁻¹(1 − αT) − (αT − 2)c] z − (1 − αT)c ,

where

c = σ⁻²(αT − 2)·σ⁻¹(αT − 2) − σ⁻²(1 − αT) .

STEP III. The controller transfer-function matrix is then given by

W_Σc(z) = Y₂(X + NY₁)⁻¹N

and can readily be realized using the formulae (4.11), since in this case Q_c(z) is monic. We leave the details to the interested reader. □

In this first part of the dissertation we have examined a polynomial theory for linear time-varying systems. For the remainder of the dissertation we turn our attention to the study of linear time-varying systems based on the ring of stable-proper rational functions in ℓ∞(Z)[[z⁻¹]].
CHAPTER SEVEN
STABILITY

In this chapter, we briefly review some important well-known concepts concerning the stability of linear time-varying systems. We also relate some of these concepts to the transfer-function theory based on skew rings developed in this dissertation. Some of the material in this chapter is condensed from Section 4 of KAMEN, KHARGONEKAR and POOLLA [1984]; the reader is referred to that paper for further details.

In the past, a good deal of research has been done on the stability of both continuous- and discrete-time linear time-varying systems. For example, the interested reader may consult the articles by WILLEMS [1970] and ANDERSON and MOORE [1969, 1981a] (for Liapunov stability theory), CESARI [1963] and STARZINSKII [1955] (for stability of special classes of time-varying systems such as periodic systems), FREEDMAN and ZAMES [1968] (for stability of slowly-varying systems), etc.

For any vector x in Rⁿ, let ‖x‖ denote the Euclidean norm of x. For an m×n matrix M over R, let ‖M‖ := sup ‖Mx‖/‖x‖. For an n×m matrix N over A, let the norm of N be defined by ‖N‖ := sup_t ‖N(t)‖.
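The two norms just defined are easy to compute numerically: ‖M‖ is the largest singular value of M, and for N over A one takes the supremum of ‖N(t)‖ over t (below approximated over a finite window; the matrices are invented examples):

```python
import numpy as np

M = np.array([[3.0, 0.0], [4.0, 0.0]])
assert np.isclose(np.linalg.norm(M, 2), 5.0)      # ||M|| = sup ||Mx||/||x||

# ||N|| = sup_t ||N(t)||, approximated by sampling t on a finite window
N_t = [np.diag([np.sin(t), np.cos(t)]) for t in range(-50, 51)]
norm_N = max(np.linalg.norm(Nt, 2) for Nt in N_t)
assert norm_N <= 1.0 + 1e-12                      # here each ||N(t)|| <= 1
```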
Let f : Aᵐ → Aᵖ be an input/output map. We shall say that f is bounded-input bounded-output (BIBO) stable if and only if for any bounded input sequence, i.e., for any u over ℓ∞(Z)ᵐ, the corresponding output sequence y = f(u) is bounded, i.e., y is over ℓ∞(Z)ᵖ.

Let Σ = (F, G, H, J) be any (fixed) realization of f. Consider the free behaviour of the system Σ described by the vector difference equation

(7.1)  x(k + 1) = F(k)x(k) .

The system Σ is said to be internally uniformly asymptotically stable (u.a.s.) if and only if for every real number ε > 0 there exists a positive integer N_ε such that for any initial time t₀ in Z and any initial state x(t₀) with ‖x(t₀)‖ ≤ 1, we have ‖x(t₀ + i)‖ < ε for all i ≥ N_ε. Here, x(t₀ + i) is the solution of (7.1) at time t₀ + i starting from the initial state x(t₀).

Let P(z) be an n×m matrix over the skew ring A((z⁻¹)), i.e., P(z) is a matrix Laurent series of the form

P(z) = Σ_{i=N}^{∞} z⁻ⁱ P_i .

Then P(z) is said to be stable if and only if ‖P_i‖ → 0 as i → ∞.
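The condition ‖P_i‖ → 0 can be checked numerically for a simple scalar system. In the sketch below, the coefficients of (z − F)⁻¹ are taken to be the i-step transition products (one common convention for the expansion; the particular F is invented for illustration):

```python
import numpy as np

def F(k): return 0.5 + 0.4 * np.sin(k)   # |F(k)| <= 0.9 for all k

def P(i, k):
    # i-th coefficient of (z - F)^{-1}, taken here (one common convention)
    # as the i-step transition product F(k-1) F(k-2) ... F(k-i)
    out = 1.0
    for j in range(1, i + 1):
        out *= F(k - j)
    return out

sup_norms = [max(abs(P(i, k)) for k in range(-30, 31)) for i in range(40)]
# ||P_i|| -> 0 geometrically, so (zI - F)^{-1} is a stable series; by
# Proposition (7.2) below, x(k+1) = F(k) x(k) is then internally u.a.s.
assert sup_norms[0] == 1.0 and sup_norms[39] < 1e-4
```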
In terms of this notion, we can characterize internal stability as follows (see GREEN and KAMEN [1983] and KAMEN, KHARGONEKAR and POOLLA [1984, Prop. (4.4)]).

(7.2) PROPOSITION. Let Σ = (F, G, H) be a linear time-varying system over A. Then Σ is internally u.a.s. if and only if (zI − F)⁻¹ is a stable matrix power series.

Note the similarity this result bears to the time-invariant theory.

Let f be an input/output map, and let Σ = (F, G, H, J) over A be any (fixed) realization of f. In contrast with the time-invariant case, internal stability of Σ does not in general imply BIBO stability of f. However, for the class of bounded linear time-varying systems we have the following result (the proof is omitted on account of its relative ease).

(7.3) PROPOSITION. Let f be an input/output map and let Σ = (F, G, H, J) over ℓ∞(Z) be any (fixed) realization of f. Suppose Σ is internally u.a.s. Then f is BIBO stable. □

The following examples illustrate phenomena peculiar to time-varying systems with unbounded coefficients.

(7.4) EXAMPLE. Consider the time-varying system Σ = (F, G, H) over A where F = 1/2, G = 1, and H is an unbounded time function chosen so that H(k)(1/2)^k grows without bound as k → ∞.
From Proposition (7.2) it is evident that Σ is internally u.a.s. This system, however, is not BIBO stable, because the output y(k) resulting from the input u = Δ = the unit impulse at the origin grows without bound. Notice that Proposition (7.3) does not apply here, because H is not over ℓ∞(Z). □

(7.5) EXAMPLE. Consider the linear time-varying system Σ = (F, G, H) over A defined by G = H = 1 and

F(k) = 0 for k < 0 ,  F(k) = 1 for k ≥ 0 .

Define a time function T over A by

T(k) = 1 for k < 0 ,  T(k) = 2^k for k ≥ 0

(any choice with comparable growth serves). Notice that T is not an element of ℓ∞(Z) and that T has an inverse T⁻¹ over A. Consider the system Σ̂ = (F̂, Ĝ, Ĥ) over A, where
F̂ = (σT⁻¹)FT ,  Ĝ = (σT⁻¹)G ,  Ĥ = HT .

The systems Σ and Σ̂ are algebraically A-isomorphic (see Chapter Two). However, Σ is not internally u.a.s., while Σ̂ is internally u.a.s. This is because the transformation T is not Liapunov (recall that an n×n matrix T is a Liapunov transformation if and only if T and T⁻¹ are over ℓ∞(Z)). □

Let f be an input/output map with associated transfer-function matrix W_f(z). Suppose there exist polynomial matrices P, Q, and R of appropriate sizes, with Q monic, such that W_f(z) admits the polynomial representation

(7.6)  W_f(z) = PQ⁻¹R .

In Chapter Four we described in detail a technique for obtaining the (natural) Fuhrmann realization Σ(P, Q, R) = (F, G, H, J) of f associated with the particular polynomial representation (7.6); the explicit formulae for F, G, H, and J are given in equation (4.11). The following theorem characterizes internal stability of the Fuhrmann realization in terms of the polynomial representation (7.6).
(7.7) THEOREM. Let f be an input/output map whose associated transfer-function matrix W_f(z) admits the polynomial representation W_f(z) = PQ⁻¹R. Further assume that P, Q, and R are polynomial matrices over ℓ∞(Z)[z], with Q monic. Then the Fuhrmann realization Σ(P, Q, R) = (F, G, H, J) is internally u.a.s. if and only if Q⁻¹(z) is a stable matrix Laurent series.

PROOF. Since the Fuhrmann realization associated with the polynomial factorization W_Σ(z) = H(zI − F)⁻¹G + J is itself Σ = (F, G, H, J), it follows in particular that X_Q and X_{zI−F} must be isomorphic as right ℓ∞(Z)[z]-modules. By Proposition (4.15), this implies that there exist polynomial matrices C, D, Y₁, and Y₂ over ℓ∞(Z)[z] such that

CQ = (zI − F)D ,  Y₁D + Y₂Q = I .

From these equations we can write

Q⁻¹ = Y₁DQ⁻¹ + Y₂ = Y₁(zI − F)⁻¹C + Y₂ .

It is now clear from the last equality that if (zI − F)⁻¹ is stable then Q⁻¹ is stable. The converse follows from an identical argument. □
In subsequent chapters we shall deal only with linear time-varying systems with bounded coefficients, i.e., systems defined over the difference subring ℓ∞(Z) ⊂ A. We shall also require that any controller we design be over ℓ∞(Z). These are physically reasonable constraints, since most time-varying plants that arise in practice have bounded time-variation, and implementation of controllers with unbounded coefficients would be numerically ill-conditioned. (Technically, these constraints substantially complicate the proofs and make the results harder to obtain.)
CHAPTER EIGHT
STABILITY AND ASYCONTROLLABILITY

In this chapter, we introduce the key notion of asycontrollability, which is closely related to being able to stabilize a linear time-varying system by dynamic state feedback. ANDERSON and MOORE [1981a] have defined the notion of stabilizability, which is equivalent to being able to stabilize a linear time-varying system by nondynamic (i.e., memoryless) state feedback. One of our main results in this chapter, Theorem (8.5), shows the equivalence of stabilizability and asycontrollability. A striking conclusion of this theorem is that dynamics in state feedback buy nothing extra as far as the problem of stabilization is concerned.

Let us consider a linear time-varying system Σ = (F, G, H, J) defined over the difference subring ℓ∞(Z) ⊂ A. Recall that by Corollary (4.14), Σ is reachable in N steps if and only if there exist polynomial matrices Y₁ and Y₂ such that

(8.1)  (zI − F)Y₁ + GY₂ = I .

Motivated by equation (8.1), we have the following key definition:
(8.2) DEFINITION. The system Σ is said to be asycontrollable if and only if there exist stable matrix Laurent series Y₁ and Y₂ such that

(zI − F)Y₁ + GY₂ = I .

This technical definition has a rather complex, precise system-theoretic interpretation (see Appendix C and also KHARGONEKAR and POOLLA [1984b]) in terms of stabilizing Σ via an open-loop control law. Roughly speaking, a system is asycontrollable if and only if it can be driven to zero "final" state asymptotically, using uniformly bounded input sequences along uniformly bounded state trajectories. In particular, this implies that if Σ can be stabilized using a dynamic state-feedback controller, then Σ is asycontrollable. The term asycontrollability (from asymptotically controllable) is borrowed from KHARGONEKAR and SONTAG [1982], who treat time-invariant systems over rings.

We shall also need the following definition. A system Σ = (F, G, H, J) over ℓ∞(Z) is called right-rational asycontrollable if and only if there exist polynomial matrices A, B, and S over ℓ∞(Z)[z], with S invertible and S⁻¹ a stable Laurent series, such that

(zI − F)AS⁻¹ + GBS⁻¹ = I .
We now relate right-rational asycontrollability to the existence of a stabilizing controller with the following result.

(8.3) THEOREM. Let Σ = (F, G, I) over ℓ∞(Z) be a linear time-varying system that is right-rational asycontrollable. Then there exists a controller Σ_c = (F_c, G_c, H_c) over ℓ∞(Z) such that the closed-loop system defined by

[ x  ]           [ F    GH_c ] [ x(k)  ]   [ G ]
[ x_c ] (k+1) =  [ G_c   F_c ] [ x_c(k)] + [ 0 ] v(k)

is internally uniformly asymptotically stable (u.a.s.).

SKETCH OF PROOF. We would like to remark that the proof of this result is constructive, enabling one to explicitly compute the controller using operations in the skew ring A[z]. The proof closely resembles that of Theorem (6.4). We outline the basic elements of the construction.

(i) Define K := max{deg(AS⁻¹), deg(BS⁻¹) − 1}. Pick any monic polynomial matrix S₁ over ℓ∞(Z)[z] with deg(S₁) ≥ K + 2 and such that S₁⁻¹ is a stable matrix Laurent series.

(ii) Divide S₁ on the left by (zI − F) to obtain
S₁ = M(zI − F) + N ,  with deg(N) = 0 .

(iii) Define the polynomial matrices

P_c := B ,  Q_c := MS + NA ,  R_c := N .

Then the controller transfer-function matrix is given by W_Σc(z) = P_cQ_c⁻¹R_c.

(iv) Implement the controller using the polynomial realization theory of Chapter Four. Explicit formulae for the controller in terms of the coefficient matrices of P_c, Q_c, and R_c are given (for the case when Q_c is monic) by equations (4.11). □

Let Σ = (F, G, H) be a linear time-varying system over ℓ∞(Z). ANDERSON and MOORE [1981a] have defined a notion of stabilizability which intuitively corresponds to requiring that the unstable modes be controllable. These authors then show that their notion is equivalent to the existence of a stabilizing memoryless state-feedback law u(k) = −L(k)x(k). We shall (perversely) take this to be the definition of stabilizability. In terms of our skew-ring framework, we phrase this as follows:
(8.4) DEFINITION. Let Σ = (F, G, H) over ℓ∞(Z) be a linear time-varying system. Then Σ is said to be stabilizable if and only if there exists an m×n matrix L over ℓ∞(Z) such that (zI − F + GL)⁻¹ is a stable matrix power series.

One can similarly define the dual notion of detectability.

Let Σ = (F, G, H) over ℓ∞(Z) be a stabilizable linear time-varying system, and let L be an m×n feedback matrix as in Definition (8.4). Notice that we can write

(zI − F)Y₁ + GY₂ = I ,

where Y₁ = (zI − F + GL)⁻¹ and Y₂ = LY₁. Since Y₁ and Y₂ are stable Laurent series, it follows from Definition (8.2) that Σ is asycontrollable. Thus, stabilizability implies asycontrollability. The much more difficult converse is also true.

(8.5) THEOREM. Let Σ = (F, G, H, J) be a linear time-varying system over ℓ∞(Z). Then Σ is asycontrollable if and only if Σ is stabilizable.

PROOF. The proof of this theorem is extremely long and may be found in Appendix C. We would like to remark that the essential technical difficulty in the proof is ensuring that the feedback matrix L is over ℓ∞(Z). □
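Definition (8.4) asks for a single bounded gain L that makes the closed-loop transition products decay. For a scalar plant this is easy to exhibit numerically (the plant is invented; the choice L(k) = (F(k) − 1/2)/G places the closed-loop coefficient at 1/2 for every k):

```python
import numpy as np

def F(k): return 2.0 + np.sin(k)      # bounded but unstable time-varying plant
G = 1.0
def L(k): return (F(k) - 0.5) / G     # memoryless gain over l-infinity: F - G L = 1/2

x = 1.0
for k in range(60):
    x = (F(k) - G * L(k)) * x         # closed-loop free motion x(k+1) = (F - GL) x(k)
assert abs(x) < 1e-17                 # decays like (1/2)^k, uniformly in start time
```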
We also immediately obtain the following:

(8.6) COROLLARY. A system Σ is right-rational asycontrollable if and only if it is asycontrollable.

PROOF. We have the following sequence of implications, proving our claim:

stabilizability ⟹(a) right-rational asycontrollability ⟹(b) asycontrollability ⟹(c) stabilizability .

Here, (a) follows from the discussion preceding Theorem (8.5), (b) follows trivially from the definitions of right-rational asycontrollability and asycontrollability, and (c) is a restatement of Theorem (8.5). □

Recall (from the discussion following Definition (8.2)) that if a linear time-varying system Σ can be stabilized by a dynamic state-feedback controller, then Σ is asycontrollable. This observation, together with Definition (8.4) of stabilizability, immediately offers the following surprising conclusion.
(8.7) COROLLARY. If a linear time-varying system Σ can be stabilized using dynamic state feedback, then Σ can also be stabilized using memoryless state feedback. □

(8.8) REMARK. Indeed, the above corollary is not (necessarily) expected, because there are classes of systems (for example, delay systems; see KAMEN [1982, Ex. 3, p. 371]) for which it is not true. For time-invariant systems over a field, it is a well-known fact that dynamic state feedback is equivalent to memoryless state feedback as far as the problem of stabilization is concerned. The proof of this fact (see any modern text on control theory) relies heavily on the Kalman canonical decomposition. No such decomposition exists for time-varying systems, because the "dimension" of the reachable space can depend on time. □

Since stabilizability and asycontrollability are equivalent notions, we shall henceforth speak only of stabilizability. It is important to find "nice" necessary and sufficient tests for stabilizability. This problem appears to be quite formidable unless one specializes to particular classes (e.g., periodic) of time-varying systems. We do have, however, the following sufficient condition.

(8.9) THEOREM. Let Σ = (F, G, H, J) over ℓ∞(Z) be a linear time-varying system. Suppose that one can find integers N and M and a
real number ε > 0 with the following property: for any integer t₀ there exists some t₁ with t₀ ≤ t₁ < t₀ + M such that

(8.10) det[R_N R_N'](t₁) ≥ ε > 0 ,

where R_N is the N-step reachability matrix

R_N := [G  F(σG)  F(σF)(σ²G)  ...  F(σF)(σ²F)···(σ^{N−2}F)(σ^{N−1}G)] .

Then, Σ is stabilizable.

PROOF. Essentially, condition (8.10) corresponds to the system being ℓ∞(Z)-reachable in N steps, though not at all times. Given any initial state ζ in Rⁿ and any initial time t₀, we construct an open-loop stabilizing control law as follows: we apply zero control (i.e. u(t) = 0) for the times t₀ ≤ t < t₁. Then, we drive the system to the zero state in N steps. This can be done because, by (8.10), Σ is reachable at time t₁. Moreover, the input sequences applied are uniformly (in t₁) bounded in norm. Having brought the system to the zero state, we apply no further inputs. We can thus stabilize Σ by an open-loop control law. This implies that Σ is asy-controllable, which by Theorem (8.5) implies that Σ is stabilizable. □
CHAPTER NINE
STABLE-PROPER FACTORIZATIONS

Of late, the use of stable-proper factorizations for control system design has become increasingly popular, one advantage being that properness of controllers is automatic. See for example VIDYASAGAR [1978], DESOER et al. [1980], SAEKS and MURRAY [1981], KHARGONEKAR and SONTAG [1982], etc. In this chapter we investigate in detail stable-proper factorizations for time-varying systems.

Throughout this chapter, we shall treat only bounded time-varying systems, i.e. systems defined over the difference subring ℓ∞(Z). This is a reasonable physical constraint, as discussed at the end of the previous chapter. The results we obtain in this chapter extend to some classes of time-varying systems defined over a difference subring B ⊆ ℓ∞(Z), such as periodic systems.

Define the set RP to be

RP := {φ in ℓ∞(Z)[[z⁻¹]] : φ = α(zI − β)⁻¹γ + δ} ,

where α, β, γ, and δ are matrices of compatible dimensions over ℓ∞(Z). Here, RP is a mnemonic for rational (or realizable) and proper. It is easy to verify that RP forms a ring with the usual
skew-multiplication and addition defined in ℓ∞(Z)[[z⁻¹]]. Also, define the subring RP_s by

RP_s := {α(zI − β)⁻¹γ + δ in RP : (zI − β)⁻¹ is stable} .

We shall call RP_s the ring of stable, proper, rational functions (the subscript s denotes stable). We shall abandon rigorous nomenclature and loosely refer to elements of RP_s, or matrices over RP_s, as being stable-proper.

Let D(z) be an n×n matrix over RP. We shall say that D(z) is bicausal if and only if D has an inverse D⁻¹(z) also over RP. We have the following simple lemma.

(9.1) LEMMA. Let D(z) = M₁(zI − M₂)⁻¹M₃ + M₄ be an n×n matrix over RP. Then, D is bicausal if and only if M₄ is invertible over ℓ∞(Z), and, in this event, D⁻¹(z) is given by the formula

(9.2) D⁻¹(z) = M₄⁻¹ − M₄⁻¹M₁(zI − M₂ + M₃M₄⁻¹M₁)⁻¹M₃M₄⁻¹ .

PROOF. Suppose M₄ is invertible over ℓ∞(Z). Then, it can be verified by direct multiplication that D⁻¹(z) is given by (9.2). To prove the converse, suppose D(z) is invertible over RP. Let

D⁻¹(z) = N₀ + N₁z⁻¹ + N₂z⁻² + ...
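In the time-invariant case (constant M₁, ..., M₄ and ordinary matrix multiplication), formula (9.2) reduces to the familiar state-space inversion identity and can be spot-checked numerically at a point z. The matrices below are arbitrary test data, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M1, M2, M3 = (rng.standard_normal((n, n)) for _ in range(3))
M4 = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # invertible constant term
I = np.eye(n)
z = 20.0 + 5.0j  # evaluation point, chosen far from the spectra involved

D = M1 @ np.linalg.inv(z * I - M2) @ M3 + M4  # D(z) as in Lemma (9.1)
M4i = np.linalg.inv(M4)
# formula (9.2), time-invariant specialization
Dinv = M4i - M4i @ M1 @ np.linalg.inv(z * I - M2 + M3 @ M4i @ M1) @ M3 @ M4i

assert np.allclose(D @ Dinv, I) and np.allclose(Dinv @ D, I)
```

The identity holds exactly as a rational-function identity, so the check succeeds at any z away from the poles of the two resolvents.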
Notice that DD⁻¹ = I. Thus,

(M₄ + terms in z⁻¹, z⁻², ...)(N₀ + N₁z⁻¹ + ...) = I .

Equating coefficients of z⁰, we obtain M₄N₀ = I. Notice that N₀ is bounded (i.e. over ℓ∞(Z)). This proves our claim. □

The previous lemma is merely a generalization of the familiar time-invariant result that a proper power series D in R[[z⁻¹]] is invertible with proper inverse if and only if the constant coefficient of D is invertible.

Let f be an input/output map and let Ŵ_f(z) be its associated transfer-function matrix. Then, Ŵ_f(z) is said to admit a right-Bezout stable-proper factorization if and only if there exist stable-proper matrices (i.e., over RP_s) N, D, X, and Y, with D bicausal, such that

(9.3) Ŵ_f(z) = ND⁻¹ , XN + YD = I .

One can also similarly define a left-Bezout stable-proper factorization. Such factorizations have been found to be extremely useful in tackling many control-theoretic problems (for example, optimal controller design; see ZAMES and FRANCIS [1983]). As with the polynomial factorization theory of Chapter Five, we begin by characterizing input/output maps whose transfer-functions admit left-
and/or right-Bezout stable-proper factorizations. We have the following central result.

(9.4) THEOREM. Let f be an input/output map and let Ŵ_f(z) be its associated p×m transfer-function matrix over ℓ∞(Z)[[z⁻¹]]. Then, the following are equivalent:

(a) Ŵ_f(z) admits a right-Bezout stable-proper factorization.

(b) Ŵ_f(z) admits a left-Bezout stable-proper factorization.

(c) f admits a stabilizable and detectable realization.

PROOF. (c) ⟹ (a). Let Σ = (F, G, H, J) be a stabilizable and detectable realization of f. Consequently, by definition, there exist matrices L and K over ℓ∞(Z) such that (zI − F + GL)⁻¹ and (zI − F + KH)⁻¹ are stable matrix power series. Define the stable-proper (i.e., over RP_s) matrices N, D, X, and Y by

N = (H − JL)(zI − F + GL)⁻¹G + J ,
D = I − L(zI − F + GL)⁻¹G ,
(9.5)
X = L(zI − F + KH)⁻¹K ,
Y = I + L(zI − F + KH)⁻¹(G − KJ) .
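The formulas (9.5) can be spot-checked numerically in the scalar time-invariant case. The plant and gains below are illustrative choices (not from the text): F = 2 is unstable, and L = K = 1.5 place F − GL = F − KH = 0.5 inside the unit disc:

```python
# Scalar time-invariant sketch of (9.5): F = 2, G = H = 1, J = 0,
# with stabilizing gains L = K = 1.5 (hypothetical example data).
F, G, H, J, L, K = 2.0, 1.0, 1.0, 0.0, 1.5, 1.5

def factors(z):
    a = 1.0 / (z - (F - G * L))      # (zI - F + GL)^{-1}
    b = 1.0 / (z - (F - K * H))      # (zI - F + KH)^{-1}
    N = (H - J * L) * a * G + J
    D = 1.0 - L * a * G
    X = L * b * K
    Y = 1.0 + L * b * (G - K * J)
    return N, D, X, Y

for z in (1.3 + 0.4j, -2.0 + 1.0j, 3.7):
    N, D, X, Y = factors(z)
    W = H * (1.0 / (z - F)) * G + J          # plant transfer function 1/(z-2)
    assert abs(N / D - W) < 1e-12            # W = N D^{-1}
    assert abs(X * N + Y * D - 1.0) < 1e-12  # Bezout identity
```

Here N = 1/(z−0.5), D = (z−2)/(z−0.5), X = 2.25/(z−0.5), and Y = (z+1)/(z−0.5), so all four factors have their pole at the stable location 0.5, as the construction guarantees.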
Notice by Lemma (9.1) that D is bicausal. It can be mechanically verified that, with these definitions,

Ŵ_f(z) := H(zI − F)⁻¹G + J = ND⁻¹ , XN + YD = I .

Thus, Ŵ_f(z) admits a right-Bezout stable-proper factorization.

(a) ⟹ (c). Now suppose that Ŵ_f(z) admits a right-Bezout stable-proper factorization, i.e. there exist stable-proper matrices N, D, X, and Y, with D bicausal, such that

Ŵ_f(z) = ND⁻¹ , XN + YD = I .

From the definition of a stable-proper function (i.e., of the ring RP_s), it follows that we can write

N = N₁(zI − N₂)⁻¹N₃ + N₄ ,
D = D₁(zI − D₂)⁻¹D₃ + D₄ ,

where N₁, ..., N₄, D₁, ..., D₄ are matrices of appropriate sizes over ℓ∞(Z), with (zI − N₂)⁻¹ and (zI − D₂)⁻¹ stable matrix power series. Since D is bicausal, it follows from Lemma (9.1) that, without loss of generality, D₄ = I. Define the linear time-varying system Σ = (F, G, H, J) (over ℓ∞(Z)) by
(9.6)
F = [ N₂   −N₃D₁   ]        G = [ N₃ ]
    [ 0    D₂−D₃D₁ ]            [ D₃ ]

H = [ N₁  −N₄D₁ ] ,    J = N₄ .

The system Σ is stabilizable because, with L = [0  −D₁] (which is over ℓ∞(Z)),

(zI − F + GL)⁻¹ = [ (zI − N₂)⁻¹       0      ]
                  [      0       (zI − D₂)⁻¹ ] ,

which is stable.

We now show that Σ, defined by (9.6), is detectable. Define the stable-proper matrix Q by

Q = [ (zI − N₂)⁻¹N₃ ]
    [ (zI − D₂)⁻¹D₃ ] .

It is easy to verify that (zI − F)Q − GD = 0 and HQ + JD = N. Combining these equations with XN + YD = I, we can write
(9.7)
[ zI−F    −G  ] [ (zI−F+GL)⁻¹    Q ]     [ I  0 ]
[  XH    Y+XJ ] [ −L(zI−F+GL)⁻¹  D ]  =  [ V  I ] ,

where V is some stable-proper matrix whose exact formula is not critical to our needs. Define the matrix

φ₂ := [ zI−F    −G  ]
      [  XH    Y+XJ ] .

It is clear from (9.7) that φ₂ has the stable-proper right inverse

[ (zI−F+GL)⁻¹    Q ] [  I  0 ]
[ −L(zI−F+GL)⁻¹  D ] [ −V  I ] ,

so that φ₂ is both left- and right-invertible. Also, (Y + XJ) must be of the form

(Y + XJ) = I + terms in z⁻¹, z⁻², ...
since XN + YD = I. Thus, the leading (z⁰) coefficient of φ₂ is of the form

[ I  0 ]
[ *  I ] .

Consequently,
(9.9) REMARK. In proving the above result, we have made critical use of Theorem (8.5), the fundamental result of the previous chapter. Recall that Theorem (8.5) states that asy-controllability is equivalent to stabilizability; the rather difficult proof of this result is in Appendix C. There appears to be no way to circumvent Theorem (8.5) in studying the existence of stable-proper factorizations. We would further like to remark that equations (9.5) enable one to compute stable-proper factorizations once the stabilizing feedback matrices L and K are determined. □

Theorem (9.4) essentially states that a time-varying system admits stable-proper factorizations if and only if it admits a stabilizable and detectable realization. This characterization is nontrivial because, in glaring contrast with time-invariant systems, not all time-varying systems admit stabilizable realizations (and therefore, by Theorem (9.4), stable-proper factorizations). We illustrate this with the following example.

(9.10) EXAMPLE. (Also see Example (5.9).) Consider the linear time-varying system Σ = (F, G, H) over ℓ∞(Z) where F(k) = H(k) = 1 for all k in Z and G = Δ, the unit pulse concentrated at the origin. Let f_Σ be the input/output map associated with Σ. Suppose f_Σ admits a stabilizable realization Σ̄ = (F̄, Ḡ, H̄). This would imply that for any initial state ζ at time t = 1, there exists an open-loop control law that drives Σ̄ from x(1) = ζ to the zero final state (and
therefore to zero final output) asymptotically. However, from the unit-pulse response function of f_Σ, it is evident that application of inputs after t = 1 has no effect on the output. Thus, it must be that

(9.11) lim_{k→∞} ‖ H̄(k)F̄(k−1)F̄(k−2) ... F̄(1) ‖ = 0 .

However,

Ŵ_{f_Σ}(z) := H(zI − F)⁻¹G = z⁻¹Δ + z⁻²Δ + ... = H̄(zI − F̄)⁻¹Ḡ .

Consequently, for all integers k > 1,

H̄(k)F̄(k−1) ... F̄(1)Ḡ(0) = 1 .

This, together with the fact that Ḡ is over ℓ∞(Z), renders (9.11) impossible. Therefore, f_Σ does not admit a stabilizable realization, which implies (by Theorem (9.4)) that Ŵ_{f_Σ}(z) does not admit a stable-proper factorization. □
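A two-line simulation illustrates the obstruction in Example (9.10): with G the unit pulse at the origin, only the input applied at time 0 ever enters the state, and once it has entered, its effect on the output persists forever. The time horizon below is an arbitrary choice:

```python
def output(u, T):
    """Simulate x(t+1) = F(t)x(t) + G(t)u(t), y(t) = H(t)x(t)
    with F = H = 1 and G(t) = 1 exactly when t = 0 (the unit pulse)."""
    x, ys = 0.0, []
    for t in range(T):
        ys.append(x)                       # y(t) = x(t)
        x = x + (u(t) if t == 0 else 0.0)  # only u(0) enters the state
    return ys

y1 = output(lambda t: 1.0, 10)                        # u(0) = 1, then ones
y2 = output(lambda t: 1.0 if t == 0 else -5.0, 10)    # same u(0), wild after
assert y1 == y2                  # inputs after time 0 never affect the output
assert all(y == 1.0 for y in y1[1:])   # the pulse response never decays
```

No later control action can undo the stored value, which is exactly why (9.11) fails for any bounded realization.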
CHAPTER TEN
FEEDBACK CONTROL

Having investigated in detail issues related to the existence of stable-proper factorizations in the previous chapter, we now examine their role in the problem of stabilization. Let Σ = (F, G, H, J) and Σ_c = (F_c, G_c, H_c, J_c) be linear time-varying systems over ℓ∞(Z), called the plant and the controller respectively. Consider the feedback configuration shown below.

[Figure: standard feedback loop, external input v(k), plant output y(k).]

We have the following result.

(10.1) THEOREM. The controller Σ_c internally stabilizes Σ if and only if the following conditions are met:

(a) Both Σ and Σ_c are stabilizable and detectable.

(b) Let
Ŵ_f(z) := H(zI − F)⁻¹G = D⁻¹N , NX + DY = I ,

Ŵ_{f_c}(z) := H_c(zI − F_c)⁻¹G_c = N_cD_c⁻¹ , X_cN_c + Y_cD_c = I ,

be any stable-proper factorizations of the plant and controller transfer-function matrices (these exist by (a)). Then,

S⁻¹ := (DD_c + NN_c)⁻¹

is a stable matrix power series.

SKETCH OF PROOF. The proof of the above theorem is tedious, but otherwise straightforward. Observe first that Σ_c internally stabilizes Σ if and only if

(10.2) φ⁻¹ := [ zI−F    −GH_c  ]⁻¹
              [ −G_cH   zI−F_c ]

is a stable power series (for simplicity we display the case J = J_c = 0). Suppose Σ_c stabilizes Σ. Then, (a) follows directly from Corollary (8.7). To prove (b), one first computes φ⁻¹. Each of the four block submatrices of φ⁻¹ is stable, and these four conditions imply (after a series of manipulations) that S⁻¹ is stable.
Now suppose conditions (a) and (b) are met. To prove that Σ_c internally stabilizes Σ, one mechanically verifies that, with S⁻¹ stable, each of the four (block) submatrices of φ⁻¹ is also stable. □

(10.3) REMARK. Essentially, the above theorem permits us to design controllers (that internally stabilize the plant) working exclusively in the "frequency domain". Internal stability is realization-dependent, and Theorem (10.1) says in effect that as long as we implement stabilizable and detectable realizations of our controllers, internal stability of the closed-loop system is guaranteed if (and only if) S⁻¹ is a stable matrix power series. □

We can now parameterize all controllers that stabilize a given plant Σ. We shall assume that Σ is stabilizable and detectable (and therefore admits stable-proper factorizations); else, by Theorem (10.1), Σ cannot be stabilized. We have the following theorem (the proof closely resembles that of DESOER et al. [1980, Theorem 3] and is therefore omitted).

(10.4) THEOREM. Let Σ = (F, G, H, J) be a stabilizable and detectable plant. Let

Ŵ_f(z) = ND⁻¹ = D₁⁻¹N₁ , XN + YD = I , N₁X₁ + D₁Y₁ = I
be any left- and right-Bezout stable-proper factorizations of Ŵ_f. Then, a controller Σ_c internally stabilizes Σ if and only if the controller transfer-function Ŵ_{f_c}(z) is of the form

(10.5) Ŵ_{f_c}(z) = (MN₁ + Y)⁻¹(MD₁ + X)

for some matrix M over RP_s. □

Note the close resemblance (10.5) bears to the time-invariant case (DESOER et al. [1980, Theorem 3]).

Armed with the powerful tool of stable-proper factorizations, we take a brief look at tracking problems for linear time-varying systems. Consider the feedback system configuration shown below. Here, Σ is the (given) plant, Σ_c is the (to be designed) controller, and Σ_R is the (given) exogenous system that generates the reference signal r(k) via initial conditions x_R(k₀) on Σ_R. The system Σ_R could in general be time-varying. For example, Σ_R could generate a reference signal r(k) which is a sinusoid of time-dependent frequency, or r(k) could be a polynomial in time.
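The reference generator of the following example can be simulated directly: a rotation whose angle grows linearly in k produces a chirp. The sketch below uses one concrete sign convention for the rotation (an assumption; the example's own convention may differ by a sign):

```python
import math

def reference(A, T0, T1, kmax):
    """Generate r(k) = A cos(k*T(k)), T(k) = T0 + k*T1, by running the
    rotation recursion x(k+1) = F_R(k) x(k) with x(0) = [A, 0]'.
    The rotation angle at step k is T0 + T1 + 2*k*T1, so the accumulated
    phase after k steps is k*T0 + k^2*T1 = k*T(k)."""
    x = [A, 0.0]
    r = []
    for k in range(kmax):
        r.append(x[0])                 # r(k) = H_R x(k) = first component
        th = T0 + T1 + 2 * k * T1      # rotation angle at step k
        c, s = math.cos(th), math.sin(th)
        x = [c * x[0] + s * x[1], -s * x[0] + c * x[1]]
    return r

# the state recursion reproduces the chirp A*cos(k*(T0 + k*T1))
A, T0, T1 = 2.0, 0.3, 0.05
for k, rk in enumerate(reference(A, T0, T1, 25)):
    assert abs(rk - A * math.cos(k * (T0 + k * T1))) < 1e-9
```

The point of the example is that even this simple autonomous generator is genuinely time-varying: no constant F_R produces a frequency-modulated sinusoid.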
(10.6) EXAMPLE. Suppose Σ_R = (F_R, G_R, H_R) is described by

F_R(k) = [  cos(T₀+T₁+2kT₁)   sin(T₀+T₁+2kT₁) ]
         [ −sin(T₀+T₁+2kT₁)   cos(T₀+T₁+2kT₁) ] ,

H_R(k) = [1  0] ,   G_R(k) = 0 ,

where T₀ and T₁ are (fixed) real numbers. Then, for the initial condition x(0) = [A  0]', the reference signal r(k) generated by Σ_R is

r(k) = 0 for k < 0 ,   r(k) = A cos(kT(k)) for k ≥ 0 ,

where T(k) = T₀ + kT₁. Thus, in this case the reference signal r(k) is a sinusoid whose frequency is modulated by a ramp. □

We consider the following design problem, called Tracking with Internal Stability (TIS):
Find (if possible) a controller Σ_c such that

(i) Σ_c internally stabilizes Σ.

(ii) For any initial time k₀ and any initial state ζ, the output y(k) of the closed-loop system asymptotically tracks the reference signal r(k) (generated by Σ_R with initial condition x_R(k₀) = ζ), uniformly in k₀.

The TIS problem can be reformulated in the "frequency domain" as follows. Let P(z), C(z), and T(z) denote the plant, controller, and exogenous system transfer functions. Then, find (if possible) C(z) such that

(i) (I + PC)⁻¹ is a stable matrix power series.

(ii) (I + PC)⁻¹T is a stable matrix power series.

Observe that, by Theorem (10.1), Σ_c stabilizes Σ internally only if both Σ and Σ_c are stabilizable and detectable. Hence, by Theorem (9.4), the plant and controller transfer-functions P(z) and C(z) must admit stable-proper factorizations in order to meet requirement (i). We further assume that the exogenous system transfer-function T(z) admits a left-Bezout stable-proper factorization. In the event that Σ_R is time-invariant, such a factorization obviously always exists.
SAEKS and MURRAY [1981] have formulated the TIS problem in a general axiomatic setting based on stable-proper factorizations. The TIS problem for time-varying systems falls into this general framework. The difficulty involved, however, is in showing the existence of stable-proper factorizations for time-varying systems, and this was done in the previous chapter. In terms of these factorizations, following SAEKS and MURRAY [1981, Theorem 2], we give a partial solution to the TIS problem.

Let us consider left- and right-Bezout stable-proper factorizations of the plant and exogenous system transfer-functions as

P(z) = ND⁻¹ = D₁⁻¹N₁ ,
(10.7) XN + YD = I , N₁X₁ + D₁Y₁ = I ,
T(z) = W⁻¹V , WU₁ + VU₂ = I .

Then, we have the following result (which follows directly from SAEKS and MURRAY [1981, Theorem 2]).

(10.8) THEOREM. The problem TIS admits a solution if and only if there exist stable-proper matrices M₁ and M₂ such that

(10.9) NM₁D₁ + M₂W = NX − I .

In this event, the set of all controllers that solve the TIS problem can be parameterized as
C(z) = (M₁N₁ + Y)⁻¹(M₁D₁ + X) ,

where M₁ is any stable-proper solution of (10.9). □

It is desirable to have nice necessary and sufficient conditions for the solvability of the linear skew equation (10.9), and, in the event that (10.9) is solvable, one would desire constructive procedures to compute the solution. This open problem appears to be quite formidable. We would also like to remark in closing that, as is well known, the problem of disturbance rejection can be reformulated as a TIS problem and handled similarly.
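The parameterization (10.5) can be sanity-checked numerically in the scalar time-invariant case. For the hypothetical plant P(z) = 1/(z−2) one may take the stable-proper factors N = 1/(z−0.5), D = (z−2)/(z−0.5), X = 2.25/(z−0.5), Y = (z+1)/(z−0.5) (an illustrative choice satisfying P = ND⁻¹ and XN + YD = 1; in the scalar case left and right factorizations coincide). Choosing M = 0 in (10.5) gives the controller C = Y⁻¹X = 2.25/(z+1), and the closed loop is indeed internally stable:

```python
import numpy as np

num_P, den_P = [1.0], [1.0, -2.0]   # P(z) = 1/(z-2), unstable plant
num_C, den_C = [2.25], [1.0, 1.0]   # C(z) = 2.25/(z+1), from (10.5) with M = 0

# closed-loop characteristic polynomial: den_P*den_C + num_P*num_C
char_poly = np.polyadd(np.polymul(den_P, den_C), np.polymul(num_P, num_C))
poles = np.roots(char_poly)
assert np.all(np.abs(poles) < 1.0), poles  # all poles inside the unit disc
# here char_poly = z^2 - z + 0.25 = (z - 0.5)^2, a double pole at 0.5
```

Varying M over stable-proper functions sweeps out all stabilizing controllers; M = 0 recovers the observer-based one.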
CHAPTER ELEVEN
CONCLUDING REMARKS

In this dissertation we have developed a "transfer-function" type theory for linear time-varying discrete-time systems. Using this framework, in the first part of the dissertation we have been able to generalize much of the existing polynomial model theory. Specifically, we have treated polynomial realization theory (in the sense of FUHRMANN), polynomial factorization theory, and applications to feedback control. In the second half of the dissertation, we have treated the problems of stabilization and of the existence of stable-proper factorizations, and taken a cursory look at the tracking problem for time-varying systems. One of our deepest results is the equivalence of dynamic and memoryless state feedback as far as the problem of stabilization is concerned.

Many open problems remain in the area of time-varying systems. We outline some of these below, and expect that the transfer-function theory developed in this dissertation will prove useful in solving some of these problems.

(a) Regulation and Tracking. We have shown in Chapter 10 that the problem of Tracking with Internal Stability reduces to the solution of a linear matrix equation (10.9) over the
skew-ring RP_s. It would be very desirable to give explicit techniques for the solution of such equations, perhaps for a more restricted class of time-varying systems. It would also be interesting to extract a form of the Internal Model Principle (WONHAM [1979]) in this framework.

(b) Many of the results of this dissertation are not concrete enough to be called "completely computable". This is because we treat completely general time-varying systems. To use the theory to design and implement controllers, one has to obtain much more computable results; perhaps an extended study of particular classes of time-varying systems (such as periodic or slowly-varying systems) may yield such results. Specifically, we would like to solve the sensitivity minimization problem of ZAMES and FRANCIS [1983], and any of the numerous problems in robust multivariable control system design (see DOYLE and STEIN [1981], for example), for these particular classes of time-varying systems.

(c) ANDERSON and MOORE [1981b] have pointed out that for time-invariant large-scale systems it is possible to eliminate "fixed modes" by using decentralized time-varying feedback. It thus appears that the use of time-varying controllers for time-invariant plants may be advantageous. A detailed study of this problem would prove quite
interesting. For example: can one improve stability margins by the use of time-varying feedback? What remains invariant under all types of feedback?

(d) As an extended project, one might consider continuous-time generalizations of the transfer-function theory developed in this dissertation. This problem is quite nontrivial, as much of the dissertation inextricably uses the "discrete-time nature" of the problem. For instance, the action of z is a right time-shift of the coefficients for discrete-time systems, but is a derivation for continuous-time systems. The interested reader may consult KAMEN [1976] for related results on continuous-time time-varying systems.
APPENDIX A
PROOF OF PROPOSITION (4.1)

Before we establish Proposition (4.1), we shall require several intermediate lemmas. Let Q(z) = Σ_{i=0}^{n} z^i Q_i be an r×r invertible polynomial matrix over A[z], and let its inverse over A((z⁻¹)) be Q⁻¹(z) = Σ_{j=−N}^{∞} z^{−j} Q̂_j ; we set Q̂_j := 0 for j < −N. Define a right A-linear projection map π_Q (as before) by

π_Q : A^r[z] → A^r[z] : x ↦ Qπ{Q⁻¹x} .

Also, define a left A-linear projection map λ_Q by

λ_Q : A^r[z] → A^r[z] : y ↦ π{yQ⁻¹}Q .

Here, elements y in A^r[z] are to be thought of as row vectors. Notice that π_Q(x) and λ_Q(y) are polynomial vectors of degree less than n. We can therefore write

(A.1) π_Q(Iz^k) = Σ_{j=0}^{n−1} z^j φ_{j,k} ,
λ_Q(Iz^k) = Σ_{j=0}^{n−1} ψ_{j,k} z^j .

Notice that

π_Q(Iz^k) = Qπ{Q⁻¹z^k} = [Σ_{i=0}^{n} z^i Q_i] · π{Σ_{j=−N}^{∞} z^{k−j}(σ^{−k}Q̂_j)} = Σ_{j=0}^{n−1} z^j φ_{j,k} .

Equating powers of z, we get after some computation

(A.2) φ_{j,k} = Σ_{t=j+1}^{n} (σ^{t−j}Q_t)(σ^{−k}Q̂_{t+k−j}) .

Similarly, we can obtain

(A.3) ψ_{j,k} = Σ_{t=j+1}^{n} (σ^{j−t}Q̂_{t+k−j})(σ^{j}Q_t) .

Now define the nr×nr block matrices Q̄, B, M, and N over A, with Q̄ and B assembled from (shifts of) the coefficients Q_t and Q̂_t respectively, and

(A.4) M_{i,j} := φ_{i−1,j−1} , N_{i,j} := ψ_{i−1,j−1} .
(A.5) LEMMA. With the above definitions for Q̄, B, M, and N,

M = Q̄B , N = σ(BQ̄) .

PROOF. We verify the first equation by direct computation:

(Q̄B)_{i,j} = Σ_t Q̄_{i,t}B_{t,j} = φ_{i−1,j−1} = M_{i,j} .

Thus, M = Q̄B. The second equation can also be easily verified. □

Notice now that Q(Q⁻¹z^k − π{Q⁻¹z^k}) = z^k I − π_Q(Iz^k) is a polynomial matrix of degree at most n. Thus, for 0 ≤ k ≤ n−1, we can write

z^k I − π_Q(Iz^k) = Σ_{j=0}^{n−1} z^j α_{j,k} ,
(A.6) z^k I − λ_Q(Iz^k) = Σ_{j=0}^{n−1} β_{j,k} z^j .
A direct computation for α and β (in a manner similar to obtaining (A.2), (A.3)) yields

(A.7) α_{j,k} = Σ_{t=1}^{n} σ^{−t}(Q̂_{t+j−n} Q_{t+k−n}) , β_{j,k} = Σ_{t=1}^{n} (Q̂_{t+k−n})(Q_{t+j−n}) .

Define the nr×nr block matrices X, Y, U, and V over A by

(A.8) X_{i,j} := Q_{i+j−n−1} , Y_{i,j} := Q̂_{i+j−n−1} , U_{i,j} := α_{i−1,j−1} , V_{i,j} := β_{i−1,j−1}

(here Q_s := 0 for s outside {0, 1, ..., n}).

(A.9) LEMMA. With the above definitions,

U = XY , σ^{n−1}(V) = YX , M + U = I , N + V = I , M² = M .
PROOF. The first two equations can easily be verified by computation. We derive the third equation. Notice that

π_Q(z^k I) + Q(Q⁻¹z^k − π{Q⁻¹z^k}) = z^k I ,

i.e.,

Σ_{j=0}^{n−1} z^j (φ_{j,k} + α_{j,k}) = z^k I .

Thus,

φ_{j,k} + α_{j,k} = I if j = k, and 0 otherwise.

It now follows from the definitions of M and U that M + U = I. In an identical fashion, we can obtain N + V = I.

We can now prove that M² = M. Notice that the right A-linear map π_Q is a projection. Consequently,
π_Q(π_Q(Iz^k)) = Σ_{j=0}^{n−1} Σ_{t=0}^{n−1} z^j φ_{j,t} φ_{t,k} = π_Q(Iz^k) = Σ_{j=0}^{n−1} z^j φ_{j,k} .

Equating powers of z, we get

φ_{j,k} = Σ_{t=0}^{n−1} φ_{j,t} φ_{t,k} , i.e., M_{j+1,k+1} = Σ_s M_{j+1,s+1} M_{s+1,k+1} ,

i.e., M² = M. This completes the proof. □

We shall also require the

(A.10) PROPOSITION. X_Q is a finitely generated right A-module. Further, let M be the nr×nr matrix described above. Then X_Q is isomorphic to Image(M) (as a right A-module).

PROOF. We first prove that X_Q is finitely generated. Since Q is an r×r invertible polynomial matrix, it follows from Proposition (3.2) that there exists an r×r polynomial matrix T such that QT =: M̄ is monic and such that deg(Q) = deg(M̄) = n. For any x in X_Q, Q⁻¹x = π(Q⁻¹x). Therefore deg(T⁻¹Q⁻¹x) ≤ deg(T⁻¹) − 1. Notice that
T⁻¹ is M̄⁻¹Q. Thus, deg(T⁻¹) = deg(Q) − deg(M̄), since M̄ is monic. However, deg(Q) = deg(M̄); therefore deg(T⁻¹) = 0, i.e., T⁻¹ is proper. Consequently,

−n + deg(x) = deg(M̄⁻¹x) = deg(T⁻¹Q⁻¹x) ≤ deg(T⁻¹) − 1 = −1 .

Thus, for any x in X_Q, deg(x) ≤ n−1. Hence X_Q is finitely generated, and the elements

b_{i,j} := π_Q(z^i e_j) , i = 0, 1, ..., n−1 , j = 1, 2, ..., r ,

where e_j is the j-th column of the r×r identity matrix, generate X_Q.

We now show that X_Q ≅ Image(M). Consider the map

θ : X_Q → Image(M) : Σ_{i=0}^{n−1} Σ_{j=1}^{r} b_{i,j} a_{i,j} ↦ Ma ,

where a = [a_{0,1} ... a_{0,r} a_{1,1} ... a_{1,r} ... a_{n−1,1} ... a_{n−1,r}]'. It is straightforward to verify that θ is well-defined and right A-linear. Also, since the b_{i,j} generate X_Q, θ is onto.
Notice however that

x = Σ_{i=0}^{n−1} Σ_{j=1}^{r} b_{i,j} a_{i,j} = Σ_{i=0}^{n−1} π_Q(z^i) a_i , where a_i := [a_{i,1} ... a_{i,r}]' ,

and that Ma = 0 implies x = 0. Thus, θ is one-to-one. Consequently, X_Q ≅ Image(M). This completes the proof. □

The following lemma is a simple (discrete-time) version of a theorem of Dolezal [1964].

(A.11) LEMMA. Let M be any p×q matrix over A. Then, the right A-module Image(M) is free if and only if rank M(t) = ρ = constant for all t in Z.

PROOF. The proof, being straightforward, is omitted. □

We are now in a position to prove the

(4.1) PROPOSITION. X_Q is a finitely generated, free right A-module.

PROOF. We have already shown that X_Q is finitely generated (Proposition (A.10)). It follows from Proposition (A.10) and Lemma (A.11) that in order to prove X_Q free, we need only show that rank M(t) = ρ = constant for all t in Z. We proceed to do this. Notice that, from Lemma (A.5),

det[λI − M(t)] = det[λI − Q̄(t)B(t)] .
From Lemma (A.9), we see that M = I − U = I − XY. Consequently,

det[λI − M(t)] = det[(λ−1)I + X(t)Y(t)] = det[(λ−1)I + Y(t)X(t)] .

Again from Lemma (A.9), we have YX − I = σ^{n−1}(V − I) = −σ^{n−1}(N). Therefore,

det[λI − M(t)] = det[λI − N(t−n+1)] .

It now follows from Lemma (A.5) that

det[λI − M(t)] = det[λI − B(t−n)Q̄(t−n)] = det[λI − Q̄(t−n)B(t−n)] = det[λI − M(t−n)] .

Therefore, the eigenvalues of M(t) are identical to the eigenvalues of M(t−n). Since M(t) is idempotent, i.e., M²(t) = M(t) (see Lemma (A.9)), it follows from standard linear algebra that M(t) is diagonalizable. Thus, rank M(t) equals the number of nonzero eigenvalues of M(t), which in turn implies that

(A.12) rank M(t) = rank M(t−n) .
We could also write

Q(z) = Σ_{i=0}^{n+1} z^i Q_i =: Q*(z) , with Q_{n+1} := 0 .

Corresponding to Q*, we obtain an (n+1)r × (n+1)r matrix M* in a manner analogous to the way M was obtained from Q. Notice that by Proposition (A.10), X_Q ≅ Image(M) ≅ Image(M*). Thus, for all t in Z, rank M(t) = rank M*(t). By an argument identical to the one presented above, we can conclude that for all t in Z,

rank M*(t) = rank M*(t−n−1) ,

i.e., rank M(t) = rank M(t−n−1). This, together with (A.12), implies that rank M(t) = ρ = constant for all t in Z (the rank is invariant under time-shifts of both n and n+1 steps, hence under shifts of one step). □
APPENDIX B
PROOFS OF THEOREMS (5.2) AND (5.11)

We would like first to emphasize that the proofs of Theorems (5.2) and (5.11) given in the following pages are constructive. A systematic procedure for obtaining the polynomial factorizations of these theorems is given in Chapter 5, and essentially follows from this appendix. Before we prove these theorems, we need to establish a few intermediate results, some of which are themselves of interest. We proceed to do this.

Let (F, G) be a pair of matrices over A that are reachable in N steps at all times. It follows from Lemma (2.4) that the matrix

R_N := [G  F(σG)  F(σF)(σ²G)  ...  F(σF)(σ²F)···(σ^{N−2}F)(σ^{N−1}G)]

is right-invertible over A. This implies that there exist matrices B and C such that

[F  G] [ B ] = I ,
       [ C ]
i.e., the row [F  G] is unimodular (over A). It is now a classical fact that there exist matrices X and Y such that

M₀ = [ F  G ]
     [ X  Y ]

is unimodular (see NEWMAN [1973, application (d), p. 38]). We now have the following key result.

(B.1) PROPOSITION. Let (F, G) be a pair of matrices over A that are reachable in N steps at all times, for some integer N. Let

M₀ = [ F  G ]
     [ X  Y ]

be any unimodular completion of the row [F  G] over A, and let N₀ := M₀⁻¹ be partitioned as

N₀ = [ A  B ]
     [ C  D ] ,

where A is an n×n matrix. Then, the pair (A', B') is observable in K steps, for some integer K.
PROOF. Since the pair (F, G) is reachable in N steps at all times, it follows from Proposition (2.8) that there exists an m×n matrix L over A such that F̃ := F + GL is σ-nilpotent, i.e., for some integer k,

F̃(σF̃) ... (σ^k F̃) = 0 .

Notice that N₀M₀ = I; here the precise formulae for C, D, X, and Y are not critical to our needs. One component of this equality is

(B.2) AF + BX = I .

Moreover, replacing M₀ by the unimodular matrix M₀[I 0; L I] (a unimodular completion of the row [F+GL  G]) leaves the blocks A and B of N₀ unchanged, so we may assume, without loss of generality, that F itself is σ-nilpotent. We now see that
F = (σ⁻¹I)F = (σ⁻¹A)(σ⁻¹F)F + (σ⁻¹B)(σ⁻¹X)F .

Substituting this into (B.2) yields

(B.3) BX + A(σ⁻¹B)(σ⁻¹X)F + A(σ⁻¹A)(σ⁻¹F)F = I .

From (B.2) it follows that

(σ⁻¹F)F = (σ⁻²A)(σ⁻²F)(σ⁻¹F)F + (σ⁻²B)(σ⁻²X)(σ⁻¹F)F .

Using this in (B.3) gives us

BX + A(σ⁻¹B)(σ⁻¹X)F + A(σ⁻¹A)(σ⁻²B)(σ⁻²X)(σ⁻¹F)F + A(σ⁻¹A)(σ⁻²A)(σ⁻²F)(σ⁻¹F)F = I .

Repeating this procedure k times, we obtain
[B  A(σ⁻¹B)  A(σ⁻¹A)(σ⁻²B)  ...  A(σ⁻¹A)···(σ^{−k+1}A)(σ^{−k}B)] [ S₀ ; S₁ ; ... ; S_k ]
+ A(σ⁻¹A)···(σ^{−k+1}A)(σ^{−k}F)···(σ⁻¹F)F = I ,

where S₀, S₁, ..., S_k are matrices over A whose exact formulae are not critical to our needs. Notice now that (σ^{−k}F)···(σ⁻¹F)F = σ^{−k}[F(σF)···(σ^kF)] = 0, because F is σ-nilpotent. Hence the matrix

[B  A(σ⁻¹B)  A(σ⁻¹A)(σ⁻²B)  ...  A(σ⁻¹A)···(σ^{−k+1}A)(σ^{−k}B)]

is right-invertible over A. It is then apparent (see WEISS [1972] and also Chapter 2) that the pair (A', B') is observable in k steps at all times. □

As an immediate consequence of this proposition, we have the

(B.4) LEMMA. Let (F, G) be a pair of matrices that are reachable in N steps. Let A and B be matrices as defined in Proposition (B.1).
Then, there exists an m×n matrix S such that Â := A + BS is σ⁻¹-nilpotent, i.e., for some integer k,

Â(σ⁻¹Â) ... (σ^{−k}Â) = 0 .

PROOF. Define the matrices A₁ and B₁ over A by

A₁(t) := A(−t) , B₁(t) := B(−t) .

It follows from Proposition (B.1) that the pair (A', B') is observable in N steps, for some integer N. Hence, by duality (time reversal), the pair (A₁, B₁) is reachable in N steps. We now apply Proposition (2.8) and conclude that there exists an m×n matrix S₁ over A such that Â₁ := A₁ + B₁S₁ is σ-nilpotent, i.e., there exists an integer k such that for any time t,

Â₁(t)Â₁(t−1) ... Â₁(t−k) = 0 .

Define a new matrix S over A by S(t) := S₁(−t). Then, for any time t,

Â(t)Â(t+1) ... Â(t+k) = 0 ,

where Â := A + BS. □
(B.5) LEMMA. Let (F, G) be a pair of matrices that are reachable in N steps at all times. Then, there exist polynomial matrices P(z) and Q(z) over A[z], with Q(z) monic, such that

(zI − F)⁻¹G = P(z)Q⁻¹(z) .

PROOF. Since the pair (F, G) is reachable in N steps at all times, it follows that there exist matrices A₀, A₁, ..., A_{N−1} over A such that

Σ_{i=0}^{N−1} M_i A_i + I = 0 ,

where the matrices M_i are defined recursively by M₀ = G, M_{i+1} = F(σM_i). Multiplying the above equation on the right by M_N yields

Σ_{i=0}^{N} M_i B_i = 0 ,

where B_i := A_i M_N for i = 0, 1, ..., N−1, and B_N := I. Define the monic polynomial matrix

Q(z) := Σ_{i=0}^{N} z^i B_i .

It is an easy computation to verify that π{(zI − F)⁻¹GQ} = 0. Consequently, (zI − F)⁻¹GQ =: P is a polynomial matrix, verifying our claim. □
We are now in a position to prove the following result.

(5.11) THEOREM. Let Σ = (F, G, H) be a linear time-varying system that is reachable in N steps at all times. Then, there exist polynomial matrices P and Q over A[z], where Q is invertible, such that

Ŵ_f(z) = H(zI − F)⁻¹G = PQ⁻¹ ,

and the Fuhrmann realization Σ(P, Q, I) is A-isomorphic to Σ = (F, G, H).

PROOF. In order to prove this result, we have to find matrices C, D, Y₁, ..., Y₆ such that the isomorphism conditions of Theorem (4.20) are satisfied. Since the pair (F, G) is reachable in N steps at all times, the unimodular row [F  G], and hence also the row [−F  −G], can be completed to a unimodular (invertible over A) matrix (see the discussion preceding Proposition (B.1)). Let

M₀ = [ −F  −G ]
     [  X   Y ]

be any such completion, and let
N₀ := M₀⁻¹ = [ A  B ]
             [ C  D ] ,

where A is an n×n matrix. From Lemma (B.4) it follows that there exists an m×n matrix S over A such that Â := A + BS is σ⁻¹-nilpotent, i.e., for some integer k,

Â(σ⁻¹Â) ... (σ^{−k+1}Â) = 0 .

Define the matrix

W := [  I    0 ]
     [ (σS)  0 ] .

We claim that the polynomial matrix U := zW + M₀ is unimodular (invertible) over A[z], and that its inverse is

V := N₀ · Σ_{i=0}^{k} (−zWN₀)^i .

We verify this by direct computation. First, notice that

WN₀ = [  I    0 ] [ A  B ]   [   A      B   ]
      [ (σS)  0 ] [ C  D ] = [ (σS)A  (σS)B ] .
Therefore, writing

WN₀ = [  I   ] [ A  B ]
      [ (σS) ]

and repeatedly using the commutation rule za = (σa)z, we obtain

(zWN₀)^{k+1} = z [  I   ] (Âz)^k [ A  B ]
                 [ (σS) ]

(up to the appropriate shifts σ^i applied to the factors). Observe that
(Âz)^k = Â(σ⁻¹Â) ... (σ^{−k+1}Â) z^k = 0 ,

because Â is σ⁻¹-nilpotent. Consequently, (zWN₀)^{k+1} = 0. It then follows that

UV = (zW + M₀) N₀ (Σ_{i=0}^{k} (−zWN₀)^i)
   = (zWN₀ + I)(Σ_{i=0}^{k} (−zWN₀)^i)
   = I − (−zWN₀)^{k+1} = I ,

proving our claim. We now partition the last m columns of V as

[ D̄ ]
[ Q ] ,

where Q is an m×m polynomial matrix over A[z]. Let us define P := HD̄ and Y₃ := z(σS) + X. Then, since U = zW + M₀ has the block form [zI−F  −G; Y₃  Y], the identity UV = I yields in particular

(B.6) U [ D̄ ] = [ 0 ]
        [ Q ]   [ I ] .
Since the pair (F, G) is reachable in N steps at all times, it follows from Lemma (B.5) that there exist polynomial matrices P̄ and Q̄, where Q̄ is monic, such that (zI − F)⁻¹G = P̄Q̄⁻¹. Therefore, since (B.6) gives (zI − F)D̄ = GQ, we can write

[ D̄ ]   [ P̄ ]
[ Q ] = [ Q̄ ] Y₇ ,

where Y₇ is some polynomial matrix (using the identity VU = I, one checks that Y₇ := Q̄⁻¹Q is indeed polynomial). In particular, Q̄Y₇ = Q, which from Proposition (3.2) allows us to conclude that Q is right-invertible. Let ψ in A^{m×m}((z⁻¹)) be any right-inverse of Q, i.e., Qψ = I. To see that Q is also left-invertible, we examine (B.6). Two components of this equation are
(B.7) (zI − F)D̄ = GQ ,
(B.8) Y₃D̄ + Y₄Q = I .

Multiplying (B.7) on the left by Y₃(zI − F)⁻¹, we obtain

Y₃D̄ = Y₃(zI − F)⁻¹GQ .

Substituting this into (B.8), we see that

I = (Y₃(zI − F)⁻¹G + Y₄)Q ,

i.e., Q is also left-invertible. Hence, Q has a unique inverse Q⁻¹ in A^{m×m}((z⁻¹)). We have thus constructed polynomial matrices P and Q, where Q is invertible, such that

P = HD̄ , PQ⁻¹ = H(zI − F)⁻¹G ,
(B.9)
GY₁ + (zI − F)Y₂ = I , Y₃D̄ + Y₄Q = I .

It is now straightforward to check that the isomorphism conditions of Theorem (4.20) are satisfied by choosing C = G, Y₅ = Y₆ = 0, and with Y₁, Y₂, Y₃, Y₄, and D̄ as in (B.9). This completes the proof. □
(B.10) REMARK. If the system Σ = (F, G, H) is assumed to be canonical, instead of only reachable in N steps at all times, the isomorphism conditions of Theorem (4.20) reduce to a much simpler form: the existence of polynomial matrices Y₃, Y₄ such that

(B.11) Y₃P + Y₄Q = I , H(zI − F)⁻¹G = PQ⁻¹ .

Thus, Theorem (5.11) tells us that if Σ = (F, G, H) is a canonical linear time-varying system, there exist polynomial matrices P and Q, where Q is invertible, such that (B.11) is satisfied for some Y₃, Y₄. □

In view of the above remark, we have the

(5.2) THEOREM. Let f be an input/output map and let

Ŵ_f(z) = Σ_{j=1}^{∞} A_j z^{−j}

be the p×m transfer matrix associated with f. Then the following statements are equivalent:

(a) There exist polynomial matrices P, Q, Y₁, and Y₂, where Q is invertible, such that

Ŵ_f(z) = PQ⁻¹ , Y₁P + Y₂Q = I ,

i.e., Ŵ_f(z) admits a left-Bezout polynomial factorization.

(b) There exist polynomial matrices Q₁, R, Y₃, and Y₄, where Q₁ is invertible, such that

Ŵ_f(z) = Q₁⁻¹R , Q₁Y₃ + RY₄ = I ,
130 i.e., W^(z) admits a rightBezout polynomial factorization . ( c ) f admits a canonical realization . SKETCH OF PROOF: (c) (a) is a restatement of Remark (B.IO). (a) * (c). Let i:(P, Q, I) be the Fuhrmann realization associated with polynomial representation W^(z) = PQ"^. From Theorem (4.12) since R I, it follows that Z is reachable in N steps, and from Remark (4.13), it follows that E is observable in N steps (because YjP + Y 2 Q = I). Therefore, E is a canonical realization of f. (a) *Â• (b) follows from duality. I 1
APPENDIX C
PROOF OF THEOREM (8.5)

Before we begin proving Theorem (8.5), we first give a precise system-theoretic interpretation of asy-controllability which can be regarded as a definition in lieu of Definition (8.2). The equivalence of these definitions is demonstrated in KHARGONEKAR and POOLLA [1984b].

(C.1) DEFINITION. Let ε > 0 be any (fixed) real number. A system Σ = (F, G, H) over ℓ∞(Z) is said to be asy-controllable if and only if there exist a real number M and an integer N such that for any initial time t0 in Z and any initial state ξ in R^n with ||ξ|| = 1, there exists an input sequence

u_{t0,ξ}(t0), u_{t0,ξ}(t0+1), ..., u_{t0,ξ}(t0+N−1)

which results in the state trajectory

x_{t0,ξ}(t0) = ξ, x_{t0,ξ}(t0+1), ..., x_{t0,ξ}(t0+N) ,

and such that

||x_{t0,ξ}(t0+N)|| < ε , ||x_{t0,ξ}(t0+k)|| , ||u_{t0,ξ}(t0+k)|| < M .
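Definition (C.1) can be checked numerically for simple examples by solving for minimum-norm inputs over the horizon N. The sketch below uses a hypothetical time-invariant pair (F, G), not from the text, that is stabilizable but not controllable: the unstable mode is steered exactly to zero, while the uncontrollable stable mode decays below ε on its own.

```python
import numpy as np

# Hypothetical time-invariant pair (F, G): the unstable mode (2.0) is
# controllable, the stable mode (0.5) is not.  Every unit initial state
# is driven to norm < eps in N steps, as in Definition (C.1).
F = np.array([[2.0, 0.0],
              [0.0, 0.5]])
G = np.array([[1.0],
              [0.0]])
n, m = G.shape
N, eps = 6, 0.05

# N-step reachability map: x(N) = F^N x0 + [F^{N-1}G ... FG G] u
R = np.hstack([np.linalg.matrix_power(F, N - 1 - k) @ G for k in range(N)])

for i in range(n):
    x0 = np.eye(n)[:, i]
    # minimum-norm input sequence steering x0 toward the origin at time N
    u = -np.linalg.pinv(R) @ (np.linalg.matrix_power(F, N) @ x0)
    xN = np.linalg.matrix_power(F, N) @ x0 + R @ u
    assert np.linalg.norm(xN) < eps
    print(i, np.linalg.norm(xN))
```

Taking M as the largest input and state norm encountered, this is exactly the data that Definition (C.1) asks for.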
The subscripts t0 and ξ on the inputs u and states x signify their dependence on t0 and ξ. We shall require several intermediate lemmas before we are in a position to prove Theorem (8.5). We shall adopt the following notation through the remainder of this appendix: Let F and G be n×n and n×m matrices over ℓ∞(Z) with

sup_t ||F(t)|| , sup_t ||G(t)|| < M .

If for some t in Z an m×n matrix L(t) is defined, then

F̄(t) := F(t) + G(t)L(t) .

Also define, for t1 > t0, the state-transition matrix

Φ(t1, t0) := F̄(t1−1) ... F̄(t0+1) F̄(t0) .

Let N > 0 be a (fixed) integer and let ε > 0 be a (fixed) real number. We first have the following simple result.
(C.2) LEMMA. Let S = {x1, x2, ..., xk} be a given set of vectors in R^n. Then there exists a subset X = {x_{j1}, x_{j2}, ..., x_{jp}} of S such that the following two conditions are satisfied:

(a) The vectors in X are linearly independent.

(b) Given any x_i in S − X, we can write

(C.3) x_i = Σ_{s=1}^p a_{i,s} x_{js} , |a_{i,s}| ≤ 1 .

PROOF. Extend S by {b1, b2, ..., b_{n−p}} to form a basis for R^n. Let B = [b1 b2 ... b_{n−p}] and let S = [x1 x2 ... xk]. Define the full-rank matrix A by A := [B S]. In A, select an n×n submatrix C of largest determinant. Clearly, B is contained in C, and C is invertible. Let C = [B X], and let X = {x_{j1}, x_{j2}, ..., x_{jp}}. The subset X of S satisfies condition (a) above. Given any x_i in S − X, we can write (uniquely) using Cramer's rule

x_i = Σ_k (det C_k / det C) c_k ,
where c_k is the k-th column of C and C_k is the matrix C with its k-th column replaced by x_i (the coefficients of the columns of B vanish, since x_i lies in the span of X). Since C_k is also an n×n submatrix of A and since |det C| is maximal (among the n×n submatrices of A), it follows that |det C_k / det C| ≤ 1, completing the proof. □

Let X = {x1, x2, ..., xr} be an ordered set of independent vectors in R^n. Let X̂ = {x̂1, x̂2, ..., x̂r} be an (as yet unspecified) rearrangement of X. Using the Gram-Schmidt orthogonalization procedure, we can (uniquely) write

x̂1 = b1
x̂2 = α_{2,1} b1 + b2
x̂3 = α_{3,1} b1 + α_{3,2} b2 + b3
    ...
x̂r = α_{r,1} b1 + α_{r,2} b2 + ... + α_{r,r−1} b_{r−1} + br

where {b1, b2, ..., br} is an orthogonal set of vectors. The vectors b_j and the coefficients α_{i,j} are very much a function of the particular rearrangement X̂ chosen. We now select a special rearrangement X̂ of X as follows.

Choose x̂1 in X such that ||b1|| (= ||x̂1||) is a maximum. Then, select x̂2 in X − {x̂1} such that ||b2|| is a maximum. At the i-th
step, we choose x̂i in X − {x̂1, x̂2, ..., x̂_{i−1}} such that ||b_i|| is a maximum. We shall now prove that for this special rearrangement X̂,

(C.4) |α_{i,j}| ≤ 1 , j = 1, 2, ..., i−1 .

For a fixed i,

x̂i = α_{i,1} b1 + ... + α_{i,i−1} b_{i−1} + b_i .

Since x̂1 was chosen such that ||b1|| was a maximum, it follows that

||b1||² ≥ ||x̂i||² = Σ_j α²_{i,j} ||b_j||² ≥ α²_{i,1} ||b1||²

(with α_{i,i} := 1). Therefore, |α_{i,1}| ≤ 1. In general, for j = 1, 2, ..., i−1, x̂j was chosen such that ||b_j|| was a maximum. Hence,

(C.5) ||b_j||² ≥ || x̂i − Σ_{s=1}^{j−1} α_{i,s} b_s ||² = Σ_{s=j}^{i} α²_{i,s} ||b_s||² ≥ α²_{i,j} ||b_j||² .

Therefore, |α_{i,j}| ≤ 1, proving (C.4). Notice also that with j = i−1 in (C.5)
we obtain

||b_{i−1}||² ≥ Σ_{s=i−1}^{i} α²_{i,s} ||b_s||² ≥ α²_{i,i} ||b_i||² = ||b_i||² .

Consequently, it follows that

(C.6) ||b1|| ≥ ||b2|| ≥ ... ≥ ||br|| .

Let X = {x1, x2, ..., xk} be an ordered set of vectors in R^n. Define r(X) := the number of linearly independent vectors in X. The set X will be called well-ordered if and only if the following conditions are satisfied:

a) {x1, x2, ..., xr} are linearly independent,

b) for i = r+1, ..., k, we can write

x_i = Σ_{j=1}^r a_{i,j} x_j , with |a_{i,j}| ≤ 1 ,

c) on applying the Gram-Schmidt orthogonalization procedure to {x1, x2, ..., xr} as

x1 = b1
x2 = α_{2,1} b1 + b2
    ...
xr = α_{r,1} b1 + α_{r,2} b2 + ... + α_{r,r−1} b_{r−1} + br

with {b1, b2, ..., br} orthogonal, we have |α_{i,j}| ≤ 1 and

||b1|| ≥ ||b2|| ≥ ... ≥ ||br|| .

We can summarize the above discussion, along with Lemma (C.2), by the

(C.7) PROPOSITION. Every ordered set X = {x1, x2, ..., xk} of vectors in R^n has a well-ordered rearrangement X̂ = {x̂1, x̂2, ..., x̂k}. □

For a well-ordered set X, define

λ(X) := i such that ||b_i|| ≥ ε/√N > ||b_{i+1}|| .

Clearly, λ ≤ r ≤ k. We are now in a position to prove the key

(C.8) PROPOSITION. Let t be a (fixed) integer, 0 ≤ t < N. Let X_t = {x1(t), x2(t), ..., xk(t)} be a set of vectors in R^n. Suppose that there exist u_i(t), i = 1, 2, ..., k such that
||u_i(t)|| ≤ M ,

(C.9) ||Φ(N, t+1) x_i(t+1)|| < ε , x_i(t+1) := F(t) x_i(t) + G(t) u_i(t) .

Then there exists an m×n matrix L(t) with

(C.10) ||L(t)|| ≤ 2^N M √(nN) / ε

and such that

(C.11) ||Φ(N, t+1) F̄(t) x_i(t)|| ≤ ε f , i = 1, 2, ..., k ,

where f = 1 if λ(X_t) = k and f = n(n²+1) otherwise.

PROOF. Without loss of generality, assume that X_t is well-ordered. Let b_i, α_{i,j} be as in the definition of well-ordered sets (see the discussion preceding Proposition (C.7)). Let λ = λ(X_t). Extend {x1(t), x2(t), ..., x_λ(t), b_{λ+1}(t), ..., b_r(t)} orthogonally by {b_{r+1}(t), ..., b_n(t)} to form a basis for R^n. Define L(t) by

(C.12) L(t) : x_i(t) ↦ u_i(t) , i = 1, 2, ..., λ ,
       L(t) : b_i(t) ↦ 0 , i = λ+1, λ+2, ..., n .

We first prove (C.10). Notice that
|| Σ_{i=1}^n β_i b_i(t) ||² = Σ_{i=1}^n β_i² ||b_i(t)||²

(C.13)   ≥ (ε²/N) Σ_{i=1}^λ β_i² .

We now show inductively that for i = 1, 2, ..., λ,

(C.14) ||L(t) b_i(t)|| ≤ 2^{i−1} M ≤ 2^N M .

For i = 1,

||L(t) b1(t)|| = ||L(t) x1(t)|| = ||u1(t)|| ≤ M

and the assertion is clearly true. Assume that the assertion holds for i = 1, 2, ..., s. We then have

||L(t) b_{s+1}(t)|| ≤ ||L(t) x_{s+1}(t)|| + Σ_{j=1}^s |α_{s+1,j}| ||L(t) b_j(t)|| .

By (C.4), |α_{s+1,j}| ≤ 1, and ||u_{s+1}(t)|| ≤ M by (C.9). The above equation then becomes

||L(t) b_{s+1}(t)|| ≤ M (1 + Σ_{j=1}^s 2^{j−1}) ≤ M 2^s ,
completing the induction. Combining (C.13) and (C.14) we see that

||L(t)||² = sup { || Σ_{i=1}^λ β_i L(t) b_i(t) ||² : || Σ_{i=1}^n β_i b_i(t) || = 1 }
         ≤ sup { ( Σ_{i=1}^λ |β_i| 2^N M )² : Σ_{i=1}^λ β_i² ≤ N/ε² }
         ≤ n N 2^{2N} M² / ε² ,

proving (C.10).

We now prove (C.11). Notice first that for i = 1, 2, ..., λ,

F̄(t) x_i(t) = F(t) x_i(t) + G(t) L(t) x_i(t) = F(t) x_i(t) + G(t) u_i(t) = x_i(t+1) .
Therefore from (C.9),

(C.15) ||Φ(N, t+1) F̄(t) x_i(t)|| < ε , for i = 1, 2, ..., λ .

Also, for i = λ+1, ..., r, we can write

x_i(t) = Σ_{s=1}^{λ} α_{i,s} b_s(t) + c(t) ,

where ||c(t)|| ≤ ε √(n/N), since each of the at most n omitted terms has |α_{i,s}| ≤ 1 and ||b_s(t)|| < ε/√N. Using (C.3) we can rewrite the above equation as

x_i(t) = γ_{i,1} x1(t) + γ_{i,2} x2(t) + ... + γ_{i,λ} x_λ(t) + c(t) ,

where |γ_{i,j}| ≤ n. Consequently, for i = λ+1, ..., r,

(C.16) ||Φ(N, t+1) F̄(t) x_i(t)|| ≤ λnε + ε = (λn + 1) ε .

For i = r+1, r+2, ..., k, it follows from the definition of well-ordered sets that we can write

x_i(t) = Σ_{j=1}^{r} a_{i,j} x_j(t) , |a_{i,j}| ≤ 1 .

Therefore, for i = r+1, ..., k,

(C.17) ||Φ(N, t+1) F̄(t) x_i(t)|| ≤ r (λn + 1) ε ≤ n (λn + 1) ε .
Summarizing (C.15)-(C.17), we see that for i = 1, 2, ..., k,

||Φ(N, t+1) F̄(t) x_i(t)|| ≤ ε if λ = k , and ≤ n(λn+1) ε ≤ n(n²+1) ε otherwise ,

which is precisely (C.11). □

(C.18) PROPOSITION. Suppose that there exist u_i(t), i = 1, 2, ..., n, t = 0, 1, ..., N−1 with ||u_i(t)|| ≤ M and such that ||x_i(N)|| < ε, where x_i(N) is defined recursively by

x_i(0) = e_i (the i-th unit vector in R^n) , x_i(t+1) = F(t) x_i(t) + G(t) u_i(t) .

Then there exist m×n matrices L(t), t = 0, 1, ..., N−1 and a real number A with ||L(t)|| ≤ A and such that

||F̄(N−1) F̄(N−2) ... F̄(1) F̄(0)|| ≤ n (2n³)^n ε .
PROOF. We first set up some notation. For a set of vectors X = {x1, x2, ..., xk} in R^n, let X̂ = {x̂1, x̂2, ..., x̂k} be any well-ordered rearrangement of X. Recall that

λ = λ(X) := i such that ||b_i|| ≥ ε/√N > ||b_{i+1}|| ,

and that k = k(X) := the number of vectors in X. Define a set of integers

I_X := { j : x_j ∈ {x̂1, x̂2, ..., x̂_λ} } .

For t = 0, 1, 2, ..., N−1 define X_t and I_t recursively by

X_0 := {x1(0), x2(0), ..., x_n(0)} , I_t := I_{X_t} , X_{t+1} := { x_i(t+1) : i ∈ I_t } .

From the above definitions it is clear that

(C.19) n = k_0 ≥ λ_0 = k_1 ≥ λ_1 = k_2 ≥ ... ≥ λ_t = k_{t+1} ≥ λ_{t+1} ≥ ... ≥ λ_{N−1} ≥ 0 ,

where k_t := k(X_t) and λ_t := λ(X_t).
Define f_t, t = 0, 1, ..., N−1 by

f_t := 1 if λ_t = k_t , f_t := n(n²+1) otherwise .

It is clear from (C.19) that (and this is the key step)

(C.20) f_{N−1} f_{N−2} ... f_0 ≤ (n(n²+1))^n ≤ (2n³)^n ,

since λ_t < k_t implies k_{t+1} < k_t, which can occur for at most n values of t.

We now apply Proposition (C.8) to X_{N−1} to conclude that there exists an m×n matrix L(N−1) with ||L(N−1)|| ≤ CM, where C := 2^N √(nN) / ε, and such that

||F̄(N−1) x_i(N−1)|| ≤ ε f_{N−1} , x_i(N−1) ∈ X_{N−1} .

Notice that

||F̄(N−1)|| ≤ ||F(N−1)|| + ||G(N−1) L(N−1)|| ≤ M + M(CM) ≤ (CM)² .

We can again apply Proposition (C.8) to X_{N−2} (with M replaced by (CM)² and ε replaced by ε f_{N−1}) to conclude that there exists an m×n matrix L(N−2) with
||L(N−2)|| ≤ (CM)²

and such that

||F̄(N−1) F̄(N−2) x_i(N−2)|| ≤ ε f_{N−1} f_{N−2} , x_i(N−2) ∈ X_{N−2} .

Repeating this argument N times, we conclude that there exist matrices L(0), L(1), ..., L(N−1) with

||L(t)|| ≤ (CM)^N =: A

and such that for i = 1, 2, ..., n,

||F̄(N−1) F̄(N−2) ... F̄(0) e_i|| ≤ ε f_{N−1} f_{N−2} ... f_0 .

The above equation, along with (C.20), gives us

||F̄(N−1) F̄(N−2) ... F̄(0)|| ≤ n (2n³)^n ε ,

completing the proof. □

We are finally in a position to prove Theorem (8.5), which is restated below for convenience.
(8.5) THEOREM. Let Σ = (F, G, H) be a linear time-varying system over ℓ∞(Z). Then Σ is asy-controllable if and only if Σ is stabilizable.

PROOF. We have already (see the discussion preceding Theorem (8.5)) shown that stabilizability implies asy-controllability. We now prove the converse. Suppose Σ is asy-controllable. Choose, in the Definition (C.1) of asy-controllability,

ε = 1 / (2 n (2n³)^n) .

Then, by Proposition (C.18), there exist matrices L(t), t = 0, 1, ..., N−1 such that

||L(t)|| ≤ A , ||F̄(N−1) F̄(N−2) ... F̄(1) F̄(0)|| ≤ 1/2 ,

where F̄(t) = F(t) + G(t)L(t). Recall that here N is the number of steps required to drive all states to energy < ε as in Definition (C.1) of asy-controllability. Since all our arguments are independent of initial time, we can (again by Proposition (C.18)) find matrices L(t) for all t in Z such that ||L(t)|| ≤ A and

||F̄(kN−1) F̄(kN−2) ... F̄(kN−N+1) F̄(kN−N)|| ≤ 1/2

for all k in Z. Thus the matrix L, viewed as a matrix over ℓ∞(Z), is bounded, and (zI − F − GL)^{-1} is a stable power series. This completes the proof. □
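The closing step, in which a uniform bound of 1/2 on each N-step closed-loop transition matrix forces geometric state decay, can be illustrated numerically. The factors below are random stand-ins for the F̄(t) (assumed data, not from the text); only the block-product bound matters.

```python
import numpy as np

# Random stand-ins for the closed-loop factors F_bar(t): each block of N
# factors is rescaled so the N-step transition matrix has spectral norm
# exactly 1/2, as Proposition (C.18) guarantees with eps = 1/(2*n*(2n^3)^n).
rng = np.random.default_rng(3)
n, N, blocks = 3, 4, 10

Phis = []
for _ in range(blocks):
    Ms = [np.eye(n) + 0.4 * rng.standard_normal((n, n)) for _ in range(N)]
    P = np.linalg.multi_dot(Ms)
    Ms[0] *= 0.5 / np.linalg.norm(P, 2)   # scale one factor: ||product|| = 1/2
    Phis.append(Ms)

x = np.ones(n)
norms = [np.linalg.norm(x)]
for Ms in Phis:
    for M in reversed(Ms):                # apply F_bar(t), then F_bar(t+1), ...
        x = M @ x
    norms.append(np.linalg.norm(x))

# after k blocks of N steps, ||x|| <= (1/2)^k ||x0||
for k in range(len(norms)):
    assert norms[k] <= 0.5**k * norms[0] + 1e-9
print([round(v, 6) for v in norms])
```

Note that individual factors may well have norm greater than 1; only the product over each N-step block is contractive, which is exactly what the theorem's feedback construction provides.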
REFERENCES

B. D. O. ANDERSON and J. B. MOORE
[1969] "New results in linear system stability," SIAM J. Control, 7: 398-414.
[1981a] "Detectability and stabilizability of time-varying discrete-time linear systems," SIAM J. Control and Optimization, 20-32.
[1981b] "Time-varying feedback laws for decentralized control," IEEE Transactions on Automatic Control, AC-26: 1133-1138.

P. J. ANTSAKLIS
[1979] "Some relations satisfied by prime polynomial matrices and their role in linear multivariable system theory," IEEE Transactions on Automatic Control, AC-24: 611-616.

L. CESARI
[1963] Asymptotic Behavior and Stability Problems in Ordinary Differential Equations, Academic Press, New York.

L. CHENG and J. B. PEARSON
[1978] "Frequency domain synthesis of multivariable linear regulators," IEEE Transactions on Automatic Control, AC-23: 3-15.

V. H. L. CHENG
[1979] "A direct way to stabilize continuous-time and discrete-time linear time-varying systems," IEEE Transactions on Automatic Control, AC-24: 641-643.
C. A. DESOER, R. W. LIU, J. MURRAY, and R. SAEKS
[1980] "Feedback system design: the fractional representation approach to analysis and synthesis," IEEE Transactions on Automatic Control, AC-25: 399-412.

V. DOLEZAL
[1964] "The existence of a continuous basis of a certain linear subspace E which depends on a parameter," Cas. Pro. Pest. Mat., 89: 466-468.

J. DOYLE and G. STEIN
[1981] "Multivariable feedback design: Concepts for a classical/modern synthesis," IEEE Trans. on Automatic Control, AC-26: 4-16.

E. EMRE and P. KHARGONEKAR
[1982] "Regulation of split linear systems over rings: Coefficient-assignment and observers," IEEE Trans. on Automatic Control, AC-27: 104-113.

D. S. EVANS
[1972] "Finite-dimensional realizations of discrete-time weighting patterns," SIAM J. of Applied Math., 45-67.

J. J. FERRER and E. W. KAMEN
[1984] "Realization of linear time-varying discrete-time systems," in preparation.

M. FREEDMAN and G. ZAMES
[1968] "Logarithmic variation criteria for the stability of systems with time-varying gains," SIAM J. of Control, 6: 487-507.

P. A. FUHRMANN
[1976] "Algebraic system theory: An analyst's point of view," J. of the Franklin Institute, 301: 521-540.
[1977] "On strict system equivalence and similarity," Int. J. of Control, 25: 5-10.
F. R. GANTMACHER
[1959] Theory of Matrices, Vol. 1, Chelsea, New York.

W. L. GREEN and E. W. KAMEN
[1984] "On stability of linear difference equations with time-varying coefficients," in preparation.

A. H. JAZWINSKI
[1970] Stochastic Processes and Filtering Theory, Academic Press, New York.

R. E. KALMAN
[1960] "Contributions to the theory of optimal control," Bol. Soc. Mat. Mex., 5: 102-119.

E. W. KAMEN
[1974] "A new algebraic approach to linear time-varying systems," Technical report, Georgia Inst. of Technology, Atlanta.
[1976] "Representation and realization of operational differential equations with time-varying coefficients," J. of the Franklin Inst., 301: 559-571.
[1982] "Linear systems with commensurate time delays: Stability and stabilization independent of delay," IEEE Trans. on Automatic Control, AC-27: 367-375.

E. W. KAMEN and K. M. HAFEZ
[1979] "Algebraic theory of linear time-varying systems," SIAM J. of Control and Optimization, 17: 500-510.

E. W. KAMEN and P. P. KHARGONEKAR
[1982] "A transfer function approach to linear time-varying discrete-time systems," 23rd Conf. on Decision and Control, Orlando, Florida.

E. W. KAMEN, P. P. KHARGONEKAR and K. POOLLA
[1984] "A transfer function approach to linear time-varying discrete-time systems," submitted for publication.
E. W. KAMEN and Y. ROUCHALEAU
[1984] Linear Systems with Coefficients in a Ring or Algebra, in preparation.

P. P. KHARGONEKAR
[1982] "On matrix fraction representations for linear systems over commutative rings," SIAM J. of Control and Optimization, 172-197.

P. P. KHARGONEKAR and A. B. OZGULER
[in press] "Regulator problem with internal stability: A frequency domain solution," to appear in IEEE Transactions on Automatic Control.

P. P. KHARGONEKAR and K. POOLLA
[1984a] "On polynomial matrix fraction representations for linear time-varying systems," submitted for publication.
[1984b] "Stabilizability and stable-proper factorizations for linear time-varying systems," in preparation.

P. P. KHARGONEKAR and E. D. SONTAG
[1982] "On the relation between stable matrix fraction factorization and regulable realizations of linear systems over rings," IEEE Trans. on Automatic Control, AC-27: 627-638.

H. KWAKERNAAK and R. SIVAN
[1972] Linear Optimal Control Systems, Wiley-Interscience, New York.

W. H. KWON and A. E. PEARSON
[1977] "A modified quadratic cost problem and feedback stabilization of a linear system," IEEE Transactions on Automatic Control, AC-22: 252-254.
[1978] "On feedback stabilization of time-varying discrete linear systems," IEEE Transactions on Automatic Control, AC-23: 479-481.
A. S. MORSE and L. M. SILVERMAN
[1972] "Structure of index-invariant systems," SIAM J. of Control and Optimization, 215-225.

R. W. NEWCOMB
[1970] "A local time-variable synthesis," in the Proc. Fourth Colloquium on Microwave Communications, Budapest.

M. NEWMAN
[1973] Integral Matrices, Academic Press, New York.

K. POOLLA and P. P. KHARGONEKAR
[1983] "Fractional representations for systems over a P.I.D.: A constructive technique," Systems and Control Letters, 3: 145-150.

H. H. ROSENBROCK
[1970] State Space and Multivariable Theory, Wiley, New York.

H. H. ROSENBROCK and G. E. HAYTON
[1978] "The general problem of pole-assignment," Int. J. of Control, 837-852.

R. SAEKS and J. MURRAY
[1981] "Feedback system design: The tracking and disturbance rejection problems," IEEE Trans. on Automatic Control, AC-26: 203-217.

S. SALOVAARA and H. BLOMBERG
[1973] "On an algebraic theory of ordinary linear time-varying differential systems with generalized stochastic processes as inputs and outputs," Advances in Cybernetics and Systems Research, Transcripta Books, London.

V. M. STARZINSKII
[1955] "A survey of works on the conditions of stability of the trivial solution of a system of linear differential equations with periodic coefficients," AMS Translations, Ser. 2, 1: 189-231.
M. VIDYASAGAR
[1978] "On the use of right-coprime factorizations in distributed feedback systems containing unstable subsystems," IEEE Trans. on Circuits and Systems, CAS-25: 916-921.

L. WEISS
[1972] "Controllability, realization, and stability of discrete-time systems," SIAM J. of Control, 230-251.

J. L. WILLEMS
Stability Theory of Dynamical Systems, Nelson, London.

W. A. WOLOVICH
[1968] "On the stabilization of controllable systems," IEEE Transactions on Automatic Control, AC-13: 569-572.
[1974] Linear Multivariable Systems, Springer-Verlag, New York.

W. M. WONHAM
[1979] Linear Multivariable Systems: A Geometric Theory, Springer-Verlag, New York.

R. YLINEN
[1975] "On the algebraic theory of linear differential and difference systems with time-varying or operator coefficients," Report B23, Helsinki University of Technology, Helsinki, Finland.

L. A. ZADEH
[1950] "Frequency analysis of variable networks," Proc. IRE, 38: 291-299.

G. ZAMES and B. FRANCIS
[1983] "Feedback, minimax sensitivity, and optimal robustness," IEEE Trans. on Automatic Control, AC-28: 585-601.
BIOGRAPHICAL SKETCH

Kameshwar Rao Poolla was born on September 6, 1960, in Tanuku, India, to Prof. and Mrs. P. V. Ramana Murthy. He obtained the Bachelor of Technology degree in electrical engineering from the Indian Institute of Technology, Bombay, in 1980. He worked in the capacity of a field engineer for Schlumberger APR during 1981, and has since been a graduate student in the Department of Electrical Engineering at the University of Florida.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Dr. E. W. Kamen, Chairman
Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Dr. P. P. Khargonekar, Co-Chairman
Assistant Professor of Electrical Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Dr. A. R. Tannenbaum
Associate Professor of Electrical Engineering and Mathematics

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Dr. T. E. Bullock
Professor of Electrical Engineering
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

