
Citation 
 Permanent Link:
 http://ufdc.ufl.edu/AA00037938/00001
Material Information
 Title:
 Robust stability analysis of systems under parametric uncertainty
 Creator:
Letra, José Alvaro, 1950-
 Publication Date:
 1991
 Language:
 English
 Physical Description:
 vii, 234 leaves : ill. ; 29 cm.
Subjects
 Subjects / Keywords:
Conservatism (jstor)
Eigenvalues (jstor)
Mathematical independent variables (jstor)
Mathematical robustness (jstor)
Mathematical vectors (jstor)
Matrices (jstor)
Parametric models (jstor)
Polynomials (jstor)
Scalars (jstor)
Sufficient conditions (jstor)
Control theory (lcsh)
Dissertations, Academic -- Electrical Engineering -- UF
Electrical Engineering thesis Ph. D
Lyapunov functions (lcsh)
Stability (lcsh)
 Genre:
 bibliography ( marcgt )
nonfiction ( marcgt )
Notes
 Thesis:
Thesis (Ph. D.)--University of Florida, 1991.
 Bibliography:
Includes bibliographical references (leaves 230-233).
 General Note:
 Typescript.
 General Note:
 Vita.
 Statement of Responsibility:
by José Alvaro Letra.
Record Information
 Source Institution:
 University of Florida
 Holding Location:
 University of Florida
 Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for nonprofit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
 Resource Identifier:
 026242054 ( ALEPH )
25046242 ( OCLC )

ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY
By
JOSÉ ALVARO LETRA
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1991
To
Carmen Lucia
and
Ariadne
ACKNOWLEDGMENTS
I am profoundly indebted to my advisor and supervisory committee chairman, Dr. Haniph A. Latchman, for his guidance, constant support and encouragement during my three years at the University of Florida. Despite his several other responsibilities, Dr. Latchman always found time to discuss my work and offer his insightful advice.
I wish to thank the professors who served on my committee, Dr. Thomas E. Bullock, Dr. J. Hammer, Dr. A. Antonio Arroyo and Dr. Spyros A. Svoronos, for their willingness to discuss and advise my work, and for the high level of consideration with which I was always treated.
I also wish to thank Dr. G. Basile, my first committee chairman, for his help and advice.
I am indebted to the EE Graduate Coordinator, Dr. Leon W. Couch, and his staff, for all their assistance. In particular, I thank Mrs. Greta Sbrocco, who always provided helpful guidance on administrative matters.
It was a privilege to work closely with my ex-fellow student, Dr. Robert J. Norris, whose valuable encouragement and help I now acknowledge. I also wish to thank Dr. Julio S. Dolce da Silva, of the Brazilian Army, for his help with my enrollment and adaptation to the University.
I am grateful to the Exército Brasileiro (Brazilian Army) for granting me the opportunity to come to the University of Florida to further pursue my studies, and to CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (National Council for Scientific and Technological Development - Brazil) for the scholarship I was awarded.
TABLE OF CONTENTS
page
ACKNOWLEDGMENTS ........................................................... iii
ABSTRACT ........................................................................ vi
CHAPTERS
1 INTRODUCTION ........................................................... 1
1.1 Dissertation Objective ................................................... 1
1.2 Brief History of Uncertainty Treatment ................................ 2
1.3 Structure of the Dissertation ............................................. 9
1.4 Notation ............................................................... 11
2 NOMINAL MODELS AND UNCERTAINTY REPRESENTATION ......... 16
2.1 Nominal Models and Definitions ........................................ 16
2.2 Uncertainty Representation ............................................. 20
2.3 Conclusions ............................................................ 38
3 STABILITY ANALYSIS OF LINEAR SYSTEMS ........................... 39
3.1 Introduction ............................................................ 39
3.2 Stability of State Space Systems ........................................ 39
3.3 Stability of Transfer Matrix Models ..................................... 45
3.4 Frequency-Domain Scaling Techniques .................................. 63
3.5 Conclusions ............................................................ 72
4 LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY ........................................................ 73
4.1 Introduction ............................................................ 73
4.2 Dependence of Conservatism on Perturbation Structure ................. 76
4.3 Stability Under Structured Uncertainty ................................. 82
4.4 Maximization of Stability Domains ...................................... 92
4.5 Application of Optimization Over Q ................................... 109
4.6 Conclusions ........................................................... 113
5 STABILITY UNDER DIAGONAL PARAMETRIC UNCERTAINTY ...... 115
5.1 Introduction ........................................................... 115
5.2 Diagonal Representation of State Space Perturbations .................. 116
5.3 Problem Formulation .................................................. 122
5.4 Necessary and Sufficient Conditions for Robust Stability ............... 127
5.5 Sufficient Conditions for Robust Stability .............................. 132
5.6 Numerical Application ................................................. 136
5.7 Some Extensions of Previous Results ................................... 139
5.8 Conclusions ........................................................... 143
6 COMPARISON OF SUFFICIENT PARAMETER NORM BOUNDS ....... 145
6.1 Introduction ........................................................... 145
6.2 Results for Problems with 2 and 3 Parameters ......................... 146
6.3 Results for Randomly Generated Matrices ............................. 154
6.4 Conclusions ........................................................... 161
7 ITERATIVE CONTROLLER ROBUSTIFICATION ....................... 163
7.1 Introduction ........................................................... 163
7.2 Robustification Associated with Lyapunov Analysis .................... 169
7.3 Robustification Associated with Frequency-Domain Analysis ........... 169
7.4 Application ............................................................ 187
7.5 Conclusion ............................................................ 195
8 NECESSARY STABILITY DOMAIN IN THE PARAMETER SPACE ..... 197
8.1 Introduction ........................................................... 197
8.2 Characterization of a Necessary Stability Domain ...................... 199
8.3 Computation of the Necessary Stability Domain ........................ 202
8.4 Applications ........................................................... 209
8.5 Conclusions ........................................................... 214
9 CONCLUSION ............................................................ 216
9.1 Summary ............................................................. 216
9.2 Directions for Future Work ............................................ 223
REFERENCES .................................................................... 230
BIOGRAPHICAL SKETCH ....................................................... 234
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY
By
JOSÉ ALVARO LETRA
May 1991
Chairman: Dr. Haniph A. Latchman
Major Department: Electrical Engineering
In the analysis of the stability properties of control systems, the uncertainty in mathematical models must be taken into account. The main sources of uncertainty are high-order dynamic phenomena of the physical system neglected in the model, and variations in system parameters. The subject of this work is the assessment of the stability of linear control systems in the presence of parametric uncertainty.
State space and frequency-domain models and uncertainty representations are reviewed, as well as general conditions for nominal and robust stability. Also reviewed are scaling techniques used to reduce the degree of conservatism of frequency-domain stability conditions, including optimal similarity scaling, optimal non-similarity scaling and Perron scaling.
In particular, the perturbed state space model ẋ(t) = (A + E) x(t) is studied. The nominal matrix A is assumed asymptotically stable, and the perturbation E is of the form E = p_1 E_1 + ... + p_m E_m, where p is an m-dimensional vector of system parameters and E_k, k = 1, ..., m, are constant matrices. The application of the Lyapunov Direct Method for obtaining conditions on the norm of p which are sufficient for robust stability is discussed in detail. A new stability condition on || p ||_2 is given, which is potentially less conservative than available results. The problem of choosing the Lyapunov matrix which yields the least conservative stability conditions is formalized as a constrained numerical optimization problem.
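To make the flavor of such Lyapunov-based norm bounds concrete, the following sketch (modern Python with NumPy/SciPy, not part of the original text; the toy matrices are invented) computes a classical sufficient bound: with P solving A^T P + P A = -2I, the perturbed system remains stable whenever σ̄(E) σ̄(P) < 1, and the estimate σ̄(E) ≤ || p ||_2 (Σ_k σ̄(E_k)^2)^{1/2} turns this into a bound on || p ||_2.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_norm_bound(A, E_list):
    """Sufficient r such that A + sum_k p_k*E_k is stable for ||p||_2 < r.
    Conservative by construction (a classical Lyapunov-based estimate)."""
    n = A.shape[0]
    # Solve A^T P + P A = -2 I  (A is assumed asymptotically stable).
    P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(n))
    sig_P = np.linalg.norm(P, 2)  # largest singular value of P
    # sigma_max(E) <= ||p||_2 * sqrt(sum_k sigma_max(E_k)^2), by Cauchy-Schwarz.
    s = np.sqrt(sum(np.linalg.norm(Ek, 2) ** 2 for Ek in E_list))
    return 1.0 / (sig_P * s)

# Invented toy problem: stable A, two perturbation directions.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
E1 = np.array([[0.0, 1.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [1.0, 0.0]])
r = lyapunov_norm_bound(A, [E1, E2])

# Any p with ||p||_2 < r must leave A + p1*E1 + p2*E2 asymptotically stable.
p = 0.99 * r * np.array([1.0, 1.0]) / np.sqrt(2.0)
Ap = A + p[0] * E1 + p[1] * E2
```

Since V(x) = x^T P x then satisfies V̇ ≤ (-2 + 2 σ̄(P) σ̄(E)) ||x||^2, negativity of V̇ follows inside the bound; the condition is sufficient only, which is exactly the conservatism the dissertation sets out to reduce.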
For the case of time-invariant uncertainty, an equivalent frequency-domain stability problem is formulated, in which the perturbation is a real, diagonal matrix obtained directly from the state space perturbation. Sufficient stability conditions are derived from the equivalent formulation, and scaling techniques are used in order to reduce conservatism.
Comparison of numerical results obtained for several problems indicates that, for time-invariant uncertainty, the frequency-domain approach, associated with Perron scaling, constitutes an alternative with better performance than the Lyapunov Direct Method. The frequency-domain approach and the corresponding stability conditions are also shown to be advantageous in the iterative optimization of static feedback controllers of fixed order.
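The frequency-domain route can be sketched in the same spirit. Assuming (purely for illustration) rank-one perturbation directions E_k = l_k r_k^T, the perturbed system is stable if det(I - D M(jω)) ≠ 0 for D = diag(p_k), where M(jω) = R (jωI - A)^{-1} L; the Perron upper bound μ(M) ≤ ρ(|M|) then yields a sufficient bound on max_k | p_k |. A numerical sketch with invented toy data and a finite frequency grid:

```python
import numpy as np

def perron_bound(A, l_vecs, r_vecs, omegas):
    """Sufficient bound on max_k |p_k| for stability of A + sum_k p_k*l_k*r_k^T.
    Uses the Perron upper bound mu(M) <= rho(|M|) on M(jw) = R(jwI - A)^{-1}L,
    swept over a finite frequency grid (illustrative only)."""
    L = np.column_stack(l_vecs)                        # columns l_k
    R = np.vstack([r.reshape(1, -1) for r in r_vecs])  # rows r_k^T
    n = A.shape[0]
    worst = 0.0
    for w in omegas:
        M = R @ np.linalg.inv(1j * w * np.eye(n) - A) @ L
        worst = max(worst, max(abs(np.linalg.eigvals(np.abs(M)))))  # rho(|M|)
    return 1.0 / worst

# Same invented toy system as before: E1 = e1*e2^T, E2 = e2*e1^T.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
l1, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
l2, r2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
bound = perron_bound(A, [l1, l2], [r1, r2], np.linspace(0.0, 50.0, 501))

# Check a perturbation just inside the bound.
p = 0.9 * bound
Ap = A + p * np.outer(l1, r1) + p * np.outer(l2, r2)
```

For this toy system the worst frequency is ω = 0 and the bound works out to 2, which here happens to be tight: at p_1 = p_2 = 2 the matrix A + E becomes singular. Since ρ(|M|) needs only one eigenvalue computation per frequency and no scaling search, this Perron-based test is cheap compared with optimal similarity scaling.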
Additionally, a procedure is suggested for obtaining a necessary stability domain in the space of plant parameters, starting from a known sufficient domain.
Finally, the integration of the stability analysis techniques into robust controller design is discussed.
CHAPTER 1
INTRODUCTION
1.1 Dissertation Objective
At least two common aspects are shared by the majority of the current literature on control systems analysis and design, although many different methods and techniques are nowadays employed. These aspects are as follows:
• Focus is placed on multivariable systems;
• Uncertainty in system models is explicitly taken into account.
These aspects constitute a frame for the present dissertation. The specific subject is the assessment of robust stability properties of systems under parametric uncertainty, which finds motivation in the following considerations.
Control systems are designed to meet some performance specifications. Although the formulation of performance specifications depends on the approach used, it always requires that some quantitative indices be satisfied by the system response, which of course imposes constraints on the dynamic behavior of the system.
However, it only makes sense to discuss the quantitative behavior of a control system if its stability can be assured. Otherwise, the dynamic behavior can be expected to blow up under some admissible operating condition, thus rendering the system useless. Stability, therefore, emerges as a fundamental requirement.
Control design relies on mathematical modeling of the controlled system. Unfortunately, there always exists a degree of uncertainty between the model and the modeled system,
which must be taken into account. The existence of uncertainty gives rise to the requirement of robustness, namely the ability of a control system to retain the desired behavior in spite of the uncertainty.
Design methods depend on analysis techniques in order to assess system properties, including robust stability. Techniques for robust stability analysis rely on an uncertainty representation, which is dictated by several factors, mainly the causes of uncertainty and the available information on uncertainty structure. Variations in system parameters are the source of an important category of perturbations, which is particularly suited to representation in state space models.
Motivated by these facts, this dissertation addresses the problem of robust stability analysis in the presence of parametric perturbations. The perturbation will be assumed to depend linearly on a vector of parameters, thus admitting the practically important case in which one parameter affects several entries of the system matrices in the state space representation. This model has been used in several recent works in stability analysis.
The development of the subject is outlined in Section 1.3. Before this, a brief historical summary of the treatment of uncertainty in control theory is given.
1.2 Brief History of Uncertainty Treatment

The need for control systems has long been felt in the process of technological development. Examples of the use of control systems date back four thousand years [50]. Noteworthy is the fact that feedback principles are found even in those early examples. Among the several advantages that the feedback principle brings to control systems is the property of effectively coping with disturbances and system uncertainty [31].
Important events in feedback history are registered by Sage [50]. Among them are the invention of the mechanical flyball governor by James Watt in 1788, which was developed from early windmill regulators, and the analysis of feedback control systems published in 1868 by Maxwell.
In 1927, the concept of feedback was introduced by Black in the design of amplifiers for long-distance telephone lines; his pioneering work is contained in the paper 'Stabilized Feedback Amplifiers', published in 1934. Although robust to uncertainties caused by nonlinearity and other factors, the feedback amplifier presented unwanted oscillations. The theoretical study of this phenomenon led to the development of the regeneration theory by Nyquist, whose work was published in 1932. The Nyquist criterion, which derives closed-loop stability characteristics from open-loop information, would constitute a fundamental technique for frequency-domain stability analysis.
Ensuing developments of frequency-domain concepts originated from the work of Bode in network analysis and amplifier design (1945), which demonstrated the existence of constraints on the manipulation of the frequency response of linear time-invariant systems; from the Nichols transformation of the Nyquist diagram; and from the root locus technique of Evans.
The set of those techniques constitutes what became known as the classical approach to the analysis and design of Single-Input, Single-Output (SISO) systems. In the classical approach, the issue of coping with uncertainty is addressed indirectly, by providing the system with adequate gain and phase margins. These margins ensure that unwanted effects of uncertainty will not disrupt stability.
In the late '50s, problems of a more complex nature, mainly arising from the control and guidance of missiles and space vehicles, came to the attention of control engineers
and theoreticians, and dominated development in the field. The already well-known set of classical tools was not adequate to deal with the essentially multivariable nature of the incoming control problems. The number of degrees of freedom inherent in multivariable systems, and the complex relationship between open-loop and closed-loop properties in those systems, mainly due to interaction, which has no counterpart in SISO systems, often preclude the use of the simple techniques developed for scalar systems [21]. In this context, and because the digital computer was already available, the decade of the '60s saw a marked tendency towards the use of optimization techniques in the solution of control problems. The design objectives in such techniques were treated mathematically and transformed into a cost function to be minimized.
Thus, the approach to control problems shifted from the frequency domain to the state space. Indeed, the state space was well suited to describing multivariable systems, and powerful techniques were developed for handling optimal control problems. Feedback emerged as a convenient property of solutions to optimal problems [31]. Linear Quadratic State Feedback (LQSF) appeared as a robust solution to control problems, relying however on exact measurements of the states; on the other hand, the possibility of very accurate models for the applications then sought caused the question of uncertainty to receive comparatively less attention than in the classical frequency-domain approach.
The state space formulation and the control techniques it brought about, however, did not achieve acceptance in all fields of applied control, particularly in industrial control. Different reasons have been advanced for this fact: only approximate models are available for many industrial processes; plants have components which deteriorate with continued use; and the long-formed habits of industrial engineers, used to dealing with classical techniques, are an obstacle to the adoption of the sophisticated mathematical treatment required by optimal
control. The Linear Quadratic Gaussian (LQG) theory, developed in the late '60s, can handle external disturbances modelled as Gaussian noise, and preserve the optimality of solutions, but the LQG controller is not robust against plant uncertainty, an important limitation in such industrial applications.
The decade of the '70s witnessed a renewed effort in control theory. The first phase in the process involved efforts towards the generalization of classical SISO frequency-domain techniques to multivariable systems. One example of the resulting analysis and design techniques is the Inverse Nyquist Array (INA) method of Rosenbrock (1974), which sought to eliminate the influence of interaction and then apply scalar techniques to the independent loops. Another is the Characteristic Locus Method of MacFarlane and Postlethwaite [37], which introduces a generalization of the Nyquist stability criterion based on the eigenloci of the transfer-function matrix, and produces necessary and sufficient conditions for stability. The resulting generalized Nyquist plots are used in multivariable design in the same fashion as the Nyquist plot in the scalar case. The original formulation, however, applies to the case of exactly known models. Since the eigenloci are sensitive to perturbations in the transfer matrix, the original formulation had limitations in the context of robust stability. Later developments have extended the generalized Nyquist criterion to uncertain systems, through the computation of inclusion bands for the perturbed eigenloci. Sufficient inclusion bands are obtained with the normal approximations method [8], and necessary and sufficient inclusion bands with the E-contours method [9].
Another side of that effort, which continued through the '80s, sought a deeper understanding of the structure and properties of multivariable systems, with renewed interest in robustness aspects.
Safonov [48, 46] proposed an explicit representation in which perturbations in multiloop systems assume the form of a diagonal perturbation matrix, therefore a structured representation. This representation was later used in the definition of a measure of stability margin for multivariable systems [47].
Doyle and Stein [14] developed the use of maximum singular values to obtain bounds on the perturbations to multivariable systems, with perturbations modeled as norm-bounded but otherwise unconstrained, and therefore having an unstructured representation.
In 1976, a parametrization of all stabilizing controllers of a particular system was presented by Youla and coworkers. Zames [60] proposed a scalar design technique which minimizes the effects of external disturbances while ensuring closed-loop stability; performance was measured in terms of the ∞-norm. This work is considered one of the foundations of what, together with the Youla parametrization, has become known as H∞ control. Several multivariable problems, like sensitivity minimization and robustness to additive perturbations, can be expressed as H∞ control problems, that is, problems where the goal is the minimization, in the frequency domain, of the norm of a transfer matrix. This approach permits the synthesis of a controller which minimizes an objective function, in general used to express some performance requirement, while ensuring the stability of the solution by restricting the controller to belong to the set of all stabilizing controllers. However, controllers derived through this approach tend to be of high order, requiring a posteriori order reduction.
Although an unstructured uncertainty representation yields a more tractable mathematical problem, it may lead to conservative stability results. Often, some information about the structure of the perturbation is available, and it should be used in order to produce tighter results. The work of Doyle [12] gave a new dimension to the diagonal perturbation problem
pioneered by Safonov, when he argued that model uncertainty can be very effectively posed in terms of block-diagonal norm-bounded perturbations. He developed a new analysis tool, namely the μ-function, which constitutes a necessary and sufficient mathematical condition for robust stability of transfer matrix models.
The computation of this new robustness measure presents considerable difficulty for general structured uncertainties. An upper bound presented by Doyle involves the minimization, over the space of diagonal similarity scaling matrices, of the norm of the scaled system matrix; this upper bound actually equals μ when there are at most three complex blocks in the diagonal uncertainty representation. For the case of more blocks, or when the perturbation has real components, the upper bound is a conservative estimate of μ. For design purposes under structured uncertainty, Doyle has formulated what has become known as the 'μ-synthesis' method. In this approach, the cost function to be minimized is the ∞-norm of a similarity-scaled transfer matrix involving a controller chosen out of the set of all stabilizing controllers. The parameters are the controller itself and the scaling matrix.
The formulations by Doyle, as well as previous work by Safonov, introduced the use of frequency-domain scaling in control problems, as a tool for the derivation of less conservative sufficient stability results, in connection with the block-diagonal uncertainty problem.
Other models of uncertainty, as well as different forms of scaling, have been proposed. For instance, in Latchman's work [33], the highly structured element-by-element-bounded uncertainty model is explored, and new, less conservative stability conditions are obtained with the introduction of non-similarity scaling. For the case of element-by-element-bounded complex perturbations, it has been shown [33] that, if the maximum singular value of the optimally scaled system matrix remains distinct, μ is attained, regardless of the number of elements in the perturbation matrix. Relationships between similarity scaling and non-similarity scaling have been derived [40], and used as a tool for decreasing the cost of the computation of the μ-function for complex perturbations.
The block-diagonal formulation of uncertainties admits complex as well as real perturbations. Real perturbations in frequency-domain models have been employed, for example, to represent uncertainty in gains [10, 38] and in poles [10] of a transfer function. In this dissertation, a perturbed state space system is given a frequency-domain representation having real diagonal uncertainty, which is derived directly from the state space real uncertainty. For problems involving real uncertainty, results derived with the μ-function approach are usually only sufficient. The derivation of tighter results for the case of real uncertainty is an active area of research [17, 18]; a new upper bound for μ, tighter than the singular-value bound, has recently been introduced [18].
Besides the cited developments in analysis of perturbed transfer matrix models, the analysis of perturbed state space models received a great deal of consideration in the last decade. Two basic approaches can be recognized in the analysis of state space models: the Kharitonov approach and the Lyapunov approach.
The approach spurred by the work of Kharitonov [27] deals with robust stability of control systems through stability analysis of characteristic polynomials having perturbed coefficients. Although the original work considered the case of independent coefficient perturbations, new results [2] have since extended the approach to the case of polytopes of polynomials. Basically, this extension permits the assessment of stability of a whole polytopic family by analyzing the stability properties of its exposed edge polynomials.
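Kharitonov's result can be stated concretely: for a polynomial whose coefficients vary independently within intervals, the whole family is Hurwitz if and only if four specific vertex polynomials are. A brief sketch (Python, with a made-up interval cubic; not from the original text):

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """The four Kharitonov polynomials of the interval polynomial
    a_0 + a_1 s + ... + a_n s^n, with a_k in [lo[k], hi[k]].
    Coefficient patterns repeat with period 4 (ascending powers):
    K1: lo lo hi hi, K2: hi hi lo lo, K3: lo hi hi lo, K4: hi lo lo hi."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    bounds = (lo, hi)
    return [[bounds[pat[k % 4]][k] for k in range(len(lo))] for pat in patterns]

def is_hurwitz(coeffs_ascending):
    """Numerical check: all roots strictly in the open left half-plane."""
    roots = np.roots(coeffs_ascending[::-1])  # np.roots expects descending powers
    return bool(np.all(roots.real < 0))

# Made-up interval cubic: a_3 = 1, a_2 in [3, 5], a_1 in [3, 4], a_0 in [1, 2].
lo = [1.0, 3.0, 3.0, 1.0]
hi = [2.0, 4.0, 5.0, 1.0]
family_stable = all(is_hurwitz(K) for K in kharitonov_polys(lo, hi))
```

Here all four vertex polynomials are Hurwitz, so every polynomial in the coefficient box is. The polytope-of-polynomials extension [2] replaces the four vertices by the exposed edges of the polytope.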
The Lyapunov approach to robust stability analysis stemmed from the original work on stability by Lyapunov, published in Russian in 1892, which has a French translation dating from 1949. The Lyapunov Direct Method (LDM) yields a sufficient condition for stability;
stability assessment, however, depends on the construction of a suitable Lyapunov function for the system under investigation. In the case of linear, time-invariant systems, a quadratic function of the state is used as the Lyapunov function. The condition for robust stability can then be posed in terms of the positive-definiteness of a certain matrix. Although only sufficient, the approach has been used in robust stability analysis in a great number of recent works [4, 16, 42, 51, 56, 59, 61]. In particular, this method has been used in connection with structured perturbations depending linearly on a vector of parameters [4, 51, 61]. This uncertainty representation, on the other hand, has also been used apart from the Lyapunov approach [18].
Additional stability analysis methods for state space systems are the stability radius method [24], and the methods of Qiu and Davison [44, 45]; tensor products are used in the latter.
1.3 Structure of the Dissertation
This dissertation is organized into nine chapters, the first of which contains this Introduction. The next two chapters present a review of basic concepts, while the main part of the work is presented in Chapters 4 through 8. Chapter 9 contains the Conclusion.
Specifically, nominal and perturbed system models are reviewed in Chapter 2. Special attention is given to uncertainty representation in state space and transfer matrix models, with emphasis placed on diagonal representation of uncertainty in interconnected frequency-domain models.
The focus of Chapter 3 is on stability conditions. The review includes the Lyapunov Direct Method, the Generalized Nyquist Criterion, spectral radius conditions for stability, and spectral radius upper bounds given by the singular value and the structured singular value.
Chapter 4 concentrates on the assessment of robust stability of state space systems in the presence of structured perturbations which depend linearly on a vector of parameters. The application of the Lyapunov Direct Method is thoroughly discussed, including a qualitative study of the sources of conservatism under perturbation, a review of available results, the derivation of admissible parameter norms and the use of parameter weighting for shaping the form of the computed stability domain. A new condition on the 2-norm of the vector of parameters, which is potentially less conservative than available conditions, is presented, and similarity scaling is explored for the reduction of conservatism of available results. Finally, the choice of an adequate Lyapunov matrix is cast as an optimization problem.
An alternative approach to the assessment of robust stability of state space systems, under time-invariant perturbations linearly dependent on a vector of parameters, is proposed in Chapter 5. Working directly with the perturbed state equations, and exploiting diagonalization of uncertainty, an equivalent frequency-domain problem is formulated, from which sufficient stability conditions are derived. The formulation is such that the uncertainty matrix which appears in the equivalent frequency-domain problem is derived directly from the real perturbation to the state space model. The derivation was independently undertaken, and has not been explicitly found in the literature. Conservatism of the stability conditions is reduced through the use of scaling techniques; besides the well-known optimal similarity scaling, conditions are obtained in terms of Perron scaling.
Chapter 6 compares numerical results obtained with the LDM of Chapter 4 and the frequency-domain method proposed in Chapter 5. Results obtained from the frequency-domain method were in general less conservative than results from the LDM; they were always at least as good as the LDM results. In particular, it is shown that the stability condition
that uses Perron scaling has low computational cost and produces results with the same level of conservatism as results obtained with optimal similarity scaling.
In Chapter 7, the frequency-domain approach is explored in the analysis step of an iterative controller robustification technique, similar to that proposed by Bhattacharyya [4]. The alternative approach has computational advantages, mainly when Perron scaling is used, because it then permits the elimination of parameters in the resulting optimization problem.
Both the methods discussed in Chapters 4 and 5 yield sufficient stability domains in the space of plant parameters. In Chapter 8, a technique is presented for the computation of a necessary domain, starting from an available sufficient domain. An extensive search in the parameter space, which would be infeasible for a large number of parameters, is avoided on the basis of a conjecture which has worked well in all problems considered.
Finally, Chapter 9 presents a summary of results and suggestions for further work.
1.4 Notation
The following notational convention will be adopted in this document, unless otherwise explicitly stated. Additional symbols will be defined as required.

Ao : Nominal dynamic matrix (open-loop)
Ac : Nominal dynamic matrix (closed-loop)
Ap : Perturbed dynamic matrix
D : Diagonal form of real perturbation matrix
Dc : Diagonal form of perturbation with complex scalars
E : Error matrix (parametric perturbation)
EA : Parametric perturbation to the matrix A
Ek : Perturbation due to the kth parameter
FU(M, Δ) : Upper linear fractional transformation
FL(M, K) : Lower linear fractional transformation
Go(s) : Nominal plant transfer matrix
H(s) : Open-loop transfer matrix
In : Identity matrix of order n
J : Objective function in optimization problems
K : Controller
L : Left matrix in the decomposition E = LDR
M ∈ R^(n x m) : Real n x m matrix
M ∈ C^(n x m) : n x m matrix with complex elements
Mij : Element at the ith row and jth column of M
M^H : Complex conjugate transpose of M
M+ : Matrix of the complex magnitudes of the elements of M
P : Solution to the Lyapunov matrix equation
PΔ : Matrix of upper bounds on the elements of Δ
Q : Lyapunov matrix
Q(s) : Nominal compensated transfer matrix
R : Right matrix in the decomposition E = LDR
Rλ(Ap) : Largest real part of λi(Ap), for fixed E
R̄λ(Ap) : Largest real part of λi(Ap), for E in a class
S : Similarity scaling matrix
Sπ : Perron scaling matrix
So : Osborne scaling matrix
Sd : Stability domain
Sdp(Q) : Stability domain, function of Q, in the norm || . ||p
Sdk̄m(K) : Stability domain, function of K, based on the measure k̄m
T(s) : Closed-loop transfer matrix
Ue : Unitary matrix
W : Matrix of right eigenvectors
dp : Change in parameter p
jR : Imaginary axis of the complex plane
km : Multiloop stability margin
k̄m : Conservative assessment of km
rsp : Stability bound on || p ||p
p ∈ R^m : m-dimensional parameter vector
pw : Worst-case parameter combination
rs2(Q) : Stability bound on || p ||2
sk : Weight applied to the kth parameter
s : Complex frequency
u ∈ R^m : Input vector
x ∈ R^n : State vector
y ∈ R^p : Output vector
x̄M, xM : Major (minor) output principal direction of M
ȳM, yM : Major (minor) input principal direction of M
C : Field of complex numbers
C^(m x m) : Space of complex m x m matrices
DU : Class of frequency-dependent, unstructured uncertainties
DS : Class of frequency-dependent, structured uncertainties
EU : Class of unstructured real uncertainties
ES : Class of structured real uncertainties
Q : Set of symmetric, positive-definite Q ∈ R^(n x n)
SK : Class of scaling matrices related to the block-structure K
XK : Class of block-diagonal structured uncertainties
R : Field of real numbers
R+ : Set of nonnegative real numbers
R^(n x n) : Space of n x n matrices with elements in R
Δ(s) : Frequency-dependent perturbation
ΔM(s) : Frequency-dependent perturbation to M
ak : Bound on the range of the kth parameter
α : Measure of stability margin
δ(s) : Upper bound on the norm of Δ(s)
ε : Small quantity in general
λi(M) : ith eigenvalue of M
μ(M) : Structured singular value of M
π(M) : Perron radius of M
πw : Set of worst-case parameters
ρ(M) : Spectral radius of M
ρR(M) : Real spectral radius of M
σi(M) : ith singular value of M
σ̄(M) : Maximum singular value of M
nXl Space of n x n matrices with elements in R A(s) Frequencydependent perturbation AM(S) Frequencydependent perturbation to M ak Bound on the range of kth parameter "a Measure of stability margin b(s) Upperbound on the norm of A(s) E Small quantity in general Ai(M) ith eigenvalue of M p(M) :Structured singularvalue of M 7r(M) Perron radius of M 7r, :Set of worst case parameters p(M) Spectral radius of M pR(M) Real spectral radius of M Oi(M) ith singularvalue of M U(M) Maximum singularvalue of M
a(m) Minimum singularvalue of M 0, : Characteristic polynomial 19 Partial derivative lxi Complex magnitude of x det[M] Determinant of square M 11 x lp pnorm of vector x 11 M I!, Matrix norm induced by pnorm I M hIF Froebenius norm of M V For all
* :End of proof
End of statement given without proof o End of example inf, sup Infimum, supremum max, min Maximum, minimum DU Diagonal Uncertain LDM Lyapunov Direct Method GNC Generalized Nyquist Criterion MIMO MultiInput, MultiOutput OS Osborne Scaling OSS Optimal Similarity Scaling PR Perron Radius PS Perron Scaling SISO SingleInput, SingleOutput SSV Structured SingularValue
CHAPTER 2
NOMINAL MODELS AND UNCERTAINTY REPRESENTATION
2.1 Nominal Models and Definitions

This section introduces basic definitions and models of linear time-invariant systems. Let us consider the unity feedback system with cascade compensation, represented in Figure 2-1. The multi-input, multi-output block G_o represents the physical system or process under investigation, which is generically designated as the plant.
Figure 2-1. Unity feedback system: (a) closed-loop system; (b) uncompensated nominal plant

The subscript o designates the nominal model of the plant, namely a mathematical representation where the relationships among the quantities involved are exactly known. Unless otherwise stated, nominal models will be regarded as linear and time-invariant. The cascade connection of plant and compensator defines the open-loop compensated plant, denoted by Q_o = G_oK.
Many dynamic systems of engineering significance can be described by a linear differential equation relating the input r(t) and its derivatives to the output y(t) and its derivatives. However, this representation is not the most convenient to deal with. Representations that have become standard in control systems theory are the state space model and the transfer matrix model.
State space model. A differential equation of order n with constant coefficients, involving m inputs, p outputs and their derivatives, can be put in the state variable form:

ẋ(t) = Ax(t) + Bu(t)    (2.1)
y(t) = Cx(t) + Du(t)    (2.2)

where x(t) ∈ R^n is the state vector and A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and D ∈ R^{p×m} are constant matrices.
A generic state space model is often designated by the quadruple [A, B, C, D]. Unless otherwise stated, open-loop plants are assumed to be purely dynamic, thus having a representation of the form [A_G, B_G, C_G, 0]. A dynamic controller is represented by the quadruple [A_K, B_K, C_K, D_K], which reduces to D_K in the case of a purely algebraic controller. To the closed-loop system corresponds the quadruple [A_c, B_c, C_c, D_c], whose components are easily obtained from the state space descriptions of plant and controller.
Transfer matrix model. The nominal transfer matrix may be obtained via the application of the Laplace transform to the state space equations, under the assumption of null initial conditions. The transfer matrix is then given by:

H(s) = C(sI − A)^{-1}B + D    (2.3)

where the term (sI − A)^{-1} is the resolvent of the matrix A.
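As a quick numerical illustration (not from the dissertation), the resolvent form (2.3) can be evaluated at a fixed complex frequency with NumPy; the function name and the hypothetical first-order system below are our choices:

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D at one complex frequency s."""
    n = A.shape[0]
    resolvent = np.linalg.inv(s * np.eye(n) - A)   # (sI - A)^{-1}
    return C @ resolvent @ B + D

# Hypothetical first-order system with H(s) = 1/(s + 1)
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

H0 = transfer_matrix(A, B, C, D, s=0.0)    # DC gain 1/(0 + 1)
H1 = transfer_matrix(A, B, C, D, s=1j)     # 1/(1 + j)
```

For larger state dimensions, solving the linear system (sI − A)X = B is preferable to forming the inverse explicitly.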
Let G_o(s) and K(s) be the transfer matrices of plant and compensator, respectively. The transfer matrix of the closed-loop unity feedback system, which can be obtained by algebraic manipulation of blocks, is:

T(s) = [(I + G_oK)^{-1}G_oK](s)    (2.4)

Note that, in view of the dimensions of the matrices in the state space model, G_o(s) ∈ C^{p×m}. Consequently, K(s) ∈ C^{m×p} and T(s) ∈ C^{p×p}. Of course, T(s) can be obtained by applying (2.3) to the quadruple [A_c, B_c, C_c, D_c].
Characteristic decomposition. A complex, square matrix M ∈ C^{n×n} with distinct eigenvalues has the characteristic decomposition:

M = WΛW^{-1}    (2.5)

where Λ = diag{λ_i}, i = 1, ..., n, contains the eigenvalues of M. The columns of W are linearly independent eigenvectors of M, arranged in correspondence with the eigenvalues. Matrices with repeated eigenvalues have analogous decompositions, where Λ may assume a nondiagonal Jordan form.
The spectral radius and the real spectral radius of M are, respectively,

ρ(M) ≝ max_i |λ_i(M)|    (2.6)
ρ_R(M) ≝ max_i |λ_{R,i}(M)|    (2.7)

where λ_{R,i}(M) is a real eigenvalue of M. It is easy to show that

ρ_R(M) ≤ ρ(M)    (2.8)

The spectral radius has an important role in stability analysis, as will be seen in the following chapters.
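Definitions (2.6)-(2.8) can be checked numerically; the sketch below (NumPy; the test matrix and the imaginary-part tolerance are our choices) computes both radii for a matrix with one real eigenvalue and one complex pair:

```python
import numpy as np

def spectral_radius(M):
    """rho(M): largest eigenvalue magnitude, per (2.6)."""
    return max(abs(np.linalg.eigvals(M)))

def real_spectral_radius(M):
    """rho_R(M): largest magnitude among the real eigenvalues, per (2.7)."""
    eigs = np.linalg.eigvals(M)
    real_eigs = [e.real for e in eigs if abs(e.imag) < 1e-9]
    return max(abs(e) for e in real_eigs) if real_eigs else 0.0

# Example: one real eigenvalue (1) and a complex pair (+/- 2j) of magnitude 2
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -2.0],
              [0.0, 2.0, 0.0]])

rho = spectral_radius(M)         # 2.0
rho_R = real_spectral_radius(M)  # 1.0, illustrating rho_R(M) <= rho(M)
```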
Singular-value decomposition. A complex matrix M ∈ C^{n×n} has the singular-value decomposition

M = XΣY^H    (2.9)

where Σ = diag{σ_i}, σ_i ∈ R+, i = 1, ..., n, arranged in decreasing order, and Y and X are unitary matrices that contain respectively the right and left singular-vectors of M, arranged in corresponding order with the singular-values.

The right singular-vectors y_M are called input principal directions, while the left singular-vectors x_M are called output principal directions. The largest and the smallest singular-values are of fundamental importance in stability and performance analysis. They are called respectively the maximum singular-value and the minimum singular-value, and denoted by

σ̄(M) ≝ σ_1(M),  σ̲(M) ≝ σ_n(M)    (2.10)

The principal directions corresponding to the maximum singular-value receive the qualifier major, while minor is attributed to the principal directions corresponding to the minimum singular-value. They are denoted by y_M, x_M and y̲_M, x̲_M, respectively.

The singular-value decomposition extends to nonsquare matrices M ∈ C^{m×n}. In this case, X and Y are matrices of different dimensions, and Σ has a number q = min{m, n} of nonnull singular-values.

The minimum and the maximum singular-values constitute respectively lower and upper bounds for the magnitudes of the eigenvalues, that is,

σ̲(M) ≤ |λ_i(M)| ≤ σ̄(M), ∀i    (2.11)

Derivatives of the maximum singular-value will be needed in Chapters 4 and 5. The following lemma advances an analytic expression for the derivative of [σ̄(M)]² with respect to a generic variable x.
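The decomposition (2.9) and the bound (2.11) can be verified numerically with NumPy's SVD, which returns the singular values already in decreasing order (the random test matrix is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Singular-value decomposition M = X Sigma Y^H, per (2.9)
X, sigma, YH = np.linalg.svd(M)          # sigma is in decreasing order
sigma_max, sigma_min = sigma[0], sigma[-1]

# Reconstruction check and the eigenvalue-magnitude bound (2.11)
M_rebuilt = X @ np.diag(sigma) @ YH
eig_mags = np.abs(np.linalg.eigvals(M))
```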
Lemma 2.1. Let M ∈ C^{n×n}, and assume that the entries of M depend on a variable x. Then, the derivative of [σ̄(M)]² with respect to x is given by

d/dx [σ̄(M)]² = W^H (d/dx [M^H M]) W    (2.12)

where W is the normalized eigenvector of M^H M associated with its largest eigenvalue.

Proof [33]. For any matrix A, let λ̄(A) ≝ max_i λ_i(A). Then, it follows from the definition of the maximum singular-value that

[σ̄(M)]² = max_i λ_i(M^H M) = λ̄(M^H M)

Let W be the normalized eigenvector associated with λ̄(M^H M). Then, (M^H M)W = λ̄W. The derivative of this expression with respect to x is given by:

(d/dx [M^H M]) W + (M^H M)(dW/dx) = (dλ̄/dx) W + λ̄ (dW/dx)

Multiplying on the left by W^H and considering that W^H (M^H M) = λ̄W^H and W^H W = 1, one has that W^H (d/dx [M^H M]) W = dλ̄/dx, from which (2.12) follows. ■
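Expression (2.12) can be checked against a finite-difference approximation. In the sketch below, M(x) is a hypothetical parameter-dependent matrix chosen only to exercise the formula; W is the unit eigenvector of M^H M associated with its largest eigenvalue, as in the proof:

```python
import numpy as np

def M_of_x(x):
    # Hypothetical parameter-dependent matrix (for illustration only)
    return np.array([[1.0 + x, 2.0],
                     [0.5, 3.0 * x]])

def dM_dx(x):
    # Its elementwise derivative with respect to x
    return np.array([[1.0, 0.0],
                     [0.0, 3.0]])

x0, h = 0.7, 1e-6
M = M_of_x(x0)

# Right-hand side of (2.12): W^H (d/dx [M^H M]) W
MH_M = M.conj().T @ M
eigvals, eigvecs = np.linalg.eigh(MH_M)    # ascending eigenvalue order
W = eigvecs[:, -1]                         # eigenvector of the largest eigenvalue
d_MHM = dM_dx(x0).conj().T @ M + M.conj().T @ dM_dx(x0)
analytic = (W.conj() @ d_MHM @ W).real

# Central finite difference of sigma_max(M)^2
s = lambda x: np.linalg.svd(M_of_x(x), compute_uv=False)[0] ** 2
numeric = (s(x0 + h) - s(x0 - h)) / (2 * h)
```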
In robust stability analysis, nominal models must be complemented by a description of the uncertainty they are subject to. Uncertainty representation is discussed in the next section.
2.2 Uncertainty Representation
2.2.1 Causes and Classification of Uncertainty A mathematical model is intended to represent the most significant characteristics of the modelled system. Between the model and the true system there always exists an error, which is called uncertainty in control theory.
Two broad categories of modeling error sources can be identified, namely unmodeled dynamics and variations of plant parameters. The objective of this section is to discuss uncertainty representations, with particular attention to the case of parametric uncertainty.
The modeling process is guided by the conflicting requirements of fidelity to the plant dynamics and tractability. As a result of the necessary compromises with respect to these conflicts, some secondary dynamic phenomena may be left unmodelled, or may receive simplified representation.
On the other hand, a model might adequately represent the plant dynamics under given conditions, yet might not be able to capture variations suffered by the plant during its life span, or even during an operation cycle.
Changes in the properties of physical components, which affect the plant, are normally expected and in some cases cannot be eliminated. For example, due to a compromise between precision and production costs, almost all technical specifications of serially produced industrial components allow variations of properties around the nominal value. Other factors also contribute to changes in properties; among them are aging of the components, hysteresis cycles and environmental conditions.

An example of a plant with uncertainty due to both simplifications and neglected dynamics is the chemical batch reactor discussed in [39]. In that case, a truly nonlinear process is linearized at an operating point, a simplification advised by tractability. The dynamics of the resulting equation is uncertain due to neglected nonlinear effects and due to unknown and neglected high-frequency temperature-dependent effects.
In order to improve the assessment of stability and performance characteristics of control systems, some sort of mathematical description of the uncertainty associated with a given nominal model is needed. This description is called the uncertainty representation.
In a fairly general sense, the true modeled object can be represented in terms of its nominal model S_o and of the modeling error E by the following relationship:

S_p = Π(S_o, E)    (2.13)

where S_p designates the object obtained when S_o is perturbed by E, and Π(·) describes how the error relates to the nominal model. The object S_p may represent either the plant or an interconnected system which includes the plant as one component. If, for example, the modeled object is a plant, (2.13) becomes G_p = Π(G_o, E).

An admissible error set is called a perturbation class. Given the perturbation class, the relationship Π(·) determines a family of objects around the nominal model; this family is a set that includes a member closer to the true modeled object than the nominal model.

The relationship Π(·) is determined by the uncertainty description chosen. A mathematical description of uncertainty must satisfy the following requirements [19, 33]:
(i) Simplicity: the description should be such that the model is tractable;
(ii) Accuracy: the uncertainty class should allow only perturbations that can actually occur;
(iii) Adequacy: the uncertainty class must admit all possible perturbations.
The quality of results obtained from the analysis of perturbed models depends, to some extent, on the uncertainty representation. The following are relevant factors in uncertainty representation:
(i) Nature of the model. The uncertainty representation must follow the nature of the nominal model. For example, if a linearized model is constructed for a system described
by a nonlinear inputoutput relationship, the error can be adequately represented by the difference between the true output vector and the output vector of the model.
When the system is represented by a MIMO transfer matrix model, the uncertainty is represented by a dimensionally compatible transfer matrix. If a state space model is used, the uncertainty is represented by dimensionally compatible real perturbations to the quadruple [A, B, C, D].
(ii) Type of the error. The uncertainty may assume either the form of an absolute error or the form of a relative error. In the former case, the uncertainty is represented as an additive term, while in the latter it appears in multiplicative form.
(iii) Structure of the uncertainty. This is the most important characteristic of the uncertainty representation. It is related to the knowledge and assumptions made about the mechanisms that generate the uncertainty.
If nothing is known about particular causes of uncertainty, or if it is not practical to consider sources of uncertainty individually, the unstructured representation is used. The effects of all (possibly several) sources are lumped together and represented as if caused by only one source. The error is characterized by a norm upper bound, say || E || ≤ ε, but is otherwise unconstrained. The norm upper bound completely characterizes a class of unstructured perturbations.
When the mechanisms that give rise to uncertainty are known, it is useful, although not required, to adopt a structured representation. It is in general possible to identify at least some of the causes of uncertainty [14], whence it is in general possible to use at least a partially structured representation of uncertainty.
An interconnected system whose components are uncertain presents multiple perturbation 'blocks', which can be of different dimensions. Looking at the whole system, the
uncertainty has a structure defined by the position of the blocks. An unstructured representation could be used to cover the various scattered blocks. However, this approach would be conservative, because the norm-bounded but otherwise unconstrained class of unstructured uncertainties would admit perturbations which do not satisfy the known block structure.
In the following, the general principles given above are applied to uncertainty representation in frequencydomain and state space models.
2.2.2 Representation of Uncertainty in Transfer Matrix Models

Unstructured plant uncertainty
Let us assume that the nominal and the perturbed plants are represented by transfer matrix models, respectively G_o(s) ∈ C^{p×m} and G_p(s) ∈ C^{p×m}, and let Δ(s) represent the uncertainty. The argument 's' may be dropped if the dependence on s is clear from the context.
In the unstructured representation, the class of admissible perturbations is characterized by a frequency-dependent norm bound; usually the norm of choice is the induced 2-norm, which coincides with the maximum singular-value. An unstructured class which admits all possible Δ in a ball of radius b(s) in C^{p×m} is defined as:

D_u = {Δ(s) ∈ C^{p×m} : σ̄[Δ(s)] ≤ b(s) ∈ R+, ∀s}    (2.14)

Additive representation. If the unstructured uncertainty is meant to account for an absolute error in the nominal model, the representation assumes the following additive form, illustrated by Figure 2-2 (a):
G_p = G_o + Δ_A, Δ_A ∈ D_u    (2.15)
Multiplicative representation. This representation accounts for relative errors in the model. It is well suited when the nominal plant has input or output uncertainty. The perturbed model becomes, for each of these cases, respectively:

G_p = G_o(I_m + Δ_I), Δ_I ∈ D_u;  G_p = (I_p + Δ_O)G_o, Δ_O ∈ D_u    (2.16)

When both input and output uncertainty are present, as shown by Figure 2-2 (b), the above expressions combine to give G_p = (I_p + Δ_O)G_o(I_m + Δ_I).
Figure 2-2. Plant uncertainty representation: (a) additive representation; (b) multiplicative representation
Brief analysis. The unstructured representation does not discriminate among sources of uncertainty. Neglected dynamics, which usually contributes high-frequency error components, and parametric variations are considered together.
This representation certainly satisfies the simplicity requirement. However, the maximum singular-value, used to characterize the class of allowable perturbations, depends on the whole matrix and does not account for the magnitudes or phases of individual elements or for submatrix structure. Consequently, the accuracy requirement may not be attained, because the class D_u admits perturbations which cannot physically occur.
From the point of view of accuracy, it is preferable to use structured representations. Yet, even when some of the plant error components can be represented in structured form, there exist high-frequency components that require unstructured representation [14].
It is interesting to note that additive and multiplicative representations of plant uncertainty lead to different expressions for the perturbation of compensated plants. Regarding Figure 2-1 (a), when the additive representation is used, the perturbed compensated plant is given by Q_p = (G_o + Δ_A)K = G_oK + Δ_AK ≝ Q_o + Δ̃_A, while in the case of output multiplicative uncertainty representation, the perturbed compensated plant is Q_p = (I_p + Δ_O)G_oK ≝ (I_p + Δ_O)Q_o. Therefore, with the multiplicative representation, the relative error in the compensated plant is the same as in the nominal plant, while with the additive representation the absolute error changes.
Structured plant uncertainty
Structured representations are adopted when it is possible to identify the causes of uncertainty, so that their effects can be linked to specific entries of the transfer matrix. Since individual sources of uncertainty are independently considered, the structured representation is more accurate.
Element-by-element-bounded perturbations. This highly structured representation can be used when frequency-dependent norm bounds for the uncertainty in each element of the nominal transfer matrix are available. The class is characterized by magnitude bounds and unconstrained element phases, and is defined as [33]:

D_s = {Δ(s) ∈ C^{p×m} : |Δ_ij| ≤ P_ij ∈ R+, arg(Δ_ij) = θ_ij, 0 ≤ θ_ij ≤ 2π, ∀s}    (2.17)

It has been shown [33] that the class D_s defined above is a proper subset of the class D_u given by (2.14). The perturbed plant under element-by-element-bounded additive uncertainty is:

G_p = G_o + Δ_A, Δ_A ∈ D_s    (2.18)
This structured class admits all perturbations whose element (i, j) belongs to a ball of radius P_ij around the nominal element G_o(i, j), ∀i ≤ p, ∀j ≤ m. Cases where some elements of the nominal system are exactly known are covered by setting to zero the corresponding elements of P.
Since the matrix of upper bounds, namely P, is a nonnegative matrix, this representation permits the use of results from Perron-Frobenius theory in robust stability analysis. Also useful in stability analysis is the result of the following lemma.
Lemma 2.2. For any Δ ∈ D_s and P ∈ R^{p×m} such that |Δ_ij| ≤ P_ij,

σ̄(Δ+) ≤ σ̄(P)

Proof. For any real matrix A ∈ R^{p×m} and vector x ∈ R^m,

σ̄(A) = || A ||_{i2} = sup_{||x||=1} || Ax ||_2 = sup_{||x||=1} [ Σ_i ( Σ_j A_ij x_j )² ]^{1/2}

Therefore,

σ̄(Δ+) = sup_{||x||=1} [ Σ_i ( Σ_j Δ+_ij x_j )² ]^{1/2};  σ̄(P) = sup_{||x||=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2}

Since Δ+_ij ≥ 0, the supremum will occur for some x such that x_j ≥ 0, ∀j; let x̄ be the value of x which maximizes σ̄(Δ+). Now,

x̄ ≥ 0, Δ+_ij ≥ 0, P_ij ≥ 0 and P_ij ≥ Δ+_ij ⟹ P_ij x̄_j ≥ Δ+_ij x̄_j, ∀(i, j)

Therefore:

σ̄(Δ+) = [ Σ_i ( Σ_j Δ+_ij x̄_j )² ]^{1/2} ≤ [ Σ_i ( Σ_j P_ij x̄_j )² ]^{1/2} ≤ sup_{||x||=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2} = σ̄(P)  ■

This proof is an alternative to the original proof [33]. It is known that, ∀Δ ∈ D_s, σ̄(Δ) ≤ σ̄(Δ+). Therefore, using the result of the lemma, for any perturbation in the element-by-element-bounded class one has that σ̄(Δ) ≤ σ̄(Δ+) ≤ σ̄(P).
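The chain σ̄(Δ) ≤ σ̄(Δ+) ≤ σ̄(P) can be observed numerically by sampling a random member of the element-by-element-bounded class (the bound matrix P below is illustrative; a zero entry marks an exactly known element):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.5, 1.0, 0.2],
              [0.3, 0.0, 0.8]])   # element-wise magnitude bounds

# Random member of D_s: magnitudes within P, phases unconstrained
mags = rng.uniform(0.0, 1.0, P.shape) * P
phases = rng.uniform(0.0, 2 * np.pi, P.shape)
Delta = mags * np.exp(1j * phases)

Delta_plus = np.abs(Delta)                   # matrix of element magnitudes
sv = lambda M: np.linalg.svd(M, compute_uv=False)[0]

chain = (sv(Delta), sv(Delta_plus), sv(P))   # should be nondecreasing
```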
Uncertainty representation in interconnected systems
When uncertain systems are connected together, the resulting larger system has scattered simultaneous perturbations. Although the individual perturbations may be unstructured, the perturbation in the overall system presents a structure, because the relative positions of the system components are known.
The system represented in Figure 2-3 illustrates such a case; although it has only two perturbations, the following discussion applies in general.
Figure 2-3. Uncertain unity feedback system
Additive unstructured representation. A possible approach to the derivation of an uncertainty representation for this system is to obtain the perturbation of the compensated open-loop transfer matrix in terms of Δ_I and Δ_O. The perturbed open-loop compensated matrix is:

Q_p = (I_p + Δ_O)G_o(I_m + Δ_I)K = G_oK + (Δ_O G_oK + Δ_O G_o Δ_I K + G_o Δ_I K)

Q_p ≝ Q_o + Δ_A    (2.19)
Therefore, the uncertainty can be written as an additive perturbation to the open-loop transfer matrix. This approach, however, is inadequate for two reasons.

The first reason is that, in order to render this formulation useful, it is necessary to compute or estimate a norm bound for the perturbation Δ_A. Although this can possibly be done for simple systems, it might become very cumbersome in the case of complex systems. The second and most important reason is that the additive unstructured representation does not carry information about the structure of the perturbation in the interconnected system.
Additive blockdiagonal representation. An alternative approach, which takes into account the structure of uncertainty, is the blockdiagonal representation.
It derives from the technique introduced by Safonov and Athans [48] for dealing with systems involving simultaneous perturbations in the context of the LQG regulator problem, that is, in time-domain analysis. The essence of the technique is to rearrange the system in such a way that the perturbations are isolated in a block-diagonal matrix.
The technique was explored by Safonov [46] in the derivation of 'conic sector conditions' for stability of MIMO systems, and by Doyle [12] in the derivation of necessary and sufficient conditions for stability under structured perturbations.
A diagonal representation of simultaneous perturbations can be obtained for any system, regardless of the dimensionality of each particular perturbation. Both parameter-dependent additive perturbations and actuator and/or measurement uncertainties, represented respectively as input and output perturbations, can be handled [39]. Let us consider its application to the system in Figure 2-3.
The loops involving the perturbations Al and AO can be regarded as additional system loops, through which the nominal system and the perturbations exchange signals. The
nominal feedback loop provides a signal to the ith perturbation through the output y_Δi, and receives a signal through the input u_Δi. The perturbations may be isolated in a block-diagonal structure through the following simple procedure:
Procedure 2.1. Diagonalization of uncertainty in frequency-domain systems:

1. Suppose the additional system loops are open, as in Figure 2-4 (a);

2. Compute the transfer function from each system input to each system output. Inputs and outputs now include the nominal input vector r and the nominal output vector y, as well as the perturbation outputs u_Δi and perturbation inputs y_Δi;

3. Arrange the transfer functions in matrix form. This step will generate the representation in Figure 2-4 (b), which is referred to as the 'M − Δ' form of the perturbed system.
Figure 2-4. Block diagonal representation: (a) open perturbation loops; (b) the M − Δ form
The perturbation in Figure 2-4 (b) is Δ = diag(Δ_I, Δ_O), therefore a block-diagonal structure; y_Δ and u_Δ are vectors containing the uncertainty inputs and outputs, respectively. The transfer matrix M(s) is called the nominal interconnection structure. The (1, 1)-submatrix relates the collective output of the uncertainties to their collective inputs, while the (2, 2)-submatrix is the nominal transfer matrix from r to y. For the system in Figure 2-3, M11 is given by:

[ y_Δ1 ]   [ (I + KG_o)^{-1}KG_o   (I + KG_o)^{-1}K    ] [ u_Δ1 ]
[ y_Δ2 ] = [ (I + G_oK)^{-1}G_o    (I + G_oK)^{-1}G_oK ] [ u_Δ2 ]

where u_Δ1 = Δ_I y_Δ1 and u_Δ2 = Δ_O y_Δ2.
Note that the dimension of the square submatrix M11 depends on the number of simultaneous perturbations. Therefore, even a SISO system subjected to simultaneous perturbations is characterized by a MIMO nominal interconnection structure.
Partitioning the interconnection structure according to the dimensions of inputs and outputs, the system can be represented as:

[ y_Δ ]   [ M11  M12 ] [ u_Δ ]
[  y  ] = [ M21  M22 ] [  r  ]    (2.20)

From the partition and Figure 2-4 (b), the following relations are obtained:

u_Δ = Δ y_Δ;  y_Δ = M11 u_Δ + M12 r;  y = M22 r + M21 u_Δ
Manipulating these equations, one obtains:

y = [M22 + M21 Δ(I − M11 Δ)^{-1} M12] r    (2.21)

Thus, the transfer matrix from r to y is given by an upper linear fractional transformation of the uncertainty, namely:

F_u(M, Δ) ≝ M22 + M21 Δ(I − M11 Δ)^{-1} M12    (2.22)

A block diagram representation of the LFT is shown in Figure 2-5 below. The expression Δ(I − M11 Δ)^{-1} represents a feedback loop, with Δ in the direct path and M11 in the feedback path. If Δ = 0, then F_u(M, Δ) simplifies to the nominal transfer matrix from r to y, namely M22 = (I + G_oK)^{-1}G_oK.
Figure 2-5. Block diagram representation of F_u(M, Δ)
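A direct implementation of (2.22) is a few lines of NumPy; the partition sizes below are arbitrary, and the Δ = 0 case collapses to the nominal map M22 as stated above:

```python
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    """F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12, per (2.22)."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.inv(np.eye(n) - M11 @ Delta) @ M12

rng = np.random.default_rng(2)
M11, M12 = rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
M21, M22 = rng.standard_normal((3, 2)), rng.standard_normal((3, 3))

# With Delta = 0 the LFT collapses to the nominal map M22
T_nom = upper_lft(M11, M12, M21, M22, np.zeros((2, 2)))

# A small block-diagonal perturbation Delta = diag(d1, d2)
Delta = np.diag([0.1, -0.2])
T_pert = upper_lft(M11, M12, M21, M22, Delta)
```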
The general case of block diagonal representation. The technique applied to the simple example above extends to systems having a larger set of localized perturbations.
Uncertainties originating from unmodeled dynamics assume the form of norm-bounded, full complex blocks of different dimensions. On the other hand, uncertainty coming from parametric variations assumes the form of real perturbations, which can be repeated. Additionally, fictitious repeated complex scalar perturbations can be used to reformulate a robust performance problem as a robust stability problem [15].
Therefore, in the most general case, the final block diagonal structure will show (possibly repeated) real scalars, (possibly repeated) complex scalars and full complex blocks of different dimensions.
To account for the correct dimensionality of blocks in the diagonal formulation, a block structure of indices is defined [17]. Assume that M ∈ C^{m×m}, and consider the triple (m_r, m_c, m_C) of nonnegative integers such that m_r + m_c + m_C = r ≤ m, and define the block structure K associated with M by:

K(m_r, m_c, m_C) = (k_1, ..., k_{m_r}, k_{m_r+1}, ..., k_{m_r+m_c}, k_{m_r+m_c+1}, ..., k_{m_r+m_c+m_C})    (2.23)

where, for compatibility of dimensions, Σ_{i=1}^{r} k_i = m. Given K, a family of associated m × m block diagonal perturbations is defined by:

X_K = {Δ = bl diag(δ_1^r I_{k_1}, ..., δ_{m_r}^r I_{k_{m_r}}, δ_1^c I_{k_{m_r+1}}, ..., δ_{m_c}^c I_{k_{m_r+m_c}}, Δ_1, ..., Δ_{m_C})}    (2.24)

where δ_i^r ∈ R, δ_i^c ∈ C and Δ_j ∈ C^{k_{m_r+m_c+j} × k_{m_r+m_c+j}}. As required by the dimension of M, X_K ⊂ C^{m×m}. Each δ_i^r I_{k_i} represents a repeated real scalar, each δ_i^c I_{k_i} represents a repeated complex scalar, and each Δ_j represents a full complex block.

The general form can be particularized through a convenient choice of indices. For example, if there is no parametric uncertainty, m_r = 0. In the case of purely real perturbations, the adequate setting is m_c = 0 and m_C = 0.

A class of allowable perturbations, having block sizes determined by the block structure, is defined from (2.24) by specifying an upper bound on the norm:

X_K(δ) = {Δ : Δ ∈ X_K, σ̄(Δ) ≤ δ ∈ R+}    (2.25)
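Assembling a member of X_K per (2.24) amounts to building a block-diagonal matrix; the sketch below (NumPy; the block structure and numerical values are our choices) also checks membership in X_K(δ) of (2.25) through the maximum singular-value:

```python
import numpy as np

def blk_diag(*blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n), dtype=complex)
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# Illustrative structure: one real scalar repeated twice, one complex
# scalar, and one full 2x2 complex block; total dimension m = 5
delta_r = -0.3                                   # repeated real scalar
delta_c = 0.2 + 0.1j                             # complex scalar
full = np.array([[0.1, 0.05j], [0.0, -0.2]])     # full complex block

Delta = blk_diag(delta_r * np.eye(2), delta_c * np.eye(1), full)

# Membership in X_K(0.5): the norm bound of (2.25)
sigma_max = np.linalg.svd(Delta, compute_uv=False)[0]
in_class = sigma_max <= 0.5
```

Note that the maximum singular-value of a block-diagonal matrix is the largest of the block maxima, here |δ^r| = 0.3.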
2.2.3 Representation of Uncertainty in State Space Models

Let us now assume that the nominal plant is described by a state space model. The dynamics of the physical process is captured by the matrix A. Since A has fixed dimension in the state space model, the dynamical order of the process is well determined. Thus, uncertainty caused by neglected high order dynamics cannot be taken into account in the usual state space model.
On the other hand, the state space model is well suited to the representation of parametric uncertainty. Variations in system parameters are represented as perturbations in the
elements of the real matrices that define the model. The perturbations can be collected in the error matrix E, so that the perturbed matrix is represented by

M_p = M + E    (2.26)

where M can be either one of the real matrices in the state space representation. Particular forms of E are discussed below.

Unstructured uncertainty
As in frequency-domain models, the class of unstructured errors is characterized by a norm upper bound:

E_u = {E : || E || ≤ ε}    (2.27)

and the perturbed matrix is

M_p = (M + E), E ∈ E_u    (2.28)

This representation is adequate when several indistinguishable uncertainties exist in the system, which otherwise has a well defined order. However, since it is in general possible to identify at least some of the uncertainty sources, more realistic representations are needed in order to account for the structure.

Structured uncertainty
Independent variations of elements. This representation is used when the elements of a real matrix change inside known real intervals, independently of each other. The admissible class of uncertainty can be defined by placing an upper bound on the largest interval:

E_SI = {E : |E_ij| ≤ ε_ij; max_{i,j} ε_ij = ε}    (2.29)

The perturbed matrix takes the form of an interval matrix:

M_p = (M + E), E ∈ E_SI    (2.30)

If only ε is known, this representation can be used with the error matrix element-wise bounded by the matrix P = εU_e, where U_e(i, j) = 1, i, j = 1, ..., n [58]. If some of the entries of M are exactly known, the corresponding entries of U_e are set to zero, thus accommodating the extra information on the error structure.
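Sampling the interval-matrix class (2.29)-(2.30) is straightforward; in the sketch below (the nominal matrix and bound ε are illustrative), the (2, 1) entry is assumed exactly known, so its bound in U_e is set to zero as described above:

```python
import numpy as np

rng = np.random.default_rng(3)
M = np.array([[-2.0, -1.0],
              [1.0, -1.0]])

# Uniform bound P = eps * U_e of (2.29); a zero entry marks an exactly
# known element of M
eps = 0.05
U_e = np.array([[1.0, 1.0],
                [0.0, 1.0]])
P = eps * U_e

# Sample one admissible error matrix and form the interval-matrix member (2.30)
E = rng.uniform(-1.0, 1.0, M.shape) * P
M_p = M + E
```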
Dependent variations of elements. This case differs from the previous one in that it admits correlated variations between entries of M. This situation actually arises in practical cases. For example, consider the case of an open-loop state space model in which the output matrix has some uncertain entries, due to variations in a physical parameter that affects the output gain. If an output feedback controller is used, the dynamical matrix of the closed-loop system is likely to have several uncertain entries. However, the variations in these entries are not free, since they depend on the same physical parameter.
A convenient representation for such cases is to obtain the error matrix in terms of the physical parameters. Suppose that an m-dimensional vector of parameters can be identified, and assume that the dependence of M on each parameter is linear. This assumption is not too restrictive, since it is possible to redefine nonlinear combinations of physical parameters such that the assumption is satisfied. The perturbation class can be characterized as:

E_SD = {E : E = Σ_{k=1}^{m} p_k E_k, |p_k| ≤ a_k, k = 1, ..., m}    (2.31)
Each E_k is a constant matrix which expresses the structural dependence of M on the parameter p_k. This representation has been widely used in stability analysis [4, 51, 61].
The perturbed matrix is represented by:

M_p = (M + E), E ∈ E_SD    (2.32)

Notice that M_p = M + Σ_{k=1}^{m} p_k E_k is (affinely) linear in the parameters.
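The affine dependence in (2.31)-(2.32) is easy to encode; the structural matrices E_k below are hypothetical placeholders, not taken from any particular plant:

```python
import numpy as np

M = np.array([[-2.0, -3.0],
              [1.0, -1.0]])

# Hypothetical structural matrices E_k of (2.31), one per parameter
E1 = np.array([[-1.0, -1.0], [0.0, 0.0]])
E2 = np.array([[0.0, -2.0], [0.0, -1.0]])
bounds = (0.2, 0.1)                  # a_k: each |p_k| <= a_k

def perturbed(p):
    """M_p = M + sum_k p_k E_k, affine in the parameter vector p, per (2.32)."""
    return M + p[0] * E1 + p[1] * E2

M_p = perturbed((0.1, -0.05))
```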
The following example illustrates the use of this representation of parametric uncertainty.
Example 2.1. Consider the circuit diagram represented in Figure 2-6.

Figure 2-6. Elementary electric circuit

Let the input be u(t) = v_i(t) and the output be y(t) = v_o(t). Then, one has:

ẋ1 = −(1/(R1 C)) x1 − (1/C) x2 + (1/(R1 C)) u
ẋ2 = (1/L) x1 − (R2/L) x2
y = [ 0  R2 ] [x1 x2]^T

Assume that R1, R2 are uncertain, and that the components are rated at L = 1 H, C = 1 F, R1o = 0.5 Ω, R2o = 1 Ω. The nominal matrices are:

A = [ −2  −1 ]      B = [ 2 ]      C = [ 0  1 ]
    [  1  −1 ]          [ 0 ]

Given that R1, R2 are uncertain, the terms they affect can be written as:
1/(R_1 C) = 1/((R_1o + δ(R_1)) C) = 2 + p_1

R_2/L = (R_2o + δ(R_2))/L = 1 + p_2
where δ(·) represents the unknown variation. Therefore, the perturbed open-loop model is given by:
ẋ = (A + E_A) x + (B + E_B) u,   y = (C + E_C) x

with

E_A = p_1 [-1  0; 0  0] + p_2 [0  0; 0  -1],   E_B = p_1 [1; 0],   E_C = p_2 [0  1]
Thus, uncertainties in the physical parameters R_1, R_2 are reflected by the state space model as uncertain input and output gains, plus uncertainties in the dynamic matrix A. Assuming that an output feedback controller K = 1 is used, one has A_c = (A + BKC), where

BKC = [0  -(2 + p_1 + 2p_2 + p_1 p_2); 0  0]
Defining p_3 ≜ p_1 p_2, the closed-loop perturbed matrix becomes:
[ẋ_1; ẋ_2] = ( [-2  -3; 1  -1] + p_1 [-1  -1; 0  0] + p_2 [0  -2; 0  -1] + p_3 [0  -1; 0  0] ) [x_1; x_2]
Now, let p ≜ [p_1  p_2  p_3]^T. The objective of stability analysis is to find the largest parameter vector p such that the perturbed system remains stable, and to characterize the
allowable intervals [-a_k, a_k]. Alternatively, assume that the parameter ranges are known. For example, assume that the variations in R_1, R_2 are within ±10% of the rated values. Then, the parameters are in the ranges:
p_1 ∈ [-0.202, 0.202];   p_2 ∈ [-0.100, 0.100];   p_3 ∈ [-0.020, 0.020]
In this case, the objective of stability analysis is to check whether or not the system remains stable for all possible combinations of parameters in the hypercube defined by these ranges.
□
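As a numerical complement to Example 2.1, the corners of the parameter hypercube can be sampled and tested for stability. This is a minimal sketch in Python/NumPy; the matrices below follow the affine closed-loop form reconstructed in this example, and vertex stability is only a spot check, since the eigenvalues are not affine in p and vertex results alone do not prove stability over the whole box.

```python
import itertools
import numpy as np

# Nominal closed-loop matrix and structure matrices from Example 2.1
# (this sketch's reading of the example; signs follow the closed-loop
# matrix displayed above).
A0 = np.array([[-2.0, -3.0], [1.0, -1.0]])
E1 = np.array([[-1.0, -1.0], [0.0, 0.0]])   # multiplies p1
E2 = np.array([[0.0, -2.0], [0.0, -1.0]])   # multiplies p2
E3 = np.array([[0.0, -1.0], [0.0, 0.0]])    # multiplies p3 = p1*p2

bounds = [0.202, 0.100, 0.020]               # |p_k| <= a_k

def is_hurwitz(A):
    """True if all eigenvalues of A have strictly negative real parts."""
    return np.max(np.linalg.eigvals(A).real) < 0.0

# Sample the 2^3 vertices of the parameter hypercube.
results = []
for signs in itertools.product([-1.0, 1.0], repeat=3):
    p = [s * a for s, a in zip(signs, bounds)]
    Ap = A0 + p[0] * E1 + p[1] * E2 + p[2] * E3
    results.append(is_hurwitz(Ap))

print(all(results))
```

For these ranges every vertex is Hurwitz; the chapters that follow develop conditions that certify the whole hypercube rather than a finite sample.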
2.3 Conclusions
This chapter puts together basic concepts concerning system models and uncertainty representation, which will be relevant for subsequent development.
Since the objective of this dissertation is the study of robust stability under parametric uncertainty, the state space model will have an important role in following chapters. Also very useful will be the uncertainty description given by (2.31), which accommodates practical cases of parametric uncertainty, as demonstrated by Example 2.1.
In Chapter 5, the problem will be given a frequencydomain treatment, and the diagonalization of uncertainty will be employed. Although the diagonalization technique has been used for some years, no explicit derivation has been found. For this reason, indications found in the literature were put together in Procedure 2.1, and the steps leading to the linear fractional transformation (2.22) were completely worked out.
The review of fundamental concepts will continue in the next chapter with a summary of stability conditions.
CHAPTER 3
STABILITY ANALYSIS OF LINEAR SYSTEMS
3.1 Introduction
Stability of control systems is a fundamental requirement, which must be ensured prior to any other. This chapter presents a review of stability conditions and stability analysis techniques applicable to linear systems.
Both state space and transfer matrix models are considered; in each case, nominal stability and robust stability under additive perturbations are addressed.
3.2 Stability of State Space Systems
3.2.1 Nominal Stability Condition

Let us consider the linear, time-invariant system

ẋ(t) = A x(t)   (3.1)

This model can be interpreted as the representation of either an unforced system or a system under a fixed, known input [52]. The following theorem gives a necessary and sufficient condition for asymptotic stability:
Theorem 3.1 [52]. The equilibrium point 0 of (3.1) is asymptotically stable if and only if all the characteristic values of A have strictly negative real parts, that is,

lim_{t→∞} x(t) = 0  ⟺  Re[λ_i(A)] < 0, ∀i   (3.2)
An asymptotically stable linear system is globally asymptotically stable, because x(t) → 0 independently of the initial state x(t_0).
Equation (3.2) states that asymptotic stability depends on the eigenvalues of A. However, it is not necessary to compute the eigenvalues in order to check stability. The Routh-Hurwitz criterion gives a necessary and sufficient condition for stability based on the coefficients of the characteristic polynomial. Furthermore, the Lyapunov direct method permits sufficient conditions for stability to be derived from a matrix equation involving A.
Nominal stability assessment through the Lyapunov Direct Method. The stability properties of the equilibrium point x(t) = 0 of the system ẋ(t) = A x(t) can be determined through the Lyapunov direct method, which does not require the computation of the characteristic polynomial.
According to Lyapunov theory, a sufficient condition for global asymptotic stability of the equilibrium point x(t) = 0 is the existence of a scalar positive definite function of x, say V(x), having a negative definite time-derivative V̇(x) [52]. For LTI systems, the natural choice of a Lyapunov function candidate is the quadratic function

V(x) = x^T P x   (3.3)

where P is a real symmetric matrix. As long as P is positive definite, the scalar function V(x) is positive definite. The time derivative of the quadratic function is given by:
V̇(x) = ẋ^T P x + x^T P ẋ = x^T (A^T P + P A) x ≜ -x^T Q x   (3.4)

from which the matrix Lyapunov equation, relating the matrices A, P and Q, is obtained:

(A^T P + P A) = -Q   (3.5)

Global asymptotic stability of the equilibrium point x ≡ 0 of ẋ = A x(t) is ensured if, for a given A, it is possible to find symmetric positive definite matrices P and Q satisfying
equation (3.5). This is so because, if such P and Q exist, V(x) is a scalar positive definite function whose time-derivative is negative definite. On the other hand, if there exists a positive definite Q such that the corresponding P is negative definite, the equilibrium point is unstable.
The following theorem formalizes the relationship between the asymptotic stability of A and the matrix Lyapunov equation.
Theorem 3.2 [52]. The following statements are equivalent, ∀A ∈ R^{n×n}:

1. All eigenvalues of A have strictly negative real parts;

2. For every positive definite Q ∈ R^{n×n}, equation (3.5) has a unique, positive definite solution for P;

3. There exists some positive definite matrix Q ∈ R^{n×n} such that equation (3.5) has a unique, positive definite solution for P.
This theorem provides a computational device for assessing stability without computing the eigenvalues of A. Choose any positive definite Q and solve (3.5) for P: if the solution exists and is unique and positive definite, then A is asymptotically stable. If there is no solution, or if the solution is either not unique or not positive definite, then A is not asymptotically stable.
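The computational test of Theorem 3.2 can be sketched with SciPy's Lyapunov solver. A minimal illustration, choosing Q = I (any positive definite Q works); the matrix values below are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A, Q=None):
    """Assess asymptotic stability of x' = Ax via Theorem 3.2:
    pick Q > 0, solve A^T P + P A = -Q, and test whether P > 0."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    # scipy solves a X + X a^H = q; with a = A^T this is A^T P + P A = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)
    P = 0.5 * (P + P.T)                     # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(P) > 0))

A_stable = np.array([[-2.0, -3.0], [1.0, -1.0]])
A_unstable = np.array([[0.5, 1.0], [0.0, -1.0]])
print(lyapunov_stable(A_stable), lyapunov_stable(A_unstable))
```

For the unstable matrix the Lyapunov equation still has a unique solution (no eigenvalue pair of A sums to zero), but that solution fails the positive definiteness test, as the theorem requires.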
3.2.2 Assessment of Robust Stability
Robust stability assessment through the Lyapunov Direct Method
Let us consider the perturbed state equation

ẋ(t) = A_p x(t) = (A + E) x(t)   (3.6)
where the nominal matrix A is asymptotically stable. Since A is stable, the matrix Lyapunov equation for the nominal system, namely A^T P + P A = -Q, has a unique, positive definite solution P for every positive definite matrix Q; let P_o be the solution corresponding to some positive definite Q_o.
Now, let V_p(x) = x^T P x, where P is symmetric and positive definite, be a Lyapunov function candidate for the perturbed system (3.6). The time derivative of V_p(x) is:
V̇_p(x) = ẋ^T P x + x^T P ẋ = [(A + E)x]^T P x + x^T P [(A + E)x] = x^T [(A^T P + P A) + (E^T P + P E)] x
Let us choose P = P_o, the positive definite matrix defined above. Then, the last equation becomes
V̇_p(x) = -x^T [Q_o - (E^T P_o + P_o E)] x ≜ -x^T Q_p x   (3.7)

According to Theorem 3.2, since P_o is positive definite, A_p is asymptotically stable if Q_p is positive definite. Therefore, the robust stability analysis problem becomes that of finding conditions on E which ensure the positive definiteness of Q_p. Certainly, the conditions that can be derived depend on the description of the uncertainty E.
Although stability conditions obtained from the Lyapunov direct method are only sufficient, a positive feature of the method is that it can be applied with virtually all uncertainty descriptions, including time-varying and nonlinear uncertainties.
In Chapter 4, a detailed treatment of stability conditions according to the Lyapunov direct method will be given, for the case of E belonging to the class E_SD defined by (2.31).
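A minimal sketch of the test suggested by (3.7): solve the nominal Lyapunov equation for P_o and check the positive definiteness of Q_p for a candidate error matrix E. The numerical values are illustrative, and the test is only sufficient: a failed check does not prove instability.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, -3.0], [1.0, -1.0]])   # stable nominal matrix
Q0 = np.eye(2)                               # chosen positive definite Q_o
P0 = solve_continuous_lyapunov(A.T, -Q0)     # A^T P0 + P0 A = -Q0
P0 = 0.5 * (P0 + P0.T)

def perturbation_tolerated(E):
    """Sufficient test from (3.7): (A + E) is stable if
    Q_p = Q0 - (E^T P0 + P0 E) is positive definite."""
    Qp = Q0 - (E.T @ P0 + P0 @ E)
    return bool(np.all(np.linalg.eigvalsh(Qp) > 0))

E_small = 0.05 * np.eye(2)
print(perturbation_tolerated(E_small))
```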
Other results
A Perron radius stability bound [44]. Sufficient conditions for (A + E) to be asymptotically stable are that A be stable and that (A + E) have no eigenvalues on the imaginary axis of the complex plane, for all E in an admissible class. It can be shown that (A + E) has no eigenvalue on the imaginary axis if there exists a nonsingular matrix R ∈ R^{n×n} such that

|| R E (jωI_n - A)^{-1} R^{-1} || < 1,  ∀ω ≥ 0, ∀E   (3.8)

Assume that the uncertainty can be decomposed as E = S_1 Δ_E S_2, where S_1 ∈ R^{n×p} and S_2 ∈ R^{q×n} are known constant matrices which account for the structure, and the matrix Δ_E ∈ R^{p×q}, p ≤ n, q ≤ n, contains the perturbation factors. Using condition (3.8), with the further assumption that |Δ_E(i,j)| ≤ ε_{ij} ε, ε_{ij} ≥ 0, ε > 0, where ε is unknown, the following sufficient robust stability condition can be obtained [44]:

ε < 1 / sup_{ω≥0} π[ |S_2 (jωI - A)^{-1} S_1| U_ε ]   (3.9)

where U_ε = [ε_{ij}], |·| denotes elementwise magnitude, and π(·) is the Perron eigenvalue.
The advantage of a condition based on the Perron eigenvalue is that it is easily computable; however, it can be too conservative. It will be shown in Chapter 5 that a less conservative robust stability condition can be obtained by explicitly using Perron scaling. The relevant concepts of Perron theory are reviewed in Section 3.4 ahead.
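Since the Perron eigenvalue of a nonnegative matrix is its spectral radius, the bound (3.9) is indeed easy to evaluate numerically. A sketch with hypothetical structure matrices S_1, S_2 and U_ε, approximating the supremum over a finite frequency grid:

```python
import numpy as np

A = np.array([[-2.0, -3.0], [1.0, -1.0]])    # stable nominal matrix
S1 = np.eye(2)                                # hypothetical structure matrices
S2 = np.eye(2)
U = np.ones((2, 2))                           # U_eps = [eps_ij], here all ones

def perron(M):
    """Perron eigenvalue of an elementwise nonnegative matrix:
    its spectral radius, attained at a real nonnegative eigenvalue."""
    return float(np.max(np.abs(np.linalg.eigvals(M))))

omegas = np.linspace(0.0, 50.0, 5001)
worst = max(
    perron(np.abs(S2 @ np.linalg.inv(1j * w * np.eye(2) - A) @ S1) @ U)
    for w in omegas
)
eps_max = 1.0 / worst                         # bound (3.9) on a finite grid
print(eps_max > 0)
```

A grid only approximates the supremum, so in practice the grid must be refined near the peaks of the frequency response.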
Stability radius condition [24]. The objective of the stability radius method is to compute the distance from the stable matrix A to the set of unstable matrices of the same dimensions. The distance is measured by the smallest norm of a destabilizing matrix, namely the smallest norm of E such that (A + E) has a purely imaginary eigenvalue.
Considering the decomposition

ẋ(t) = (A + E) x(t) = (A + B D C) x(t)   (3.10)

where A ∈ R^{n×n} is stable, B ∈ R^{n×m} and C ∈ R^{p×n} are known constant matrices which define the uncertainty structure, and D ∈ R^{m×p} is a matrix of unknown factors, the stability radius of A is:

r(A; B, C) = inf_D { || D || : (A + B D C) unstable }   (3.11)
An analytical expression for the real stability radius has been obtained [24], but the computation is too complex, even for unstructured perturbations. In the case of structured perturbations of rank 1, namely when either only one row or only one column of A is perturbed by each factor, the computational burden of the analytical expression is considerably simplified.
Letting G(s) ≜ C(sI - A)^{-1}B, defining G_R(jω) and G_I(jω) as, respectively, the real and imaginary parts of G(jω), and letting Ω and Ω̄ be, respectively, the set of frequency points for which G_I(jω) = 0 and its complement in R, the real stability radius for the case of rank-1 perturbations is given by:

r_R(A; B, C) = min{ [ sup_{ω∈Ω} ||G(jω)|| ]^{-1} ,  [ sup_{ω∈Ω̄} ( ||G_R(jω)||² - (G_R^T G_I)²/||G_I(jω)||² )^{1/2} ]^{-1} }   (3.12)
Therefore, in the case of rank-one perturbations the computation of the real stability radius involves a one-dimensional optimization problem. If only one entry of A is under perturbation, then D and the associated G(s) become scalars; the second term on the right side of (3.12) becomes infinite, and the real stability radius is easily computable.
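For the single-entry case just described, the computation reduces to a scalar sweep over the set Ω. A sketch under assumed data (the matrix and the perturbed entry are illustrative); on a finite grid, Ω is approximated by the points where Im G(jω) vanishes numerically:

```python
import numpy as np

# Single perturbed entry: A(1,2) varies, so B = e1, C = e2^T and
# G(s) = [(sI - A)^{-1}]_{2,1} is scalar (the data are hypothetical).
A = np.array([[-2.0, -3.0], [1.0, -1.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[0.0, 1.0]])

def G(w):
    return (c @ np.linalg.inv(1j * w * np.eye(2) - A) @ b)[0, 0]

# Omega = frequencies where Im G(jw) = 0; keep the near-real grid points.
omegas = np.linspace(0.0, 50.0, 5001)
vals = np.array([G(w) for w in omegas])
real_pts = np.abs(vals[np.abs(vals.imag) < 1e-9])
r_real = 1.0 / real_pts.max()                 # (3.12), second term = infinity
print(r_real)
```

For this data G(s) = 1/(s² + 3s + 5), so Ω = {0}, G(0) = 1/5 and the radius is 5, which can be checked directly: adding 5 to the (1,2) entry of A makes det(A + E) = 0.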
3.3 Stability of Transfer Matrix Models
3.3.1 Nominal Stability Analysis
Input-output stability
A linear system is Bounded-Input, Bounded-Output (BIBO) stable if an input bounded in magnitude always produces a bounded output. Let H(s) be a matrix whose elements are proper rational functions of s. H(s) can be written as

H(s) = N(s)/d(s) = N(s) / ∏_{i=1}^{d_d} (s - p_i)   (3.13)

where d_d is the degree of the denominator polynomial d(s), which is given by the least common denominator of all (non-identically zero) minors of H(s) [39]. The transfer matrix, which was assumed proper, is stable if all poles p_i are in the open LHP. If p_j = 0 for some j, then marginal stability requires that the multiplicity of p_j = 0 be 1.
Under the assumption that each element is a proper rational function of s, the transfer matrix possesses a state space realization [A, B, C, D], related to the transfer matrix by H(s) = C(sI - A)^{-1}B + D. Although the transfer matrix representation of a system is unique, the state space realization is not.
This transfer matrix can be rewritten as

C(sI - A)^{-1}B + D = Z(s)/det(sI - A) = Z(s) / ∏_{i=1}^{n} [s - λ_i(A)]   (3.14)
If there are no cancellations of terms of the form [s - λ_i(A)] between the denominator and all the elements of the numerator in (3.14), then (3.13) and (3.14) are equivalent; the pole polynomial d(s) of the transfer matrix and the characteristic polynomial det(sI - A) are the same. In this case, input-output stability is equivalent to the asymptotic stability of the dynamic matrix A.
A necessary and sufficient condition for non-cancellation of system poles in (3.14) is that the state space realization [A, B, C, D] be a minimal realization of the dynamic system, that is, both state controllable and observable.
Internal stability of closed-loop systems
Asymptotic stability of closed-loop systems, like the feedback system shown in Figure 2-1(a), is equivalent to the internal stability of the loop [53]. A closed-loop LTI system is internally stable if any two points of the loop are connected through an exponentially stable transfer matrix [38].
Let K(s) in Figure 2-1(a) be a stabilizing compensator for G_o(s), and let r_d designate an external signal placed at the plant input. The vector [y, u]^T, formed by the outputs of plant and compensator, is related to the vector [r, r_d]^T of their inputs by:
[y; u] = H(G_o, K) [r; r_d],   H(G_o, K) = [ (I + G_oK)^{-1}G_oK   (I + G_oK)^{-1}G_o ; (I + KG_o)^{-1}K   -(I + KG_o)^{-1}KG_o ]   (3.15)
Therefore, internal stability of the unity feedback system with cascade compensation is equivalent to the stability of the four transfer matrices in H(G_o, K). The characteristic polynomial of each of these matrices must be checked in order to assess the internal stability of the closed-loop system.
Also, it can be shown that external stability and internal stability of the closed-loop system are equivalent if the state space representations of the plant and controller are stabilizable and detectable [53].
Note that, if the compensator K is already known to be stable, then the stability of (I + G_oK)^{-1}G_o is necessary and sufficient for the stability of H(G_o, K).
Spectral radius condition for stability
The term (I + G_oK)^{-1}G_o represents the transfer matrix of a feedback loop, with G_o in the forward path and K in the feedback path. This loop can be represented in state space form by F_c = [A_c, B_c, C_c, D_c]. Stability of the feedback loop depends on the pole polynomial of its transfer matrix; therefore, it depends on the characteristic polynomial of A_c. The following result relates the characteristic polynomial of A_c to the characteristic polynomials of A_G and A_K.
Assume that G_o(s) and K(s) are proper transfer functions having minimal realizations [A_G, B_G, C_G, D_G] and [A_K, B_K, C_K, D_K], respectively, and define the return-difference operator as

F(s) = [I + K(s)G_o(s)]   (3.16)

further assuming that

det[F(∞)] = det[I + K(∞)G_o(∞)] = det[I + D_K D_G] ≠ 0

Let φ_c be the closed-loop characteristic polynomial. Then [26]:

φ_c = det(sI - A_c) = det(sI - A_G) det(sI - A_K) det[F(s)]/det[F(∞)]
    = ∏_{i=1}^{n_g} [s - λ_i(A_G)] ∏_{i=1}^{n_k} [s - λ_i(A_K)] det[F(s)]/det[F(∞)]   (3.17)
The important fact revealed by this equation is that, when A_G and A_K are Hurwitz, the matrix A_c is Hurwitz if and only if all the zeros of det[I + K(s)G_o(s)] have negative real parts. It is important to notice [11] that, if cancellations of terms [s - λ_i(·)] occur between the left and the right side of equation (3.17), the zeros of det[I + K(s)G_o(s)] are a proper subset of the closed-loop eigenvalues.
Assuming that G_o(s) and K(s) are stable, equation (3.17) shows that a necessary and sufficient condition for stability of the feedback loop is that, for all s such that Re(s) ≥ 0,

det[I + KG_o(s)] ≠ 0
⟺ ∏_i λ_i[I + KG_o(s)] ≠ 0  ⟺  λ_i[I + KG_o(s)] ≠ 0, ∀i
⟸ λ_i[KG_o(s)] ≠ -1, ∀i
⟸ ρ[KG_o(s)] < 1
⟸ σ_max[KG_o(s)] < 1   (3.18)
Thus, small loop gain is a sufficient condition for stability of a feedback loop.
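The sufficient small-gain test at the end of chain (3.18) can be sketched by sweeping σ_max[KG_o(jω)] over a frequency grid. The plant and gain below are hypothetical, and a finite grid only approximates the supremum:

```python
import numpy as np

# Hypothetical stable plant and constant-gain controller.
A = np.array([[-2.0, -3.0], [1.0, -1.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[0.0, 1.0]])
K = 0.3                                       # scalar gain, an assumption

def loop_gain(w):
    """KG_o(jw) for the state space model (A, B, C)."""
    return K * (C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B)

omegas = np.linspace(0.0, 100.0, 10001)
sup_sigma = max(np.linalg.svd(loop_gain(w), compute_uv=False)[0]
                for w in omegas)

# Small loop gain is sufficient (not necessary) for closed-loop stability.
print(sup_sigma < 1.0)
```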
Internal stability of a feedback loop can alternatively be checked through the Nyquist criterion, which is reviewed next.

Nyquist stability criterion
The Nyquist stability test permits the assessment of closed-loop stability without requiring the solution of the closed-loop characteristic polynomial. Due to its graphical character, it is very appealing in computer-aided analysis and design environments.
Let us initially discuss the case of a scalar system. Suppose that plant and controller in Figure 2-1(a) are scalar transfer functions. Let q_o(s) = g_o(s)k(s) = n(s)/d(s), and let f(s) represent the return difference transfer function. Then,

f(s) = 1 + q_o(s) = [n(s) + d(s)]/d(s)   (3.19)
It can be easily verified that

f(s) = φ_c(s)/φ_o(s)   (3.20)

where φ_o(s), φ_c(s) designate respectively the open-loop and the closed-loop characteristic polynomials, and let p_o, p_c be their respective numbers of unstable roots. Closed-loop stability analysis requires the determination of the number p_c; for closed-loop stability, p_c must be zero.
The Nyquist criterion obtains p_c from the knowledge of p_o and the application of the principle of the argument to equation (3.20). Let n_o be the number of clockwise encirclements of the origin by the map of the standard Nyquist contour under f(s). Equivalently, n_o corresponds to the number of clockwise encirclements of the critical point (-1, j0) by the map of the contour under q_o(s). Since n_o corresponds to the difference between the numbers of roots of the numerator and denominator of f(s) inside the contour, which are respectively p_c and p_o, the following relationship is satisfied:

p_c = n_o + p_o   (3.21)

The closed-loop system is stable if and only if p_c = 0, or, equivalently, if and only if n_o = -p_o. That is, if and only if the map of the Nyquist contour by q_o(s) encircles the critical point, in the anticlockwise direction, a number of times equal to the number of unstable roots of φ_o.
Now, consider the case in which G_o(s), K(s) in Figure 2-1(a) are MIMO transfer matrices. Let φ_o, φ_c be respectively the open-loop and the closed-loop characteristic polynomials, and consider the return difference operator defined by equation (3.16). Defining k_∞ ≜ det[I + K(∞)G_o(∞)], equation (3.17) shows that

det[F(s)] = k_∞ φ_c/φ_o   (3.22)

The Nyquist criterion has been generalized and extended to the case of MIMO systems [37]. In that extension, the fundamental data are the number of unstable roots of φ_o
and the number of encirclements of the origin by the characteristic loci of F(s), which is the same as the number of encirclements of the critical point by the characteristic loci of Q_o. The characteristic loci of F(s) are the maps of the Nyquist contour under the characteristic values of F(s).
Let f_i(s), q_i(s) be the characteristic values of F(s), Q_o(s), respectively, and recall that Q_o ∈ C^{p×p}. The characteristic values q_i(s) are the solutions of the characteristic equation V(q, s) ≜ det[q(s)I - Q_o(s)] = 0.
In general, the characteristic equation can be factored as a product of irreducible polynomials, V(q, s) = V_1(q, s) ··· V_l(q, s). Each polynomial V_i is a polynomial of order n_i in q, with coefficients a_{ij}(s), j = 1,...,n_i, such that:

V_i(q, s) = q^{n_i}(s) + a_{i1}(s) q^{n_i - 1}(s) + ... + a_{in_i}(s) = 0   (3.23)

where the condition Σ_{i=1}^{l} n_i = p is satisfied.
The algebraic functions q_i, i = 1,...,l, defined through equation (3.23), are the characteristic functions of Q_o(s). Each algebraic function q_i(s) is defined on a Riemann surface R_i, constituted by n_i copies of the complex plane, joined together in such a way that q_i(s) is single-valued on R_i. Except at the branch points (through which the sheets of the Riemann surface are pieced together), q_i(s) is constituted by n_i analytic, distinct branches. The p characteristic values of Q_o(s) are obtained as the set of branches of the characteristic functions q_i(s), i = 1,...,l.
The generalized Nyquist criterion arises from the application of the generalized principle of the argument [37] to equation (3.22), which can be rewritten as:

∏_{i=1}^{p} f_i(s) = ∏_{i=1}^{p} [1 + q_i(s)] = k_∞ φ_c/φ_o   (3.24)
The map of the standard Nyquist contour under the characteristic values of Q_o(s) generates a set of closed curves, which constitute the characteristic loci of Q_o(s). The number of encirclements of the critical point by the characteristic loci of Q_o(s) and the number of unstable roots of φ_o(s) are used to assess closed-loop stability. The generalized criterion is formally stated as follows:
Generalized Nyquist criterion. Let n_o be the number of clockwise encirclements of the critical point by the characteristic loci of the open-loop transfer matrix Q_o(s), let p_c and p_o be the numbers of unstable roots of φ_c and φ_o, respectively, and assume that there are no hidden open-loop unstable modes. Then, the closed-loop system under unity feedback is stable if and only if no eigenlocus passes through the critical point -1 + j0 and

p_c = n_o + p_o   (3.25)

equals zero. Since p_c = 0 is the condition for closed-loop stability, it is required that n_o = -p_o, which means that the characteristic loci of Q_o(s) must encircle the critical point p_o times in the anticlockwise direction.
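Rather than tracking the individual eigenloci, the net encirclement count can be sketched through the winding of det F(jω) about the origin, which by (3.24) aggregates the loci of all characteristic values. A sketch for a stable open loop (p_o = 0), where stability requires zero net winding; the system data are hypothetical:

```python
import numpy as np

# Hypothetical stable open loop Q_o(s) = K G_o(s), so p_o = 0.
A = np.array([[-2.0, -3.0], [1.0, -1.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[0.0, 1.0]])
K = 0.3

def detF(w):
    """det[I + Q_o(jw)] for the loop above."""
    Q = K * (C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B)
    return np.linalg.det(np.eye(1) + Q)

# Traverse the imaginary axis; F(j*inf) -> I, so the large-|w| arc of the
# Nyquist contour contributes no phase.
omegas = np.linspace(-200.0, 200.0, 40001)
phase = np.unwrap(np.angle([detF(w) for w in omegas]))
winding = round((phase[-1] - phase[0]) / (2 * np.pi))

# p_c = n_o + p_o with p_o = 0: zero net winding <=> closed-loop stable.
print(winding)
```

Counting the winding of the determinant sidesteps the branch-tracking needed to follow each characteristic locus separately, at the cost of losing the locus-by-locus picture.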
3.3.2 Robust Stability Analysis of Transfer Matrices

Generalized Nyquist Criterion under uncertainty
In the presence of additive perturbations belonging to a given class, the open-loop matrix becomes Q_p = Q_o + Δ_A. Let us assume that the nominal system Q_o is stable under unity feedback, and that the perturbation Δ_A is such that Q_o and Q_p have the same number of Right Half Plane (RHP) poles.
The generalized Nyquist criterion states that the nominal system is stable under unity feedback if and only if the net number of anticlockwise encirclements of the critical point by the characteristic loci of Q_o(s) equals p_o, the number of RHP poles of Q_o(s). Therefore, the
assumption of nominal closedloop stability is equivalent to assuming the correct number of encirclements of the critical point by the nominal eigenloci.
Now, under the assumption that Q_o and Q_p have the same number of RHP poles, the perturbed closed-loop system remains stable as long as the net number of encirclements of the critical point does not change under perturbation. A change in the number of encirclements occurs if and only if there is a nonnull net number of crossings of the critical point by the perturbed eigenloci. The following theorem formally states these considerations.
Theorem 3.3 [19]. Let the unity feedback system of Figure 2-1(a) be closed-loop stable. Assume the presence of additive perturbations, belonging to a given class, such that Q_o and Q_p have the same number of RHP poles. Then, the perturbed system remains stable under unity feedback, for all perturbations in the given class, if and only if

n_op = n_o   (3.26)

where n_op and n_o are respectively the numbers of encirclements of the critical point by the perturbed and the nominal characteristic loci.
Two remarks are in order here. First, the assumption that Q_o and Q_p have the same number of RHP poles requires that the perturbation itself be stable. Also, if the controller K(s) is an open-loop stabilizing controller for G_o(s), then n_o = 0; but n_op = 0 if and only if the controller stabilizes the perturbed plant G_p(s) in open loop, for all perturbations in the allowable class.
Second, the application of the Nyquist criterion requires graphical displays of eigenloci; however, the perturbed eigenloci are not known.
Fortunately, there exist methods for determining regions in the complex plane which include the eigenvalues of a perturbed complex matrix. Computed on a point-by-point basis as the complex frequency describes the Nyquist contour, each region containing one eigenvalue generates an inclusion band in the complex plane which contains one perturbed eigenlocus. Thus, the perturbed eigenloci are contained in the set of bands described in the complex plane by the set of inclusion regions.
If the open-loop compensated plant Q_o is stable, the stability requirement (3.26) is equivalent to the requirement that the critical point not belong to the set of inclusion bands. Therefore, in robust stability analysis the generalized Nyquist criterion is applied to the inclusion bands.
The size of the inclusion regions depends on the construction method and on the norm upper bound on the uncertainty class. Methods for computing inclusion regions are briefly reviewed next.
Condition number method. Let Δ_A ∈ D_U, defined in equation (2.14), Q_p = Q_o + Δ_A, and assume that Q_o has the characteristic decomposition Q_o = W Λ_o W^{-1}. Then, it can be shown that [54]

| λ_i(Q_p) - λ_i(Q_o) | ≤ κ_W δ,  ∀i   (3.27)

where κ_W is the condition number of the eigenvector matrix W.
The quantity κ_W δ gives the radius of regions in the complex plane, centered at the nominal eigenvalues, which include the perturbed eigenvalues for all perturbations in the class characterized by σ_max[Δ_A(s)] ≤ δ(s).
The inclusion regions defined by (3.27) are easily computable, but the method has disadvantages. If the condition number of the nominal matrix is large, the radius is large, and the computed inclusion regions may be very conservative. Also, if the eigenvectors of the nominal matrix are too skewed, the condition number can be very sensitive to small perturbations, thus unfavorable for computations.
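A sketch of the inclusion disks (3.27), which rest on the Bauer-Fike bound: the condition number of the eigenvector matrix scales the disk radius, and any norm-bounded perturbation keeps the eigenvalues inside the disks. The matrix and perturbation below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

Q0 = np.array([[-1.0 + 2.0j, 0.5], [0.2, -2.0 - 1.0j]])   # hypothetical Q_o
lam0, W = np.linalg.eig(Q0)
kappa_w = np.linalg.cond(W)                  # condition number of W

delta = 0.1
# Random perturbation scaled to spectral norm exactly delta.
D = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
D *= delta / np.linalg.svd(D, compute_uv=False)[0]

lam_p = np.linalg.eigvals(Q0 + D)
# (3.27): each perturbed eigenvalue lies within kappa_w * delta of some
# nominal eigenvalue.
radius = kappa_w * delta
covered = all(min(abs(lp - l0) for l0 in lam0) <= radius for lp in lam_p)
print(covered)
```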
Normal approximations method. Let us consider again the open-loop perturbed matrix, namely Q_p(s) = Q_o(s) + Δ_A, where Q_o(s) ∈ C^{m×m}. Using the rectangular decomposition technique, Q_o(s) can be decomposed into the sum of two normal matrices, one hermitian and one skew-hermitian.
The method of normal approximations to perturbed matrices [7, 8] consists of the substitution of the nominal matrix by the hermitian part of a rectangular decomposition. The skew-hermitian part is considered an approximation error, and included in the perturbation.
Let Q_n and E_Q be respectively the hermitian and the skew-hermitian parts of the decomposition of Q_o. The rectangular decomposition is chosen such that the norm of E_Q is minimized. Assuming that E_Q is characterized by a norm upper bound, say σ_max[E_Q(s)] ≤ ε(s), ∀s, the perturbed matrix can be written as Q_p = Q_n + (E_Q + Δ_A), where (E_Q + Δ_A) represents the total perturbation to the normal matrix Q_n.
The application of the condition number method to Q_p yields:

| λ_i(Q_p) - λ_i(Q_n) | ≤ κ_W (δ + ε) = (δ + ε),  ∀i   (3.28)

since the normal matrix Q_n has condition number κ_W = 1. By adequately choosing the normal approximation, the radius (δ + ε) given by the last equation can be made smaller than the radius given by (3.27), thus reducing the conservatism of the inclusion region.
Inclusion regions determined by normal approximation can be made tighter by taking their intersection with the region determined in the complex plane by the numerical range of the matrix Q_p. The numerical range of Q_o ∈ C^{p×p} is given by [23]:

R_o = { z ∈ C : z = (x*Q_o x)/(x*x), 0 ≠ x ∈ C^p }
The numerical range of Q_p, which obviously includes its eigenvalues, is contained in the region of the complex plane determined when the numerical range of Q_o is extended by δ in all directions. That is,

R_p = { z ∈ C : z = (x*(Q_o + Δ_A)x)/(x*x), 0 ≠ x ∈ C^p }
    = { z ∈ C : z = (x*Q_o x)/(x*x) + (x*Δ_A x)/(x*x), 0 ≠ x ∈ C^p }

⟹ R_p ⊆ R_o ⊎ δ   (3.29)

where ⊎ δ means the extension by δ in all directions.
Since the perturbed eigenvalues are included in the regions defined by both equations (3.27) and (3.29), they are included in their intersections. Hence, tighter inclusion bands are obtained by computing those intersections as the complex frequency describes the standard Nyquist contour.
The regions given by the intersections are still not tight, in the sense that they may include points which cannot be made eigenvalues of the perturbed system for any of the perturbations in the allowable class. A method which yields tight inclusion regions for the case of unstructured perturbations is summarized next.
ε-contours method. Let z ∈ C be an eigenvalue of the perturbed open-loop matrix. Then det[(Q_o + Δ_A) - zI_p] = 0, which means that (Q_o + Δ_A - zI_p) loses rank, that is, σ_min(Q_o + Δ_A - zI_p) = 0. The inequality

σ_min[(Q_o - zI_p) + Δ_A] ≥ σ_min(Q_o - zI_p) - σ_max(Δ_A)   (3.30)

permits the derivation of the following result [9, 33]:
Lemma 3.1. Assume that z ∈ C and Δ_A ∈ D_U. Then,

1. If σ_min(Q_o - zI_p) > δ, then z cannot be an eigenvalue of Q_p, for any Δ_A;

2. If σ_min(Q_o - zI_p) ≤ δ, there always exists Δ_A such that z is an eigenvalue of Q_p.
This lemma leads to an algorithm for the computation of the ε-contour inclusion regions. Letting λ_{o_i}, i = 1,...,p, be the nominal eigenvalues, the ε-contours are the loci of the 'first' solution for z of the equations

σ_min(Q_o - zI_p) = σ_min[Q_o - (λ_{o_i} + ρe^{jθ})I_p] = δ   (3.31)

as ρ is increased from 0, for 0 ≤ θ < 2π.
It can be shown [9] that the contours constructed as described above always form closed curves, and that the perturbed eigenvalues are contained in the union of the contours. Plotted as a function of the frequency, the contours sweep bands to which the generalized Nyquist criterion is applied.
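Part 1 of Lemma 3.1 can be exercised numerically: every eigenvalue z of a norm-bounded perturbation of Q_o must satisfy σ_min(Q_o - zI) ≤ δ, so the δ-sublevel set of σ_min contains all inclusion regions. A sketch with a hypothetical Q_o and random perturbations:

```python
import numpy as np

rng = np.random.default_rng(2)

Q0 = np.array([[-1.0, 2.0], [0.0, -3.0]])     # hypothetical nominal matrix
delta = 0.2

def smin(z):
    """Smallest singular value of (Q0 - z I)."""
    return np.linalg.svd(Q0 - z * np.eye(2), compute_uv=False)[-1]

# Lemma 3.1, part 1: any eigenvalue z of Q0 + Delta_A with
# sigma_max(Delta_A) <= delta satisfies smin(z) <= delta, so points with
# smin(z) > delta lie outside every inclusion region.
ok = True
for _ in range(200):
    D = rng.standard_normal((2, 2))
    D *= delta / np.linalg.svd(D, compute_uv=False)[0]
    for z in np.linalg.eigvals(Q0 + D):
        ok &= smin(z) <= delta + 1e-12
print(bool(ok))
```

Tracing the level curve σ_min(Q_o - zI) = δ around each nominal eigenvalue, as in (3.31), then yields the contours themselves.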
Singular-value condition for stability under unstructured uncertainty
Let us consider again the unity feedback system of Figure 2-1(a), assuming that K is a stabilizing controller for the nominal system. Furthermore, let us assume that the plant is subject to additive unstructured uncertainty Δ_A belonging to the class D_U.
The presence of the controller in the forward path of the feedback loop changes the open-loop perturbation. In order to assess robust stability, one may consider the perturbed open-loop compensated system, given by:
Q_p(s) ≜ [G_o(s) + Δ_A(s)] K(s) = G_oK(s) + Δ_A K(s) ≜ Q_o(s) + Δ_Q(s)   (3.32)

Notice that the resultant perturbation is Δ_Q(s). In order to characterize the class containing the uncertainty in the compensated open-loop plant, the norm upper bound σ_max[Δ_A K(s)] must be obtained. It may happen that, due to the controller structure, the upper bound turns out too large, thus causing the uncertainty description to be unacceptable.
Using the method described in Chapter 2, the system can be rearranged so that the uncertainty becomes an additive term to the closed-loop system, as in the M-Δ representation of Figure 2-4(b). The nominal interconnection structure is given by:
[y_Δ; y] = M [u_Δ; r],   M = [ (I + G_oK)^{-1}   (I + G_oK)^{-1} ; (I + G_oK)^{-1}   (I + G_oK)^{-1}G_oK ]   (3.33)
The transfer matrix from r to y, in the presence of uncertainty, is given by the linear fractional transformation F_u(M, Δ) = [M_22 + M_21 Δ_Q (I + M_11 Δ_Q)^{-1} M_12], which is represented by the block diagram in Figure 2-5. Equation (3.33) above shows that the transfer functions M_12, M_21 and M_22 are stable, since they depend only on the nominal system, which is by assumption stabilized by K(s). Therefore, the stability of the linear fractional transformation depends only on the transfer matrix [Δ_Q (I + M_11 Δ_Q)^{-1}], which represents a feedback loop with Δ_Q(s) in the forward path and M_11(s) in the feedback path.
Let [A_11, B_11, C_11, D_11] be a minimal state space realization of M_11(s), and let us assume that the perturbation Δ_Q(s), which is itself a dynamic system, has a minimal realization [A_Δ, B_Δ, C_Δ, D_Δ]. Using equation (3.17), the characteristic polynomial of the feedback loop is given by:

φ_c = ∏_i [s - λ_i(A_11)] ∏_i [s - λ_i(A_Δ)] det[I + M_11 Δ_Q(s)] / det[I + M_11 Δ_Q(∞)]   (3.34)

Therefore, if the perturbation Δ_Q(s) is stable, the stability of the feedback loop involving the perturbation can be derived from the zeros of det[I + M_11 Δ_Q(s)]. Stability of Δ_Q(s) is a requirement stronger than the requirement of stability of Δ_A(s).
Alternatively, the perturbed system can be rearranged so that the original perturbation Δ_A(s) becomes the additive perturbation to the closed-loop system. In this case, the nominal interconnection structure becomes:
[y_Δ; y] = M [u_Δ; r],   M = [ (I + KG_o)^{-1}K   (I + KG_o)^{-1}K ; (I + G_oK)^{-1}   (I + G_oK)^{-1}G_oK ]   (3.35)
Under the assumptions that the controller K(s) stabilizes G_o and that the controller itself is stable, the transfer matrices M_12, M_21 and M_22 are stable; thus the stability of the transfer matrix from r to y depends on the feedback loop [Δ_A (I + M_11 Δ_A)^{-1}]. Furthermore, M_11(s) itself is stable. If the perturbation Δ_A(s) is stable, then the zeros of the closed-loop characteristic polynomial of the feedback loop are in the Left Half Plane (LHP) if and only if the zeros of the return difference matrix are in the LHP. Therefore, the perturbed system is stable, ∀s : Re(s) ≥ 0 and ∀Δ_A ∈ D_U, if and only if

det[I + M_11 Δ_A(s)] ≠ 0
⟺ ∏_i λ_i[I + M_11 Δ_A(s)] ≠ 0
⟺ λ_i[M_11 Δ_A(s)] ≠ -1, ∀i
⟸ ρ[M_11 Δ_A(s)] < 1   (3.36)

Recall that the spectral radius condition for nominal stability, given by equation (3.18), is only sufficient. The last inequality, however, shows that in the presence of unstructured uncertainty the spectral radius condition is necessary and sufficient. Necessity is obtained from the phase freedom of the elements of the unstructured perturbation, and from the possibility of scaling the perturbations, so that Δ'_A = ε Δ_A, ε ∈ [0, 1], is obtained from Δ_A.
For suppose that p[ML.AA(S)] > 1, for some perturbation in the allowable class, and some s. Then, by changing only the phase of the perturbation elements and scaling by multiplication by c, it is possible to obtain a perturbation, say i3A, such that det[i + M11AA(S)] = 0, for some s.
It is always possible to find a perturbation in the allowable unstructured class, say \Delta_A'(s), which satisfies

\rho[M_{11}\Delta_A'(s)] = \bar{\sigma}[M_{11}\Delta_A'(s)] = \bar{\sigma}[M_{11}(s)]\,\bar{\sigma}[\Delta_A'(s)] \qquad (3.37)

Therefore, a necessary and sufficient condition for robust stability, \forall \Delta_A \in D_u, is:

\bar{\sigma}[M_{11}(s)]\,\bar{\sigma}[\Delta_A(s)] < 1, \forall s
\iff \bar{\sigma}[M_{11}(s)] < \frac{1}{\bar{\sigma}[\Delta_A(s)]}, \forall s \qquad (3.38)
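The unstructured small-gain test (3.38) amounts to comparing \bar{\sigma}[M_{11}(j\omega)] against 1/\bar{\sigma}[\Delta_A(j\omega)] on a frequency grid. A minimal numerical sketch follows; the 2x2 state space data and the helper name `unstructured_robust_stability` are made up for illustration and are not from the thesis:

```python
import numpy as np

def unstructured_robust_stability(M11, delta_bar, freqs):
    """Check sigma_max[M11(jw)] < 1/delta_bar(w) at every grid frequency.

    M11: function mapping s to a complex matrix (hypothetical example system).
    delta_bar: norm upper bound of the unstructured perturbation class.
    Returns True if the small-gain condition (3.38) holds on the whole grid.
    """
    for w in freqs:
        M = M11(1j * w)
        if np.linalg.svd(M, compute_uv=False)[0] >= 1.0 / delta_bar(w):
            return False
    return True

# Illustrative stable 2x2 transfer matrix M11(s) = C (sI - A)^-1 B
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = 0.4 * np.eye(2)
M11 = lambda s: C @ np.linalg.inv(s * np.eye(2) - A) @ B

freqs = np.logspace(-2, 2, 200)
ok = unstructured_robust_stability(M11, lambda w: 1.0, freqs)  # delta(s) = 1
```

With the unit perturbation bound the condition holds; tightening the class to \bar{\sigma}[\Delta_A] \leq 3 makes it fail, since \bar{\sigma}[M_{11}(j\omega)] exceeds 1/3 at low frequency.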
Stability under structured perturbations
Let us consider the M - \Delta form of a perturbed system, represented in Figure 2-4(b), and assume that \Delta \in X_K(\delta), defined by equation (2.25), and that the associated block structure has m_r = m_c = 0. This is the case of a perturbation composed of full complex blocks, which emerges naturally when the diagonalization technique is applied to an interconnected system whose subsystems are subject to unstructured uncertainty.
Applying the same reasoning used above leads to a necessary and sufficient stability condition in terms of the spectral radius, namely

\rho[M_{11}\Delta(s)] < 1, \forall \Delta \in X_K(\delta)
However, this perturbation class does not admit all perturbations with norm less than \delta, but only those which satisfy both the norm constraint and the block structure; hence the inequality chain

\rho[M_{11}\Delta(s)] \leq \bar{\sigma}[M_{11}\Delta(s)] \leq \bar{\sigma}[M_{11}(s)]\,\bar{\sigma}[\Delta(s)] \qquad (3.39)

in general does not hold with equality throughout for any member of the admissible perturbation class. Consequently, the singular-value stability condition, namely

\bar{\sigma}[M_{11}(s)] < \frac{1}{\bar{\sigma}[\Delta(s)]}, \quad \Delta \in X_K(\delta), \forall s \qquad (3.40)

is only sufficient. The conservatism of this condition can be arbitrarily large, since it may happen that no perturbation with the required structure destabilizes the system even when the norm bound (3.40) is violated.
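The arbitrarily large conservatism is easy to exhibit numerically: for a highly non-normal M and a diagonal perturbation structure, every admissible structured \Delta leaves the loop spectral radius at zero, while the unstructured bound \bar{\sigma}(M)\bar{\sigma}(\Delta) is large. The matrix below is chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[0.0, 10.0], [0.0, 0.0]])  # illustrative, highly non-normal

# Unstructured bound: sigma_max(M) * sigma_max(Delta), with sigma_max(Delta) <= 1
unstructured_bound = np.linalg.svd(M, compute_uv=False)[0]  # = 10

# For diagonal Delta = diag(d1, d2) with |di| <= 1, the product M Delta is
# strictly upper triangular, so rho(M Delta) = 0 for every admissible
# structured perturbation: the singular-value bound is arbitrarily conservative.
worst_rho = 0.0
for _ in range(500):
    d = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=2))  # unit-modulus samples
    rho = max(abs(np.linalg.eigvals(M @ np.diag(d))))
    worst_rho = max(worst_rho, rho)
```

No sampled structured perturbation produces a nonzero spectral radius, while the unstructured bound reports a gain of 10.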
Spectral-radius-preserving transformations have been widely used to scale the relevant matrices so that the gap between the spectral radius and the maximum singular value is reduced, thus reducing the conservatism of the stability condition obtained from (3.39). Scaling techniques are reviewed in Section 3.4. Next, two tighter criteria for stability under structured perturbations are reviewed.
Structured singular-value stability condition. Given a matrix M and the associated block structure K, the structured singular value of M, or \mu-function, is defined by [12]:

\mu(M) \stackrel{\mathrm{def}}{=} \frac{1}{\min_{\Delta \in X_K(\delta)} \{\bar{\sigma}[\Delta] : \det[I - M\Delta] = 0\}} \qquad (3.41)

if there is \Delta \in X_K(\delta) such that \det[I - M\Delta] = 0; if there is no such \Delta, then \mu(M) = 0. The following theorem states the necessary and sufficient condition for stability of the M - \Delta representation, in terms of the \mu-function.
Theorem 3.4 [13]. The system M - \Delta is stable, \forall \Delta \in X_K(\delta), if and only if:

\mu[M_{11}(s)]\,\delta(s) < 1, \forall s
\iff \mu[M_{11}(s)] < \frac{1}{\delta(s)}, \forall s \qquad (3.42)

If the perturbations are weighted such that \bar{\sigma}[\Delta(s)] \leq 1, \forall s, and the frequency-dependent weight is included in M, the above result asserts that:

stability \iff \sup_s \mu[M_{11}(s)] < 1 \qquad (3.43)
The tightness of the above stability condition stems directly from the definition of the \mu-function: \mu(M) is defined on the basis of a destabilizing perturbation having the required structure. However, although it clearly addresses the robust stability problem, the definition is not of much help from a computational point of view.
Actually, the exact value of \mu(M) can be computed only in special cases. Usually only upper and lower bounds are computable, even for the purely complex case, namely when m_r = 0 in the block structure [41]. The computation is especially demanding in the mixed case, namely when m_r \neq 0.
Computation of bounds for \mu(M) relies on a set of properties of the \mu-function, proved by Doyle [12], the most important of which are given below:

\mu(\alpha M) = |\alpha|\,\mu(M), \forall M \in C^{m \times m}, \forall scalar \alpha \qquad (3.44)
\mu(M_1 M_2) \leq \bar{\sigma}(M_1)\,\mu(M_2), \forall M_1, M_2 \in C^{m \times m} \qquad (3.45)
If m_r = m_c = 0 and m_C = 1, \mu(M) = \bar{\sigma}(M), \forall M \in C^{m \times m} \qquad (3.46)
If m_r = m_C = 0 and m_c = 1, \mu(M) = \rho(M), \forall M \in C^{m \times m} \qquad (3.47)

The equality in (3.46) corresponds to the case of one single full complex block of any size, since the conditions imply m_r = 0 and m_c = 0. On the other hand, (3.47) concerns the case of one complex scalar, since the conditions mean that m_r = 0 and m_C = 0.
From the computational point of view, the following property is fundamental. Let U_K \stackrel{\mathrm{def}}{=} \{U : U is unitary\}, with the same block-diagonal structure as X_K, and let

S_K \stackrel{\mathrm{def}}{=} \{S : S = \mathrm{diag}\{s_i I_i\}, s_i \in R_+\} \qquad (3.48)

be the set of real positive diagonal matrices whose blocks have the dimensions of the corresponding blocks in X_K. Then, \forall M \in C^{m \times m},

\sup_{U \in U_K} \rho(UM) \leq \mu(M) \leq \inf_{S \in S_K} \bar{\sigma}(SMS^{-1}) \qquad (3.49)
It has been shown [12, 15] that the left inequality of (3.49) is actually always an equality; however, the optimization problem involved is not convex, which may lead to the existence of local maxima.
On the other hand, it has been proved [49] that the optimization problem involved in the right inequality of (3.49) is always convex, and hence has only global minima, as a consequence of the fact that \bar{\sigma}(e^S M e^{-S}) is convex in S. Since S has one free scalar per block, one of which can be fixed, the minimization is carried out over a number of variables equal to the number of blocks minus one, no matter what the sizes of the blocks are. Equality is always attained on the right side of (3.49) when there are 3 or fewer nonrepeated blocks in the block-diagonal perturbation, regardless of the dimension of the blocks. For more than 3 blocks, the lower and upper bounds in (3.49) usually stay within 5% of each other, and almost always within 15% [38].
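For scalar complex blocks, the upper bound of (3.49) can be approximated by a direct search over the diagonal scalings; because the problem is convex in the log-scalings, a local method suffices. A sketch under these assumptions (the 2x2 example matrix and the helper name are illustrative, not from the thesis):

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """Upper bound of (3.49) for a structure of complex scalar blocks:
    minimize sigma_max(S M S^-1) over positive diagonal S (convex in log s)."""
    m = M.shape[0]

    def obj(logd):
        s = np.exp(np.concatenate(([0.0], logd)))  # fix s_1 = 1, cf. the text
        return np.linalg.svd((s[:, None] * M) / s[None, :],
                             compute_uv=False)[0]

    res = minimize(obj, np.ones(m - 1), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    return res.fun

# Illustrative 2x2 case: with only two blocks the bound is tight (see above)
M = np.array([[1.0, 4.0], [0.25, 1.0]])
ub = mu_upper_bound(M)
rho = max(abs(np.linalg.eigvals(M)))  # spectral radius, a lower bound on mu
```

Here the optimal scaling equalizes the off-diagonal entries, and the upper bound coincides with the spectral radius, as the three-or-fewer-blocks result predicts.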
Furthermore, it has been shown [29] that, for the case of complex perturbations, the right inequality holds with equality regardless of the number of elements in the perturbation, provided that the infimum in (3.49) occurs at a stationary point of \bar{\sigma}(SMS^{-1}) relative to the elements of the scaling S. This is the case when \bar{\sigma}(SMS^{-1}) exhibits no cusp at the minimizer.
Multivariable stability margin. Consider the perturbed M - \Delta form where \Delta \in X_K(\delta) is

\Delta = \mathrm{diag}\{\delta_1, \ldots, \delta_m\}

The multivariable stability margin of the MIMO structure M is defined as follows [10]:

k_m \stackrel{\mathrm{def}}{=} \min_{\Delta} \{k \in [0, \infty) : \det[I - k\Delta M] = 0\} \qquad (3.50)
Let D_i be the known domain of the parameter \delta_i and let the actual perturbation be \Delta_{ac} \in X_K(\delta). Then, the perturbed system is stable if and only if \Delta_{ac,i} \in k_m D_i, \forall i.

Therefore, given a set of parameter ranges, if k_m > 1 it indicates how much the ranges can be extended without the system becoming unstable for any combination of parameters inside the extended domain. Conversely, k_m < 1 indicates how much the ranges must be shrunk so that the system can stand all perturbations in the given class.
An algorithm for the computation of the multivariable stability margin, which can be applied also to the case of purely real uncertainty, has been given by De Gaston and Safonov [10]. The algorithm avoids a burdensome search over the parameter space by exploiting the mapping theorem due to Zadeh and Desoer.
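The definition (3.50) can be probed by brute force: for a fixed real diagonal \Delta, \det(I - k\Delta M) = 0 exactly when 1/k is an eigenvalue of \Delta M, so scanning candidate perturbations and keeping the smallest admissible k gives an estimate of k_m. The vertex scan below is only an illustrative sketch (vertices need not contain the true minimizer in general), not the De Gaston-Safonov algorithm:

```python
import numpy as np
from itertools import product

def stability_margin_vertices(M, bounds):
    """Estimate k_m of (3.50) by scanning the vertices Delta = diag(+/-delta_i)
    of a real parameter box. Illustrative sketch only."""
    m = M.shape[0]
    km = np.inf
    for signs in product([-1.0, 1.0], repeat=m):
        D = np.diag(np.array(signs) * np.array(bounds))
        for lam in np.linalg.eigvals(D @ M):
            # det(I - k Delta M) = 0 when 1/k is an eigenvalue of Delta M;
            # only real positive eigenvalues give an admissible k in [0, inf)
            if abs(lam.imag) < 1e-12 and lam.real > 0.0:
                km = min(km, 1.0 / lam.real)
    return km

# Illustrative diagonal M: the largest loop gain 2 forces k_m = 0.5,
# i.e. the parameter ranges must be halved
km = stability_margin_vertices(np.diag([2.0, 0.5]), [1.0, 1.0])
```
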
3.4 Frequency-Domain Scaling Techniques

The fundamental condition for robust stability of the M - \Delta representation is given by equation (3.36), namely \rho[M_{11}\Delta(s)] < 1, \forall s. Equation (3.38) shows that, if the perturbation belongs to an unstructured class characterized by the norm upper bound \delta(s), a necessary and sufficient condition for stability is

\bar{\sigma}[M_{11}(s)] < \frac{1}{\delta(s)}, \forall s \qquad (3.51)

The sufficiency of the condition comes from the inequality

\rho[M_{11}\Delta(s)] \leq \bar{\sigma}[M_{11}\Delta(s)] \leq \bar{\sigma}[M_{11}(s)]\,\bar{\sigma}[\Delta(s)] \leq \bar{\sigma}[M_{11}(s)]\,\delta(s) \qquad (3.52)

which applies in general. Necessity arises because, since the only constraint posed on the unstructured class is the norm bound, it is always possible to find a member of the class for which all the above inequalities become equalities.
If a structured uncertainty class is considered, constraints are posed both on the norm and on the structure of the admissible perturbations. Under these constraints, it is not possible to guarantee that (3.52) holds with equality throughout for some member of the class. Consequently, if the uncertainty is structured, a singular-value condition in the form of (3.51) is in general only sufficient.
In fact, it has been shown [29] that the worst case perturbation \Delta(s), namely the one for which \rho[M_{11}\Delta(s)] = \bar{\sigma}[M_{11}\Delta(s)] = \bar{\sigma}[M_{11}(s)]\,\bar{\sigma}[\Delta(s)] = \bar{\sigma}[M_{11}(s)]\,\delta, is characterized by having output and input major principal directions aligned with the input and output major principal directions of M_{11}(s). This is a rather stringent requirement that may not be satisfied by perturbations in a structured class.
Structured representations of uncertainties occur very often. For example, a block-diagonal structured representation is the outcome of the technique discussed in Chapter 2 for rearranging interconnected systems so that simultaneous perturbations are isolated. Furthermore, when estimation and/or identification techniques are used to obtain a frequency response model of a plant, confidence bounds are generated for each element of the transfer matrix. The uncertainty in the frequency-domain nominal model is then naturally represented by the structured class of element-by-element bounded perturbations.
Given the frequent occurrence of structured perturbations, the potential conservatism of the singular-value condition is a substantial limitation. An effective way to reduce this conservatism is to precondition the matrices involved so that the spectral radius is preserved while the gap between the spectral radius and the maximum singular value is reduced. Scaling techniques for preconditioning the relevant matrices are reviewed next.
3.4.1 Similarity Scaling
The advantageous application of similarity scaling in robust stability analysis was first reported in the context of the block diagonal uncertainty problem [15, 47]. Let us review this case.
Consider the M - \Delta perturbed representation, and let \Delta be a member of the structured class X_K(\delta) defined by (2.25), with the further assumption that \Delta has no real elements. Applying condition (3.36), stability is guaranteed, \forall \Delta \in X_K(\delta), if and only if

\sup_{\Delta} \rho[M_{11}\Delta(s)] < 1, \forall s \qquad (3.53)

A well known property of nonsingular similarity transformations is that they preserve the eigenvalues of the transformed matrix. Therefore, for any S \in S_K defined by (3.48), the spectral radius and the maximum singular value of M_{11}\Delta(s) are related by:

\sup_{\Delta} \rho[S M_{11}\Delta(s) S^{-1}] = \sup_{\Delta} \rho[M_{11}\Delta(s)] \leq \sup_{\Delta} \bar{\sigma}[S M_{11}\Delta(s) S^{-1}], \forall s

Letting S range over the set S_K, one has that

\sup_{\Delta} \rho[M_{11}\Delta(s)] \leq \inf_{S} \{\sup_{\Delta} \bar{\sigma}[S M_{11}\Delta(s) S^{-1}]\}, \forall s \qquad (3.54)
Let \Delta'(s) \in X_K(\delta) be the worst case perturbation, which is characterized by

\rho[M_{11}\Delta'(s)] = \max_{\Delta} \rho[M_{11}\Delta(s)]

It has been shown [12] that, in the case of purely complex perturbations, there exists a worst case perturbation in which each element is on the boundary of its domain in C. Therefore, the worst case perturbation can be decomposed as

\Delta'(s) = P_\Delta U_\theta \qquad (3.55)
where P_\Delta is a diagonal real matrix containing the known upper bounds on the norms of the complex blocks, and U_\theta \in U_K, the set of unitary matrices having the same block structure as X_K. Substitution in (3.54) gives:

\sup_{U_\theta} \rho[M_{11}(s) P_\Delta U_\theta] \leq \inf_{S} \{\sup_{U_\theta} \bar{\sigma}[S M_{11}(s) P_\Delta U_\theta S^{-1}]\}, \forall s

Observing that U_\theta and S^{-1} commute, because by definition they have the same block-diagonal structure, and that the spectral norm is invariant under multiplication by a unitary matrix, the last equation can be written as:

\sup_{U_\theta} \rho[M_{11}(s) P_\Delta U_\theta] \leq \inf_{S} \bar{\sigma}[S M_{11}(s) P_\Delta S^{-1}], \forall s

Defining M_a(s) \stackrel{\mathrm{def}}{=} M_{11}(s) P_\Delta, the above inequality becomes:

\rho[M_a(s)] \leq \sup_{U_\theta} \rho[M_a(s) U_\theta] \leq \inf_{S} \bar{\sigma}[S M_a(s) S^{-1}], \forall s \qquad (3.56)

Therefore, a sufficient condition for stability of the M - \Delta representation, under block-diagonal structured uncertainty, is

\inf_{S} \bar{\sigma}[S M_a(s) S^{-1}] < 1, \forall s \qquad (3.57)
3.4.2 NonSimilarity Scaling
In the derivation above, the commutation property of block-diagonal matrices was invoked to swap the positions of U_\theta and S^{-1}, thus allowing the phase matrix to be discarded in the term involving the spectral norm. This property could not be used if the perturbations had a more general structure than the block-diagonal form. This is the case of the element-by-element bounded perturbations in the class D_s, defined by (2.17). However, this case can be handled by the technique of nonsimilarity scaling [28, 33].
Let us consider the M - \Delta representation, assuming that M_{11} \in C^{m \times m} and that the allowable uncertainty class is D_s defined in (2.17). Then, the perturbation \Delta(s) is a full matrix satisfying \Delta^+ \leq P_\Delta, for some P_\Delta \in R^{m \times m}. Now, let

S \stackrel{\mathrm{def}}{=} \{S : S = \mathrm{diag}\{s_1, \ldots, s_m\}, s_i \in R_+, \forall i\} \qquad (3.58)

Considering S_1, S_2 \in S, one has that

\rho[M_{11}\Delta(s)] \leq \sup_{\Delta} \rho[M_{11}\Delta(s)] = \sup_{\Delta} \rho[S_1 M_{11}(s) S_2 S_2^{-1} \Delta(s) S_1^{-1}]

Letting S_1 and S_2 range over S, the above relationship becomes

\rho[M_{11}\Delta(s)] \leq \sup_{\Delta} \rho[M_{11}\Delta(s)] \leq \inf_{S_1, S_2} \{\bar{\sigma}[S_1 M_{11}(s) S_2] \sup_{\Delta} \bar{\sigma}[S_2^{-1} \Delta(s) S_1^{-1}]\}

Now, for any A \in C^{m \times m} such that A^+ \leq P \in R^{m \times m}, one has [29]:

\bar{\sigma}[A] \leq \bar{\sigma}(A^+) \leq \bar{\sigma}(P) \qquad (3.59)

In view of these inequalities, the right term of the previous inequality becomes:

\rho[M_{11}\Delta(s)] \leq \inf_{S_1, S_2} \{\bar{\sigma}[S_1 M_{11}(s) S_2]\,\bar{\sigma}[S_2^{-1} P_\Delta S_1^{-1}]\} \qquad (3.60)

Therefore, a sufficient condition for stability under all \Delta \in D_s is:

\inf_{S_1, S_2} \{\bar{\sigma}[S_1 M_{11}(s) S_2]\,\bar{\sigma}[S_2^{-1} P_\Delta S_1^{-1}]\} < 1, \forall s \qquad (3.61)
The presence of two scaling matrices, S_1 and S_2, with S_2 \neq S_1^{-1}, characterizes nonsimilarity scaling. Note that in the application of the similarity scaling technique, complex perturbations are explicitly assumed, which allows the consideration of the worst case given by (3.55). In the application of nonsimilarity scaling, the upper bound matrix P_\Delta implicitly admits complex perturbations.
3.4.3 Suboptimal Scaling
Both stability conditions (3.57) and (3.61) are optimal in the sense that the norm of the scaled matrix is minimized over the set of scaling matrices. However, consider a fixed \tilde{S} \in S. The following inequalities follow from equation (3.56), under the assumption of complex perturbations:

\rho[M_{11}(s) P_\Delta] \leq \sup_{U_\theta} \rho[M_{11}(s) P_\Delta U_\theta] \leq \inf_{S} \bar{\sigma}[S M_{11}(s) P_\Delta S^{-1}] \leq \bar{\sigma}[\tilde{S} M_{11}(s) P_\Delta \tilde{S}^{-1}] \qquad (3.62)

In the same way, for fixed \tilde{S}_1, \tilde{S}_2 \in S, equation (3.60) yields:

\rho[M_{11}\Delta(s)] \leq \inf_{S_1, S_2} \{\bar{\sigma}[S_1 M_{11}(s) S_2]\,\bar{\sigma}[S_2^{-1} P_\Delta S_1^{-1}]\} \leq \bar{\sigma}[\tilde{S}_1 M_{11}(s) \tilde{S}_2]\,\bar{\sigma}[\tilde{S}_2^{-1} P_\Delta \tilde{S}_1^{-1}] \qquad (3.63)

If the similarity scaling \tilde{S}, or the nonsimilarity scaling pair \tilde{S}_1 and \tilde{S}_2, is chosen according to some criterion, equations (3.62) and (3.63) can be used to obtain sufficient stability conditions. Although more conservative, these conditions save computation time, since they do not require a search over S. Two techniques for the choice of suboptimal scalings are discussed below.
Perron scaling
Let us review some results related to the theory of nonnegative matrices, the first of which is the Perron theorem.
Theorem 3.5 (Perron). A (real) irreducible nonnegative square matrix A has an eigenvalue of multiplicity one equal to its spectral radius, and no other eigenvalue is larger in absolute value. Corresponding to this eigenvalue, there exist a right and a left eigenvector which have only positive components.
The eigenvalue of A which equals the spectral radius is called the Perron eigenvalue and denoted by \pi(A). The associated eigenvectors are the right and left Perron eigenvectors.
Lemma 3.2 [3, 29]. For any A \in C^{m \times m} and S \in S,

\inf_{S} \bar{\sigma}(S A^+ S^{-1}) = \pi(A^+) \qquad (3.64)

The minimizing scaling S \stackrel{\mathrm{def}}{=} S_\pi, called the Perron scaling, is given by S_\pi = [Y_A X_A^{-1}]^{1/2}, where Y_A and X_A are diagonal matrices containing respectively the elements of the left and of the right Perron eigenvectors of A^+.
Lemma 3.3 [3, 28]. Given matrices A and B of compatible dimensions, with all A_{ij} and B_{ij} \in R_+, and S_1, S_2 \in S, then

\inf_{S_1, S_2} \{\bar{\sigma}(S_1 A S_2)\,\bar{\sigma}(S_2^{-1} B S_1^{-1})\} = \pi(AB) \qquad (3.65)

The scaling defined in this lemma is called Perron (S_1, S_2) scaling [28]. The optimal pair of scaling matrices, for which equality is obtained in (3.65), is determined by [3, 28]:

S_{1,\pi} = [Y_{AB} X_{AB}^{-1}]^{1/2}; \quad S_{2,\pi} = [X_{BA} Y_{BA}^{-1}]^{1/2} \qquad (3.66)

where X_{AB} and Y_{AB} are diagonal matrices whose elements are respectively the entries of the right and of the left Perron eigenvectors of (AB). X_{BA} and Y_{BA} are defined in a similar manner with respect to (BA).
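Lemma 3.2 can be checked numerically: building S_\pi from the left and right Perron eigenvectors collapses \bar{\sigma}(S_\pi A^+ S_\pi^{-1}) to \pi(A^+). A sketch, with a 2x2 nonnegative matrix chosen purely for illustration (it assumes the dominant eigenvalue returned by `numpy.linalg.eig` is the real Perron eigenvalue, which holds for irreducible nonnegative matrices):

```python
import numpy as np

def perron_scaling(Aplus):
    """Perron scaling of Lemma 3.2 (sketch): S_pi = (Y X^-1)^{1/2}, with x, y
    the right and left Perron eigenvectors of the nonnegative matrix A+."""
    lam, V = np.linalg.eig(Aplus)
    k = np.argmax(lam.real)          # Perron eigenvalue pi(A+)
    x = np.abs(V[:, k].real)         # right Perron eigenvector (positive)
    lamL, W = np.linalg.eig(Aplus.T)
    kL = np.argmax(lamL.real)
    y = np.abs(W[:, kL].real)        # left Perron eigenvector (positive)
    return lam[k].real, np.diag(np.sqrt(y / x))

Aplus = np.array([[1.0, 2.0], [0.5, 1.0]])   # illustrative nonnegative matrix
pi, S = perron_scaling(Aplus)
sv = np.linalg.svd(S @ Aplus @ np.linalg.inv(S), compute_uv=False)[0]
```

For this example \pi(A^+) = 2, and the scaled matrix becomes the all-ones matrix, whose maximum singular value is exactly 2, attaining the infimum of (3.64).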
Lemma 3.4 [28]. Let A and B be complex, with compatible dimensions. Then, for S_1, S_2 \in S,

\inf_{S_1, S_2} \{\bar{\sigma}(S_1 A S_2)\,\bar{\sigma}(S_2^{-1} B S_1^{-1})\} \leq \pi(A^+ B^+) \qquad (3.67)

where A^+ and B^+ are matrices whose elements are the magnitudes of the elements of A and B, respectively.
Let us return to the problem of robust stability under structured perturbations characterized by [\Delta(s)]^+ \leq P_\Delta, \forall s. Using equation (3.59), the following inequalities apply:

\rho[M_{11}\Delta] \leq \bar{\sigma}[M_{11}\Delta] \leq \bar{\sigma}[(M_{11}\Delta)^+] \leq \bar{\sigma}[M_{11}^+ P_\Delta], \forall s

Using similarity scaling, and applying Lemma 3.2, one has that

\rho[M_{11}\Delta] \leq \inf_{S} \bar{\sigma}[S (M_{11}^+ P_\Delta) S^{-1}] = \pi(M_{11}^+ P_\Delta) \qquad (3.68)

Therefore, the Perron radius can be used to obtain a sufficient condition for stability, namely:

\pi(M_{11}^+ P_\Delta) < 1, \forall s \qquad (3.69)

Now, consider the Perron scaling for M_{11}^+ P_\Delta, given by

S_\pi = [Y_{M_{11}^+ P_\Delta} (X_{M_{11}^+ P_\Delta})^{-1}]^{1/2} \qquad (3.70)

Substituting S_\pi for the fixed scaling in equation (3.62) gives

\rho[M_{11}(s) P_\Delta] \leq \bar{\sigma}[S_\pi M_{11}(s) P_\Delta S_\pi^{-1}]

Therefore, using the Perron scaling for (M_{11}^+ P_\Delta), a stability condition less conservative than (3.69) can be obtained, namely
\bar{\sigma}[S_\pi M_{11}(s) P_\Delta S_\pi^{-1}] < 1, \forall s \qquad (3.71)

A nonsimilarity scaling condition can be derived in the same fashion. Since

\rho[M_{11}\Delta] \leq \bar{\sigma}[M_{11}\Delta] \leq \bar{\sigma}[(M_{11}\Delta)^+] \leq \bar{\sigma}[M_{11}^+ P_\Delta]

the application of nonsimilarity scaling and Lemma 3.4 results in the inequalities

\rho[M_{11}\Delta] \leq \inf_{S_1, S_2} \bar{\sigma}[S_1 M_{11}^+ S_2]\,\bar{\sigma}[S_2^{-1} P_\Delta S_1^{-1}] \leq \pi(M_{11}^+ P_\Delta) \qquad (3.72)

from which condition (3.69) can also be obtained. The Perron scaling for (M_{11}^+ P_\Delta) is

S_{1,\pi} = [Y_{M_{11}^+ P_\Delta} (X_{M_{11}^+ P_\Delta})^{-1}]^{1/2}; \quad S_{2,\pi} = [X_{P_\Delta M_{11}^+} (Y_{P_\Delta M_{11}^+})^{-1}]^{1/2} \qquad (3.73)

Substituting S_{1,\pi} and S_{2,\pi} for the fixed scalings in equation (3.63) gives

\rho[M_{11}(s)\Delta] \leq \bar{\sigma}[S_{1,\pi} M_{11}(s) S_{2,\pi}]\,\bar{\sigma}[S_{2,\pi}^{-1} P_\Delta S_{1,\pi}^{-1}]

Thus, a sufficient condition for robust stability, based on explicit nonsimilarity Perron scaling, is:

\bar{\sigma}[S_{1,\pi} M_{11}(s) S_{2,\pi}]\,\bar{\sigma}[S_{2,\pi}^{-1} P_\Delta S_{1,\pi}^{-1}] < 1, \forall s \qquad (3.74)

Osborne scaling
Osborne's scaling process [43] is an iterative procedure to find the scaling which minimizes the Frobenius norm of an irreducible matrix A \in C^{m \times m}, defined as

\| A \|_F \stackrel{\mathrm{def}}{=} \left( \sum_{i,j} |A_{ij}|^2 \right)^{1/2}

Let S_o be the scaling obtained from Osborne's iterative process applied to the matrix [M_{11}(s) P_\Delta]. A stability condition analogous to (3.71) can be obtained using S_o, namely

\bar{\sigma}[S_o M_{11}(s) P_\Delta S_o^{-1}] < 1, \forall s \qquad (3.75)
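Osborne's process can be sketched as a sweep that, for each index in turn, equalizes the off-diagonal row and column Euclidean norms of the scaled matrix. The example matrix and the iteration count are illustrative assumptions, not data from the thesis:

```python
import numpy as np

def osborne_scaling(A, sweeps=20):
    """Osborne balancing (sketch): iteratively equalize the off-diagonal row
    and column Euclidean norms of S A S^-1, reducing its Frobenius norm."""
    m = A.shape[0]
    s = np.ones(m)
    for _ in range(sweeps):
        for i in range(m):
            B = (s[:, None] * np.abs(A)) / s[None, :]   # current scaled matrix
            row = np.linalg.norm(np.delete(B[i, :], i))
            col = np.linalg.norm(np.delete(B[:, i], i))
            if row > 0.0 and col > 0.0:
                # multiplying s_i by f scales row i by f and column i by 1/f;
                # f = sqrt(col/row) equalizes the two norms
                s[i] *= np.sqrt(col / row)
    return np.diag(s)

A = np.array([[0.0, 100.0], [0.01, 0.0]])   # badly balanced example
S = osborne_scaling(A)
fro = np.linalg.norm(S @ A @ np.linalg.inv(S), "fro")
```

For this example the balanced off-diagonal entries both become sqrt(100 x 0.01) = 1, so the Frobenius norm drops from about 100 to sqrt(2).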
3.5 Conclusions
This chapter summarizes robust stability conditions and techniques that will be employed in the next chapters. One important topic is the application of the Lyapunov direct method under uncertainty. This method will be explored in Chapter 4, and the sufficient condition obtained in Section 3.2.2 will be studied in detail.
Also important is the notion that singular-value stability conditions are only sufficient in the presence of structured uncertainty, and that the conservatism of singular-value conditions can be reduced through scaling. These concepts will have significant roles in Chapter 5, where an alternative frequency-domain approach is proposed for the assessment of robust stability of state space systems under structured uncertainty.
Although the generalized Nyquist criterion and its extension to systems under perturbation will not be applied in the next chapters, the review undertaken above is justified because this technique is a relatively recent generalization to MIMO systems of a classical tool in frequency-domain analysis of SISO systems, and it can have a prominent role in computer-aided analysis and design environments.
CHAPTER 4
LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY
4.1 Introduction
The objective of this chapter is to obtain conditions for robust stability of linear state space systems under structured uncertainty, using the Lyapunov direct method.
Although Lyapunov theory yields only sufficient conditions for stability, it can be applied to a wide class of dynamic systems, including nonlinear and time-varying systems. The difficulty generally associated with the application of the Lyapunov direct method is that it requires the construction of a suitable Lyapunov function.
In the case of linear systems, this difficulty is not present, since an immediate choice is a quadratic function of the form V(t, x) = x(t)^T P(t) x(t), where P(t) is a symmetric matrix. Furthermore, in the case of time-invariant linear systems, the negative definiteness of the derivative of the function V(x) = x(t)^T P x(t), which depends only on P, can be checked through the Lyapunov matrix equation, given by (3.5).
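This check can be sketched with SciPy's continuous Lyapunov solver; the example matrices are made up, and since `solve_continuous_lyapunov(a, q)` solves a x + x a^H = q, we pass A^T and -Q to obtain A^T P + P A = -Q:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = 2.0 * np.eye(2)                        # any symmetric positive definite choice
P = solve_continuous_lyapunov(A.T, -Q)     # solves A^T P + P A = -Q

residual = np.linalg.norm(A.T @ P + P @ A + Q)
eigP = np.linalg.eigvalsh(P)   # all positive iff A is asymptotically stable
```

Here P = [[2.5, 0.5], [0.5, 0.5]] (easily verified by hand), which is positive definite, confirming stability of A.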
This property extends to the analysis of linear systems whose matrix A is uncertain. In this situation, however, besides the inherently sufficient nature of the stability condition, there is an additional cause for conservatism, as illustrated by the following case [42].
Let us consider the application of the Lyapunov indirect method to a nonlinear system. After linearization around an equilibrium point, the linearized system can be viewed as a perturbed linear system, where the perturbation is the linearization error, namely the neglected higher order terms. Let the perturbed model be \dot{x}(t) = A_m x(t) + B_m u(t) + f[x(t), u(t)],
where A_m and B_m describe the linear part and f is a nonlinear vector function. A nominally stabilizing linear quadratic state feedback control yields the closed loop

\dot{x}(t) = (A_m - B_m R^{-1} B_m^T P)x(t) + f[x(t)] \stackrel{\mathrm{def}}{=} \bar{A}x(t) + f[x(t)]

which is stable for f = 0. Let V(x) = x^T P x be a Lyapunov function candidate, where P comes from the solution to the Riccati equation associated with the LQSF problem. Then, the derivative is \dot{V}(x) = x^T(\bar{A}^T P + P\bar{A})x + 2f^T(x)Px. The following robust stability condition can be derived [42]:

\frac{\| f(x) \|_2}{\| x \|_2} < \frac{1}{2\,\bar{\sigma}(D^{-1})\,\bar{\sigma}(P)}\left(1 + \frac{a}{\kappa(P)}\right), \forall x \in R^n

where D = Q + P B_m R^{-1} B_m^T P, \kappa(\cdot) is the spectral condition number, and a is a parameter in the Riccati equation.
This case exemplifies two facts about the use of the Lyapunov theory in robust stability analysis. First, the problem of nominal stability analysis of a nonlinear system can be approached by robust stability analysis of the corresponding linearized system.
Second, and more important for the objectives of this chapter, stability conditions obtained from the application of the direct method generally involve some function of the norm of the perturbation. Consequently, the method cannot discriminate between real and complex uncertainties having the same norm bound. If the uncertainty is known to be real, and the stability result is given in the form of a norm bound on the perturbation, a larger class of perturbations is implicitly admitted, namely the class of complex perturbations with the same norm bound. Therefore, the result is not tight.
The Lyapunov direct method can handle time-varying perturbations as well, in which case \dot{V}(x) is required to be negative definite at each instant t. In the case of nominal time-varying systems, the use of the Lyapunov matrix equation is precluded. However, if the system matrix can be decomposed into a constant part plus a time-varying part, this case can also be handled by regarding the time-varying part as a perturbation to the time-invariant part and requiring negative definiteness of \dot{V}(x) at each instant t.
Examples of the application of the Lyapunov direct method to systems under unstructured and under structured perturbations are available in the literature. For instance, assuming Q = 2I, the following robustness condition can be derived for the system \dot{x}(t) = [A + E(t)]x(t), where E(t) is a time-varying unstructured perturbation [57]:

\bar{\sigma}[E(t)] < \frac{1}{\bar{\sigma}(P)}

where P is the solution to the Lyapunov matrix equation.
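The bound can be exercised numerically: any perturbation whose norm stays below 1/\bar{\sigma}(P) leaves the perturbed derivative matrix Q_p = 2I - (E^T P + P E) positive definite. The nominal matrix below is a made-up example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])              # Hurwitz example matrix
Q = 2.0 * np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)                # A^T P + P A = -2I
bound = 1.0 / np.linalg.svd(P, compute_uv=False)[0]   # admissible sigma_max[E(t)]

# a sample perturbation inside the bound keeps Qp positive definite, cf. (3.7)
E = 0.9 * bound * np.eye(2)
Qp = Q - (E.T @ P + P @ E)
min_eig = np.linalg.eigvalsh(Qp).min()
```
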
The application of the method in the presence of structured perturbations can be illustrated by the case below [55, 56]. A bound on the magnitude of each perturbation element is given, namely |E_{ij}(t)| \leq \bar{E}_{ij}, \forall t, with \max_{i,j} \bar{E}_{ij} \stackrel{\mathrm{def}}{=} \bar{E}. Using Q = 2I, the following condition for robust stability can be derived:

\bar{E} < \frac{1}{\bar{\sigma}[(P_m U_n)_s]}

where (P_m U_n)_s is the symmetric part of the matrix P_m U_n, P_m contains the magnitudes of the elements of P, and U_n is such that U_{n,ij} = 1, \forall i, j.
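This structured bound is straightforward to compute; note that, unlike the previous condition, it bounds each element magnitude rather than the perturbation norm. The nominal matrix is a made-up example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])               # Hurwitz example matrix
P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(2))   # Q = 2I as in the text
Pm = np.abs(P)                                         # element magnitudes of P
Un = np.ones_like(P)                                   # U_n: all entries equal to 1
sym = 0.5 * (Pm @ Un + (Pm @ Un).T)                    # symmetric part of Pm Un
Ebar = 1.0 / np.linalg.svd(sym, compute_uv=False)[0]   # bound on max |E_ij(t)|
```

For this example (P_m U_n)_s = [[3, 2], [2, 1]], whose largest singular value is 2 + sqrt(5), giving an element-magnitude bound of about 0.236.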
In the next section, a link between perturbation structure and conservatism of the stability condition is investigated.
4.2 Dependence of Conservatism on Perturbation Structure
This section points out a cause for conservatism in the application of the Lyapunov direct method under structured uncertainty, which is inherent to the mechanics of the application and related to the choice of the Lyapunov matrix.
Recall that, according to Theorem 3.2, the system dynamic matrix A is asymptotically stable if and only if, for some positive definite symmetric matrix Q, the Lyapunov matrix equation A^T P + P A = -Q has a unique, positive definite solution P. It is important to keep in mind that the theorem does not guarantee that, picking a positive definite P, the corresponding Q is positive definite. Now, consider the following lemma:
Lemma 4.1. Given a real symmetric positive definite matrix P, the set of systems \dot{x}(t) = Ax(t) for which V(x) = x^T P x is a Lyapunov function is a convex set.
Proof. Let

M_{PD} \stackrel{\mathrm{def}}{=} \{M : M is symmetric, positive definite\} \qquad (4.1)
A(P) \stackrel{\mathrm{def}}{=} \{A : A^T P + P A = -Q, \ P, Q \in M_{PD}\} \qquad (4.2)

Then, for A_1, A_2 \in A(P) and P \in M_{PD}, one has A_1^T P + P A_1 = -Q_1, Q_1 \in M_{PD}, and A_2^T P + P A_2 = -Q_2, Q_2 \in M_{PD}. Taking a_1, a_2 \in R_+ such that a_1 + a_2 = 1, and defining A_3 = a_1 A_1 + a_2 A_2, one has:

A_3^T P + P A_3 = [a_1 A_1 + (1 - a_1)A_2]^T P + P[a_1 A_1 + (1 - a_1)A_2]
= a_1[A_1^T P + P A_1] + (1 - a_1)[A_2^T P + P A_2]
= a_1(-Q_1) + (1 - a_1)(-Q_2)
\stackrel{\mathrm{def}}{=} -Q_3, \quad Q_3 \in M_{PD}

Therefore, A_3 \in A(P), which shows that A(P) is a convex set.
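The convexity of A(P) can be verified directly on an example (the matrices below are illustrative assumptions):

```python
import numpy as np

P = np.diag([2.0, 1.0])                   # fixed positive definite P (example)

def lyap_Q(A):
    # Q such that A^T P + P A = -Q; A is in A(P) iff Q is positive definite
    return -(A.T @ P + P @ A)

A1 = -np.eye(2)
A2 = np.array([[-1.0, 0.5], [-0.5, -2.0]])
# both systems admit V(x) = x' P x as a Lyapunov function:
assert np.linalg.eigvalsh(lyap_Q(A1)).min() > 0
assert np.linalg.eigvalsh(lyap_Q(A2)).min() > 0

# every convex combination stays in A(P), as Lemma 4.1 asserts
min_eig = min(np.linalg.eigvalsh(lyap_Q(a * A1 + (1 - a) * A2)).min()
              for a in np.linspace(0.0, 1.0, 11))
```

Since Q depends affinely on A, the Q of any convex combination is the same convex combination of the positive definite Q_1 and Q_2, hence positive definite.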
Let us now turn our attention to the matrix A_p = (A + E), where A is Hurwitz and E is some perturbation in the admissible class. Define A_1 = A and A_2 = A + \gamma \bar{E}, \gamma \in R_+, further assuming that A_2 is also Hurwitz and that, for a given P, the function V(x) = x^T P x is a Lyapunov function for both \dot{x}(t) = A_1 x(t) and \dot{x}(t) = A_2 x(t). Letting A_3 be a convex combination of A_1 and A_2, one has:

A_3 = a_1 A_1 + (1 - a_1)A_2 = A_1 + a_2 \gamma \bar{E} \stackrel{\mathrm{def}}{=} A_1 + \beta \bar{E}

where \beta \in [0, \gamma]. According to the preceding lemma, V(x) = x^T P x is a Lyapunov function for \dot{x}(t) = A_3 x(t). Now, suppose that A_4 = A + \zeta \bar{E}, \zeta > \gamma. Even if A_4 is Hurwitz, it may happen that V(x) = x^T P x is not a Lyapunov function for \dot{x}(t) = A_4 x(t).

Since the choice of Q determines P, it also determines the size of the convex set of system equations for which V(x) = x^T P x is a Lyapunov function. Therefore, the conservatism of a computed stability condition will be reduced if Q is selected such that the corresponding P yields the largest possible convex set A(P). However, notice that in the above lemma a fixed perturbation direction \bar{E} is taken into account, while in a robust stability problem one deals with an admissible class of perturbations. The question of selecting Q such that the corresponding P generates a Lyapunov function for the largest possible set of perturbed systems, for any perturbation in the admissible class, does not have a straightforward analytic solution; possibly it has no analytic solution at all.
It was seen in Chapter 3 that choosing the Lyapunov function candidate V(x) = x^T P_o x for the perturbed system \dot{x}(t) = (A + E)x(t) leads to the derivative equation (3.7), namely

\dot{V}_p(x) = -x^T[Q_o - (E^T P_o + P_o E)]x \stackrel{\mathrm{def}}{=} -x^T Q_p x

where Q_o and P_o are respectively the choice of Lyapunov matrix for the nominal system and the solution of the nominal Lyapunov equation. A sufficient condition for stability of the perturbed system is the positive definiteness of Q_p. Defining, for simplicity,

F \stackrel{\mathrm{def}}{=} E^T P_o + P_o E \qquad (4.3)
robust stability requires positive definiteness of Q_p = (Q_o - F). Since both Q_o and F are real symmetric matrices, one has:

(Q_o - F) positive definite
\iff \min_i \{\mathrm{Re}[\lambda_i(Q_o - F)]\} > 0 \iff \underline{\sigma}(Q_o - F) > 0 \qquad (4.4)
\impliedby \underline{\sigma}(Q_o) - \bar{\sigma}(F) > 0 \qquad (4.5)

in view of the inequality

\underline{\sigma}(Q_o - F) \geq \underline{\sigma}(Q_o) - \bar{\sigma}(F) \qquad (4.6)

Since the analysis objective is to find explicit conditions on E, equation (4.4) is not useful, and the only alternative is to apply (4.5). Obviously, this condition is not tight, since, as shown by (4.6), it is possible that \underline{\sigma}(Q_o - F) > 0 even if \underline{\sigma}(Q_o) - \bar{\sigma}(F) < 0. Therefore, the closer (4.6) is to equality, the tighter (4.5) is. The following theorem gives necessary and sufficient conditions on Q_o and F for equality to be attained in (4.6). For simplicity, the subscript of Q_o will be dropped.
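The gap between (4.4) and (4.5) is easy to exhibit with diagonal matrices (chosen purely for illustration):

```python
import numpy as np

Qo = np.diag([1.0, 5.0])                  # nominal Lyapunov matrix (example)
F = np.diag([0.0, 3.0])                   # symmetric perturbation term, cf. (4.3)

direct = np.linalg.eigvalsh(Qo - F).min()  # condition (4.4): smallest eigenvalue
sv = np.linalg.eigvalsh(Qo).min() - np.linalg.svd(F, compute_uv=False)[0]  # (4.5)
# direct = 1 > 0, yet sv = 1 - 3 = -2 < 0: (4.5) fails although Qo - F is
# positive definite, illustrating the conservatism introduced through (4.6)
```
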
Theorem 4.1. Given Q, F \in R^{m \times m}, then \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F) if and only if the following conditions hold:

\bar{x}_F = e^{j\beta} \underline{x}_Q \qquad (4.7)
\bar{y}_F = e^{j\beta} \underline{y}_Q \qquad (4.8)

where \beta is arbitrary.
The first of these conditions requires that the major output principal direction of F and the minor output principal direction of Q be aligned. The second requires alignment
between the major input principal direction of F and the minor input principal direction of Q.
The proof of this theorem is derived from a similar proof [30], and is given after the following lemma, which establishes necessary and sufficient conditions for alignment between the relevant principal directions of Q, F and (Q  F).
Lemma 4.2. Given Q, F \in R^{m \times m}, then \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F) if and only if the following conditions hold:

\underline{y}_{Q-F} = e^{j\theta} \bar{y}_F \qquad (4.9)
\underline{y}_{Q-F} = e^{j\psi} \underline{y}_Q \qquad (4.10)
\underline{x}_{Q-F} = e^{j\theta} \bar{x}_F \qquad (4.11)
\underline{x}_{Q-F} = e^{j\psi} \underline{x}_Q \qquad (4.12)

where \theta and \psi are arbitrary.
Proof. Sufficiency: (4.9) to (4.12) \implies \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F).

Assume conditions (4.9) to (4.12) are true, and consider the input \underline{y}_{Q-F} applied to [Q - F]. Then,

[Q - F]\underline{y}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F}
= e^{j\psi} Q\underline{y}_Q - e^{j\theta} F\bar{y}_F \quad \text{by (4.9), (4.10)}

Applying the relationships \underline{\sigma}(M)\underline{x}_M = M\underline{y}_M and \bar{\sigma}(M)\bar{x}_M = M\bar{y}_M, \forall M, to the last equation, it becomes:

[Q - F]\underline{y}_{Q-F} = e^{j\psi} \underline{\sigma}(Q)\underline{x}_Q - e^{j\theta} \bar{\sigma}(F)\bar{x}_F
= \underline{\sigma}(Q)\underline{x}_{Q-F} - \bar{\sigma}(F)\underline{x}_{Q-F} \quad \text{by (4.11), (4.12)}
= [\underline{\sigma}(Q) - \bar{\sigma}(F)]\underline{x}_{Q-F}

The last equation implies that \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F), which proves sufficiency.
Necessity: \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F) \implies (4.9) to (4.12).

Assume \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F). Now, \forall z \in R^n, (Q - F)z = Qz - Fz. For z = \underline{y}_{Q-F}, this expression becomes (Q - F)\underline{y}_{Q-F} = \underline{\sigma}(Q - F)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F}. Given the assumption above, \underline{\sigma}(Q)\underline{x}_{Q-F} - \bar{\sigma}(F)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F}, which is equivalent to

\underline{\sigma}(Q)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} \qquad (4.13)
\bar{\sigma}(F)\underline{x}_{Q-F} = F\underline{y}_{Q-F} \qquad (4.14)

Equation (4.13) means that, since Q applied to \underline{y}_{Q-F} produces a magnification \underline{\sigma}(Q), \underline{y}_{Q-F} and \underline{y}_Q must be aligned, that is, \underline{y}_{Q-F} = e^{j\psi}\underline{y}_Q, for arbitrary \psi, which is (4.10). Now,

\underline{\sigma}(Q)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} = Qe^{j\psi}\underline{y}_Q = e^{j\psi}Q\underline{y}_Q = e^{j\psi}\underline{\sigma}(Q)\underline{x}_Q

Therefore, \underline{x}_{Q-F} = e^{j\psi}\underline{x}_Q, which is (4.12).

Similarly, equation (4.14) shows that, since F applied to \underline{y}_{Q-F} produces a magnification \bar{\sigma}(F), \underline{y}_{Q-F} and \bar{y}_F must be aligned, that is, \underline{y}_{Q-F} = e^{j\theta}\bar{y}_F, for arbitrary \theta, which is (4.9). Since \bar{\sigma}(F)\underline{x}_{Q-F} = F\underline{y}_{Q-F} = Fe^{j\theta}\bar{y}_F = e^{j\theta}\bar{\sigma}(F)\bar{x}_F, it follows that \underline{x}_{Q-F} = e^{j\theta}\bar{x}_F, which is (4.11).
Proof of Theorem 4.1. Necessity: \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F) \implies (4.7) and (4.8).

Rewriting (4.9) as \bar{y}_F = e^{-j\theta}\underline{y}_{Q-F} and using (4.10), one gets \bar{y}_F = e^{-j\theta}e^{j\psi}\underline{y}_Q, and letting \beta = \psi - \theta, one obtains \bar{y}_F = e^{j\beta}\underline{y}_Q, which is (4.8). Similarly, rewriting (4.11) as \bar{x}_F = e^{-j\theta}\underline{x}_{Q-F} and using (4.12), one gets \bar{x}_F = e^{-j\theta}e^{j\psi}\underline{x}_Q, and using the definition of \beta, one obtains \bar{x}_F = e^{j\beta}\underline{x}_Q, which is (4.7). Therefore, necessity is proved.
Sufficiency: (4.7) and (4.8) \implies \underline{\sigma}(Q - F) = \underline{\sigma}(Q) - \bar{\sigma}(F).

Assume (4.7) and (4.8) and consider the input \underline{y}_Q to (Q - F). Then,

[Q - F]\underline{y}_Q = Q\underline{y}_Q - F\underline{y}_Q
= \underline{\sigma}(Q)\underline{x}_Q - e^{-j\beta}F\bar{y}_F, \quad \text{by (4.8)}
= \underline{\sigma}(Q)\underline{x}_Q - e^{-j\beta}\bar{\sigma}(F)\bar{x}_F
= \underline{\sigma}(Q)\underline{x}_Q - e^{-j\beta}e^{j\beta}\bar{\sigma}(F)\underline{x}_Q, \quad \text{by (4.7)}
= [\underline{\sigma}(Q) - \bar{\sigma}(F)]\underline{x}_Q

Therefore, [\underline{\sigma}(Q) - \bar{\sigma}(F)] = \underline{\sigma}(Q - F), which proves sufficiency.
In the present case, F is defined by (4.3), and Q is the Lyapunov matrix chosen for the nominal system. Since the matrices involved are real, the phase factor e^{j\beta} reduces to \pm 1, and the necessary and sufficient conditions for equality in (4.6) are:

\bar{x}_{(E^T P + P E)} = \pm\,\underline{x}_{[-(A^T P + P A)]}, \quad \bar{y}_{(E^T P + P E)} = \pm\,\underline{y}_{[-(A^T P + P A)]}
The expressions above have a qualitative significance. They show that (4.6) holds with equality, for an allowable perturbation class, if and only if the class includes a perturbation for which the alignment conditions are attained. If the existence of such a perturbation were guaranteed, the use of (4.5) in place of (4.4) would not introduce conservatism.
However, it is not evident whether or not the above expressions can be helpful in the choice of the Lyapunov matrix Q. The conservatism of (4.6) would be eliminated if Q were such that the resulting P leads to the attainment of the alignment conditions. However, it cannot be guaranteed that the Lyapunov function constructed with such a P would be a Lyapunov function for a larger set of perturbed systems than the function obtained with some other P.
This section shows that the choice of the nominal Lyapunov matrix has an important role in determining the conservatism of robust stability conditions. In the next section, the problem of the choice of Q is addressed, in the context of structured perturbations.
4.3 Stability Under Structured Uncertainty
4.3.1 Uncertainty Description
In this section, the uncertainty class $E \in \mathcal{E}_{SD}$ defined in (2.31) is adopted. Uncertainty in this class can be represented as $E = \sum_{k=1}^{m} p_k E_k$, where $E_k$, $k = 1,\ldots,m$, is a constant matrix which accounts for the structure of the perturbation due to the parameter $p_k$. Without loss of generality, a symmetric range about the origin is assumed for each parameter, namely $p_k \in (-a_k, a_k)$, $\forall k$.
This description is well suited to the representation of real-world system uncertainty, since it accounts for the possibility that changes in one physical parameter may affect several entries of the matrix $A$. However, it requires that the perturbation to each element of $A$ be linear in the parameters, and thus may require parameter redefinitions. This description has already been used in robust stability analysis of state space systems [4, 51, 61].
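As a small numerical illustration of this uncertainty class (all matrices and parameter values below are hypothetical, chosen only to show the coupling the text describes, where one physical parameter moves several entries of $A$ at once):

```python
import numpy as np

# Hypothetical structure matrices: parameter p1 enters two entries of A
# simultaneously (e.g., one physical constant appearing in two state
# equations), while p2 enters a single entry.
E1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
E2 = np.array([[0.0, 0.0],
               [0.0, 1.0]])

p = np.array([0.3, -0.1])  # one admissible point in parameter space

# Perturbation in the structured class: E = sum_k p_k E_k
E = sum(pk * Ek for pk, Ek in zip(p, [E1, E2]))
```

Here a change in $p_1$ moves both off-diagonal entries of $E$ together, which is exactly the coupling that an element-by-element independent description would ignore.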
4.3.2 Sufficient Condition for Robust Stability

Let $p = [\,p_1, p_2, \ldots, p_m\,]^T$ be a vector containing the system parameters, and let us define

$\mathcal{A} \stackrel{def}{=} \{\, M \in \mathbb{K}^{n \times n} : \mathrm{Re}[\lambda_i(M)] < 0,\ \forall i \,\}$ (4.15)

where either $\mathbb{K} = \mathbb{R}$ or $\mathbb{K} = \mathbb{C}$, according to the context, and

$S_d \stackrel{def}{=} \left\{\, p \in \mathbb{R}^m : \left(A + \sum_{k=1}^{m} p_k E_k\right) \in \mathcal{A} \,\right\}$ (4.16)

Then, $S_d$ represents the stability domain in the space of system parameters.
Given the nominal system model and the parametric uncertainty description, the objective of robust stability analysis is to determine the stability domain in the space of parameters, which is usually specified by an admissible upper bound on some norm of p.
The Lyapunov Direct Method has been used in robust stability analysis by several authors [4, 16, 42, 51, 55, 56, 58, 59, 61].
Particularly, the uncertainty description above has also been adopted [4, 51, 61]. Introducing that uncertainty description in (3.7), the equation of the derivative of the Lyapunov function becomes:
$\dot{V}(x) = -x^T \left[\, Q_c - \sum_{k=1}^{m} p_k \left(E_k^T P_c + P_c E_k\right) \right] x$ (4.17)

where $Q_c$ and $P_c$ are respectively the Lyapunov matrix for the nominal system and the corresponding solution of the Lyapunov equation. Therefore, positive definiteness of the matrix $[\,Q_c - \sum_{k=1}^{m} p_k (E_k^T P_c + P_c E_k)\,]$ is a sufficient condition for asymptotic stability of $(A + E)$. In order to obtain the stability domain, an explicit condition on some norm of $p$ must be derived. A derivation of stability domains is presented in Section 4.3.4. Before this, some available results are reviewed.
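A minimal sketch of this sufficient condition, assuming a numpy-only environment (the nominal matrix $A$, structure matrix $E_1$, parameter values and the choice $Q_c = I$ below are hypothetical; the Lyapunov equation is solved by Kronecker vectorization rather than a library routine):

```python
import numpy as np

def lyap_solve(A, Q):
    """Solve the nominal Lyapunov equation A^T P + P A = -Q by vectorization:
    (I kron A^T + A^T kron I) vec(P) = vec(-Q)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')

def sufficient_condition_holds(A, E_list, p, Q):
    """Check positive definiteness of Q - sum_k p_k (E_k^T P + P E_k),
    which by (4.17) guarantees asymptotic stability of A + sum_k p_k E_k."""
    P = lyap_solve(A, Q)
    M = Q - sum(pk * (Ek.T @ P + P @ Ek) for pk, Ek in zip(p, E_list))
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

# Hypothetical data: stable nominal A, one parameter entering entry (2,1)
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
E1 = np.array([[0.0, 0.0], [1.0, 0.0]])

ok_small = sufficient_condition_holds(A, [E1], [0.5], np.eye(2))  # holds
ok_large = sufficient_condition_holds(A, [E1], [3.0], np.eye(2))  # fails
```

Failure of the test for the larger parameter value does not by itself prove instability, since the condition is only sufficient (though here $A + 3E_1$ happens to be unstable, as its determinant is negative).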
4.3.3 Available Results for Admissible $\|p\|$

For simplicity, the subscript will be dropped in the notation of $Q_c$ and $P_c$; therefore, $Q$ and $P$ will mean nominal matrices. Let us define:

$F_k \stackrel{def}{=} E_k^T P + P E_k, \quad k = 1,\ldots,m$ (4.18)

$p \stackrel{def}{=} [\,p_1 \cdots p_m\,]^T$ (4.20)

$F_{Qk} \stackrel{def}{=} Q^{-1/2} F_k Q^{-1/2}$ (4.21)
The following norm bound [4] gives a condition for robust stability:

$\|p\|_2 < \frac{\underline{\sigma}(Q)}{\left[\,\sum_{k=1}^{m}[\bar{\sigma}(F_k)]^2\,\right]^{1/2}}, \qquad Q \text{ a free parameter}$ (4.22)

Notice that both the numerator and the denominator depend on $Q$, which is treated as a free parameter.
Results for a fixed $Q$ have been reported. Using $Q = 2I_n$, the following conditions can be derived [61]:

$\|p\|_2 < \frac{1}{\left[\,\sum_{k=1}^{m}[\bar{\sigma}(F_k)]^2\,\right]^{1/2}}$ (4.23)

$\sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_k) < 1$ (4.24)

$|p_j| < \frac{1}{\sum_{k=1}^{m}\bar{\sigma}(F_k)}, \qquad j = 1,\ldots,m$ (4.25)

The choice of $Q = 2I_n$ has been justified [59] on the basis that it maximizes the ratio $\underline{\sigma}(Q)/\bar{\sigma}(Q)$. Fixing $Q$ yields ready-to-use analytic expressions for bounds on $p$; however, in view of the facts pointed out in the last section, it is a potentially conservative option. Actually, it has been acknowledged [61] that a state transformation [58, 59] can be applied to the system description, in order that improved results are obtained with $Q = 2I_n$ for the transformed system. Yet there is no systematic method for choosing the adequate state transformation.
The following stability conditions have also been reported [51]:

$\|p\|_2 < \frac{1}{\bar{\sigma}(Q^{-1/2})\,\bar{\sigma}(F_Q)}, \qquad F_Q \stackrel{def}{=} [\,F_1 Q^{-1/2}\,\cdots\,F_m Q^{-1/2}\,]^T$ (4.26)

$\sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Qk}) < 1$ (4.27)

$\max_k |p_k| = \|p\|_\infty < \frac{1}{\bar{\sigma}\!\left(\sum_{k=1}^{m}|F_{Qk}|\right)}$ (4.28)

It has been shown through examples [51] that less conservative stability conditions can be obtained from these expressions with a choice of $Q$ other than $Q = 2I_n$. Furthermore,
it has been argued [51] that regarding Q as a free parameter inherently incorporates the degree of freedom brought about by a state transformation [58, 59]. However, no analytical method has been proposed for the choice of Q.
Note that, since $Q$ is a free parameter in (4.22) and in (4.26) to (4.28), and no analytical method is available for the selection of $Q$, this implicitly means that some sort of search over the space of $n \times n$ symmetric, positive-definite matrices is required.
In the following, a derivation of stability conditions on norms of p, which was independently developed, is explicitly presented, and the corresponding stability domains in the parameter space are defined.
4.3.4 Derivation of Admissible $\|p\|$

Using the definition of $F_k$ in (4.18), equation (4.17) can be rewritten as:

$\dot{V}(x) = -\left[\, x^T Q^{1/2} Q^{1/2} x - x^T Q^{1/2} \left(\sum_{k=1}^{m} p_k\, Q^{-1/2} F_k Q^{-1/2}\right) Q^{1/2} x \,\right]$

From the inner-product properties $\langle y, y \rangle = \|y\|^2$ and $\langle y, My \rangle \le \bar{\sigma}(M)\,\langle y, y \rangle$, and defining $y(t) = Q^{1/2} x(t)$, the inequality below follows from the last equation:

$\dot{V}(x) \le -\left[\,1 - \bar{\sigma}\!\left(\sum_{k=1}^{m} p_k\, Q^{-1/2} F_k Q^{-1/2}\right)\right] \|y\|^2$ (4.29)

Since the norm term on the right side is always greater than zero for nonzero $y$, a sufficient condition for robust stability is

$\bar{\sigma}\!\left(\sum_{k=1}^{m} p_k\, Q^{-1/2} F_k Q^{-1/2}\right) < 1$ (4.30)

New result for admissible $\|p\|_2$
Let us define

$M_p \stackrel{def}{=} [\,p_1 I_n\,|\,\cdots\,|\,p_m I_n\,]$ (4.31)

$M_Q \stackrel{def}{=} [\,F_{Q1}\,|\,\cdots\,|\,F_{Qm}\,]^T$ (4.32)

Then, substituting in (4.30), one obtains

$\bar{\sigma}\!\left(\sum_{k=1}^{m} p_k\, Q^{-1/2} F_k Q^{-1/2}\right) = \bar{\sigma}(M_p M_Q) \le \bar{\sigma}(M_p)\,\bar{\sigma}(M_Q)$ (4.33)

The maximum singular value of $M_p$ is given by:

$\bar{\sigma}(M_p) = \left[\,\max_i \{\lambda_i(M_p M_p^T)\}\,\right]^{1/2} = \left[\,\sum_{k=1}^{m} p_k^2\,\right]^{1/2} = \|p\|_2$ (4.34)

Substituting in (4.33), one obtains that the robust stability condition (4.30) is satisfied whenever

$\|p\|_2 < \frac{1}{\bar{\sigma}(M_Q)} \stackrel{def}{=} r_{s2}(Q)$ (4.35)

The corresponding stability domain in the parameter space is given by

$S_{d2}(Q) = \{\, p : \|p\|_2 < r_{s2}(Q) \,\}$ (4.36)

The computed stability domain $S_{d2}$ is a hypersphere of radius $r_{s2}$ in $\mathbb{R}^m$. Given $A$ and $E_k$, $k = 1,\ldots,m$, the induced 2-norm of $M_Q$, and consequently the radius $r_{s2}$, is parametrized by the Lyapunov matrix $Q$.

A related result for admissible $\|p\|_\infty$
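The computation of $r_{s2}(Q)$ in (4.35)-(4.36) can be sketched as follows (numpy-only; the $2 \times 2$ system and the choice $Q = I$ below are hypothetical, and the Lyapunov equation is solved by vectorization rather than a library routine):

```python
import numpy as np

def lyap_solve(A, Q):
    """Solve A^T P + P A = -Q for P by Kronecker vectorization."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')

def r_s2(A, E_list, Q):
    """Radius of the 2-norm stability hypersphere, eq. (4.35): 1/sigma_max(M_Q)."""
    P = lyap_solve(A, Q)
    w, V = np.linalg.eigh(Q)               # Q symmetric positive definite
    Qmh = V @ np.diag(w ** -0.5) @ V.T     # Q^{-1/2}
    blocks = [Qmh @ (Ek.T @ P + P @ Ek) @ Qmh for Ek in E_list]  # F_Qk, (4.21)
    MQ = np.vstack(blocks)                 # M_Q, eq. (4.32): mn x n
    return 1.0 / np.linalg.norm(MQ, 2)     # ord=2 gives the max singular value

# Hypothetical example: one parameter perturbing entry (2,1) of a stable A
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
E1 = np.array([[0.0, 0.0], [1.0, 0.0]])
r  = r_s2(A, [E1], np.eye(2))   # any |p_1| < r is guaranteed stable
```

For this example the exact stability boundary is $p_1 = 2$ (where $\det(A + p_1 E_1)$ vanishes), so the radius obtained with $Q = I$ is sufficient but, as anticipated in the text, conservative.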
Considering the matrix $M_p$ defined in (4.31), the following inequality applies:

$\bar{\sigma}(M_p) = \bar{\sigma}([\,p_1 I_n\,|\,\cdots\,|\,p_m I_n\,]) \le \bar{\sigma}([\,|p_1| I_n\,|\,\cdots\,|\,|p_m| I_n\,])$ (4.37)

Now, let us define

$p_* \stackrel{def}{=} p_j : |p_j| = \max_k |p_k|$ (4.38)

Substituting $p_*$ for $p_k$, $\forall k$ in (4.37), one obtains

$\bar{\sigma}(M_p) \le \bar{\sigma}([\,|p_*| I_n\,|\,\cdots\,|\,|p_*| I_n\,]) = |p_*|\,\bar{\sigma}([\,I_n\,|\,\cdots\,|\,I_n\,]) = |p_*|\,(m)^{1/2}$ (4.39)

where $m$ is the number of parameters of the system. Using (4.39), one obtains from equation (4.33) that $|p_*|\,(m)^{1/2}\,\bar{\sigma}(M_Q) < 1$ is a sufficient condition for robust stability; equivalently,

$|p_*| = \|p\|_\infty < \frac{1}{(m)^{1/2}\,\bar{\sigma}(M_Q)} \stackrel{def}{=} l_{s\infty}(Q)$ (4.40)

Notice that, in view of definition (4.35), $l_{s\infty} = r_{s2}/(m)^{1/2}$. Therefore, the derivation of admissible $\|p\|_\infty$ above leads to the smallest $\infty$-norm upper bound that can be directly obtained from $\|p\|_\infty \le \|p\|_2 \le (m)^{1/2}\,\|p\|_\infty$. The corresponding stability domain is:

$S_{d\infty}(Q) \stackrel{def}{=} \{\, p : \|p\|_\infty < l_{s\infty}(Q) \,\}$ (4.41)

The stability domain is a hypercube in $\mathbb{R}^m$, with semi-side given by $l_{s\infty}(Q)$, therefore parametrized by the matrix $Q$ through $M_Q$.
The 2-norm and the $\infty$-norm conditions derived above differ from the corresponding previous results summarized in the last section. For future reference, a derivation of results (4.27) and (4.28) is now presented.
Admissible $\|p\|_{1w}$

From the robust stability condition of equation (4.30), one obtains

$\left\|\sum_{k=1}^{m} p_k F_{Qk}\right\|_{i2} \le \sum_{k=1}^{m} |p_k|\,\|F_{Qk}\|_{i2}$ (4.42)

Now, let $w = [\,\|F_{Q1}\|_{i2}\,\cdots\,\|F_{Qm}\|_{i2}\,]$. Then, substituting in the right term of the above inequality, one obtains that a sufficient condition for robust stability is

$\sum_{k=1}^{m} |w_k\, p_k| < 1$ (4.43)

This condition, which is the same as (4.27), is given in terms of a weighted 1-norm of $p$, with the $k$th weight given by $\|F_{Qk}\|_{i2}$. The corresponding stability domain is a hyperrhombus in $\mathbb{R}^m$, defined by

$S_{d1w}(Q) \stackrel{def}{=} \{\, p : \|p\|_{1w} < 1 \,\}$ (4.44)

The largest possible value of a semi-axis is $|p_k| < 1/w_k$, $\forall k$. Notice that the weights are parametrized by the Lyapunov matrix $Q$.

Admissible $\|p\|_\infty$
From (4.30), one obtains $\left\|\sum_{k=1}^{m} p_k F_{Qk}\right\|_{i2} \le \left\|\sum_{k=1}^{m} |p_k|\,|F_{Qk}|\right\|_{i2}$. Now, letting

$p_* \stackrel{def}{=} p_j : |p_j| = \max_k |p_k|$

and substituting $p_*$ for $p_k$, $\forall k$ in the last inequality, one obtains the sufficient condition $|p_*|\,\left\|\sum_{k=1}^{m} |F_{Qk}|\right\|_{i2} < 1$ or, equivalently,

$\|p\|_\infty < \frac{1}{\left\|\sum_{k=1}^{m}|F_{Qk}|\right\|_{i2}} \stackrel{def}{=} l_{sd}(Q)$ (4.45)

which is identical to (4.28). The corresponding stability domain is

$S_{sd}(Q) \stackrel{def}{=} \{\, p : \|p\|_\infty < l_{sd}(Q) \,\}$ (4.46)

Comparison of new results to previous results
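The families of bounds derived above can be computed side by side; a sketch (numpy-only, hypothetical two-parameter system; here `l_sd` is used as a label for the bound (4.45)):

```python
import numpy as np

def lyap_solve(A, Q):
    """Solve A^T P + P A = -Q for P by Kronecker vectorization."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')

def norm_bounds(A, E_list, Q):
    """Return (r_s2, l_sinf, w, l_sd): the 2-norm bound (4.35), the
    infinity-norm bound (4.40), the 1-norm weights of (4.43), and the
    previously reported infinity-norm bound (4.45)."""
    m = len(E_list)
    P = lyap_solve(A, Q)
    ev, V = np.linalg.eigh(Q)
    Qmh = V @ np.diag(ev ** -0.5) @ V.T                       # Q^{-1/2}
    FQ = [Qmh @ (Ek.T @ P + P @ Ek) @ Qmh for Ek in E_list]   # F_Qk
    MQ = np.vstack(FQ)
    r_s2   = 1.0 / np.linalg.norm(MQ, 2)                      # (4.35)
    l_sinf = r_s2 / np.sqrt(m)                                # (4.40)
    w      = np.array([np.linalg.norm(F, 2) for F in FQ])     # weights, (4.43)
    l_sd   = 1.0 / np.linalg.norm(sum(np.abs(F) for F in FQ), 2)  # (4.45)
    return r_s2, l_sinf, w, l_sd

# Hypothetical data: two parameters perturbing different entries of A
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
E1 = np.array([[0.0, 0.0], [1.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
r_s2, l_sinf, w, l_sd = norm_bounds(A, [E1, E2], np.eye(2))
```

On this example the bound (4.45) exceeds the bound (4.40), consistent with the comparison made in the text: the new $\infty$-norm result pays for the use of inequality (4.33) with some extra conservatism.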
The new result of equation (4.35) is analogous to the earlier results of equations (4.22) and (4.26). Now, consider the following possible derivation of equation (4.26). The matrix in equation (4.30) can be written as $\sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} = Q^{-1/2} M_p F_Q$, where $M_p$ is given by (4.31) and $F_Q \stackrel{def}{=} [\,F_1 Q^{-1/2}\,\cdots\,F_m Q^{-1/2}\,]^T$. Therefore, one obtains:

$\bar{\sigma}\!\left(\sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2}\right) \le \bar{\sigma}(M_p)\,\bar{\sigma}(Q^{-1/2})\,\bar{\sigma}(F_Q)$ (4.47)

from which equation (4.26) follows.

However, from (4.32) and the definition of $F_Q$ above, it follows that $M_Q = \mathrm{diag}[\,Q^{-1/2}\,]\,F_Q$. Therefore, $\bar{\sigma}(M_Q) \le \bar{\sigma}(Q^{-1/2})\,\bar{\sigma}(F_Q)$. Using (4.33) and (4.47), it follows that

$\bar{\sigma}(M_p)\,\bar{\sigma}(M_Q) \le \bar{\sigma}(M_p)\,\bar{\sigma}(Q^{-1/2})\,\bar{\sigma}(F_Q)$ (4.48)
Consequently, condition (4.30) is satisfied with less conservatism by $\bar{\sigma}(M_p)\,\bar{\sigma}(M_Q) < 1$, as in the new result (4.35), than by $\bar{\sigma}(M_p)\,\bar{\sigma}(Q^{-1/2})\,\bar{\sigma}(F_Q) < 1$, which is the case in the derivation of (4.26) given above. Similar reasoning can be applied relative to the derivation of (4.22).
The new result for the admissible 2-norm of $p$ is superior to previously available results, in the sense that, if an arbitrary Lyapunov matrix $Q$ is used, equation (4.35) will give a better 2-norm bound on $p$ than either (4.22) or (4.26). Therefore, the new result is 'nonconservative' relative to the others. However, the conservatism of all the results depends on the adequate choice of the Lyapunov matrix $Q$.
On the other hand, the derivation of the new result for the admissible $\infty$-norm of $p$, given in equation (4.40), requires that the inequality (4.33) be used, while the derivation of the result (4.45) does not. Therefore, given a Lyapunov matrix $Q$, the new result is expected to be more conservative than the previously available result. However, while the latter is given in terms of $\sum_{k=1}^{m}|F_{Qk}|$, the new $\infty$-norm result is given in terms of the same matrix function $M_Q$ that appears in the new 2-norm result. Furthermore, it will be shown in the next section that the derivatives of cost functionals relative to the elements of $Q$ are easier to obtain for a functional based on the new $\infty$-norm result than for a functional based on the previous result.
4.3.5 Admissible Weighted Stability Domains

In the derivation of the norm bounds (4.35), (4.40) and (4.43), it was implicitly assumed that no 'a priori' information was available about the relative range of the individual parameters. This is equivalent to assuming that the largest value that can be taken is the same for all the parameters, that is, $|p_k| \le \bar{a}$, $\forall k$, $\bar{a} = \max_k |a_k|$. Consequently, the stability domains defined in the parameter space by those equations are, respectively, a hypersphere, a hypercube and a hyperrhombus.
If information is available on the actual relative range of the parameters, the conservatism of the stability domains $S_{d2}$ and $S_{d\infty}$ can be reduced, by shaping them such that their relevant dimensions become proportional to the ranges of the parameters. The adequate shape can be obtained by weighting the parameter ranges [51].
Let us rewrite the uncertainty description as $E = \sum_{k=1}^{m} p_k E_k = \sum_{k=1}^{m} \frac{p_k}{s_k}\, s_k E_k$, where $s_k$, $k = 1,\ldots,m$ are adequately chosen scalars, and define

$p_k' \stackrel{def}{=} \frac{p_k}{s_k}, \qquad E_k' \stackrel{def}{=} s_k E_k$ (4.49)

so that

$E = \sum_{k=1}^{m} p_k' E_k'$ (4.50)
Considering the weighted uncertainty description above, and proceeding as in Section 4.3.4, admissible norms for $p_k'$, $\forall k$, are derived. The corresponding stability domains, in the weighted parameter space, are given by (4.35), (4.40) and (4.43). The stability domains in the original parameter space are then obtained using (4.49).
2-norm weighted stability domain

Following the same steps of the derivation of equation (4.35), one obtains

$\|p'\|_2 < \frac{1}{\bar{\sigma}(M_Q')} \stackrel{def}{=} r_{s2}'(Q)$ (4.51)

where $M_Q'$ is obtained by substituting $E_k'$ for $E_k$ in the definition of $M_Q$. The stability domain in the weighted parameter space is given by

$S_{d2}'(Q) = \{\, p' : \|p'\|_2 < r_{s2}'(Q) \,\}$ (4.52)

To obtain the stability domain in the original parameter space, consider

$\left[\frac{p_1}{s_1}\right]^2 + \left[\frac{p_2}{s_2}\right]^2 + \cdots + \left[\frac{p_m}{s_m}\right]^2 < [\,r_{s2}'(Q)\,]^2$ (4.53)
This inequality defines a hyperellipsoid with semi-axes $a_k$ given by

$a_k = s_k\, r_{s2}'(Q), \quad \forall k$ (4.54)

$\infty$-norm weighted stability domain
Proceeding as in the derivation of (4.40), one obtains

$\|p'\|_\infty < \frac{1}{(m)^{1/2}\,\bar{\sigma}(M_Q')} \stackrel{def}{=} l_{s\infty}'(Q)$ (4.55)

where $M_Q'$ is as defined above. The stability domain in the weighted parameter space is

$S_{d\infty}' \stackrel{def}{=} \{\, p' : \|p'\|_\infty < l_{s\infty}'(Q) \,\}$ (4.56)

Since

$\max_k \left|\frac{p_k}{s_k}\right| < l_{s\infty}'(Q) \implies |p_k| < s_k\, l_{s\infty}'(Q),\ \forall k$ (4.57)

the stability domain in the original parameter space is a hyperrectangle, with semi-sides $l_k$ given by

$l_k = s_k\, l_{s\infty}'(Q), \quad \forall k$ (4.58)

The choice of weights
The norm bounds for weighted parameters define either regions with equal semi-axes or equal semi-sides, depending on the norm used. It is convenient to obtain stability regions whose relevant dimension is proportional to the corresponding actual parameter range.

Let us assume that $p_1$ is the original parameter with the smallest range. Then, one possible choice of the weights is

$s_k = \frac{a_k}{a_1}, \quad \forall k$ (4.59)
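A sketch of the weighted construction (numpy-only; the system data and the weights `s` below are hypothetical, standing in for known relative parameter ranges):

```python
import numpy as np

def lyap_solve(A, Q):
    """Solve A^T P + P A = -Q for P by Kronecker vectorization."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')

def r_s2(A, E_list, Q):
    """2-norm bound (4.35): 1/sigma_max(M_Q)."""
    P = lyap_solve(A, Q)
    ev, V = np.linalg.eigh(Q)
    Qmh = V @ np.diag(ev ** -0.5) @ V.T
    MQ = np.vstack([Qmh @ (Ek.T @ P + P @ Ek) @ Qmh for Ek in E_list])
    return 1.0 / np.linalg.norm(MQ, 2)

# Hypothetical data: p2's range assumed twice p1's, hence s = [1, 2]
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
E1 = np.array([[0.0, 0.0], [1.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
s  = np.array([1.0, 2.0])

# Weighted structure matrices E'_k = s_k E_k, eq. (4.49)
E_scaled  = [sk * Ek for sk, Ek in zip(s, [E1, E2])]
r_prime   = r_s2(A, E_scaled, np.eye(2))  # r'_s2(Q), eq. (4.51)
semi_axes = s * r_prime                   # hyperellipsoid semi-axes, eq. (4.54)
```

The resulting ellipsoid stretches the admissible region along $p_2$ by the factor $s_2/s_1 = 2$, matching the assumed relative ranges.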
4.4 Maximization of Stability Domains
4.4.1 The 'optimal' choice of Q

Let us recall the expressions obtained for stability domains in the parameter space:

(4.35),(4.36): $S_{d2}(Q) = \{\,p : \|p\|_2 < r_{s2}(Q)\,\}$; $\quad r_{s2}(Q) = \dfrac{1}{\bar{\sigma}(M_Q)}$

(4.40),(4.41): $S_{d\infty}(Q) = \{\,p : \|p\|_\infty < l_{s\infty}(Q)\,\}$; $\quad l_{s\infty}(Q) = \dfrac{1}{(m)^{1/2}\,\bar{\sigma}(M_Q)}$

(4.43),(4.44): $S_{d1w}(Q) = \{\,p : \|p\|_{1w} < 1\,\}$; $\quad \|p\|_{1w} = \displaystyle\sum_{k=1}^{m}|p_k|\,\bar{\sigma}(F_{Qk})$
As previously discussed, the size of the stability domains depends on the choice of the Lyapunov matrix $Q$. The best choice of $Q$, namely the one which yields the largest computed stability domain, is problem-dependent, since it is affected by both the system matrix $A$ and the matrices $E_k$. For instance, let us recall the equations relating $Q$ to $M_Q$. Given $A$ and $E_k$, $k = 1,\ldots,m$, and chosen $Q$, it follows that:

$Q, A:\quad A^T P + P A = -Q \implies P$

$P, E_k:\quad E_k^T P + P E_k = F_k$

$Q, F_k:\quad Q^{-1/2} F_k Q^{-1/2} = F_{Qk}$

$[\,(F_{Q1})^T\,|\,\cdots\,|\,(F_{Qm})^T\,]^T = M_Q$

Notice that $M_Q$, which is uniquely determined by $Q$, is an $mn \times n$ real matrix, where $m$ symmetric, $n \times n$ blocks are stacked. Defining the set

$\mathcal{Q} = \{\, Q : Q \in \mathbb{R}^{n \times n},\ \text{symmetric, positive definite} \,\}$

the quantity $r_{s2}(Q)$ defined by (4.35) relates to $Q$ through the real functional:

$N : \mathcal{Q} \to \mathbb{R}_+$

$Q \mapsto \frac{1}{\bar{\sigma}(M_Q)} = r_{s2}(Q)$ (4.60)
The above equations show that the functional $N(Q)$ is highly nonlinear, and complex enough to rule out a simple analytical solution for the best choice of $Q$. Moreover, $Q$ must be restricted to $\mathcal{Q}$, the set of $n \times n$ symmetric, positive-definite matrices, which means that the eigenvalues of $Q$ are constrained to be strictly positive. A feasible alternative to the analytical solution is to treat the problem of selecting $Q \in \mathcal{Q}$ as a constrained parameter optimization problem, where the real elements of $Q$ are the parameters. In the following, the problem of the computation of 'nonconservative' stability domains in the system parameter space is recast as an optimization problem over the set $\mathcal{Q}$. Although the discussion refers to the stability domains $S_{d2}$, $S_{d\infty}$ and $S_{d1w}$, it applies, with the obvious changes, to the weighted domains $S_{d2}'$ and $S_{d\infty}'$.
'Optimal' 2-norm stability domain

The objective to be optimized can be derived from any of the inequalities which give the admissible $\|p\|_2$ as a function of $Q$. However, it is convenient to choose the least conservative condition, namely the one which yields the largest stability domain for a given $Q$. As shown in the previous section, the least conservative condition on the 2-norm is given by equation (4.35). Therefore, let us elect that equation as the basis of the optimization procedure.
Let us define the objective functional

$J_2(Q) \stackrel{def}{=} \bar{\sigma}(M_Q)$ (4.61)

Then the optimized stability domain can be obtained as:

$S_{d2}^*(Q^*) = \{\,p : \|p\|_2 < r_{s2}^*(Q^*)\,\}; \qquad r_{s2}^*(Q^*) = \frac{1}{\min_{Q\in\mathcal{Q}} J_2(Q)}$ (4.62)
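The optimization over $\mathcal{Q}$ can be prototyped with a crude accept-if-better random search (a stand-in, not the method developed in the sequel; numpy-only, hypothetical $2 \times 2$ data; $Q$ is parametrized as $L L^T + \varepsilon I$ over its lower-triangular factor so that every candidate stays inside $\mathcal{Q}$):

```python
import numpy as np

def sigma_max_MQ(A, E_list, Q):
    """Objective functional J2(Q) = sigma_max(M_Q), eq. (4.61)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')
    ev, V = np.linalg.eigh(Q)
    Qmh = V @ np.diag(ev ** -0.5) @ V.T
    MQ = np.vstack([Qmh @ (E.T @ P + P @ E) @ Qmh for E in E_list])
    return np.linalg.norm(MQ, 2)

def search_Q(A, E_list, iters=300, seed=0):
    """Minimize J2 over Q = L L^T + eps*I by accept-if-better random steps."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    idx = np.tril_indices(n)
    def Q_of(t):
        L = np.zeros((n, n))
        L[idx] = t
        return L @ L.T + 1e-6 * np.eye(n)
    theta = np.eye(n)[idx]                 # start from Q = I
    best = sigma_max_MQ(A, E_list, Q_of(theta))
    for _ in range(iters):
        cand = theta + 0.2 * rng.standard_normal(theta.size)
        val = sigma_max_MQ(A, E_list, Q_of(cand))
        if val < best:                     # keep only improving candidates
            best, theta = val, cand
    return Q_of(theta), best

A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
E1 = np.array([[0.0, 0.0], [1.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Q_star, J_star = search_Q(A, [E1, E2])
r_star = 1.0 / J_star   # improved (or equal) 2-norm radius r_s2(Q*)
```

Note that $J_2$ is invariant to the scaling $Q \to cQ$, so the search effectively explores only the shape of $Q$; the text's point stands that a systematic, derivative-based method is preferable to such blind search.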

ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY By JOSE ALVARO LETRA A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1991
To Carmen Lucia and Ariadne
ACKNOWLEDGMENTS

I am profoundly indebted to my advisor and supervisory committee chairman, Dr. Haniph A. Latchman, for his guidance, permanent support and encouragement during my three years at the University of Florida. Despite his several other responsibilities, Dr. Latchman always found time to discuss my work and give me his insightful orientation. I wish to thank the professors who served on my committee, Dr. Thomas E. Bullock, Dr. J. Hammer, Dr. A. Antonio Arroyo and Dr. Spyros A. Svoronos, for their willingness to discuss and advise my work, and for the high level of consideration I was always treated with. I wish to thank the help and advice of Dr. G. Basile, my first committee chairman. I am indebted to the EE Graduate Coordinator, Dr. Leon W. Couch, and his staff, for all their assistance. Particularly, I have to thank Mrs. Greta Sbrocco, who always provided helpful orientation on administrative subjects. It was a privilege to work close to my ex-fellow student, Dr. Robert J. Norris, whose valuable incentive and help I now acknowledge. I also wish to thank Dr. Julio S. Dolce da Silva, of the Brazilian Army, for his help on my enrollment and adaptation to the University. I am grateful to the Exército Brasileiro (Brazilian Army) for granting me the opportunity of coming to the University of Florida to further pursue my studies, and to the CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico (Scientific and Technological National Development Agency, Brazil), for the scholarship I was granted.
TABLE OF CONTENTS

page

ACKNOWLEDGMENTS iii
ABSTRACT vi
CHAPTERS
1 INTRODUCTION 1
1.1 Dissertation Objective 1
1.2 Brief Historical of Uncertainty Treatment 2
1.3 Structure of the Dissertation 9
1.4 Notation 11
2 NOMINAL MODELS AND UNCERTAINTY REPRESENTATION 16
2.1 Nominal Models and Definitions 16
2.2 Uncertainty Representation 20
2.3 Conclusions 38
3 STABILITY ANALYSIS OF LINEAR SYSTEMS 39
3.1 Introduction 39
3.2 Stability of State Space Systems 39
3.3 Stability of Transfer Matrix Models 45
3.4 Frequency-Domain Scaling Techniques 63
3.5 Conclusions 72
4 LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY 73
4.1 Introduction 73
4.2 Dependence of Conservatism on Perturbation Structure 76
4.3 Stability Under Structured Uncertainty 82
4.4 Maximization of Stability Domains 92
4.5 Application of Optimization Over Q 109
4.6 Conclusions 113
5 STABILITY UNDER DIAGONAL PARAMETRIC UNCERTAINTY 115
5.1 Introduction 115
5.2 Diagonal Representation of State Space Perturbations 116
5.3 Problem Formulation 122
5.4 Necessary and Sufficient Conditions for Robust Stability 127
5.5 Sufficient Conditions for Robust Stability 132
5.6 Numerical Application 136
5.7 Some Extensions of Previous Results 139
5.8 Conclusions 143
6 COMPARISON OF SUFFICIENT PARAMETER NORM BOUNDS 145
6.1 Introduction 145
6.2 Results for Problems with 2 and 3 Parameters 146
6.3 Results for Randomly Generated Matrices 154
6.4 Conclusions 161
7 ITERATIVE CONTROLLER ROBUSTIFICATION 163
7.1 Introduction 163
7.2 Robustification Associated to Lyapunov Analysis 169
7.3 Robustification Associated to Frequency-Domain Analysis 169
7.4 Application 187
7.5 Conclusion 195
8 NECESSARY STABILITY DOMAIN IN THE PARAMETER SPACE 197
8.1 Introduction 197
8.2 Characterization of a Necessary Stability Domain 199
8.3 Computation of the Necessary Stability Domain 202
8.4 Applications 209
8.5 Conclusions 214
9 CONCLUSION 216
9.1 Summary 216
9.2 Directions for Future Work 223
REFERENCES 230
BIOGRAPHICAL SKETCH 234
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY

By JOSE ALVARO LETRA

May 1991

Chairman: Dr. Haniph A. Latchman
Major Department: Electrical Engineering

In the analysis of stability properties of control systems, the uncertainty in mathematical models must be taken into account. Main sources of uncertainty are high-order dynamic phenomena of the physical system neglected in the model, and variations in system parameters. The subject of this work is the assessment of stability of linear control systems in the presence of parametric uncertainty. State space and frequency-domain models and uncertainty representation are reviewed, as well as general conditions for nominal and robust stability. Also reviewed are scaling techniques used for reducing the degree of conservatism of frequency-domain stability conditions, including optimal similarity scaling, optimal nonsimilarity scaling and Perron scaling. Particularly, the perturbed state space model $\dot{x}(t) = (A + E)x(t)$ is studied. The nominal matrix $A$ is assumed asymptotically stable, and the perturbation $E$ is of the form $E = \sum_{k=1}^{m} p_k E_k$, where $p$ is an $m$-dimensional vector of system parameters, and $E_k$, $k = 1,\ldots,m$, are constant matrices. The application of the Lyapunov Direct Method
for obtaining conditions on the norm of $p$ which are sufficient for robust stability is discussed in detail. A new stability condition on $\|p\|_2$ is given, which is potentially less conservative than available results. The problem of the choice of the Lyapunov matrix which yields less conservative stability conditions is formalized as a constrained numerical optimization problem. For the case of time-invariant uncertainty, an equivalent frequency-domain stability problem is formulated, where the perturbation is a real, diagonal matrix obtained directly from the state space perturbation. Sufficient stability conditions are derived from the equivalent formulation, and scaling techniques are used, in order to reduce conservatism. Comparison of numerical results obtained for several problems indicates that, for time-invariant uncertainty, the frequency-domain approach, associated to Perron scaling, constitutes an alternative which has better performance than the Lyapunov Direct Method. The frequency-domain approach and corresponding stability conditions are also shown to be of advantage in iterative optimization of static feedback controllers of fixed order. Additionally, a procedure is suggested for obtaining a necessary stability domain in the space of plant parameters, starting from a known sufficient domain. Finally, the integration of the stability analysis techniques into robust controller design is discussed.
CHAPTER 1
INTRODUCTION

1.1 Dissertation Objective

At least two common aspects are shared by the majority of the current literature on control systems analysis and design, although many different methods and techniques are nowadays employed. These aspects are as follows:

• Focus is placed on multivariable systems;
• Uncertainty in system models is explicitly taken into account.

These aspects constitute a frame for the present dissertation. The specific subject is the assessment of robust stability properties of systems under parametric uncertainty, which finds motivation in the following considerations. Control systems are designed to meet some performance specifications. Although the formulation of performance specifications depends on the approach used, it always requires that some quantitative indices be satisfied by the system response, which of course imposes constraints on the dynamic behavior of the system. However, it only makes sense to discuss the quantitative behavior of a control system if its stability can be assured. Otherwise, the dynamic behavior can be expected to blow up under some admissible operating condition, thus rendering the system useless. Stability, therefore, emerges as a fundamental requirement. Control design relies on mathematical modeling of the controlled system. Unfortunately, there always exists a degree of uncertainty between the model and the modeled system,
which must be taken into account. The existence of uncertainty gives rise to the requirement of robustness, namely the aptitude of a control system for retaining the desired behavior in spite of the uncertainty. Design methods definitely depend on analysis techniques in order to assess system properties, including robust stability. Techniques for robust stability analysis count on uncertainty representation, which is dictated by several factors, mainly by the causes of uncertainty and available information on uncertainty structure. Variations in system parameters are sources of an important category of perturbations, which is particularly suitable to representation in state space models. Motivated by these facts, this dissertation addresses the problem of robust stability analysis in the presence of parametric perturbations. The perturbation will be assumed to depend linearly on a vector of parameters, thus admitting the practically important case in which one parameter affects several entries of the system matrices in the state space representation. This model has been used in several recent works in stability analysis. The development of the subject is outlined in Section 1.3. Before this, a brief historical summary of the treatment of uncertainty in control theory is given.

1.2 Brief Historical of Uncertainty Treatment

The need for control systems has been long felt in the process of technological development. Examples of the use of control systems date back to four thousand years [50]. Noteworthy is the fact that feedback principles are found even in those early examples. Among the several advantages that the feedback principle brings to control systems, appears the property of effectively coping with disturbances and system uncertainty [31].
Important events in feedback history are registered by Sage [50]. Among them are the invention of the mechanical flyball governor by James Watt in 1788, which was developed from early windmill regulators, and the analysis of feedback control systems published in 1868 by Maxwell. In 1927, the concept of feedback was introduced by Black in the design of amplifiers for long distance telephone lines; his pioneering work is contained in the paper 'Stabilized Feedback Amplifiers', published in 1934. Although robust to uncertainties caused by nonlinearity and other factors, the feedback amplifier presented unwanted oscillations. The theoretical study of this phenomenon led to the development of the regeneration theory by Nyquist, whose work was published in 1932. The Nyquist criterion, which derives closed-loop stability characteristics from open-loop information, would constitute a fundamental technique for frequency-domain stability analysis. Ensuing developments of frequency-domain concepts originated from the work of Bode, in network analysis and amplifier design (1945), which demonstrated the existence of constraints in the manipulation of the frequency response of linear time-invariant systems; from the Nichols transformation of the Nyquist diagram, and from the root locus technique of Evans. The set of those techniques constitutes what became known as the classical approach to analysis and design of Single-Input, Single-Output (SISO) systems. In the classical approach, the issue of coping with uncertainty is indirectly addressed, by providing the system with enough gain and phase margins. These margins ensure that unwanted effects of uncertainty will not disrupt stability. In the late '50s, problems of more complex nature, mainly originated by the control and guidance of missiles and space vehicles, came into the consideration of control engineers
and theoreticians, and dominated the development in the field. The already well-known set of classical tools was not adequate to deal with the essentially multivariable nature of the incoming control problems. The number of degrees of freedom inherent to multivariable systems, and the complex relationship between open-loop and closed-loop properties in those systems, mainly due to interaction, which has no counterpart in SISO systems, often preclude the use of the simple techniques developed for scalar systems [21]. In this context, and because the digital computer was already available, the decade of the '60s saw a marked tendency towards the use of optimization techniques in the solution of control problems. The design objectives in such techniques were mathematically treated and transformed into a cost function to be minimized. Thus, the approach to control problems shifted from the frequency-domain to state space. Indeed, the state space was well suited for describing multivariable systems, and powerful techniques were developed for handling optimal control problems. Feedback emerged as a convenient property of solutions to optimal problems [31]. Linear Quadratic State Feedback (LQSF) appeared as a robust solution to control problems, relying however on exact measurements of the states; on the other hand, the possibility of very accurate models for the applications then sought caused the question of uncertainty to receive comparatively less attention than in the classical frequency-domain approach. The state space formulation and the control techniques it brought about, however, did not achieve acceptance in all fields of applied control, particularly in industrial control.
Different reasons have been presented for this fact: only approximate models are available for many industrial processes; plants have components which deteriorate due to continued use; long formed habits of dealing with classical techniques by industrial engineers are an obstacle to the adoption of the sophisticated mathematical treatment required by optimal
control. The Linear Quadratic Gaussian (LQG) theory, developed in the late '60s, can handle external disturbances modelled as Gaussian noise, and preserve the optimality of solutions, but the LQG controller is not robust against plant uncertainty, an important limitation in such industrial applications. The decade of the '70s witnessed a renewed effort in control theory. The first phase in the process involved efforts made towards the generalization of classical SISO frequency-domain techniques to multivariable systems. One example of the resulting analysis and design techniques is the Inverse Nyquist Array (INA) method of Rosenbrock (1974), which sought to eliminate the influence of interaction and then apply scalar techniques to the independent loops. Another is the Characteristic Locus Method of MacFarlane and Postlethwaite [37], which introduces a generalization of the Nyquist stability criterion based on the eigenloci of the transfer function matrix, and produces necessary and sufficient conditions for stability. The resulting generalized Nyquist plots are used in the multivariable design in the same fashion that the Nyquist plot is in the scalar case. The original formulation, however, applies to the case of exactly known models. Since the eigenloci are sensitive to perturbations in the transfer matrix, the original formulation had limitations in the context of robust stability. Later developments have extended the generalized Nyquist criterion to uncertain systems, through the computation of inclusion bands for the perturbed eigenloci. Sufficient inclusion bands are obtained with the normal approximations method [8], and necessary and sufficient inclusion bands with the E-contours method [9]. Another side of that effort, which continued through the '80s, sought a deeper understanding of the structure and properties of multivariable systems, with a renewed interest for robustness aspects.
Safonov [48, 46] proposed an explicit representation where perturbations in multiloop systems assume the form of a diagonal perturbation matrix, therefore a structured representation. This representation was later used in the definition of a measure of stability margin for multivariable systems [47]. Doyle and Stein [14] developed the use of maximum singular values to obtain bounds on the perturbations to multivariable systems, with perturbations modeled as norm-bounded but otherwise unconstrained, having therefore an unstructured representation. In 1976, a parametrization of all stabilizing controllers of a particular system was presented by Youla and coworkers. Zames [60] proposed a scalar design technique which minimizes the effects of external disturbances while ensuring closed-loop stability; performance was measured in terms of the ∞-norm. This work is considered one of the fundamentals of what, associated to the Youla parametrization, has become known as H∞ control. Several multivariable problems, like sensitivity minimization and robustness to additive perturbations, can be expressed as H∞ control problems, that is, problems where the goal is the minimization, in the frequency-domain, of the norm of a transfer matrix. This approach permits the synthesis of a controller which minimizes an objective function, which in general is used to express some performance requirement, while ensuring the stability of the solution by restricting the controller to belong to the set of all stabilizing controllers. However, controllers derived through this approach tend to be of high order, requiring a posteriori order reduction. Although an unstructured uncertainty representation yields a more tractable mathematical problem, it may lead to conservative stability results. Often, some information about the structure of the perturbation is available, and should be used in order to produce tighter results.
The work of Doyle [12] gave a new dimension to the diagonal perturbation problem
pioneered by Safonov, when he argued that model uncertainty can be very effectively posed in terms of block-diagonal norm-bounded perturbations. He developed a new analysis tool, namely the μ-function, which constitutes a necessary and sufficient mathematical condition for robust stability of transfer matrix models. The computation of this new robustness measure presents considerable difficulty for general structured uncertainties. An upper bound presented by Doyle involves the minimization, over the space of diagonal similarity scaling matrices, of the norm of the scaled system matrix; this upper bound actually equals μ when there are at most three complex blocks in the diagonal uncertainty representation. For the case of more blocks, or when the perturbation has real components, the upper bound is a conservative estimate of μ. For design purposes under structured uncertainty, Doyle has formulated what has become known as the 'μ-synthesis' method. In this approach, the cost function to be minimized is the ∞-norm of a similarity-scaled transfer matrix involving a controller chosen from the set of all stabilizing controllers. The parameters are the controller itself and the scaling matrix. The formulations by Doyle, as well as previous work by Safonov, introduced the use of frequency-domain scaling in control problems as a tool for the derivation of less conservative sufficient stability results, in connection with the block-diagonal uncertainty problem. Other models of uncertainty, as well as different forms of scaling, have been proposed. For instance, in Latchman's work [33], the highly structured element-by-element-bounded uncertainty model is explored, and new, less conservative stability conditions are obtained with the introduction of nonsimilarity scaling.
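The diagonal-scaling upper bound just described can be sketched in a few lines of numerical code. The snippet below is an illustrative sketch only, not an algorithm from this dissertation: the function names and the example matrix are mine, and a general-purpose optimizer stands in for the specialized methods of the literature.

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """Minimize sigma_max(D M D^{-1}) over positive diagonal D.

    This is the diagonal similarity-scaling upper bound on the
    structured singular value (here for scalar uncertainty blocks).
    """
    n = M.shape[0]

    def scaled_norm(theta):
        # Fix d_0 = 1 to remove the common-scale degeneracy of D.
        d = np.exp(np.concatenate(([0.0], theta)))
        # (D M D^{-1})_{ij} = d_i M_{ij} / d_j
        return np.linalg.norm((d[:, None] / d[None, :]) * M, 2)

    res = minimize(scaled_norm, np.zeros(n - 1), method="Nelder-Mead")
    return res.fun

M = np.array([[1.0, 10.0],
              [0.1, 1.0]])
print(np.linalg.norm(M, 2))   # unscaled norm: 10.1
print(mu_upper_bound(M))      # scaled norm: about 2.0
```

For this example the optimal scaling balances the off-diagonal entries, reducing the bound from 10.1 to about 2, which illustrates how scaling removes conservatism without altering the stability information carried by the matrix.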
For the case of element-by-element-bounded complex perturbations, it has been shown [33] that, if the maximum singular value of the optimally scaled system matrix remains distinct, μ is attained, regardless of the number of elements in the perturbation matrix. Relationships between similarity scaling and nonsimilarity scaling have been derived [40], and used as a tool for decreasing the cost of the computation of the μ-function for complex perturbations. The block-diagonal formulation of uncertainties admits complex as well as real perturbations. Real perturbations in frequency-domain models have been employed, for example, to represent uncertainty in gains [10, 38] and in poles [10] of a transfer function. In this dissertation, a perturbed state space system is given a frequency-domain representation having real diagonal uncertainty, which is derived directly from the state space real uncertainty. For problems involving real uncertainty, results derived with the μ-function approach are usually only sufficient. The derivation of tighter results for the case of real uncertainty is an active area of research [17, 18]; a new upper bound for μ, tighter than the singular-value bound, has recently been introduced [18]. Besides the cited developments in the analysis of perturbed transfer matrix models, the analysis of perturbed state space models received a great deal of consideration in the last decade. Two basic approaches can be recognized in the analysis of state space models: the Kharitonov approach and the Lyapunov approach. The approach spurred by the work of Kharitonov [27] deals with robust stability of control systems through stability analysis of characteristic polynomials having perturbed coefficients. Although the original work considered the case of independent coefficient perturbations, new results [2] have later extended the approach to the case of polytopes of polynomials. Basically, this extension permits the assessment of stability of a whole polytopic family by analyzing stability properties of its exposed edge polynomials. The Lyapunov approach to robust stability analysis stemmed from the original work on stability by Lyapunov, published in Russian in 1892, which has a French translation dating from 1949.
The Lyapunov Direct Method (LDM) yields a sufficient condition for stability;
stability assessment, however, depends on the construction of a suitable Lyapunov function for the system under investigation. In the case of linear, time-invariant systems, a quadratic function of the state is used as the Lyapunov function. The condition for robust stability can then be posed in terms of the positive-definiteness of a certain matrix. Although only sufficient, the approach has been used in robust stability analysis in a great number of recent works [4, 16, 42, 51, 56, 59, 61]. In particular, this method has been used in connection with structured perturbations depending linearly on a vector of parameters [4, 51, 61]. This uncertainty representation, on the other hand, has also been used apart from the Lyapunov approach [18]. Additional stability analysis methods for state space systems are the stability radius method [24] and the methods of Qiu and Davison [44, 45]; tensor products are used in the latter.

1.3 Structure of the Dissertation

This dissertation is organized into 9 chapters, the first of which contains this Introduction. The next 2 chapters present a review of basic concepts, while the main part of the work is presented in Chapters 4 through 8. Chapter 9 contains the Conclusion. Specifically, nominal and perturbed system models are reviewed in Chapter 2. Special attention is given to uncertainty representation in state space and transfer matrix models, with emphasis placed on diagonal representation of uncertainty in interconnected frequency-domain models. The focus of Chapter 3 is on stability conditions. The review includes the Lyapunov Direct Method, the Generalized Nyquist Criterion, spectral radius conditions for stability, and spectral radius upper bounds given by the singular value and the structured singular value.
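The quadratic Lyapunov test for linear time-invariant systems mentioned above can be illustrated in a few lines. This is a sketch with an example system of my own choosing; scipy's Lyapunov solver stands in for any method discussed in the dissertation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# Solve A^T P + P A = -Q.  For a Hurwitz A the solution P is symmetric
# positive definite, so V(x) = x^T P x is a Lyapunov function proving
# asymptotic stability.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))            # all eigenvalues positive
print(np.allclose(A.T @ P + P @ A, -Q)) # True
```

The robust-stability question treated later is precisely whether this positive-definiteness test survives when A is replaced by a perturbed A_p.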
Chapter 4 concentrates on the assessment of robust stability of state space systems in the presence of structured perturbations which depend linearly on a vector of parameters. The application of the Lyapunov direct method is thoroughly discussed, including a qualitative study of the sources of conservatism under perturbation, a review of available results, the derivation of admissible parameter norms, and the use of parameter weighting for shaping the form of the computed stability domain. A new condition on the 2-norm of the vector of parameters, which is potentially less conservative than available conditions, is presented, and similarity scaling is explored in the reduction of conservatism of available results. Finally, the choice of an adequate Lyapunov matrix is cast as an optimization problem. An alternative approach to the assessment of robust stability of state space systems, under time-invariant perturbations linearly dependent on a vector of parameters, is proposed in Chapter 5. Working directly with the perturbed state equations, and exploring diagonalization of uncertainty, an equivalent frequency-domain problem is formulated, from which sufficient stability conditions are derived. The formulation is such that the uncertainty matrix which appears in the equivalent frequency-domain problem is derived directly from the real perturbation to the state space model. The derivation was independently undertaken, and has not been explicitly found in the literature. Conservatism of the stability conditions is reduced through the use of scaling techniques; besides the well-known optimal similarity scaling, conditions are obtained in terms of Perron scaling. Chapter 6 compares numerical results obtained with the LDM of Chapter 4 and the frequency-domain method proposed in Chapter 5. Results obtained from the frequency-domain method were in general less conservative than results from the LDM; they were always at least as good as the LDM results.
In particular, it is shown that the stability condition that uses Perron scaling has low computational cost and produces results with the same level of conservatism as those obtained with optimal similarity scaling. In Chapter 7, the frequency-domain approach is explored in the analysis step of an iterative controller robustification technique, similar to that proposed by Bhattacharyya [4]. The alternative approach has computational advantages, mainly when Perron scaling is used, because it then permits the elimination of parameters in the resulting optimization problem. Both the methods discussed in Chapters 4 and 5 yield sufficient stability domains in the space of plant parameters. In Chapter 8, a technique is presented for the computation of a necessary domain, starting from an available sufficient domain. An extensive search in the parameter space, which would be unfeasible for a large number of parameters, is avoided on the basis of a conjecture, which has worked well in all problems considered. Finally, Chapter 9 presents a summary of results and suggestions for further work.

1.4 Notation

The following notational convention will be adopted in this document, unless otherwise explicitly stated. Additional symbols will be defined as required.

A_o : Nominal dynamic matrix (open-loop)
A_c : Nominal dynamic matrix (closed-loop)
A_p : Perturbed dynamic matrix
D : Diagonal form of real perturbation matrix
D_c : Diagonal form of perturbation with complex scalars
E : Error matrix (parametric perturbation)
E_A : Parametric perturbation to the matrix A
E_k : Perturbation due to the k-th parameter
F_U(M, Δ) : Upper linear fractional transformation
F_L(M, K) : Lower linear fractional transformation
G_o(s) : Nominal plant transfer matrix
H(s) : Open-loop transfer matrix
I_n : Identity matrix of order n
J : Objective function in optimization problems
K : Controller
L : Left matrix in the decomposition E = LDR
M ∈ ℝ^{n×m} : Real n × m matrix
M ∈ ℂ^{n×m} : n × m matrix with complex elements
M_ij : Element at the i-th row and j-th column of M
M^H : Complex conjugate transpose of M
M⁺ : Matrix of the complex magnitudes of the elements of M
P : Solution to the Lyapunov matrix equation
P_Δ : Matrix of upper bounds on the elements of Δ
Q : Lyapunov matrix
Q_o : Nominal compensated transfer matrix
R : Right matrix in the decomposition E = LDR
R_λ(A_p) : Largest real part of λ_i(A_p), for fixed E
R̄_λ(A_p) : Largest real part of λ_i(A_p), for E in a class
S : Similarity scaling matrix
S_π : Perron scaling matrix
S_O : Osborne scaling matrix
S_d : Stability domain
S_dp(Q) : Stability domain, function of Q, in the norm ‖·‖_p
S_dα(K) : Stability domain, function of K, based on the measure α
T(s) : Closed-loop transfer matrix
U_g : Unitary matrix
W : Matrix of right eigenvectors
dp : Change in parameter p
jω : Imaginary axis of the complex plane
k_m : Multiloop stability margin
k̂_m : Conservative assessment of k_m
r_s∞(Q) : Stability bound on ‖p‖_∞
p : m-dimensional parameter vector
p_w : Worst-case parameter combination
r_s2(Q) : Stability bound on ‖p‖_2
s_k : Weight applied to the k-th parameter
s : Complex frequency
u : Input vector
x : State vector
y : Output vector
x_M, x_m : Major (minor) output principal direction of M
y_M, y_m : Major (minor) input principal direction of M
ℂ : Field of complex numbers
ℂ^{m×m} : Space of complex m × m matrices
𝒟_U : Class of frequency-dependent, unstructured uncertainties
𝒟_S : Class of frequency-dependent, structured uncertainties
ℰ_U : Class of unstructured real uncertainties
ℰ_S : Class of structured real uncertainties
𝒬 : Set of symmetric, positive-definite Q ∈ ℝ^{n×n}
S_𝒦 : Class of scaling matrices related to the block structure 𝒦
𝒦 : Class of block-diagonal structured uncertainties
ℝ : Field of real numbers
ℝ⁺ : Set of nonnegative numbers
ℝ^{n×n} : Space of n × n matrices with elements in ℝ
Δ(s) : Frequency-dependent perturbation
Δ_M(s) : Frequency-dependent perturbation to M
δ_k : Bound on the range of the k-th parameter
α : Measure of stability margin
δ(s) : Upper bound on the norm of Δ(s)
ε : Small quantity in general
λ_i(M) : i-th eigenvalue of M
μ(M) : Structured singular value of M
π(M) : Perron radius of M
𝒫_w : Set of worst-case parameters
ρ(M) : Spectral radius of M
ρ_R(M) : Real spectral radius of M
σ_i(M) : i-th singular value of M
σ̄(M) : Maximum singular value of M
σ̲(M) : Minimum singular value of M
φ(s) : Characteristic polynomial
∂ : Partial derivative
|x| : Complex magnitude of x
det[M] : Determinant of square M
‖x‖_p : p-norm of the vector x
‖M‖_p : Matrix norm induced by the p-norm
‖M‖_F : Frobenius norm of M
∀ : For all
■ : End of proof
◦ : End of statement given without proof
□ : End of example
inf, sup : Infimum, supremum
max, min : Maximum, minimum
DU : Diagonal Uncertain
LDM : Lyapunov Direct Method
GNC : Generalized Nyquist Criterion
MIMO : Multi-Input, Multi-Output
OS : Osborne Scaling
OSS : Optimal Similarity Scaling
PR : Perron Radius
PS : Perron Scaling
SISO : Single-Input, Single-Output
SSV : Structured Singular Value
CHAPTER 2
NOMINAL MODELS AND UNCERTAINTY REPRESENTATION

2.1 Nominal Models and Definitions

This section introduces basic definitions and models of linear time-invariant systems. Let us consider the unity feedback system with cascade compensation, represented in Figure 2-1. The multi-input, multi-output block G_o represents the physical system or process under investigation, which is generically designated as the plant.

Figure 2-1. Unity feedback system: (a) Closed-loop system; (b) Uncompensated nominal plant

The subscript o designates the nominal model of the plant, namely a mathematical representation where the relationships among the quantities involved are exactly known. Unless otherwise stated, nominal models will be regarded as linear and time-invariant. The cascade connection of plant and compensator defines the open-loop compensated plant, denoted by Q_o = G_o K.
Many dynamic systems of engineering significance can be described by a linear differential equation relating the input r(t) and its derivatives to the output y(t) and its derivatives. However, this representation is not the most convenient to deal with. Representations that have become standard in control systems theory are the state space model and the transfer matrix model.

State space model. A differential equation of order n with constant coefficients, involving m inputs, p outputs and their derivatives, can be put in the state variable form:

ẋ(t) = A x(t) + B u(t)    (2.1)
y(t) = C x(t) + D u(t)    (2.2)

where x(t) ∈ ℝ^n is the state vector and A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{p×n} and D ∈ ℝ^{p×m} are constant matrices. A generic state space model is often designated by the quadruple [A, B, C, D]. Unless otherwise stated, open-loop plants are assumed to be purely dynamic, thus having a representation of the form [A_G, B_G, C_G, 0]. A dynamic controller is represented by the quadruple [A_K, B_K, C_K, D_K], which reduces to D_K in the case of a purely algebraic controller. To the closed-loop system corresponds the quadruple [A_c, B_c, C_c, D_c], whose components are easily obtained from the state space descriptions of plant and controller.

Transfer matrix model. The nominal transfer matrix may be obtained via the application of the Laplace transform to the state space equations, under the assumption of null initial conditions. The transfer matrix is then given by:

H(s) = C(sI − A)^{−1}B + D    (2.3)

where the term (sI − A)^{−1} is the resolvent of the matrix A.
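Equation (2.3) can be evaluated numerically at any complex frequency. The sketch below is my own illustration (the example matrices are not from the text) and uses a linear solve rather than an explicit matrix inverse:

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D of (2.3) at one complex
    frequency s, via a linear solve instead of an explicit inverse."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Scalar example: x'(t) = -2 x(t) + u(t), y = x  =>  H(s) = 1/(s + 2)
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])

print(transfer_matrix(A, B, C, D, 1j))   # 1/(2 + j) = 0.4 - 0.2j
```

The same function applies unchanged to MIMO quadruples, returning the full p × m transfer matrix at each frequency.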
Let G_o(s) and K(s) be the transfer matrices of the plant and compensator, respectively. The transfer matrix of the closed-loop unity feedback system, which can be obtained by algebraic manipulation of blocks, is:

T(s) = [(I_p + G_o K)^{−1} G_o K](s)    (2.4)

Note that, in view of the dimensions of the matrices in the state space model, G_o(s) ∈ ℂ^{p×m}. Consequently, K(s) ∈ ℂ^{m×p} and T(s) ∈ ℂ^{p×p}. Of course, T(s) can be obtained by applying (2.3) to the quadruple [A_c, B_c, C_c, D_c].

Characteristic decomposition. A complex, square matrix M ∈ ℂ^{n×n} with distinct eigenvalues has the characteristic decomposition:

M = W Λ W^{−1}    (2.5)

where Λ = diag{λ_i}, i = 1, ..., n, contains the eigenvalues of M. The columns of W are linearly independent eigenvectors of M, arranged in correspondence with the eigenvalues. Matrices with repeated eigenvalues have analogous decompositions, in which Λ assumes a nondiagonal Jordan form. The spectral radius and the real spectral radius of M are, respectively,

ρ(M) := max_i |λ_i(M)|    (2.6)
ρ_R(M) := max_i |λ_{R_i}(M)|    (2.7)

where λ_{R_i}(M) is a real eigenvalue of M. It is easy to show that

ρ_R(M) ≤ ρ(M)    (2.8)

The spectral radius has an important role in stability analysis, as will be seen in the following chapters.
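The characteristic decomposition (2.5) and the spectral radius (2.6) are easy to check numerically; the example matrix below is my own choice, with distinct real eigenvalues −1 and −2:

```python
import numpy as np

def spectral_radius(M):
    """rho(M) = max_i |lambda_i(M)|, as in (2.6)."""
    return np.max(np.abs(np.linalg.eigvals(M)))

M = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Characteristic decomposition M = W Lambda W^{-1}; the eigenvalues
# are distinct here, so W is invertible.
lam, W = np.linalg.eig(M)
M_rebuilt = W @ np.diag(lam) @ np.linalg.inv(W)

print(sorted(lam.real))       # [-2.0, -1.0] (up to rounding)
print(spectral_radius(M))     # 2.0 (up to rounding)
```

Reassembling W Λ W^{−1} recovers M to machine precision, and ρ(M) = 2 since the eigenvalue of largest magnitude is −2.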
Singular value decomposition. A complex matrix M ∈ ℂ^{n×n} has the singular value decomposition

M = X Σ Y^H    (2.9)

where Σ = diag{σ_i}, σ_i ∈ ℝ⁺, i = 1, ..., n, arranged in decreasing order, and Y and X are unitary matrices that contain, respectively, the right and left singular vectors of M, arranged in corresponding order with the singular values. The right singular vectors y_M are called input principal directions, while the left singular vectors x_M are called output principal directions. The largest and the smallest singular values are of fundamental importance in stability and performance analysis. They are called respectively the maximum singular value and the minimum singular value, and denoted by σ̄(M) := σ_1(M) and σ̲(M) := σ_n(M).
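The decomposition and the principal directions can be verified numerically; the example matrix below is my own illustration:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Singular value decomposition M = X Sigma Y^H, as in (2.9);
# numpy returns the singular values already in decreasing order.
X, sigma, Yh = np.linalg.svd(M)

sigma_max, sigma_min = sigma[0], sigma[-1]
y_major = Yh.conj().T[:, 0]   # major input principal direction y_M
x_major = X[:, 0]             # major output principal direction x_M

# M maps the major input direction onto sigma_max times the major
# output direction.
print(np.allclose(M @ y_major, sigma_max * x_major))   # True
print(sigma_max, sigma_min)
```

The check M y_M = σ̄(M) x_M makes concrete the interpretation of the principal directions: the major input direction is the input most amplified by M.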
Lemma 2.1. Let M ∈ ℂ^{n×n}, and assume that the entries of M depend on a variable x. Then, the derivative of [σ̄(M)]² with respect to x is given by

d[σ̄(M)]²/dx = 2 σ̄(M) Re[x_M^H (∂M/∂x) y_M]

where x_M and y_M are the major output and input principal directions of M.
Two broad categories of modeling error sources can be identified, namely unmodeled dynamics and variations of plant parameters. The objective of this section is to discuss uncertainty representations, with particular attention to the case of parametric uncertainty. The modeling process is guided by the conflicting requirements of fidelity to the plant dynamics and tractability. As a result of the necessary compromises with respect to these conflicts, some secondary dynamic phenomena may be left unmodelled, or may receive a simplified representation. On the other hand, a model might adequately represent the plant dynamics under given conditions, yet might not be able to capture variations suffered by the plant during its life span, or even during an operation cycle. Changes in the properties of physical components, which affect the plant, are normally expected and in some cases cannot be eliminated. For example, due to a compromise between precision and production costs, almost all technical specifications of serially made industrial components allow variations of properties around the nominal value. Other factors also contribute to changes in properties; among them are aging of the components, hysteresis cycles and environmental conditions. An example of a plant with uncertainty due to both simplifications and neglected dynamics is the chemical batch reactor discussed in [39]. In that case, a truly nonlinear process is linearized at an operating point, thus characterizing a simplification advised by tractability. The dynamics of the resulting equation is uncertain due to neglected nonlinear effects and due to unknown plus neglected high-frequency temperature-dependent effects. In order to improve the assessment of stability and performance characteristics of control systems, some sort of mathematical description of the uncertainty associated with a given nominal model is needed. This description is called an uncertainty representation.
In a fairly general sense, the true modeled object can be represented in terms of its nominal model S_o and of the modeling error E by the following relationship:

S_p = Π(S_o, E)    (2.13)

where S_p designates the object obtained when S_o is perturbed by E, and Π(·) describes how the error relates to the nominal model. The object S_p may represent either the plant or an interconnected system which includes the plant as one component. If, for example, the modelled object is a plant, (2.13) becomes G_p = Π(G_o, E). An admissible error set is called a perturbation class. Given the perturbation class, the relationship Π(·) determines a family of objects around the nominal model; this family is a set that includes a member which is closer to the true modeled object than the nominal model. The relationship Π(·) is determined by the uncertainty description chosen. A mathematical description of uncertainty must satisfy the following requirements [19, 33]:

(i) Simplicity: the description should be such that the model is tractable;
(ii) Accuracy: the uncertainty class should admit only perturbations that really can occur;
(iii) Adequacy: the uncertainty class must admit all possible perturbations.

The quality of results obtained from the analysis of perturbed models depends, to some extent, on the uncertainty representation. The following are relevant factors in uncertainty representation:

(i) Nature of the model. The uncertainty representation must follow the nature of the nominal model. For example, if a linearized model is constructed for a system described
by a nonlinear input–output relationship, the error can be adequately represented by the difference between the true output vector and the output vector of the model. When the system is represented by a MIMO transfer matrix model, the uncertainty is represented by a dimensionally compatible transfer matrix. If a state space model is used, the uncertainty is represented by dimensionally compatible real perturbations to the quadruple [A, B, C, D].

(ii) Type of the error. The uncertainty may assume either the form of an absolute error or the form of a relative error. In the former case, the uncertainty is represented as an additive term, while in the latter it appears in multiplicative form.

(iii) Structure of the uncertainty. This is the most important characteristic of the uncertainty representation. It is related to the knowledge and assumptions made about the mechanisms that generate the uncertainty. If nothing is known about particular causes of uncertainty, or if it is not practical to consider sources of uncertainty individually, the unstructured representation is used. The effects of all (possibly several) sources are lumped together and represented as if caused by only one source. The error is characterized by a norm upper bound, say ‖E‖ < ε, but is otherwise unconstrained. The norm upper bound completely characterizes a class of unstructured perturbations. When the mechanisms that give rise to uncertainty are known, it is useful, although not required, to adopt a structured representation. It is in general possible to identify at least some of the causes of uncertainty [14], whence it is in general possible to use an at least partially structured representation of uncertainty. An interconnected system whose components are uncertain presents multiple perturbation 'blocks', which can be of different dimensions. Looking at the whole system, the
uncertainty has a structure defined by the positions of the blocks. An unstructured representation could be used to cover the various scattered 'blocks'. However, this approach would be conservative, because the norm-bounded but otherwise unconstrained class of unstructured uncertainties would admit perturbations which do not satisfy the known block structure. In the following, the general principles given above are applied to uncertainty representation in frequency-domain and state space models.

2.2.2 Representation of Uncertainty in Transfer Matrix Models

Unstructured plant uncertainty. Let us assume that the nominal and the perturbed plants are represented by transfer matrix models, respectively G_o(s) ∈ ℂ^{p×m} and G_p(s) ∈ ℂ^{p×m}, and let Δ(s) represent the uncertainty. The argument 's' may be dropped if the dependence on s is clear from the context. In the unstructured representation, the class of admissible perturbations is characterized by a frequency-dependent norm bound; usually the norm of choice is the induced 2-norm, which coincides with the maximum singular value. An unstructured class which admits all possible Δ in a ball of radius δ(s) in ℂ^{p×m} is defined as:

𝒟_U = {Δ(s) ∈ ℂ^{p×m} : σ̄[Δ(s)] ≤ δ(s) ∈ ℝ⁺, ∀s}    (2.14)

Additive representation. If the unstructured uncertainty is meant to account for an absolute error in the nominal model, the representation assumes the following additive form, illustrated by Figure 2-2 (a):

G_p = G_o + Δ_A, Δ_A ∈ 𝒟_U    (2.15)
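A member of the unstructured class (2.14) at a single frequency can be generated by scaling a random complex matrix to the norm bound. This is an illustrative sketch: the sampling scheme, function name, and numbers are mine, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unstructured(p, m, delta):
    """Draw a complex perturbation with sigma_max(Delta) == delta,
    i.e. on the boundary of the class D_U of (2.14) at one frequency."""
    Z = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))
    # scale so the maximum singular value equals the bound
    return delta * Z / np.linalg.norm(Z, 2)

G0 = np.array([[1.0, 0.2],
               [0.0, 1.0]])
Delta_a = sample_unstructured(2, 2, 0.1)
Gp = G0 + Delta_a             # additively perturbed plant, as in (2.15)

print(np.linalg.norm(Delta_a, 2))   # 0.1
```

Note that the sample is constrained only in norm; its individual elements can take any magnitude and phase consistent with the bound, which is exactly the looseness that makes the unstructured class conservative.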
Multiplicative representation. This representation accounts for relative errors in the model. It is well suited when the nominal plant has input or output uncertainty. The perturbed model becomes, for each of these cases, respectively:

G_p = G_o(I_m + Δ_I), Δ_I ∈ 𝒟_U;    G_p = (I_p + Δ_O)G_o, Δ_O ∈ 𝒟_U    (2.16)

When both input and output uncertainty are present, as shown by Figure 2-2 (b), the above expressions combine to give G_p = (I_p + Δ_O)G_o(I_m + Δ_I).

Figure 2-2. Plant uncertainty representation: (a) Additive representation; (b) Multiplicative representation

Brief analysis. The unstructured representation does not discriminate sources of uncertainty. Neglected dynamics, which usually contributes high-frequency error components, and parametric variations are considered together. This representation certainly satisfies the simplicity requirement. However, the maximum singular value, used to characterize the class of allowable perturbations, depends on the whole matrix and does not account for the magnitudes or phases of individual elements, or for submatrix structure. Consequently, the accuracy requirement may not be attained, because the class 𝒟_U admits perturbations which are not physically possible. From the point of view of accuracy, it is preferable to use structured representations. Yet, even when some of the plant error components can be represented in structured form, there exist high-frequency components that require unstructured representation [14].
It is interesting to note that additive and multiplicative representations of plant uncertainty lead to different expressions for the perturbation of compensated plants. Regarding Figure 2-1 (a), when the additive representation is used, the perturbed compensated plant is given by Q_p = (G_o + Δ_A)K = G_oK + Δ_A K := Q_o + Δ̃_A, while in the case of the output multiplicative uncertainty representation, the perturbed compensated plant is Q_p = (I_p + Δ_O)G_oK = (I_p + Δ_O)Q_o. Therefore, with the multiplicative representation, the relative error in the compensated plant is the same as in the nominal plant, while the absolute error changes in the additive representation.

Structured plant uncertainty. Structured representations are adopted when it is possible to identify the causes of uncertainty, so that their effects can be linked to specific entries of the transfer matrix. Since individual sources of uncertainty are independently considered, the structured representation is more accurate.

Element-by-element-bounded perturbations. This highly structured representation can be used when frequency-dependent norm bounds for the uncertainty in each element of the nominal transfer matrix are available. The class is characterized by magnitude bounds and unconstrained element phases, and is defined as [33]:

𝒟_S = {Δ(s) ∈ ℂ^{p×m} : |Δ_ij| ≤ P_ij ∈ ℝ⁺, arg(Δ_ij) = θ_ij, 0 ≤ θ_ij ≤ 2π, ∀s}    (2.17)

It has been shown [33] that the class 𝒟_S defined above is a proper subset of the class 𝒟_U given by (2.14). The perturbed plant under element-by-element-bounded additive uncertainty is:

G_p = G_o + Δ_A, Δ_A ∈ 𝒟_S    (2.18)
This structured class admits all perturbations whose element (i, j) belongs to a ball of radius P_ij around the nominal element G_o(i, j), ∀i ≤ p, ∀j ≤ m. Cases where some elements of the nominal system are exactly known are covered by setting to zero the corresponding elements of P. Since the matrix of upper bounds, namely P, is a nonnegative matrix, this representation permits the use of results from Perron–Frobenius theory in robust stability analysis. Also useful in stability analysis is the result of the following lemma.

Lemma 2.2. For any Δ ∈ 𝒟_S and P ∈ ℝ^{p×m} such that Δ⁺_ij ≤ P_ij,

σ̄(Δ⁺) ≤ σ̄(P)

Proof. For any real matrix A ∈ ℝ^{p×m} and vector x ∈ ℝ^m,

σ̄(A) = ‖A‖_2 = sup_{‖x‖=1} ‖Ax‖_2 = sup_{‖x‖=1} [ Σ_{i=1}^p ( Σ_{j=1}^m A_ij x_j )² ]^{1/2}

Therefore,

σ̄(Δ⁺) = sup_{‖x‖=1} [ Σ_i ( Σ_j Δ⁺_ij x_j )² ]^{1/2};   σ̄(P) = sup_{‖x‖=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2}

Since Δ⁺_ij ≥ 0, the supremum will occur for some x such that x_j ≥ 0, ∀j; let x̄ be the value of x which maximizes σ̄(Δ⁺). Now,

x̄ ≥ 0, Δ⁺_ij ≥ 0, P_ij ≥ 0 ⇒ P_ij x̄_j ≥ Δ⁺_ij x̄_j, ∀(i, j)

Therefore:

σ̄(Δ⁺) = [ Σ_i ( Σ_j Δ⁺_ij x̄_j )² ]^{1/2} ≤ sup_{‖x‖=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2} = σ̄(P)  ■
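Lemma 2.2 can be spot-checked numerically; the bound matrix and the sampling scheme below are my own illustration, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.5, 1.0],
              [0.2, 0.3]])    # nonnegative element-wise bounds P_ij

# A random member of D_S: |Delta_ij| <= P_ij with arbitrary phases.
magnitudes = P * rng.uniform(0.0, 1.0, size=P.shape)
phases = rng.uniform(0.0, 2.0 * np.pi, size=P.shape)
Delta = magnitudes * np.exp(1j * phases)
Delta_plus = np.abs(Delta)    # the matrix Delta^+ of element magnitudes

# Lemma 2.2: sigma_max(Delta_plus) <= sigma_max(P)
print(np.linalg.norm(Delta_plus, 2) <= np.linalg.norm(P, 2))   # True
```

The inequality holds for every draw, since Δ⁺ is dominated element-wise by the nonnegative matrix P.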
This proof is an alternative to the original proof [33]. It is known that, ∀Δ ∈ 𝒟_S, σ̄(Δ) ≤ σ̄(Δ⁺).
Therefore, the uncertainty can be written as an additive perturbation to the open-loop transfer matrix. This approach, however, is inadequate for two reasons. The first reason is that, in order to render this formulation useful, it is necessary to compute or estimate a norm bound for the perturbation Δ_AQ. Although this can possibly be done for simple systems, it might become very cumbersome in the case of complex systems. The second and most important reason is that the additive unstructured representation does not carry information about the structure of the perturbation in the interconnected system.

Additive block-diagonal representation. An alternative approach, which takes into account the structure of the uncertainty, is the block-diagonal representation. It derives from the technique introduced by Safonov and Athans [48] for dealing with systems involving simultaneous perturbations in the context of the LQG regulator problem, therefore in time-domain analysis. The essence of the technique is to rearrange the system in such a way that the perturbations are isolated in a block-diagonal matrix. The technique was explored by Safonov [46] in the derivation of 'conic sector conditions' for stability of MIMO systems, and by Doyle [12] in the derivation of necessary and sufficient conditions for stability under structured perturbations. A diagonal representation of simultaneous perturbations can be obtained for any system, regardless of the dimensionality of each particular perturbation. Both parameter-dependent additive perturbations and actuator and/or measurement uncertainties, represented respectively as input and output perturbations, can be handled [39]. Let us consider its application to the system in Figure 2-3. The loops involving the perturbations Δ_I and Δ_O can be regarded as additional system loops, through which the nominal system and the perturbations exchange signals. The
nominal feedback loop provides a signal to the i-th perturbation through the output y_Δi, and receives a signal through the input u_Δi. The perturbations may be isolated in a block-diagonal structure through the following simple procedure:

Procedure 2.1. Diagonalization of uncertainty in frequency-domain systems:
1. Suppose the additional system loops are open, as in Figure 2-4 (a);
2. Compute the transfer function from each system input to each system output. Inputs and outputs now include the nominal input vector r and the nominal output vector y, as well as the perturbation outputs u_Δi and perturbation inputs y_Δi;
3. Arrange the transfer functions in matrix form. This step will generate the representation in Figure 2-4 (b), which is referred to as the 'M − Δ' form of the perturbed system.

Figure 2-4. Block-diagonal representation: (a) Open perturbation loops; (b) The Δ − M form

The perturbation in Figure 2-4 (b) is Δ = diag(Δ_I, Δ_O), therefore a block-diagonal structure; y_Δ and u_Δ are vectors containing uncertainty inputs and outputs, respectively. The transfer matrix M(s) is called the nominal interconnection structure. The (1,1)-submatrix relates the collective output of the uncertainties to the collective inputs, while the (2,2)-submatrix
PAGE 38
is the nominal transfer matrix from $r$ to $y$. For the system in Figure 2-3, $M_{11}$ is given (up to the signs determined by the feedback convention of Figure 2-3) by:

$$\begin{bmatrix} y_{\Delta 1} \\ y_{\Delta 2} \end{bmatrix} = \underbrace{\begin{bmatrix} (I + KG_0)^{-1}KG_0 & (I + KG_0)^{-1}K \\ (I + G_0K)^{-1}G_0 & (I + G_0K)^{-1}G_0K \end{bmatrix}}_{M_{11}} \begin{bmatrix} u_{\Delta 1} \\ u_{\Delta 2} \end{bmatrix}$$

Note that the dimension of the square submatrix $M_{11}$ depends on the number of simultaneous perturbations. Therefore, even a SISO system subjected to simultaneous perturbations is characterized by a MIMO nominal interconnection structure. Partitioning the interconnection structure according to the dimensions of inputs and outputs, the system can be represented as:

$$\begin{bmatrix} y_\Delta \\ y \end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix} \begin{bmatrix} u_\Delta \\ r \end{bmatrix} \qquad (2.20)$$

From the partition and Figure 2-4 (b), the following relations are obtained:

$$u_\Delta = \Delta y_\Delta; \qquad y_\Delta = M_{11}u_\Delta + M_{12}r; \qquad y = M_{22}r + M_{21}u_\Delta$$

Manipulating these equations, one obtains:

$$y = \left[M_{22} + M_{21}\Delta(I - M_{11}\Delta)^{-1}M_{12}\right]r \qquad (2.21)$$

Thus, the transfer matrix from $r$ to $y$ is given by an upper linear fractional transformation of the uncertainty, namely:

$$F_u(M, \Delta) \stackrel{\mathrm{def}}{=} M_{22} + M_{21}\Delta(I - M_{11}\Delta)^{-1}M_{12} \qquad (2.22)$$

A block-diagram representation of the LFT is shown in Figure 2-5 below. The expression $\Delta(I - M_{11}\Delta)^{-1}$ represents a feedback loop, with $\Delta$ in the direct path and $M_{11}$ in the
feedback path. If $\Delta = 0$, then $F_u(M, \Delta)$ simplifies to the nominal transfer matrix from $r$ to $y$, namely $M_{22} = (I + G_0K)^{-1}G_0K$.

Figure 2-5. Block-diagram representation of $F_u(M, \Delta)$

The general case of block-diagonal representation. The technique applied to the simple example above extends to systems having a larger set of localized perturbations. Uncertainties originating from unmodeled dynamics assume the form of norm-bounded, full complex blocks of different dimensions. On the other hand, uncertainty coming from parametric variations assumes the form of real perturbations, which can be repeated. Additionally, fictitious repeated complex scalar perturbations can be used to reformulate a robust performance problem as a robust stability problem [15]. Therefore, in the most general case, the final block-diagonal structure will show (possibly repeated) real scalars, (possibly repeated) complex scalars and full complex blocks of different dimensions.

To account for the correct dimensionality of the blocks in the diagonal formulation, a block structure of indices is defined [17]. Assume that $M \in \mathbb{C}^{m \times m}$, and consider the triple $(m_r, m_c, m_C)$ of nonnegative integers such that $m_r + m_c + m_C = f \le m$, and define
the block structure $\mathcal{K}$ associated with $M$ by:

$$\mathcal{K}(m_r, m_c, m_C) = (k_1, \ldots, k_{m_r}, k_{m_r+1}, \ldots, k_{m_r+m_c}, k_{m_r+m_c+1}, \ldots, k_{m_r+m_c+m_C}) \qquad (2.23)$$

where, for compatibility of dimensions, $\sum_{i=1}^{f} k_i = m$. Given $\mathcal{K}$, a family of associated $m \times m$ block-diagonal perturbations is defined by:

$$\mathcal{X}_\mathcal{K} = \left\{\Delta = \mathrm{bl\,diag}\!\left(\delta_1^r I_{k_1}, \ldots, \delta_{m_r}^r I_{k_{m_r}},\ \delta_{m_r+1}^c I_{k_{m_r+1}}, \ldots, \delta_{m_r+m_c}^c I_{k_{m_r+m_c}},\ \Delta_1^C, \ldots, \Delta_{m_C}^C\right)\right\} \qquad (2.24)$$

where $\delta_i^r \in \mathbb{R}$, $\delta_i^c \in \mathbb{C}$ and $\Delta_i^C \in \mathbb{C}^{k_{m_r+m_c+i} \times k_{m_r+m_c+i}}$. As required by the dimension of $M$, $\mathcal{X}_\mathcal{K} \subset \mathbb{C}^{m \times m}$. Each $\delta_i^r I_{k_i}$ represents a repeated real scalar, each $\delta_i^c I_{k_i}$ represents a repeated complex scalar, and each $\Delta_i^C$ represents a full complex block. The general form can be particularized through a convenient choice of indices. For example, if there is no parametric uncertainty, $m_r = 0$. In the case of purely real perturbations, the adequate setting is $m_c = 0$ and $m_C = 0$. A class of allowable perturbations, having block sizes determined by the block structure, is defined from (2.24) by specifying an upper bound on the norm:

$$\mathcal{X}_\mathcal{K}(b) = \{\Delta : \Delta \in \mathcal{X}_\mathcal{K},\ \bar{\sigma}(\Delta) \le b \in \mathbb{R}_+\} \qquad (2.25)$$

2.2.3 Representation of Uncertainty in State Space Models

Let us now assume that the nominal plant is described by a state space model. The dynamics of the physical process is captured by the matrix $A$. Since $A$ has fixed dimension in the state space model, the dynamical order of the process is well determined. Thus, uncertainty caused by neglected high-order dynamics cannot be taken into account in the usual state space model.

On the other hand, the state space model is well suited to the representation of parametric uncertainty. Variations in system parameters are represented as perturbations in the
elements of the real matrices that define the model. The perturbations can be collected in the error matrix $E$, so that the perturbed matrix is represented by

$$M_p = M + E \qquad (2.26)$$

where $M$ can be either one of the real matrices in the state space representation. Particular forms of $E$ are discussed below.

Unstructured uncertainty. As in frequency-domain models, the class of unstructured errors is characterized by a norm upper bound:

$$\mathcal{E}_U \stackrel{\mathrm{def}}{=} \{E : \bar{\sigma}(E) \le e\} \qquad (2.27)$$

Independent variations of elements. When each entry of $M$ may vary independently of the others, the error matrix is instead bounded elementwise: each entry satisfies $|E_{ij}| \le P_{ij}$ for a known matrix $P$ of nonnegative bounds, defining the class $\mathcal{E}_{SI}$.
The perturbed matrix takes the form of an interval matrix:

$$M_p = (M + E), \qquad E \in \mathcal{E}_{SI} \qquad (2.30)$$

If only $e$ is known, this representation can be used with the error matrix elementwise bounded by the matrix $P = eU_n$, where $U_n(i,j) = 1$, $i,j = 1, \ldots, n$ [58]. If some of the entries of $M$ are exactly known, the corresponding entries of $U_n$ are set to zero, thus accommodating the extra information on the error structure.

Dependent variations of elements. This case differs from the previous one in that it admits correlated variations between entries of $M$. This assumption is actually required in practical cases. For example, consider the case of an open-loop state space model in which the output matrix has some uncertain entries, due to variations in a physical parameter that affects the output gain. If an output feedback controller is used, the dynamical matrix of the closed-loop system is likely to have several uncertain entries. However, the variations of these entries are not free, since they depend on the same physical parameter. A convenient representation for such cases is to obtain the error matrix in terms of the physical parameters. Suppose that an $m$-dimensional vector of parameters can be identified, and assume that the dependence of $M$ on each parameter is linear. This assumption is not too restrictive, since it is possible to redefine nonlinear combinations of physical parameters such that the assumption is satisfied. The perturbation class can be characterized as:

$$\mathcal{E}_{SD} \stackrel{\mathrm{def}}{=} \left\{E : E = \sum_{k=1}^{m} p_k E_k,\ |p_k| \le \alpha_k,\ k = 1, \ldots, m\right\} \qquad (2.31)$$

Each $E_k$ is a constant matrix which expresses the structural dependence of $M$ on the parameter $p_k$. Such a representation has been largely used in stability analysis [4, 51, 61].
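As a concrete illustration of the class (2.31), the error matrix for a given parameter vector can be assembled as a weighted sum of the structure matrices. The following sketch assumes NumPy; the particular matrices are arbitrary placeholders, not taken from the text:

```python
import numpy as np

# Structure matrices E_k expressing how M depends on each parameter
# (arbitrary placeholder values for illustration).
E1 = np.array([[-1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, -1.0]])

def error_matrix(p, structures):
    """Assemble E = sum_k p_k * E_k as in (2.31)."""
    return sum(pk * Ek for pk, Ek in zip(p, structures))

M = np.array([[-2.0, -1.0], [1.0, -1.0]])   # nominal matrix
p = [0.1, -0.05]                            # parameter perturbations
Mp = M + error_matrix(p, [E1, E2])          # perturbed matrix, affine in p
```

Because $M_p$ is affine in the parameters, evaluating it at the vertices of the parameter hypercube generates the extreme matrices that appear in vertex-type stability checks.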
The perturbed matrix is represented by:

$$M_p = (M + E), \qquad E \in \mathcal{E}_{SD} \qquad (2.32)$$

Notice that $M_p = M + \sum_{k} p_kE_k$ is (affinely) linear in the parameters. The following example illustrates the use of this representation of parametric uncertainty.

Example 2.1. Consider the circuit diagram represented in Figure 2-6.

Figure 2-6. Elementary electric circuit

Let the input be $u(t) = v_i(t)$ and the output be $y(t) = v_o(t)$. Then, one has:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -\dfrac{1}{R_1C} & -\dfrac{1}{C} \\ \dfrac{1}{L} & -\dfrac{R_2}{L} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} \dfrac{1}{R_1C} \\ 0 \end{bmatrix} u; \qquad v_o = \begin{bmatrix} 0 & R_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

Assume that $R_1, R_2$ are uncertain, and that the components are rated at $L = 1\,\mathrm{H}$, $C = 1\,\mathrm{F}$, $R_{1o} = 0.5\,\Omega$, $R_{2o} = 1\,\Omega$. The nominal matrices are:

$$A = \begin{bmatrix} -2 & -1 \\ 1 & -1 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 & 1 \end{bmatrix}$$

Given that $R_1, R_2$ are uncertain, the terms they affect can be written as:
$$\frac{1}{R_1} = \frac{1}{R_{1o}} + \xi\!\left(\frac{1}{R_1}\right) = 2 + p_1; \qquad R_2 = R_{2o} + \xi(R_2) = 1 + p_2$$

where $\xi(\cdot)$ represents the unknown variation. Therefore, the perturbed open-loop model is given by:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \left(\begin{bmatrix} -2 & -1 \\ 1 & -1 \end{bmatrix} + p_1\underbrace{\begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}}_{E_1} + p_2\underbrace{\begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}}_{E_2}\right)\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \left(\begin{bmatrix} 2 \\ 0 \end{bmatrix} + p_1\underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{E_B}\right)u$$

$$y = \left(\begin{bmatrix} 0 & 1 \end{bmatrix} + p_2\underbrace{\begin{bmatrix} 0 & 1 \end{bmatrix}}_{E_C}\right)\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

Thus, uncertainties in the physical parameters $R_1, R_2$ are reflected by the state space model as uncertain input and output gains, plus uncertainties in the dynamic matrix $A$. Assuming that an output feedback controller $K = 1$ is used, one has $A_c = (A + BKC)$, where

$$BKC = \begin{bmatrix} 0 & -(2 + p_1 + 2p_2 + p_1p_2) \\ 0 & 0 \end{bmatrix}$$

Defining $p_3 \stackrel{\mathrm{def}}{=} p_1p_2$, the closed-loop perturbed matrix becomes:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \left(\begin{bmatrix} -2 & -3 \\ 1 & -1 \end{bmatrix} + p_1\begin{bmatrix} -1 & -1 \\ 0 & 0 \end{bmatrix} + p_2\begin{bmatrix} 0 & -2 \\ 0 & -1 \end{bmatrix} + p_3\begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}\right)\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

Now, let $p \stackrel{\mathrm{def}}{=} [p_1\ p_2\ p_3]^T$. The objective of stability analysis is to find the largest $\|p\|$ such that the perturbed system remains stable, and to characterize the
allowable intervals $[-\alpha_k, \alpha_k]$. Alternatively, assume that the parameter ranges are known. For example, assume that the variations in $R_1, R_2$ are within $\pm 10\%$ of the rated value. Then, the parameters are in the ranges:

$$p_1 \in [-0.202, 0.202]; \qquad p_2 \in [-0.100, 0.100]; \qquad p_3 \in [-0.020, 0.020]$$

In this case, the objective of stability analysis is to check whether or not the system remains stable for all possible combinations of parameters in the hypercube defined by these ranges.

2.3 Conclusions

This chapter puts together basic concepts concerning system models and uncertainty representation, which will be relevant for subsequent development. Since the objective of this dissertation is the study of robust stability under parametric uncertainty, the state space model will have an important role in the following chapters. Also very useful will be the uncertainty description given by (2.31), which accommodates practical cases of parametric uncertainty, as demonstrated by Example 2.1. In Chapter 5, the problem will be given a frequency-domain treatment, and the diagonalization of uncertainty will be employed. Although the diagonalization technique has been used for some years, no explicit derivation has been found. For this reason, indications found in the literature were put together in Procedure 2.1, and the steps leading to the linear fractional transformation (2.22) were completely worked out. The review of fundamental concepts will continue in the next chapter with a summary of stability conditions.
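Since the closed-loop matrix of Example 2.1 is affine in $p$, a quick numerical screen is to evaluate the eigenvalues at the vertices of the parameter hypercube. Note that $p_3 = p_1p_2$ is not independent of $p_1, p_2$, and that vertex sampling alone does not prove stability over the whole hypercube; the sketch below, assuming NumPy and using the closed-loop matrices of Example 2.1, is only an indicative check:

```python
import itertools
import numpy as np

A0 = np.array([[-2.0, -3.0], [1.0, -1.0]])        # nominal closed-loop matrix
E = [np.array([[-1.0, -1.0], [0.0, 0.0]]),        # structure matrix for p1
     np.array([[0.0, -2.0], [0.0, -1.0]]),        # structure matrix for p2
     np.array([[0.0, -1.0], [0.0, 0.0]])]         # structure matrix for p3
ranges = [0.202, 0.100, 0.020]                    # |p_k| <= alpha_k

def stable(M):
    """Asymptotic stability: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))

# Evaluate every vertex of the hypercube [-a1,a1] x [-a2,a2] x [-a3,a3].
all_ok = all(stable(A0 + sum(s * a * Ek for s, a, Ek in zip(signs, ranges, E)))
             for signs in itertools.product([-1.0, 1.0], repeat=3))
print(all_ok)
```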
CHAPTER 3
STABILITY ANALYSIS OF LINEAR SYSTEMS

3.1 Introduction

Stability of control systems is a fundamental requirement, which must be ensured prior to any other. This chapter presents a review of stability conditions and stability analysis techniques applicable to linear systems. Both state space and transfer matrix models are considered; in each case, nominal stability and robust stability under additive perturbations are addressed.

3.2 Stability of State Space Systems

3.2.1 Nominal Stability Condition

Let us consider the linear, time-invariant system

$$\dot{x}(t) = Ax(t) \qquad (3.1)$$

This model can be interpreted as the representation of either an unforced system or a system under a fixed, known input [52]. The following theorem gives a necessary and sufficient condition for asymptotic stability:

Theorem 3.1 [52]. The equilibrium point $0$ of (3.1) is asymptotically stable if and only if all the characteristic values of $A$ have strictly negative real parts, that is

$$\lim_{t \to \infty} x(t) = 0 \iff \mathrm{Re}[\lambda_i(A)] < 0,\ \forall i \qquad (3.2)\ \square$$
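Theorem 3.1 translates directly into a numerical test: compute the spectrum of $A$ and check that every eigenvalue lies in the open left half-plane. A minimal sketch assuming NumPy (the matrices are arbitrary examples, not from the text):

```python
import numpy as np

def is_asymptotically_stable(A):
    """Theorem 3.1: stable iff Re[lambda_i(A)] < 0 for all i."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[-2.0, -1.0], [1.0, -1.0]])   # eigenvalues -1.5 +/- j*0.866
A_unstable = np.array([[0.0, 1.0], [1.0, 0.0]])    # eigenvalues +1, -1

print(is_asymptotically_stable(A_stable))    # -> True
print(is_asymptotically_stable(A_unstable))  # -> False
```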
An asymptotically stable linear system is globally asymptotically stable, because $\|x(t)\| \to 0$ independently of the initial state $x(t_0)$. Equation (3.2) states that asymptotic stability depends on the eigenvalues of $A$. However, it is not necessary to compute the eigenvalues in order to check stability. The Routh-Hurwitz criterion gives a necessary and sufficient condition for stability based on the signs of the coefficients of the characteristic polynomial. Furthermore, the Lyapunov direct method permits sufficient conditions for stability to be derived from a matrix function involving $A$.

Nominal stability assessment through the Lyapunov direct method. The stability properties of the equilibrium point $x(t) = 0$ of the system $\dot{x}(t) = Ax(t)$ can be determined through the Lyapunov direct method, which does not require the computation of the characteristic polynomial. According to Lyapunov theory, a sufficient condition for global asymptotic stability of the equilibrium point $x(t) = 0$ is the existence of a scalar positive definite function of $x$, say $V(x)$, having a negative definite time-derivative $\dot{V}(x)$ [52]. For LTI systems, the natural choice of a Lyapunov function candidate is the quadratic function

$$V(x) = x^TPx \qquad (3.3)$$

where $P$ is a real symmetric matrix. As long as $P$ is positive definite, the scalar function $V(x)$ is positive definite. The time derivative of the quadratic function is given by:

$$\dot{V}(x) = \dot{x}^TPx + x^TP\dot{x} = x^T(A^TP + PA)x \stackrel{\mathrm{def}}{=} -x^TQx \qquad (3.4)$$

from which the matrix Lyapunov equation, relating the matrices $A$, $P$ and $Q$, is obtained:

$$(A^TP + PA) \stackrel{\mathrm{def}}{=} -Q \qquad (3.5)$$

Global asymptotic stability of the equilibrium point $x = 0$ of $\dot{x} = Ax(t)$ is ensured if, for a given $A$, it is possible to find symmetric positive definite matrices $P$ and $Q$ satisfying
equation (3.5). This is so because, if such $P$ and $Q$ exist, $V(x)$ is a scalar positive definite function whose time-derivative is negative definite. On the other hand, if there exists $Q$ positive definite such that the corresponding $P$ is negative definite, the equilibrium point is unstable. The following theorem formalizes the relationship between the asymptotic stability of $A$ and the matrix Lyapunov equation.

Theorem 3.2 [52]. The following statements are equivalent, $\forall A \in \mathbb{R}^{n \times n}$:
1. All eigenvalues of $A$ have strictly negative real parts;
2. For every positive definite $Q \in \mathbb{R}^{n \times n}$, the equation (3.5) has a unique, positive definite solution for $P$;
3. There exists some positive definite matrix $Q \in \mathbb{R}^{n \times n}$ such that the equation (3.5) has a unique, positive definite solution for $P$. $\square$

This theorem provides a computational device for assessing stability without computing the eigenvalues of $A$: choose any positive definite $Q$ and solve (3.5) for $P$; if the solution exists, is unique and positive definite, then $A$ is asymptotically stable. If there is no solution, or if the solution is either not unique or not positive definite, then $A$ is not asymptotically stable.

3.2.2 Assessment of Robust Stability

Robust stability assessment through the Lyapunov direct method. Let us consider the perturbed state equation

$$\dot{x}(t) = A_px(t) = (A + E)x(t) \qquad (3.6)$$
where the nominal matrix $A$ is asymptotically stable. Since $A$ is stable, the matrix Lyapunov equation for the nominal system, namely $A^TP + PA = -Q$, has a unique, positive definite solution $P$ for every positive definite matrix $Q$; let $P_0$ be the solution corresponding to some positive definite $Q_0$. Now, let $V_p(x) = x^TPx$, where $P$ is symmetric and positive definite, be a Lyapunov function candidate for the perturbed system (3.6). The time derivative of $V_p(x)$ is:

$$\dot{V}_p(x) = \dot{x}^TPx + x^TP\dot{x} = [(A+E)x]^TPx + x^TP[(A+E)x] = x^T[(A^TP + PA) + (E^TP + PE)]x$$

Let us choose $P = P_0$, the positive definite matrix defined above. Then, the last equation becomes

$$\dot{V}_p(x) = -x^T[Q_0 - (E^TP_0 + P_0E)]x \stackrel{\mathrm{def}}{=} -x^TQ_px \qquad (3.7)$$

According to Theorem 3.2, since $P_0$ is positive definite, $A_p$ is asymptotically stable if $Q_p$ is positive definite. Therefore, the robust stability analysis problem becomes that of finding conditions on $E$ which ensure the positive definiteness of $Q_p$. Certainly the conditions that can be derived depend on the description of the uncertainty $E$. Although stability conditions obtained from the Lyapunov direct method are only sufficient, a positive feature of the method is that it can be applied with virtually all uncertainty descriptions, including time-varying and nonlinear uncertainties. In Chapter 4, a detailed treatment of stability conditions according to the Lyapunov direct method will be given, for the case of $E$ belonging to the class $\mathcal{E}_{SD}$ defined by (2.31).
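The test suggested by (3.7) is straightforward to carry out numerically: solve the nominal Lyapunov equation for $P_0$, form $Q_p = Q_0 - (E^TP_0 + P_0E)$, and check its positive definiteness. A sketch assuming NumPy and SciPy (the matrices are arbitrary illustrations):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, -1.0], [1.0, -1.0]])   # stable nominal matrix
E = np.array([[0.1, 0.0], [0.0, 0.1]])      # a candidate perturbation
Q0 = np.eye(2)

# Solve A^T P0 + P0 A = -Q0. SciPy's solve_continuous_lyapunov(a, q)
# solves a x + x a^H = q, so pass a = A^T and q = -Q0.
P0 = solve_continuous_lyapunov(A.T, -Q0)

# Sufficient condition from (3.7): A + E is stable if
# Q_p = Q0 - (E^T P0 + P0 E) is positive definite.
Qp = Q0 - (E.T @ P0 + P0 @ E)
robustly_stable = bool(np.all(np.linalg.eigvalsh(Qp) > 0))
print(robustly_stable)  # -> True
```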
Other results.

A Perron radius stability bound [44]. Sufficient conditions for $(A + E)$ being asymptotically stable are $A$ stable and $(A + E)$ without eigenvalues on the imaginary axis of the complex plane, for all $E$ in an admissible class. It can be shown that $(A + E)$ has no eigenvalue on the imaginary axis if there exists a nonsingular matrix $R \in \mathbb{R}^{n \times n}$ such that

$$\left\| RE(j\omega I_n - A)^{-1}R^{-1} \right\|_p < 1, \qquad \forall \omega \ge 0,\ \forall E \qquad (3.8)$$

Assume that the uncertainty can be decomposed as $E = S_1\Delta_E S_2$, where $S_1 \in \mathbb{R}^{n \times p}$ and $S_2 \in \mathbb{R}^{q \times n}$ are known constant matrices which account for the structure, and the matrix $\Delta_E \in \mathbb{R}^{p \times q}$, $p \le n$, $q \le n$, contains the perturbation factors. Using condition (3.8), with the further assumption that the entries of $\Delta_E$ satisfy $|\Delta_{E_{ij}}| \le \epsilon\,e_{ij}$, with $e_{ij} \ge 0$ known and $\epsilon > 0$ unknown, the following sufficient robust stability condition can be obtained [44]:

$$\epsilon < \left(\sup_{\omega \ge 0}\ \pi\!\left[\,\left|S_2(j\omega I - A)^{-1}S_1\right|\,U\,\right]\right)^{-1} \qquad (3.9)$$

where $U = [e_{ij}]$ and $\pi(\cdot)$ is the Perron eigenvalue. The advantage of a condition based on the Perron eigenvalue is that it is easily computable; however, it can be too conservative. It will be shown in Chapter 5 that a less conservative robust stability condition can be obtained by explicitly using Perron scaling. The relevant concepts of Perron theory are reviewed in Section 3.4 ahead.

Stability radius condition [24]. The objective of the stability radius method is to compute the distance from the stable matrix $A$ to the set of unstable matrices of the same dimensions. The distance is measured by the smallest norm of a destabilizing matrix, namely the smallest norm of $E$ such that $(A + E)$ has a purely imaginary eigenvalue.
Considering the decomposition

$$\dot{x}(t) = (A + E)x(t) = (A + BDC)x(t) \qquad (3.10)$$

where $A \in \mathbb{R}^{n \times n}$ is stable, $B \in \mathbb{R}^{n \times m}$ and $C \in \mathbb{R}^{p \times n}$ are known constant matrices which define the uncertainty structure, and $D \in \mathbb{R}^{m \times p}$ is a matrix of unknown factors, the stability radius of $A$ is:

$$r_{\mathbb{R}}(A; B, C) = \inf\{\|D\| : (A + BDC)\ \mathrm{unstable}\} \qquad (3.11)$$

An analytical expression for the real stability radius has been obtained [24], but its computation is too complex, even for unstructured perturbations. In the case of structured perturbations of rank 1, namely when either only one row or only one column of $A$ is perturbed by each factor, the computational burden of the analytical expression is considerably simplified. Let $G(s) = C(sI - A)^{-1}B$, denote by $G_R(j\omega)$ and $G_I(j\omega)$, respectively, the real and imaginary parts of $G(j\omega)$, and by $\Omega$ and $\bar{\Omega}$, respectively, the set of frequency points for which $G_I(j\omega) = 0$ and its complement in $\mathbb{R}$. For rank 1 perturbations, the expression (3.12) then gives the real stability radius as the smaller of two quantities: the reciprocal of $\max_{\omega \in \Omega} |G(j\omega)|$, and a term obtained by a one-dimensional optimization over $\bar{\Omega}$ involving $G_R(j\omega)$ and $G_I(j\omega)$ [24]. Therefore, in the case of rank one perturbations the computation of the real stability radius involves a one-dimensional optimization problem. If only one entry of $A$ is under perturbation, then $D$ and the associated $G(s)$ become scalars; the second term in (3.12) becomes infinite, and the real stability radius is easily computable.
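Definition (3.11) also suggests a brute-force numerical estimate for the single-entry case: grow the scalar factor $d$ until $A + d\,bc$ first loses stability, scanning both signs of $d$. The sketch below assumes NumPy, with matrices chosen arbitrarily for illustration (this is a grid search, not the analytical expression of [24]):

```python
import numpy as np

A = np.array([[-2.0, -1.0], [1.0, -1.0]])  # stable nominal matrix
b = np.array([[1.0], [0.0]])               # rank-one structure: only entry
c = np.array([[1.0, 0.0]])                 # (1,1) of A is perturbed

def stable(M):
    return bool(np.all(np.linalg.eigvals(M).real < 0))

def real_radius_estimate(A, b, c, d_max=10.0, step=1e-3):
    """Smallest |d| with A + d*b*c unstable: a grid estimate of (3.11)."""
    best = np.inf
    for sign in (1.0, -1.0):
        d = 0.0
        while d <= d_max:
            if not stable(A + sign * d * (b @ c)):
                best = min(best, d)
                break
            d += step
    return best

print(real_radius_estimate(A, b, c))
```

For this example the destabilizing value can be checked by hand: the $2 \times 2$ matrix loses stability when its determinant $3 - d$ crosses zero, so the estimate approaches $3$.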
3.3 Stability of Transfer Matrix Models

3.3.1 Nominal Stability Analysis

Input-output stability. A linear system is Bounded-Input, Bounded-Output (BIBO) stable if an input bounded in magnitude always produces a bounded output. Let $H(s)$ be a matrix whose elements are proper rational functions of $s$. $H(s)$ can be written as

$$H(s) = \frac{N(s)}{d(s)} = \frac{N(s)}{\prod_{i=1}^{\delta}(s - p_i)} \qquad (3.13)$$

where $\delta$ is the degree of the denominator polynomial $d(s)$, which is given by the least common denominator of all (non-identically zero) minors of $H(s)$ [39]. The transfer matrix, which was assumed proper, is stable if all poles $p_i$ are in the open LHP. If $p_j = 0$ for some $j$, then stability requires that the multiplicity of $p_j = 0$ be 1.

Under the assumption that each element is a proper rational function of $s$, the transfer matrix possesses a state space realization $[A, B, C, D]$, such that the transfer matrix relates to the state space realization by $H(s) = C(sI - A)^{-1}B + D$. Although the transfer matrix representation of a system is unique, the state space realization is not. This transfer matrix can be rewritten as

$$C(sI - A)^{-1}B + D = \frac{Z(s)}{\det(sI - A)} = \frac{Z(s)}{\prod_{i=1}^{n}[s - \lambda_i(A)]} \qquad (3.14)$$

If there are no cancellations of terms of the form $[s - \lambda_i(A)]$ between the denominator and all the elements of the numerator in (3.14), then (3.13) and (3.14) are equivalent; the pole polynomial $d(s)$ of the transfer matrix and the characteristic polynomial $\det(sI - A)$ are the same. In this case, input-output stability is equivalent to the asymptotic stability of the dynamic matrix $A$.
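The role of cancellations in (3.14) can be seen numerically: in a non-minimal realization, an unstable eigenvalue of $A$ may not appear as a pole of $H(s)$. A small sketch assuming NumPy, with an unstable mode made unobservable on purpose (the matrices are illustrative):

```python
import numpy as np

# Two decoupled modes: s = -1 (observable) and s = +1 (unobservable from C).
A = np.array([[-1.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])   # the second state never reaches the output

def H(s):
    """Transfer function H(s) = C (sI - A)^{-1} B."""
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

# The transfer function equals 1/(s+1): the unstable eigenvalue +1 cancels,
# so the system looks BIBO stable although A is not asymptotically stable.
s = 2.0 + 3.0j
print(np.isclose(H(s), 1.0 / (s + 1.0)))      # -> True
print(np.all(np.linalg.eigvals(A).real < 0))  # -> False
```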
A necessary and sufficient condition for non-cancellation of system poles in (3.14) is that the state space realization $[A, B, C, D]$ be a minimal realization of the dynamic system, that is, be state controllable and observable.

Internal stability of closed-loop systems. Asymptotic stability of closed-loop systems, like the feedback system shown in Figure 2-1 (a), is equivalent to the internal stability of the loop [53]. A closed-loop LTI system is internally stable if any two points of the loop are connected through an exponentially stable transfer matrix [38]. Let $K(s)$ in Figure 2-1 (a) be a stabilizing compensator for $G_0(s)$, and let $r_d$ designate an external signal placed at the plant input. The vector $[y, u]^T$, formed by the outputs of plant and compensator, is related to the vector $[r, r_d]^T$ of their inputs by:

$$\begin{bmatrix} y \\ u \end{bmatrix} = H(G_0, K)\begin{bmatrix} r \\ r_d \end{bmatrix} = \begin{bmatrix} (I + G_0K)^{-1}G_0K & (I + G_0K)^{-1}G_0 \\ (I + KG_0)^{-1}K & (I + KG_0)^{-1}KG_0 \end{bmatrix}\begin{bmatrix} r \\ r_d \end{bmatrix} \qquad (3.15)$$

Therefore, internal stability of the unity feedback system with cascade compensation is equivalent to the stability of the four transfer matrices in $H(G_0, K)$. The characteristic polynomial of each of these matrices must be checked in order to assess the internal stability of the closed-loop system. Also, it can be shown that external stability and internal stability of the closed-loop system are equivalent if the state space representations of the plant and controller are stabilizable and detectable [53]. Note that, if the compensator $K$ is already known to be stable, then the stability of $(I + G_0K)^{-1}G_0$ is necessary and sufficient for the stability of $H(G_0, K)$.
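For a state space plant under a static output feedback gain, internal stability can be checked directly through the closed-loop dynamic matrix. A sketch assuming NumPy, unity negative feedback $u = K(r - y)$, and illustrative matrices (not taken from the text):

```python
import numpy as np

# Unstable scalar plant: x' = x + u, y = x  (open-loop pole at s = +1).
A = np.array([[1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
K = np.array([[3.0]])   # static stabilizing gain

# Closed loop under u = K(r - y):  x' = (A - B K C) x + B K r
Acl = A - B @ K @ C     # = [[-2]]

internally_stable = bool(np.all(np.linalg.eigvals(Acl).real < 0))
print(internally_stable)  # -> True: closed-loop pole at s = -2
```

Because this realization is minimal, the eigenvalues of $A - BKC$ coincide with the poles of all four entries of $H(G_0, K)$, so checking them is equivalent to the internal stability test.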
Spectral radius condition for stability. The term $(I + G_0K)^{-1}G_0$ represents the transfer matrix of a feedback loop, with $G_0$ in the forward path and $K$ in the feedback path. This loop can be represented in state space form by $\Gamma_c = [A_c, B_c, C_c, D_c]$. Stability of the feedback loop depends on the pole polynomial of its transfer matrix; therefore, it depends on the characteristic polynomial of $A_c$. The following result relates the characteristic polynomial of $A_c$ to the characteristic polynomials of $A_G$ and $A_K$. Assume that $G_0(s)$ and $K(s)$ are proper transfer functions having respectively minimal realizations $[A_G, B_G, C_G, D_G]$ and $[A_K, B_K, C_K, D_K]$, and define the return-difference operator as

$$F(s) \stackrel{\mathrm{def}}{=} [I + K(s)G_0(s)] \qquad (3.16)$$

further assuming that

$$\det[F(\infty)] = \det[I + K(\infty)G_0(\infty)] = \det[I + D_KD_G] \ne 0$$

Let $\phi_c$ be the closed-loop characteristic polynomial. Then [26]:

$$\phi_c = \det(sI - A_c) = \det(sI - A_G)\,\det(sI - A_K)\,\frac{\det[F(s)]}{\det[F(\infty)]} = \prod_{i=1}^{n_G}[s - \lambda_i(A_G)]\ \prod_{i=1}^{n_K}[s - \lambda_i(A_K)]\ \frac{\det[F(s)]}{\det[F(\infty)]} \qquad (3.17)$$

The important fact revealed by this equation is that, when $A_G$ and $A_K$ are Hurwitz, the matrix $A_c$ is Hurwitz if and only if all the zeros of $\det[I + K(s)G_0(s)]$ have negative real parts. It is important to notice [11] that, if cancellations of terms $[s - \lambda_i(\cdot)]$ occur between the left and the right side of equation (3.17), the zeros of $\det[I + K(s)G_0(s)]$ are a proper subset of the closed-loop eigenvalues.
Assuming that $G_0(s)$ and $K(s)$ are stable, equation (3.17) shows that a necessary and sufficient condition for stability of a feedback loop is that, for all $s$ such that $\mathrm{Re}(s) \ge 0$,

$$\det[I + KG_0(s)] \ne 0 \iff \lambda_i[I + KG_0(s)] \ne 0,\ \forall i \iff \lambda_i[KG_0(s)] \ne -1,\ \forall i \impliedby \rho[KG_0(s)] < 1 \impliedby \bar{\sigma}[KG_0(s)] < 1 \qquad (3.18)$$

Thus, small loop gain is a sufficient condition for stability of a feedback loop. Internal stability of a feedback loop can alternatively be checked through the Nyquist criterion, which is reviewed next.

Nyquist stability criterion. The Nyquist stability test permits the assessment of closed-loop stability without requiring the solution of the closed-loop characteristic polynomial. Due to its graphical character, it is very appealing in computer-aided analysis and design environments. Let us initially discuss the case of a scalar system. Suppose that plant and controller in Figure 2-1 (a) are scalar transfer functions. Let $q_0(s) = g_0(s)k(s) = \frac{n(s)}{d(s)}$, and let $f(s)$ represent the return difference transfer function. Then,

$$f(s) = 1 + q_0(s) = \frac{d(s) + n(s)}{d(s)} \qquad (3.19)$$

It can be easily verified that

$$f(s) = \frac{\phi_c(s)}{\phi_o(s)} \qquad (3.20)$$
where $\phi_o(s)$, $\phi_c(s)$ designate respectively the open-loop and the closed-loop characteristic polynomials, and let $p_o$, $p_c$ be their respective numbers of unstable roots. Closed-loop stability analysis requires the determination of the number $p_c$; for closed-loop stability, $p_c$ must be zero. The Nyquist criterion obtains $p_c$ from the knowledge of $p_o$ and the application of the principle of the argument to equation (3.20). Let $n_0$ be the number of clockwise encirclements of the origin by the map of the standard Nyquist contour under $f(s)$. Equivalently, $n_0$ corresponds to the number of clockwise encirclements of the critical point $(-1, j0)$ by the map of the contour under $q_0(s)$. Since $n_0$ corresponds to the difference between the numbers of roots of the numerator and denominator of $f(s)$, which are respectively $p_c$ and $p_o$, the following relationship is satisfied:

$$p_c = n_0 + p_o \qquad (3.21)$$

The closed-loop system is stable if and only if $p_c = 0$ or, equivalently, if and only if $n_0 = -p_o$. That is, if and only if the map of the Nyquist contour by $q_0(s)$ encircles the critical point, in the anticlockwise direction, a number of times equal to the number of unstable poles of $\phi_o$.

Now, consider the case in which $G_0(s)$, $K(s)$ in Figure 2-1 (a) are MIMO transfer matrices. Let $\phi_o$, $\phi_c$ be respectively the open-loop and the closed-loop characteristic polynomials, and consider the return difference operator defined by equation (3.16). Defining $\gamma = \det[I + K(\infty)G_0(\infty)]$, equation (3.17) shows that

$$\det[F(s)] = \gamma\,\frac{\phi_c(s)}{\phi_o(s)} \qquad (3.22)$$
The principle of the argument applied to (3.22) links the difference $p_c - p_o$ to the number of encirclements of the origin by the characteristic loci of $F(s)$, which is the same as the number of encirclements of the critical point by the characteristic loci of $Q_0$. The characteristic loci of $F(s)$ are the maps of the Nyquist contour under the characteristic values of $F(s)$. Let $f_i(s)$, $q_i(s)$ be the characteristic values of $F(s)$, $Q_0(s)$, respectively, and recall that $Q_0 \in \mathbb{C}^{p \times p}$. The characteristic values $q_i(s)$ are the solutions of the characteristic equation $V(q, s) \stackrel{\mathrm{def}}{=} \det[q(s)I - Q_0(s)] = 0$. In general, the characteristic equation can be factored as a product of irreducible polynomials, $V(q, s) = V_1(q, s)\cdots V_l(q, s)$. Each polynomial $V_i$ is a polynomial of order $n_i$ in $q_i$, with coefficients $a_{ij}(s)$, $j = 1, \ldots, n_i$, such as:

$$V_i(q, s) = q_i^{n_i}(s) + a_{i1}(s)q_i^{n_i - 1}(s) + \cdots + a_{in_i}(s) \qquad (3.23)$$
The map of the standard Nyquist contour under the characteristic values of $Q_0(s)$ generates a set of closed curves, which constitute the characteristic loci of $Q_0(s)$. The number of encirclements of the critical point by the characteristic loci of $Q_0(s)$ and the number of unstable poles of $\phi_o(s)$ are used to assess closed-loop stability. The generalized criterion is formally stated as follows:

Generalized Nyquist criterion. Let $n_O$ be the number of encirclements of the critical point by the characteristic loci of the open-loop transfer matrix $Q_0(s)$, and let $p_c$ and $p_o$ be the numbers of unstable poles of $\phi_c$ and $\phi_o$, respectively. Then the closed-loop system is stable if and only if $n_O = -p_o$, that is, if and only if the characteristic loci encircle the critical point anticlockwise a number of times equal to the number of unstable open-loop poles. $\square$

When the criterion is applied to a perturbed system, the
assumption of nominal closed-loop stability is equivalent to assuming the correct number of encirclements of the critical point by the nominal eigenloci. Now, under the assumption that $Q_0$ and $Q_p$ have the same number of RHP poles, the perturbed closed-loop system remains stable as long as the net number of encirclements of the critical point does not change under perturbation. A change in the number of encirclements occurs if and only if there is a non-null net number of crossings of the critical point by the perturbed eigenloci. The following theorem formally states these considerations.

Theorem 3.3 [19]. Let the unity feedback system of Figure 2-1 (a) be closed-loop stable. Assume the presence of additive perturbations, belonging to a given class, such that $Q_0$ and $Q_p$ have the same number of RHP poles. Then, the perturbed system remains stable under unity feedback, for all perturbations in the given class, if and only if

$$n_{O_p} = n_O \qquad (3.26)$$

where $n_{O_p}$ and $n_O$ are respectively the numbers of encirclements of the critical point by the perturbed and the nominal characteristic loci. $\square$

Two remarks are in order here. First, the assumption that $Q_0$ and $Q_p$ have the same number of RHP poles requires that the perturbation itself be stable. Also, if the controller $K(s)$ is an open-loop stabilizing controller for $G_0(s)$, then $n_O = 0$; but $n_{O_p} = 0$ if and only if the controller open-loop stabilizes the perturbed plant $G_p(s)$, for all perturbations in the allowable class. Second, the application of the Nyquist criterion requires graphical displays of eigenloci; however, the perturbed eigenloci are not known. Fortunately, there exist methods for determining regions in the complex plane which include the eigenvalues of a perturbed complex matrix. Computed in a point-by-point
basis as the complex frequency describes the Nyquist contour, each region containing one eigenvalue generates an inclusion band in the complex plane which contains one perturbed eigenlocus. Thus, the perturbed eigenloci are contained in the set of bands described in the complex plane by the set of inclusion regions. If the open-loop compensated plant $Q_0$ is stable, the stability requirement (3.26) is equivalent to the requirement that the critical point does not belong to the set of inclusion bands. Therefore, in robust stability analysis the generalized Nyquist criterion is applied to the inclusion bands. The size of the inclusion regions depends on the construction method and on the norm upper bound on the uncertainty class. Methods of computation of inclusion regions are next briefly reviewed.

Condition number method. Let $\Delta_A \in \mathcal{D}_U$, defined in equation (2.14), $Q_p = Q_0 + \Delta_A$, and assume that $Q_0$ has the characteristic decomposition $Q_0 = W\Lambda_0W^{-1}$. Then, it can be shown that [54]

$$|\lambda(Q_{p_i}) - \lambda(Q_{0_i})| \le \kappa_W\,\delta, \qquad \forall i \qquad (3.27)$$

where $\kappa_W$ is the condition number of the eigenvector matrix $W$. The quantity $\kappa_W\delta$ gives the radius of regions in the complex plane, centered at the nominal eigenvalues, which include the perturbed eigenvalues for all perturbations in the class characterized by $\bar{\sigma}[\Delta_A(s)] \le \delta(s)$. The inclusion regions defined by (3.27) are easily computable, but the method has disadvantages. If the condition number of the nominal matrix is large, the radius is large, and the computed inclusion regions may be very conservative. Also, if the eigenvectors of the nominal matrix are too skewed, the condition number can be very sensitive to small perturbations, thus unfavorable for computations.
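The bound (3.27) is simple to evaluate numerically: diagonalize the nominal matrix, take the condition number of the eigenvector matrix, and scale by the perturbation norm bound. A sketch assuming NumPy (the nominal matrix and the bound are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
Q0 = np.array([[0.0, 1.0], [-2.0, -3.0]])   # nominal matrix, eigenvalues -1, -2
delta = 0.05                                 # norm bound on the perturbation

lam0, W = np.linalg.eig(Q0)
kappa_W = np.linalg.cond(W)                  # condition number of eigenvectors
radius = kappa_W * delta                     # inclusion radius from (3.27)

# Spot-check: random perturbations with spectral norm equal to delta keep
# every perturbed eigenvalue within `radius` of some nominal eigenvalue.
for _ in range(100):
    D = rng.standard_normal((2, 2))
    D *= delta / np.linalg.norm(D, 2)        # scale to spectral norm = delta
    for lp in np.linalg.eigvals(Q0 + D):
        assert min(abs(lp - lam0)) <= radius + 1e-9
```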
54 Normal approximations method . Let us consider again the openloop perturbed matrix, namely Q p (s) = Q a (s) + A^, where Q 0 (s) Â€ C mXm . Using the rectangular decomposition technique, Q 0 (s) can be decomposed into the sum of two normal matrices, one hermitian and one skewhermitian. The method of normal approximations to perturbed matrices [7, 8] consists of the substitution of the nominal matrix by the hermitian part of a rectangular decomposition. The skewhermitian part is considered an approximation error, and included in the perturbation. Let Q n and Eq be respectively the hermitian and the skewhermitian parts of the decomposition of Q 0 . The rectangular decomposition is chosen such that the norm of Eq is minimized. Assuming that Eq is characterized by a norm upper bound, say ^[Â£q( s )] < e ( 5 )> Vs, the perturbed matrix can be written as Q p = Q n + (Eq + A^), where (Eq + A^) represent the total perturbation to the normal matrix Q n The application of the condition number method to Q p yields: I HQ Pi) HQn,)  < Â«w(<5 + c) = (6 + e), Vi (3.28) since the normal matrix Q n has condition number k\v = 1. By adequately choosing the normal approximation, the radius (S + c) given by the last equation can be made smaller than the radius given by (3.27), thus reducing the conservatism of the inclusion region. Inclusion regions determined by normal approximation can be made tighter by taking their intersection with the region determined in the complex plane by the numerical range of the matrix Q p . The numerical range of Q a Â£ C pxp is given by [23]: K = {zee : z = o ?xec p } x*x The numerical range of Q p , which obviously includes the eigenvalues, is contained in the region of the complex plane determined when the numerical range of Q 0 is extended by
$\delta$ in all directions. That is,

$$N_p = \left\{ z \in \mathbb{C} : z = \frac{x^*(Q_0 + \Delta_A)x}{x^*x} = \frac{x^*Q_0x}{x^*x} + \frac{x^*\Delta_Ax}{x^*x}, \ 0 \ne x \in \mathbb{C}^p \right\} \subseteq N_0 \oplus \delta \qquad (3.29)$$

where $\oplus$ means the extension in all directions. Since the perturbed eigenvalues are included in the regions defined by both equations (3.27) and (3.29), they are included in their intersections. Whence, tighter inclusion bands are obtained by the computation of those intersections as the complex frequency describes the standard Nyquist contour. The regions given by the intersections are still not tight, in the sense that they may include points which cannot be made eigenvalues of the perturbed system, for any of the perturbations in the allowable class. A method which yields tight inclusion regions for the case of unstructured perturbations is next summarized.

E-contours method. Let $z \in \mathbb{C}$ be an eigenvalue of the perturbed open-loop matrix. Then $\det[(Q_0 + \Delta_A) - zI_p] = 0$, which means that $(Q_0 + \Delta_A - zI_p)$ loses rank at $z$. Therefore, $\underline\sigma(Q_0 + \Delta_A - zI_p) = \underline\sigma[(Q_0 - zI_p) + \Delta_A] = 0$. The following dichotomy results:

1. If $\underline\sigma(Q_0 - zI_p) > \delta$, then $z$ cannot be an eigenvalue of $Q_p$, for any $\Delta_A$;
2. If $\underline\sigma(Q_0 - zI_p) \le \delta$, there always exists $\Delta_A$ such that $z$ is an eigenvalue of $Q_p$.
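The dichotomy above is easy to check numerically: a point $z$ is achievable as a perturbed eigenvalue exactly when the smallest singular value of $(Q_0 - zI)$ does not exceed $\delta$. A small sketch (the example matrix is mine, not from the text):

```python
import numpy as np

def can_be_perturbed_eigenvalue(Q0, z, delta):
    """z is an eigenvalue of Q0 + Delta for some Delta with
    sigma_max(Delta) <= delta  iff  sigma_min(Q0 - z*I) <= delta."""
    n = Q0.shape[0]
    smin = np.linalg.svd(Q0 - z * np.eye(n), compute_uv=False)[-1]
    return smin <= delta

Q0 = np.diag([-1.0, -3.0])
# points close to a nominal eigenvalue are achievable, far points are not
near = can_be_perturbed_eigenvalue(Q0, -1.0 + 0.05j, delta=0.1)   # True
far  = can_be_perturbed_eigenvalue(Q0, 1.0, delta=0.1)            # False
```

Sweeping $z = \lambda_{0i} + \rho e^{j\theta}$ over $\rho$ and $\theta$ with this test traces exactly the E-contours described next.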
This lemma leads to an algorithm for the computation of the E-contours inclusion regions. Letting $\lambda_{0i}$, $i = 1, \ldots, p$ be the nominal eigenvalues, the E-contours are the loci of the 'first' solution for $z$ of the equations

$$\underline\sigma(Q_0 - zI_p) = \underline\sigma[Q_0 - (\lambda_{0i} + \rho e^{j\theta})I_p] = \delta \qquad (3.31)$$

as $\rho$ is increased from 0, and $0 \le \theta < 2\pi$. It can be shown [9] that the contours constructed as described above always form closed curves, and that the perturbed eigenvalues are contained in the union of the contours. Plotted as a function of the frequency, the contours sweep bands to which the generalized Nyquist criterion is applied.

Singular-value condition for stability under unstructured uncertainty. Let us consider again the unity feedback system of Figure 2-1 (a), assuming that $K$ is a stabilizing controller for the nominal system. Furthermore, let us assume that the plant is subject to additive unstructured uncertainty $\Delta_A$ belonging to the class $\mathcal{D}_U$. The presence of the controller in the forward path of the feedback loop changes the open-loop perturbation. In order to assess robust stability, one may consider the perturbed open-loop compensated system, given by:

$$Q_p(s) = [G_0(s) + \Delta_A(s)]K(s) = G_0K(s) + \Delta_AK(s) = Q_0(s) + \Delta_Q(s) \qquad (3.32)$$

Notice that the resultant perturbation is $\Delta_Q(s)$. In order to characterize the class containing the uncertainty in the compensated open-loop plant, the norm upper bound $\bar\sigma[\Delta_AK(s)]$ must be obtained. It may happen that, due to the controller structure, the upper bound results too large, thus causing the uncertainty description to be unacceptable.
Using the method described in Chapter 2, the system can be rearranged so that the uncertainty becomes an additive term to the closed-loop system, as in the $M-\Delta$ representation of Figure 2-4 (b). The nominal interconnection structure is given by:

$$\begin{bmatrix} y_\Delta \\ y \end{bmatrix} = M \begin{bmatrix} u_\Delta \\ r \end{bmatrix} = \begin{bmatrix} -(I + G_0K)^{-1} & (I + G_0K)^{-1} \\ (I + G_0K)^{-1} & (I + G_0K)^{-1}G_0K \end{bmatrix} \begin{bmatrix} u_\Delta \\ r \end{bmatrix} \qquad (3.33)$$

The transfer matrix from $r$ to $y$, in the presence of uncertainty, is given by the linear fractional transformation $F_U(M, \Delta) = M_{22} + M_{21}\Delta_Q(I - M_{11}\Delta_Q)^{-1}M_{12}$, which is represented by the block diagram in Figure 2-5. Equation (3.33) above shows that the transfer functions $M_{12}$, $M_{21}$ and $M_{22}$ are stable, since they depend only on the nominal system, which is by assumption stabilized by $K(s)$. Therefore, the stability of the linear fractional transformation depends only on the transfer matrix $[\Delta_Q(I - M_{11}\Delta_Q)^{-1}]$, which represents a feedback loop with $\Delta_Q(s)$ in the forward path and $M_{11}(s)$ in the feedback path. Let $[A_{11}, B_{11}, C_{11}, D_{11}]$ be a minimal state space realization of $M_{11}(s)$, and let us assume that the perturbation $\Delta_Q(s)$, which is itself a dynamic system, has a minimal realization $[A_\Delta, B_\Delta, C_\Delta, D_\Delta]$. Using equation (3.17), the characteristic polynomial of the feedback loop is given by:

$$\phi(s) = \det[sI - A_{11}]\,\det[sI - A_\Delta]\,\det[I + M_{11}\Delta_Q(s)] \qquad (3.34)$$

Therefore, if the perturbation $\Delta_Q(s)$ is stable, the stability of the feedback loop involving the perturbation can be derived from the stability of $\det[I + M_{11}\Delta_Q(s)]$. Stability of $\Delta_Q(s)$ is a requirement stronger than the requirement of stability of $\Delta_A(s)$. Alternatively, the perturbed system can be rearranged so that the original perturbation $\Delta_A(s)$ becomes the additive perturbation to the closed-loop system. In this case, the
nominal interconnection structure becomes:

$$\begin{bmatrix} y_\Delta \\ y \end{bmatrix} = M \begin{bmatrix} u_\Delta \\ r \end{bmatrix} = \begin{bmatrix} -(I + KG_0)^{-1}K & (I + KG_0)^{-1}K \\ (I + G_0K)^{-1} & (I + G_0K)^{-1}G_0K \end{bmatrix} \begin{bmatrix} u_\Delta \\ r \end{bmatrix} \qquad (3.35)$$

Under the assumptions that the controller $K(s)$ stabilizes $G_0$ and that the controller itself is stable, the transfer matrices $M_{12}$, $M_{21}$ and $M_{22}$ are stable; thus the stability of the transfer matrix from $r$ to $y$ depends on the feedback loop $[\Delta_A(I + M_{11}\Delta_A)^{-1}]$. Furthermore, $M_{11}(s)$ itself is stable. If the perturbation $\Delta_A(s)$ is stable, then the zeros of the closed-loop characteristic polynomial of the feedback loop are in the Left Half Plane (LHP) if and only if the zeros of the return difference matrix are in the LHP. Therefore, the perturbed system is stable, $\forall s : \mathrm{Re}(s) \ge 0$ and $\forall \Delta_A \in \mathcal{D}_U$, if and only if

$$\det[I + M_{11}\Delta_A(s)] \ne 0 \iff \lambda_i[M_{11}\Delta_A(s)] \ne -1, \ \forall i \iff \rho[M_{11}\Delta_A(s)] < 1 \qquad (3.36)$$

Recall that the spectral radius condition for nominal stability, given by equation (3.18), is only sufficient. The last inequality however shows that, in the presence of unstructured uncertainty, the spectral radius condition is necessary and sufficient. Necessity is obtained from the phase freedom of the elements of the unstructured perturbation, and the possibility of scaling the perturbations, so that $\Delta_A' = \epsilon\Delta_A$, $\epsilon \in [0, 1]$, is obtained from $\Delta_A$. For suppose that $\rho[M_{11}\Delta_A(s)] \ge 1$, for some perturbation in the allowable class, and some $s$. Then, by changing only the phase of the perturbation elements and scaling by multiplication by $\epsilon$, it is possible to obtain a perturbation, say $\tilde\Delta_A$, such that $\det[I + M_{11}\tilde\Delta_A(s)] = 0$, for some $s$.
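For constant matrices, the linear fractional transformation and the loop condition just discussed are easy to evaluate. The sketch below (arbitrary scalar data, not from the text) uses the $(I - M_{11}\Delta)$ convention of the LFT given for equation (3.33):

```python
import numpy as np

def lft_upper(M11, M12, M21, M22, Delta):
    """Upper LFT: transfer from r to y with Delta closing the loop,
    F_U(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.inv(np.eye(n) - M11 @ Delta) @ M12

M11 = np.array([[0.4]]); M12 = np.array([[1.0]])
M21 = np.array([[1.0]]); M22 = np.array([[0.5]])
Delta = np.array([[0.5]])

# spectral radius test for the M11-Delta loop
rho = max(abs(np.linalg.eigvals(M11 @ Delta)))
well_posed = rho < 1                       # here rho = 0.2
y = lft_upper(M11, M12, M21, M22, Delta)   # 0.5 + 0.5/(1 - 0.2) = 1.125
```

When the spectral radius reaches one, the inverse in the LFT becomes singular, which is the frequency-domain picture of the determinant condition above.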
It is always possible to find a perturbation in the allowable unstructured class, say $\Delta_A^0(s)$, which satisfies

$$\rho[M_{11}\Delta_A^0(s)] = \bar\sigma[M_{11}\Delta_A^0(s)] = \bar\sigma[M_{11}(s)]\,\bar\sigma[\Delta_A^0(s)] \qquad (3.37)$$

Therefore, a necessary and sufficient condition for robust stability, $\forall \Delta_A \in \mathcal{D}_U$, is:

$$\bar\sigma[M_{11}(s)]\,\bar\sigma[\Delta_A(s)] < 1, \ \forall s \iff \bar\sigma[M_{11}(s)] < \frac{1}{\bar\sigma[\Delta_A(s)]}, \ \forall s \qquad (3.38)$$

Stability under structured perturbations. Let us consider the $M-\Delta$ form of a perturbed system, represented in Figure 2-4 (b), and assume that $\Delta \in X_\mathcal{K}(\delta)$, defined by equation (2.25), and that the associated block structure has $k_r = k_{mc} = 0$. This is the case of a perturbation composed of complex blocks, which emerges naturally when the diagonalization technique is applied to an interconnected system whose subsystems are subject to unstructured uncertainty. Applying the same reasoning used above leads to a necessary and sufficient stability condition in terms of the spectral radius, namely

$$\rho[M_{11}\Delta(s)] < 1, \ \forall \Delta \in X_\mathcal{K}(\delta) \qquad (3.39)$$

However, this perturbation class does not admit all perturbations with norm less than $\delta$, but only those which satisfy the norm constraint and the block structure, whence the condition obtained from the inequality chain

$$\rho[M_{11}\Delta(s)] \le \bar\sigma[M_{11}(s)]\,\bar\sigma[\Delta(s)] \le \bar\sigma[M_{11}(s)]\,\delta(s) < 1 \qquad (3.40)$$
is only sufficient. The conservatism of this condition can be arbitrarily large, since it may happen that no perturbation satisfying (3.40) and having the required structure will destabilize the system. Spectral radius preserving transformations have been widely used to scale the relevant matrices such that the gap between the spectral radius and the singular value is reduced, thus reducing the conservatism of the stability condition obtained from (3.39). Scaling techniques are reviewed in Section 3.4. Next, two tighter criteria for stability under structured perturbations are reviewed.

Structured singular-value stability condition. Given a matrix $M$ and the associated block structure $\mathcal{K}$, the structured singular value of $M$, or $\mu$-function, is defined by [12]:

$$\mu(M) \stackrel{def}{=} \frac{1}{\min_{\Delta \in X_\mathcal{K}(\delta)}\{\bar\sigma[\Delta(s)] : \det[I - M\Delta(s)] = 0\}} \qquad (3.41)$$

if there is $\Delta \in X_\mathcal{K}(\delta)$ such that $\det[I - M\Delta] = 0$; if there is no such $\Delta$, then $\mu(M) = 0$. The following theorem states the necessary and sufficient condition for stability of the $M-\Delta$ representation, in terms of the $\mu$-function.

Theorem 3.4 [13]. The system $M-\Delta$ is stable, $\forall \Delta \in X_\mathcal{K}(\delta)$, if and only if:

$$\mu[M_{11}(s)]\,\delta(s) < 1, \ \forall s \iff \mu[M_{11}(s)] < \frac{1}{\delta(s)}, \ \forall s \qquad (3.42)$$

If the perturbations are weighted such that $\bar\sigma[\Delta(s)] \le 1$, $\forall s$, and the frequency dependent weight is included in $M$, the above result asserts that:

$$\text{stability} \iff \sup_s \mu[M_{11}(s)] < 1 \qquad (3.43)$$
The tightness of the above stability condition stems directly from the definition of the $\mu$-function: $\mu(M)$ is defined on the basis of a destabilizing perturbation having the required structure. However, although it clearly addresses the robust stability problem, the definition is not of much help from a computational point of view. Actually, the computation of the exact value of $\mu(M)$ can be done only in special cases. Usually only upper and lower bounds are computable, even for the purely complex case, namely when $m_r = 0$ in the block structure [41]. The computation is especially demanding in the mixed case, namely when $m_r \ne 0$. Computation of bounds for $\mu(M)$ relies on a set of properties of the $\mu$-function, proved by Doyle [12], the most important of which are given below:

$$\mu(\alpha M) = |\alpha|\,\mu(M), \ \forall M \in \mathbb{C}^{m\times m}, \ \forall \text{ scalar } \alpha \qquad (3.44)$$
$$\mu(M_1M_2) \le \bar\sigma(M_1)\,\mu(M_2), \ \forall M_1, M_2 \in \mathbb{C}^{m\times m} \qquad (3.45)$$
$$\text{for one single full complex block: } \mu(M) = \bar\sigma(M), \ \forall M \in \mathbb{C}^{m\times m} \qquad (3.46)$$
$$\text{for one repeated complex scalar: } \mu(M) = \rho(M), \ \forall M \in \mathbb{C}^{m\times m} \qquad (3.47)$$

The equality in (3.46) is attained in the case of one single complex block of any size, since the conditions imply $m_r = 0$. On the other hand, (3.47) concerns the case of one complex scalar. From the computational point of view, the following property is fundamental. Let $\mathcal{U}_\mathcal{K} = \{U : U \text{ is unitary}\}$ with the same block-diagonal structure as $X_\mathcal{K}$, and let

$$\mathcal{S}_\mathcal{K} \stackrel{def}{=} \{S : S = \mathrm{diag}\{s_iI_i\}, \ s_i \in \Re_+\} \qquad (3.48)$$

the set of real positive diagonal matrices with blocks having the dimension of the corresponding block in $X_\mathcal{K}$. Then, $\forall M \in \mathbb{C}^{m\times m}$,

$$\sup_{U \in \mathcal{U}_\mathcal{K}} \rho(UM) \le \mu(M) \le \inf_{S \in \mathcal{S}_\mathcal{K}} \bar\sigma(SMS^{-1}) \qquad (3.49)$$

It has been shown [12, 15] that the left inequality of (3.49) is actually always an equality; however, the optimization problem involved is not convex, which may lead to the existence of local maxima. On the other hand, it has been proved [49] that the optimization problem involved in the right inequality of (3.49) is always convex, and hence has only global minima, as a consequence of the fact that $\bar\sigma(e^DMe^{-D})$ is convex in $D$. Since $S$ has $m$ elements, one of which can be fixed, the minimization is done over $(m-1)$ variables, no matter what the sizes of the blocks are. Equality is always attained on the right side of (3.49) when there are 3 or fewer non-repeated blocks in the block-diagonal perturbation, regardless of the dimension of the blocks. For more than 3 blocks, the lower and upper bounds in (3.49) usually stay within 5% of each other, and almost always within 15% [38]. Furthermore, it has been shown [29] that, for the case of complex perturbations, the right inequality holds with equality, regardless of the number of elements in the perturbation, provided that the 'inf' in (3.49) occurs at a stationary point of $\bar\sigma(SMS^{-1})$ relative to the elements of the scaling $S$. This case occurs when there is no cusping of $\bar\sigma(SMS^{-1})$.

Multivariable stability margin. Consider the perturbed $M-\Delta$ form where $\Delta \in X_\mathcal{K}(\delta)$ is

$$\Delta = \mathrm{diag}\{\delta_1, \ldots, \delta_{m_r}, \delta_{m_r+1}, \ldots, \delta_{m_r+m_c}\}$$

The multivariable stability margin of the MIMO structure $M$ is defined as follows [10]:

$$k_m \stackrel{def}{=} \min\{k \in [0, \infty) : \det[I - k\Delta M] = 0\} \qquad (3.50)$$
PAGE 70
Let $\mathcal{D}_i$ be the known domain of the $i$-th parameter and let the actual perturbation be $\Delta_{ac} \in \Delta_\mathcal{K}(\delta)$. Then, the perturbed system is stable if and only if

$$\delta_{ac,i} \in k_m\mathcal{D}_i, \ \forall i$$

Therefore, given a set of parameter ranges, if $k_m > 1$, it indicates how much the ranges can be extended without the system becoming unstable for any combination of parameters inside the extended domain. Conversely, $k_m < 1$ indicates how much the ranges must be shrunk so that the system can stand all perturbations in the given class. An algorithm for the computation of the multivariable stability margin, which can be applied also to the case of purely real uncertainty, has been given by De Gaston and Safonov [10]. The algorithm avoids a burdensome search over the parameter space by exploiting the mapping theorem due to Zadeh and Desoer.

3.4 Frequency-Domain Scaling Techniques

The fundamental condition for robust stability of the $M-\Delta$ representation is given by equation (3.36), namely $\rho[M_{11}\Delta(s)] < 1$, $\forall s$. Equation (3.38) shows that, if the perturbation belongs to an unstructured class characterized by the norm upper bound $\delta(s)$, a necessary and sufficient condition for stability is

$$\bar\sigma[M_{11}(s)] < \frac{1}{\delta(s)}, \ \forall s \qquad (3.51)$$

The sufficiency of the condition comes from the inequality

$$\rho[M_{11}\Delta(s)] \le \bar\sigma[M_{11}\Delta(s)] \le \bar\sigma[M_{11}(s)]\,\bar\sigma[\Delta(s)] \le \bar\sigma[M_{11}(s)]\,\delta(s) \qquad (3.52)$$

which applies in general. Necessity arises because, since the only constraint posed on the unstructured class is the norm bound, it is always possible to find a member of the class for which all the above inequalities become equalities.
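The gap between the spectral radius and the singular-value bound in (3.52), and the extent to which diagonal scaling closes it, can be seen numerically. The crude search below is only a sketch, using an arbitrary matrix and a diagonal complex perturbation structure:

```python
import numpy as np

M = np.array([[1.0, 0.8],
              [0.1, 0.5]])
rng = np.random.default_rng(1)

# sampled worst-case spectral radius over unit-norm diagonal complex
# perturbations Delta = diag(exp(j*theta)): a lower estimate
lower = max(
    max(abs(np.linalg.eigvals(M @ np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, 2))))))
    for _ in range(2000)
)

# unscaled singular-value bound, and its improvement by diagonal scaling
unscaled = np.linalg.norm(M, 2)
scaled = min(
    np.linalg.norm(np.diag([1.0, s]) @ M @ np.diag([1.0, 1.0 / s]), 2)
    for s in np.geomspace(0.01, 100.0, 2000)
)
# lower <= scaled <= unscaled: scaling reduces the conservatism
```

For this two-block structure the scaled upper bound and the worst-case spectral radius coincide in the limit, consistent with the equality cases of (3.49) discussed above.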
PAGE 71
If a structured uncertainty class is considered, constraints are posed on the norm and on the structure of the admissible perturbations. Under these constraints, it is not possible to guarantee that (3.52) holds with strict equality for some member of the class. Consequently, if the uncertainty is structured, a singular-value condition in the form of (3.51) is in general only sufficient. In fact, it has been shown [29] that the worst case perturbation $\Delta(s)$, namely the one for which $\rho[M_{11}\Delta(s)] = \sup_\Delta \rho[M_{11}\Delta(s)]$
PAGE 72
3.4.1 Similarity Scaling

The advantageous application of similarity scaling in robust stability analysis was first reported in the context of the block diagonal uncertainty problem [15, 47]. Let us review this case. Consider the $M-\Delta$ perturbed representation, and let $\Delta$ be a member of the structured class $X_\mathcal{K}(\delta)$ defined by (2.25), with the further assumption that $\Delta$ has no real elements. Applying condition (3.36), stability is guaranteed, $\forall \Delta \in X_\mathcal{K}(\delta)$, if and only if

$$\sup_\Delta \rho[M_{11}\Delta(s)] < 1, \ \forall s \qquad (3.53)$$

A well known property of nonsingular similarity transformations is that they preserve the eigenvalues of the transformed matrix. Therefore, for some $S \in \mathcal{S}_\mathcal{K}$ defined by (3.48), the spectral radius and the maximum singular value of $[M_{11}\Delta(s)]$ are related by:

$$\sup_\Delta \rho[SM_{11}\Delta(s)S^{-1}] = \sup_\Delta \rho[M_{11}\Delta(s)] \le \sup_\Delta \bar\sigma[SM_{11}\Delta(s)S^{-1}], \ \forall s$$

Letting $S$ range over the set $\mathcal{S}_\mathcal{K}$, one has that

$$\sup_\Delta \rho[M_{11}\Delta(s)] \le \inf_S\{\sup_\Delta \bar\sigma[SM_{11}\Delta(s)S^{-1}]\}, \ \forall s \qquad (3.54)$$

Let $\Delta_w(s) \in X_\mathcal{K}(\delta)$ be the worst case perturbation, which is characterized by

$$\rho[M_{11}\Delta_w(s)] = \max_\Delta \rho[M_{11}\Delta(s)]$$

It has been shown [12] that, in the case of purely complex perturbations, there exists a worst case perturbation in which each element is on the boundary of its domain in $\mathbb{C}$. Therefore, the worst case perturbation can be decomposed as

$$\Delta_w(s) = P_\Delta U_\theta \qquad (3.55)$$
PAGE 73
where $P_\Delta$ is a diagonal real matrix containing the known upper bounds on the norm of the complex blocks, and $U_\theta \in \mathcal{U}_\mathcal{K}$, the set of unitary matrices having the same block structure as $X_\mathcal{K}$. Substitution in (3.54) gives:

$$\sup_{U_\theta} \rho[M_{11}(s)P_\Delta U_\theta] \le \inf_S\{\sup_{U_\theta} \bar\sigma[SM_{11}(s)P_\Delta U_\theta S^{-1}]\}, \ \forall s$$

Observing that $U_\theta$ and $S^{-1}$ commute, because by definition they satisfy the same block diagonal structure, and that the spectral norm is invariant under multiplication by a unitary matrix, the last equation can be written as:

$$\sup_{U_\theta} \rho[M_{11}(s)P_\Delta U_\theta] \le \inf_S \bar\sigma[SM_{11}(s)P_\Delta S^{-1}], \ \forall s$$

Defining $M_a(s) \stackrel{def}{=} M_{11}(s)P_\Delta$, the above inequality becomes:

$$\rho[M_a(s)] \le \sup_{U_\theta} \rho[M_a(s)U_\theta] \le \inf_S \bar\sigma[SM_a(s)S^{-1}], \ \forall s \qquad (3.56)$$

Therefore, a sufficient condition for stability of the $M-\Delta$ representation, under block-diagonal structured uncertainty, is

$$\inf_S \bar\sigma[SM_a(s)S^{-1}] < 1, \ \forall s \qquad (3.57)$$

3.4.2 Non-Similarity Scaling

In the derivation above, the commutative property of block diagonal matrices was invoked to swap the positions of $U_\theta$ and $S^{-1}$, thus allowing the phase matrix to be discarded in the term involving the spectral norm. This property could not be used if the perturbations had a more general structure than the block-diagonal form. This is the case of the element-by-element bounded perturbations in the class $\mathcal{D}_S$, defined by (2.17). However, this case can be handled by the technique of non-similarity scaling [28, 33].
PAGE 74
Let us consider the $M-\Delta$ representation, assuming that $M_{11} \in \mathbb{C}^{m\times m}$ and the allowable uncertainty class is $\mathcal{D}_S$ defined in (2.17). Then, the perturbation $\Delta(s)$ is a full matrix satisfying $\Delta^+ \le P_\Delta$, for some $P_\Delta \in \Re_+^{m\times m}$. Now, let
PAGE 75
3.4.3 Suboptimal Scaling

Both stability conditions (3.57) and (3.61) are optimal in the sense that the norm of the scaled matrix is minimized over the set of scaling matrices. However, consider $S \in \mathcal{S}$. The following inequalities follow from equation (3.56), under the assumption of complex perturbations:

$$\rho[M_{11}(s)P_\Delta] \le \sup_{U_\theta} \rho[M_{11}(s)P_\Delta U_\theta] \le \inf_S \bar\sigma[SM_{11}(s)P_\Delta S^{-1}] \le \bar\sigma[SM_{11}(s)P_\Delta S^{-1}] \qquad (3.62)$$

In the same way, for $S_1, S_2 \in \mathcal{S}$, equation (3.60) yields:

$$\rho[M_{11}\Delta(s)] \le \inf_{S_1,S_2}\{\bar\sigma[S_1M_{11}(s)S_2]\,\bar\sigma[S_2^{-1}P_\Delta S_1^{-1}]\} \le \bar\sigma[S_1M_{11}(s)S_2]\,\bar\sigma[S_2^{-1}P_\Delta S_1^{-1}] \qquad (3.63)$$

If the similarity scaling $S$, or the non-similarity scaling pair $S_1$ and $S_2$, is chosen according to some criteria, equations (3.62) and (3.63) can be used to obtain sufficient stability conditions. Although more conservative, these conditions save computation time, since they do not require a search over
PAGE 76
The eigenvalue of $A$ which equals the spectral radius is called the Perron eigenvalue and denoted by $\pi(A)$. The associated eigenvectors are the right and left Perron eigenvectors.

Lemma 3.2 [3, 29]. For any $A \in \mathbb{C}^{m\times m}$, and $S \in \mathcal{S}$,

$$\inf_S \bar\sigma(SA^+S^{-1}) = \pi(A^+) \qquad (3.64)$$

The minimizing scaling $S \stackrel{def}{=} S_\pi$, called Perron scaling, is given by $S_\pi = [Y_AX_A^{-1}]^{1/2}$, where $Y_A$ and $X_A$ are diagonal matrices containing respectively the elements of the left and of the right Perron eigenvector of $A^+$.

Lemma 3.3 [3, 28]. Given matrices $A$ and $B$ of compatible dimensions, with both $A_{ij}$ and $B_{ij} \in \Re_+$, and $S_1$ and $S_2 \in \mathcal{S}$, then

$$\inf_{S_1,S_2}\{\bar\sigma(S_1AS_2)\,\bar\sigma(S_2^{-1}BS_1^{-1})\} = \pi(AB) \qquad (3.65)$$

The scaling defined in this lemma is called Perron $S_1$-$S_2$ scaling [28]. The optimal pair of scaling matrices, for which equality is obtained in (3.65), is determined by [3, 28]:

$$S_{1\pi} = [Y_{AB}X_{AB}^{-1}]^{1/2}; \quad S_{2\pi} = [X_{BA}Y_{BA}^{-1}]^{1/2} \qquad (3.66)$$

where $X_{AB}$ and $Y_{AB}$ are diagonal matrices whose elements are respectively the entries of the right and of the left Perron eigenvectors of $(AB)$. $X_{BA}$ and $Y_{BA}$ are defined in a similar manner, regarding $(BA)$.

Lemma 3.4 [28]. Let $A$ and $B$ be complex, with compatible dimensions. Then, for $S_1$ and $S_2 \in \mathcal{S}$,

$$\inf_{S_1,S_2}\{\bar\sigma(S_1AS_2)\,\bar\sigma(S_2^{-1}BS_1^{-1})\} \le \pi(A^+B^+) \qquad (3.67)$$
PAGE 77
where $A^+$ and $B^+$ are matrices whose elements are the magnitudes of the elements of $A$ and $B$, respectively.

Let us return to the problem of robust stability under structured perturbations characterized by $[\Delta(s)]^+ \le P_\Delta$, $\forall s$. Using equation (3.59), the following inequalities apply:

$$\rho[M_{11}\Delta] \le \pi(M_{11}^+P_\Delta) \qquad (3.69)$$

so that $\pi(M_{11}^+P_\Delta) < 1$, $\forall s$, is a sufficient condition for robust stability.
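Lemma 3.2 above, and the closed-form Perron scaling it provides, is easy to verify numerically. The sketch below (the matrix is an arbitrary example of mine) builds $S_\pi$ from the left and right Perron eigenvectors and checks that the scaled spectral norm equals the Perron eigenvalue:

```python
import numpy as np

def perron_scaling(Ap):
    """Perron scaling S = (Y X^{-1})^{1/2} for a nonnegative irreducible
    matrix Ap, built from its right (x) and left (y) Perron eigenvectors."""
    w, V = np.linalg.eig(Ap)
    k = np.argmax(w.real)                 # Perron eigenvalue is real, equals rho(Ap)
    x = np.abs(V[:, k].real)              # right Perron eigenvector (positive)
    wl, Vl = np.linalg.eig(Ap.T)
    y = np.abs(Vl[:, np.argmax(wl.real)].real)   # left Perron eigenvector
    return np.diag(np.sqrt(y / x)), w[k].real

Ap = np.array([[1.0, 2.0],
               [0.5, 1.0]])
S, perron = perron_scaling(Ap)
scaled = np.linalg.norm(S @ Ap @ np.linalg.inv(S), 2)   # sigma_max(S Ap S^{-1})
# Lemma 3.2: the scaled spectral norm attains pi(Ap) (= 2 here)
```

The scaling is defined only up to a positive scalar, which does not affect $SA S^{-1}$, so the eigenvector normalization returned by the solver is immaterial.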
PAGE 78
The application of non-similarity scaling and Lemma 3.4 results in the inequalities

$$\rho[M_{11}\Delta] \le \inf_{S_1,S_2}\{\bar\sigma(S_1M_{11}S_2)\,\bar\sigma(S_2^{-1}P_\Delta S_1^{-1})\} \le \pi(M_{11}^+P_\Delta) \qquad (3.72)$$

from which condition (3.69) can also be obtained. The Perron scaling for $(M_{11}^+P_\Delta)$ is

$$S_{1\pi} = [Y_{M^+P}(X_{M^+P})^{-1}]^{1/2}; \quad S_{2\pi} = [X_{PM^+}(Y_{PM^+})^{-1}]^{1/2} \qquad (3.73)$$

Substituting $S_{1\pi}$ and $S_{2\pi}$ for $S_1$ and $S_2$ in equation (3.63) gives

$$\rho[M_{11}(s)\Delta] \le \bar\sigma[S_{1\pi}M_{11}(s)S_{2\pi}]\,\bar\sigma[S_{2\pi}^{-1}P_\Delta S_{1\pi}^{-1}]$$

Thus, a sufficient condition for robust stability, based on explicit non-similarity Perron scaling, is:

$$\bar\sigma[S_{1\pi}M_{11}(s)S_{2\pi}]\,\bar\sigma[S_{2\pi}^{-1}P_\Delta S_{1\pi}^{-1}] < 1, \ \forall s \qquad (3.74)$$

Osborne scaling. Osborne's scaling process [43] comprises an iterative procedure to find the scaling which minimizes the Frobenius norm of an irreducible matrix $A \in \mathbb{C}^{n\times n}$, defined as

$$\|A\|_F = \left[\sum_{i,j=1}^{n}|A_{ij}|^2\right]^{1/2}$$

Let $S_o$ be the scaling obtained from Osborne's iterative process applied to the matrix $[M_{11}(s)P_\Delta]$. A stability condition analogous to (3.71) can be obtained using $S_o$, namely

$$\bar\sigma[S_oM_{11}(s)P_\Delta S_o^{-1}] < 1, \ \forall s \qquad (3.75)$$

3.5 Conclusions

This chapter summarizes robust stability conditions and techniques that will be employed in the next chapters. One important topic is the application of the Lyapunov direct
PAGE 79
method under uncertainty. This method will be explored in Chapter 4, and the sufficient condition obtained in Section 3.2.2 will be studied in detail. Also important is the notion that singular-value stability conditions are only sufficient in the presence of structured uncertainty, and that the conservatism of singular-value conditions can be reduced through scaling. These concepts will have significant roles in Chapter 5, where an alternative frequency-domain approach is proposed for the assessment of robust stability of state space systems under structured uncertainty. Although the generalized Nyquist criterion and its extension to systems under perturbation will not be applied in the next chapters, the review undertaken above is justified because this technique is a relatively recent generalization to MIMO systems of a classical tool in frequency-domain analysis of SISO systems, which can have a prominent role in computer-aided analysis and design environments.
CHAPTER 4
LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY

4.1 Introduction

The objective of this chapter is to obtain conditions for robust stability of linear state space systems under structured uncertainty, using the Lyapunov direct method. Although Lyapunov theory yields only sufficient conditions for stability, it can be applied to a wide class of dynamic systems, including nonlinear, time-varying systems. The difficulty in general associated with the application of the Lyapunov direct method is that it requires the construction of a suitable Lyapunov function. In the case of linear systems, this difficulty is not present, since an immediate choice is a quadratic function of the form $V(t,x) = x(t)^TP(t)x(t)$, where $P(t)$ is a symmetric matrix. Furthermore, in the case of time-invariant linear systems, the negative definiteness of the derivative of the function $V(x) = x(t)^TPx(t)$, which depends only on $P$, can be checked through the Lyapunov matrix equation, given by (3.5). This property extends to the analysis of linear systems whose matrix $A$ is uncertain. In this situation, however, besides the inherently sufficient nature of the stability condition, there is an additional cause for conservatism, as illustrated by the following case [42]. Let us consider the application of the Lyapunov indirect method to a nonlinear system. After linearization around an equilibrium point, the linearized system can be viewed as a perturbed linear system, where the perturbation is the linearization error, namely the neglected high order terms. Let the perturbed model be $\dot{x}(t) = A_mx(t) + B_mu(t) + f[x(t), u(t)]$,
where $A_m$ and $B_m$ describe the linear part and $f$ is a nonlinear vector function. A nominally stabilizing linear quadratic state feedback control yields the closed loop

$$\dot{x}(t) = (A_m - B_mR^{-1}B_m^TP)x(t) + f[x(t)] = A_cx(t) + f[x(t)]$$

which is stable for $f = 0$. Let $V(x) = x^TPx$ be a Lyapunov function candidate, where $P$ comes from the solution to the Riccati equation associated with the LQSF problem. Then, the derivative is $\dot{V}(x) = x^T(A_c^TP + PA_c)x + 2f^T(x)Px$. The following robust stability condition can be derived [42]:

$$\frac{\|f(x)\|_2}{\|x\|_2} < \frac{1+\alpha}{2\,\bar\sigma(D^{-1})\,\bar\sigma(P)\,\kappa(P)}, \ \forall x \in \Re^n$$

where $D = Q + PB_mR^{-1}B_m^TP$, $\kappa(\cdot)$ is the spectral condition number, and $\alpha$ is a parameter in the Riccati equation. This case exemplifies two facts about the use of Lyapunov theory in robust stability analysis. First, the problem of nominal stability analysis of a nonlinear system can be approached by robust stability analysis of the corresponding linearized system. Second, and more important for the objectives of this chapter, stability conditions obtained from the application of the direct method generally involve some function of the norm of the perturbation. Consequently, the method cannot discriminate between real and complex uncertainties having the same norm bound. If the uncertainty is known to be real, and the stability result is given in the form of a norm bound on the perturbation, a larger class of perturbations is virtually admitted, namely the class of complex perturbations with the same norm bound. Therefore, the result is not tight. The Lyapunov direct method can handle time-varying perturbations as well, in which case $\dot{V}(x)$ is required to be negative definite at each instant $t$. In the case of nominal time-varying systems, the use of the Lyapunov matrix equation is precluded. However, if
the system matrix can be decomposed into a constant part plus a time-varying part, this case also can be handled, by looking at the time-varying part as a perturbation to the time-invariant part, and requiring negative definiteness of $\dot{V}(x)$ at each instant $t$. Examples of the application of the Lyapunov direct method to systems under unstructured and under structured perturbations are available in the literature. For instance, assuming $Q = 2I$, the following robustness condition can be derived for the system $\dot{x}(t) = [A + E(t)]x(t)$, where $E(t)$ is a time-varying unstructured perturbation [57]:

$$\bar\sigma[E(t)] < \frac{1}{\bar\sigma(P)}$$

where $P$ is the solution to the Lyapunov matrix equation. The application of the method in the presence of structured perturbations can be illustrated by the case below [55, 56]. A bound on the magnitude of each perturbation element is given, namely $|E_{ij}(t)| \le \epsilon_{ij}$, $\forall t$, with $\max_{ij}\epsilon_{ij} \stackrel{def}{=} \epsilon$. Using $Q = 2I$, the following condition for robust stability can be derived:

$$\epsilon < \frac{1}{\bar\sigma[(P_mU_n)_s]}$$

where $(P_mU_n)_s$ is the symmetric part of the matrix $P_mU_n$, $P_m$ contains the magnitudes of the elements of $P$, and $U_n$ is such that $U_{n,ij} = \epsilon_{ij}/\epsilon$.
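The structured bound just quoted is direct to evaluate once $P$ is computed from the Lyapunov equation. The sketch below uses arbitrary example matrices; `structured_bound` is my name for the check, not a name from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def structured_bound(A, eps_mat):
    """Bound of [55, 56]: with Q = 2I and P solving A^T P + P A = -Q,
    x' = (A + E(t))x stays stable whenever |E_ij(t)| <= eps_ij and
    eps = max eps_ij satisfies eps < 1 / sigma_max((Pm Un)_s)."""
    n = A.shape[0]
    P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(n))   # A^T P + P A = -2I
    eps = eps_mat.max()
    Un = eps_mat / eps                  # normalized structure matrix, entries eps_ij/eps
    Pm = np.abs(P)
    M = Pm @ Un
    Ms = 0.5 * (M + M.T)                # symmetric part of Pm Un
    return eps < 1.0 / np.linalg.norm(Ms, 2), P

A = np.array([[-2.0, 1.0],
              [ 0.0, -1.0]])
eps_mat = np.array([[0.1, 0.1],
                    [0.1, 0.1]])
ok, P = structured_bound(A, eps_mat)
```

Here the element bound 0.1 passes the test, so every time-varying perturbation respecting these element bounds leaves the system asymptotically stable.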
Recall that according to Theorem 3.2, the system dynamic matrix is asymptotically stable if and only if there exists some positive definite symmetric matrix $Q$ such that the matrix Lyapunov equation $A^TP + PA = -Q$ has a unique, positive definite solution $P$. It is important to keep in mind that the theorem does not guarantee that, picking a positive definite $P$, the corresponding $Q$ is positive definite. Now, consider the following lemma:

Lemma 4.1. Given a real symmetric positive definite matrix $P$, the set of systems $\dot{x}(t) = Ax(t)$ for which $V(x) = x^TPx$ is a Lyapunov function is a convex set.

Proof. Let

$$\mathcal{M}_{PD} \stackrel{def}{=} \{M : M \text{ is symmetric, positive definite}\} \qquad (4.1)$$
$$\mathcal{A}(P) = \{A : A^TP + PA = -Q, \ P, Q \in \mathcal{M}_{PD}\} \qquad (4.2)$$

Then, for $A_1$ and $A_2 \in \mathcal{A}(P)$ and $P \in \mathcal{M}_{PD}$, one has

$$A_1^TP + PA_1 = -Q_1, \ Q_1 \in \mathcal{M}_{PD} \quad \text{and} \quad A_2^TP + PA_2 = -Q_2, \ Q_2 \in \mathcal{M}_{PD}$$

Taking $\alpha_1, \alpha_2 \in \Re_+$ such that $\alpha_1 + \alpha_2 = 1$, and defining $A_3 = \alpha_1A_1 + \alpha_2A_2$, one has:

$$A_3^TP + PA_3 = [\alpha_1A_1 + (1-\alpha_1)A_2]^TP + P[\alpha_1A_1 + (1-\alpha_1)A_2] = \alpha_1[A_1^TP + PA_1] + (1-\alpha_1)[A_2^TP + PA_2] = \alpha_1(-Q_1) + (1-\alpha_1)(-Q_2) = -Q_3, \ Q_3 \in \mathcal{M}_{PD}$$

Therefore, $A_3 \in \mathcal{A}(P)$, which shows that $\mathcal{A}(P)$ is a convex set.

Let us now turn the attention to the matrix $A_p = (A + E)$, where $A$ is Hurwitz and $E$ is some perturbation in the admissible class. Define $A_1 = A$, and $A_2 = A + \gamma E$, $\gamma \in \Re_+$, further assuming that $A_2$ is also Hurwitz and that, for a given $P$, the function $V(x) = x^TPx$
is a Lyapunov function for both $\dot{x}(t) = A_1x(t)$ and $\dot{x}(t) = A_2x(t)$. Letting $A_3$ be a convex combination of $A_1$ and $A_2$, one has:

$$A_3 = \alpha_1A_1 + (1-\alpha_1)A_2 = A_1 + \alpha_2\gamma E = A_1 + \beta E$$

where $\beta \in [0, \gamma]$. According to the preceding lemma, $V(x) = x^TPx$ is a Lyapunov function for $\dot{x}(t) = A_3x(t)$. Now, suppose that $A_4 = A + \zeta E$, $\zeta > \gamma$. Even if $A_4$ is Hurwitz, it may happen that $V(x) = x^TPx$ is not a Lyapunov function for $\dot{x}(t) = A_4x(t)$. Since the choice of $Q$ determines $P$, it also determines the size of the convex set of system equations for which $V(x) = x^TPx$ is a Lyapunov function. Therefore, the conservatism of a computed stability condition will be reduced if $Q$ is selected such that the corresponding $P$ yields the largest possible convex set $\mathcal{A}(P)$. However, notice that in the above lemma, a fixed perturbation $E$ is taken into account, while in a robust stability problem one deals with an admissible class of perturbations. The question of selecting $Q$ such that the corresponding $P$ generates a Lyapunov function for the largest possible set of perturbed systems, for any perturbation in the admissible class, does not have a straightforward analytic solution; possibly it has no analytic solution at all. It was seen in Chapter 3 that choosing the Lyapunov function candidate $V_p(x) = x^TP_0x$ for the perturbed system $\dot{x}(t) = (A + E)x(t)$ leads to the derivative equation (3.7), namely

$$\dot{V}_p(x) = -x^T[Q_0 - (E^TP_0 + P_0E)]x \stackrel{def}{=} -x^TQ_px$$

where $Q_0$ and $P_0$ are respectively the choice of Lyapunov matrix for the nominal system and the solution of the nominal Lyapunov equation. A sufficient condition for stability of the perturbed system is the positive definiteness of $Q_p$. Defining, for simplicity,

$$F \stackrel{def}{=} E^TP_0 + P_0E \qquad (4.3)$$
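The sufficient test that follows, positive definiteness of $Q_p = Q_0 - F$, is immediate to check numerically. The sketch below uses arbitrary example data; the final line also evaluates the cruder eigenvalue/norm test discussed next in the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [ 0.0, -1.0]])
E = np.array([[0.0, 0.3],
              [0.3, 0.0]])      # a fixed perturbation, chosen arbitrarily

Q0 = np.eye(2)
P0 = solve_continuous_lyapunov(A.T, -Q0)   # nominal equation A^T P0 + P0 A = -Q0

F  = E.T @ P0 + P0 @ E                     # eq. (4.3)
Qp = Q0 - F

exact = np.min(np.linalg.eigvalsh(Qp)) > 0                          # Qp positive definite
crude = np.min(np.linalg.eigvalsh(Q0)) - np.linalg.norm(F, 2) > 0   # cruder sufficient test
# `crude` implies `exact`; the converse may fail, which is the conservatism
```

For this perturbation both tests pass; enlarging $E$ eventually fails the crude test first, illustrating why the explicit norm condition is the conservative one.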
robust stability requires positive definiteness of $Q_p = (Q_0 - F)$. Since both $Q_0$ and $F$ are real symmetric matrices, one has:

$$(Q_0 - F) \text{ positive definite} \iff \min_i\{\lambda_i(Q_0 - F)\} > 0 \iff \underline\sigma(Q_0 - F) > 0 \qquad (4.4)$$
$$\Leftarrow \ \underline\sigma(Q_0) - \bar\sigma(F) > 0 \qquad (4.5)$$

in view of the inequality

$$\underline\sigma(Q_0 - F) \ge \underline\sigma(Q_0) - \bar\sigma(F) \qquad (4.6)$$

Since the analysis objective is to find explicit conditions on $E$, equation (4.4) is not useful, and the only alternative is to apply (4.5). Obviously, this condition is not tight, since, as shown by (4.6), it may be possible that $\underline\sigma(Q_0 - F) > 0$ even if $\underline\sigma(Q_0) - \bar\sigma(F) \le 0$.
between the major input principal direction of $F$ and the minor input principal direction of $Q$.

The proof of this theorem is derived from a similar proof [30], and is given after the following lemma, which establishes necessary and sufficient conditions for alignment between the relevant principal directions of $Q$, $F$ and $(Q - F)$.

Lemma 4.2. Given $Q, F \in \Re^{m\times m}$, then $\underline\sigma(Q - F) = \underline\sigma(Q) - \bar\sigma(F)$ if and only if

$$\underline{y}_{Q-F} = e^{j\theta}\bar{y}_F \qquad (4.9)$$
$$\underline{y}_{Q-F} = e^{j\psi}\underline{y}_Q \qquad (4.10)$$
$$\underline{x}_{Q-F} = e^{j\theta}\bar{x}_F \qquad (4.11)$$
$$\underline{x}_{Q-F} = e^{j\psi}\underline{x}_Q \qquad (4.12)$$

where $(\underline{x}_M, \underline{y}_M)$ and $(\bar{x}_M, \bar{y}_M)$ denote the output and input principal directions of $M$ associated with $\underline\sigma(M)$ and $\bar\sigma(M)$, respectively.

Proof. Sufficiency: assume conditions (4.9) to (4.12) are true, and consider the input $\underline{y}_{Q-F}$ applied to $[Q - F]$. Then,

$$[Q - F]\underline{y}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F} = e^{j\psi}Q\underline{y}_Q - e^{j\theta}F\bar{y}_F \quad \text{by (4.10), (4.9)}$$

Applying the relationships $\bar\sigma(M)\bar{x}_M = M\bar{y}_M$ and $\underline\sigma(M)\underline{x}_M = M\underline{y}_M$, $\forall M$, to the last equation, it becomes:

$$[Q - F]\underline{y}_{Q-F} = e^{j\psi}\underline\sigma(Q)\underline{x}_Q - e^{j\theta}\bar\sigma(F)\bar{x}_F = \underline\sigma(Q)\underline{x}_{Q-F} - \bar\sigma(F)\underline{x}_{Q-F} \quad \text{by (4.12), (4.11)}$$
$$= [\underline\sigma(Q) - \bar\sigma(F)]\,\underline{x}_{Q-F}$$
The last equation implies that $\underline\sigma(Q - F) = \underline\sigma(Q) - \bar\sigma(F)$.

Necessity: assume $\underline\sigma(Q - F) = \underline\sigma(Q) - \bar\sigma(F)$. Now, $\forall z \in \Re^n$, $(Q - F)z = Qz - Fz$. For $z = \underline{y}_{Q-F}$, this expression becomes

$$(Q - F)\underline{y}_{Q-F} = \underline\sigma(Q - F)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F}$$

Given the assumption above, $\underline\sigma(Q)\underline{x}_{Q-F} - \bar\sigma(F)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} - F\underline{y}_{Q-F}$, which is equivalent to

$$\underline\sigma(Q)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} \qquad (4.13)$$
$$\bar\sigma(F)\underline{x}_{Q-F} = F\underline{y}_{Q-F} \qquad (4.14)$$

Equation (4.13) means that, since $Q$ applied to $\underline{y}_{Q-F}$ produces a magnification $\underline\sigma(Q)$, $\underline{y}_{Q-F}$ and $\underline{y}_Q$ must be aligned, that is, $\underline{y}_{Q-F} = e^{j\psi}\underline{y}_Q$, for arbitrary $\psi$, which is (4.10). Now,

$$\underline\sigma(Q)\underline{x}_{Q-F} = Q\underline{y}_{Q-F} = Qe^{j\psi}\underline{y}_Q = e^{j\psi}Q\underline{y}_Q = e^{j\psi}\underline\sigma(Q)\underline{x}_Q$$

Therefore, $\underline{x}_{Q-F} = e^{j\psi}\underline{x}_Q$, which is (4.12). Similarly, equation (4.14) shows that, since $F$ applied to $\underline{y}_{Q-F}$ produces a magnification $\bar\sigma(F)$, $\underline{y}_{Q-F}$ and $\bar{y}_F$ must be aligned, that is, $\underline{y}_{Q-F} = e^{j\theta}\bar{y}_F$, for arbitrary $\theta$, which is (4.9). Since

$$\bar\sigma(F)\underline{x}_{Q-F} = F\underline{y}_{Q-F} = Fe^{j\theta}\bar{y}_F = e^{j\theta}F\bar{y}_F = e^{j\theta}\bar\sigma(F)\bar{x}_F$$

$\underline{x}_{Q-F} = e^{j\theta}\bar{x}_F$, which is (4.11).

Proof of Theorem 4.1. Necessity: $\underline\sigma(Q - F) = \underline\sigma(Q) - \bar\sigma(F) \Rightarrow$ (4.7) and (4.8). Rewriting (4.9) as $\bar{y}_F = e^{-j\theta}\underline{y}_{Q-F}$ and using (4.10), one gets $\bar{y}_F = e^{-j\theta}e^{j\psi}\underline{y}_Q$, and letting $\beta = \psi - \theta$, one obtains $\bar{y}_F = e^{j\beta}\underline{y}_Q$, which is (4.8). Similarly, rewriting (4.11) as $\bar{x}_F = e^{-j\theta}\underline{x}_{Q-F}$ and using (4.12), one gets $\bar{x}_F = e^{-j\theta}e^{j\psi}\underline{x}_Q$, and using the definition of $\beta$, one obtains $\bar{x}_F = e^{j\beta}\underline{x}_Q$, which is (4.7). Therefore, necessity is proved.
Sufficiency: (4.7) and (4.8) $\Rightarrow \underline\sigma(Q - F) = \underline\sigma(Q) - \bar\sigma(F)$. Assume (4.7) and (4.8) and consider the input $\underline{y}_Q$ applied to $(Q - F)$. Then,

$$[Q - F]\underline{y}_Q = Q\underline{y}_Q - F\underline{y}_Q = \underline\sigma(Q)\underline{x}_Q - e^{-j\beta}F\bar{y}_F, \ \text{by (4.8)}$$
$$= \underline\sigma(Q)\underline{x}_Q - e^{-j\beta}\bar\sigma(F)\bar{x}_F = \underline\sigma(Q)\underline{x}_Q - \bar\sigma(F)\underline{x}_Q, \ \text{by (4.7)} = [\underline\sigma(Q) - \bar\sigma(F)]\,\underline{x}_Q$$
PAGE 89
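The alignment phenomenon behind Lemma 4.2 can be illustrated numerically. Diagonal matrices make the principal directions explicit: in general only the inequality $\underline{\sigma}(Q-F) \ge \underline{\sigma}(Q) - \bar{\sigma}(F)$ holds, and equality is attained when the major directions of F align with the minor directions of Q. The sketch below uses illustrative matrices that are assumptions, not data from the text:

```python
import numpy as np

def smin(M):
    # smallest singular value
    return np.linalg.svd(M, compute_uv=False)[-1]

def smax(M):
    # largest singular value
    return np.linalg.svd(M, compute_uv=False)[0]

Q = np.diag([5.0, 1.0])            # minor principal direction of Q: e2
F_aligned = np.diag([0.5, 0.9])    # major direction of F is e2, aligned with minor of Q
F_misaligned = np.diag([0.9, 0.5]) # major direction of F is e1, misaligned

# aligned case: equality sigma_min(Q-F) = sigma_min(Q) - sigma_max(F)
print(smin(Q - F_aligned), smin(Q) - smax(F_aligned))
# misaligned case: strict inequality sigma_min(Q-F) > sigma_min(Q) - sigma_max(F)
print(smin(Q - F_misaligned), smin(Q) - smax(F_misaligned))
```

In the aligned case both quantities equal 0.1; in the misaligned case the left side (0.5) strictly exceeds the right side (0.1), illustrating why misalignment makes the bound conservative.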
This section has shown that the choice of the nominal Lyapunov matrix plays an important role in determining the conservatism of robust stability conditions. In the next section, the problem of the choice of Q is addressed in the context of structured perturbations.

4.3 Stability Under Structured Uncertainty

4.3.1 Uncertainty Description

In this section, the uncertainty class $E \in \mathcal{E}_{sd}$ defined in (2.31) is adopted. Uncertainty in this class can be represented as $E = \sum_{k=1}^{m} p_k E_k$, where $E_k$, $k = 1, \ldots, m$, is a constant matrix which accounts for the structure of the perturbation due to the parameter $p_k$. Without loss of generality, a symmetric range about the origin is assumed for each parameter, namely $p_k \in (-a_k, a_k)$, $\forall k$. This description is well suited to the representation of real-world system uncertainty, since it accounts for the possibility that changes in one physical parameter may affect several entries of the matrix A. However, it requires that the perturbation to each element of A be linear in the parameters, and thus may require parameter redefinitions. This description has already been used in robust stability analysis of state space systems [4, 51, 61].

4.3.2 Sufficient Condition for Robust Stability

Let $p = [p_1, p_2, \ldots, p_m]^T$ be a vector containing the system parameters, and let us define
$$\mathcal{M}_{-}^{n \times n} \stackrel{\mathrm{def}}{=} \{\, M \in \mathcal{K}^{n \times n} : \mathrm{Re}[\lambda_i(M)] < 0,\ \forall i \,\} \quad (4.15)$$
where either $\mathcal{K} = \mathbb{R}$ or $\mathcal{K} = \mathbb{C}$, according to the context, and
$$S_d \stackrel{\mathrm{def}}{=} \Big\{\, p \in \mathbb{R}^m : \Big(A + \sum_{k=1}^{m} p_k E_k\Big) \in \mathcal{M}_{-}^{n \times n} \,\Big\} \quad (4.16)$$
Then, $S_d$ represents the stability domain in the space of system parameters.
PAGE 90
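The defining condition of the stability domain (4.16) can be checked directly for any candidate parameter vector by an eigenvalue test on the perturbed matrix. A minimal sketch in Python (the dissertation's computations used MATLAB; the system data below are illustrative assumptions):

```python
import numpy as np

def in_stability_domain(A, E_list, p):
    # p lies in S_d iff A + sum_k p_k E_k has all eigenvalues
    # in the open left half plane, cf. (4.15)-(4.16).
    Ap = A + sum(pk * Ek for pk, Ek in zip(p, E_list))
    return np.max(np.linalg.eigvals(Ap).real) < 0

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])   # nominal system, eigenvalues -1 and -2
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])     # structure of parameter p1
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])     # structure of parameter p2

print(in_stability_domain(A, [E1, E2], [0.5, 0.5]))   # True: small perturbation
print(in_stability_domain(A, [E1, E2], [4.0, 0.0]))   # False: destabilizing
```

The goal of the chapter is precisely to characterize a guaranteed subset of $S_d$ without such point-by-point eigenvalue testing.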
Given the nominal system model and the parametric uncertainty description, the objective of robust stability analysis is to determine the stability domain in the space of parameters, which is usually specified by an admissible upper bound on some norm of p.

The Lyapunov Direct Method has been used in robust stability analysis by several authors [4, 16, 42, 51, 55, 56, 58, 59, 61]. In particular, the uncertainty description above has also been adopted [4, 51, 61]. Introducing that uncertainty description in (3.7), the equation of the derivative of the Lyapunov function becomes:
$$\dot{V}_p(x) = -x^T Q_p x = -x^T \Big[ Q_o - \sum_{k=1}^{m} p_k \big( E_k^T P_o + P_o E_k \big) \Big] x \quad (4.17)$$
where $Q_o$ and $P_o$ are respectively the Lyapunov matrix for the nominal system and the corresponding solution of the Lyapunov equation. Therefore, positive definiteness of the matrix $[\,Q_o - \sum_{k=1}^{m} p_k (E_k^T P_o + P_o E_k)\,]$ is a sufficient condition for asymptotic stability of $(A + E)$.

In order to obtain the stability domain, an explicit condition on some norm of p must be derived. A derivation of stability domains is presented in Section 4.3.4. Before that, some available results are reviewed.

4.3.3 Available Results for Admissible ||p||

For simplicity, the subscript will be dropped in the notation of $Q_o$ and $P_o$; Q and P will therefore denote nominal matrices. Let us define:
$$F_k \stackrel{\mathrm{def}}{=} \big( E_k^T P + P E_k \big), \quad k = 1, \ldots, m \quad (4.18)$$
$$\tilde{F}_k \stackrel{\mathrm{def}}{=} \tfrac{1}{2}\big( E_k^T P + P E_k \big), \quad k = 1, \ldots, m \quad (4.19)$$
$$P_e \stackrel{\mathrm{def}}{=} [\,\tilde{F}_1 \ \ldots \ \tilde{F}_m\,] \quad (4.20)$$
$$F_{Qk} \stackrel{\mathrm{def}}{=} Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \quad (4.21)$$
PAGE 91
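The sufficient condition under (4.17) can be evaluated directly: solve the nominal Lyapunov equation for $P_o$ and test positive definiteness of the perturbed matrix. A minimal numeric sketch (the system matrices are illustrative assumptions, not data from the text; the Lyapunov equation is solved by Kronecker vectorization to stay numpy-only):

```python
import numpy as np

def lyap(A, Q):
    # Solve A^T P + P A = -Q for P via Kronecker vectorization:
    # (I (x) A^T + A^T (x) I) vec(P) = -vec(Q).
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

def vdot_negative_definite(A, E_list, p, Q0):
    # Sufficient condition from (4.17): positive definiteness of
    # Q0 - sum_k p_k (E_k^T P0 + P0 E_k).
    P0 = lyap(A, Q0)
    Qp = Q0 - sum(pk * (Ek.T @ P0 + P0 @ Ek) for pk, Ek in zip(p, E_list))
    return np.min(np.linalg.eigvalsh(Qp)) > 0

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])   # stable nominal: eigenvalues -1, -2
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Q0 = 2 * np.eye(2)

print(vdot_negative_definite(A, [E1, E2], [0.3, 0.2], Q0))   # True
print(vdot_negative_definite(A, [E1, E2], [2.0, 2.0], Q0))   # False
```

Note that failure of the test does not prove instability; the condition is only sufficient.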
The following norm bound [4] gives a condition for robust stability:
$$\|p\|_2 = \Big( \sum_{k=1}^{m} p_k^2 \Big)^{\frac{1}{2}} < \frac{\underline{\sigma}(Q)}{\big[ \sum_{k=1}^{m} [\bar{\sigma}(F_k)]^2 \big]^{\frac{1}{2}}}, \quad Q \text{ a free parameter} \quad (4.22)$$
Notice that both the numerator and the denominator depend on Q, which is treated as a free parameter.

Results for a fixed Q have also been reported. Using $Q = 2 I_n$, the following conditions can be derived [61]:
$$\|p\|_2 = \Big( \sum_{k=1}^{m} p_k^2 \Big)^{\frac{1}{2}} < \frac{1}{\bar{\sigma}(P_e)} \quad (4.23)$$
$$\sum_{k=1}^{m} |p_k|\,\bar{\sigma}(\tilde{F}_k) < 1 \quad (4.24)$$
$$|p_j| < \frac{1}{\bar{\sigma}\big( \sum_{k=1}^{m} |\tilde{F}_k| \big)}, \quad j = 1, \ldots, m \quad (4.25)$$
The choice of $Q = 2 I_n$ has been justified [59] on the basis that it maximizes the ratio $\frac{\underline{\sigma}(Q)}{\bar{\sigma}(P)}$. Fixing Q yields ready-to-use analytic expressions for bounds on p; however, in view of the facts pointed out in the last section, it is a potentially conservative option. Actually, it has been acknowledged [61] that a state transformation [58, 59] can be applied to the system description, so that improved results are obtained with $Q = 2 I_n$ for the transformed system. Yet there is no systematic method for choosing an adequate state transformation.

The following stability conditions have also been reported [51]:
$$\|p\|_2 = \Big( \sum_{k=1}^{m} p_k^2 \Big)^{\frac{1}{2}} < \frac{[\underline{\sigma}(Q)]^{\frac{1}{2}}}{\bar{\sigma}(F_Q)}, \quad F_Q \stackrel{\mathrm{def}}{=} [\,F_1 Q^{-\frac{1}{2}} \ \ldots \ F_m Q^{-\frac{1}{2}}\,]^T \quad (4.26)$$
$$\sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Qk}) < 1 \quad (4.27)$$
$$\|p\|_\infty = \max_k |p_k| < \frac{1}{\bar{\sigma}\big( \sum_{k=1}^{m} |F_{Qk}| \big)} \quad (4.28)$$
It has been shown through examples [51] that less conservative stability conditions can be obtained from these expressions with a choice of Q other than $Q = 2 I_n$. Furthermore,
it has been argued [51] that regarding Q as a free parameter inherently incorporates the degree of freedom brought about by a state transformation [58, 59]. However, no analytical method has been proposed for the choice of Q. Note that, since Q is a free parameter in (4.22) and in (4.26) to (4.28), and no analytical method is available for the selection of Q, some sort of search over the space of n × n symmetric, positive-definite matrices is implicitly required.

In the following, a derivation of stability conditions on norms of p, which was independently developed, is explicitly presented, and the corresponding stability domains in the parameter space are defined.

4.3.4 Derivation of Admissible ||p||

Using the definition of $F_k$ in (4.18), equation (4.17) can be rewritten as:
$$\dot{V}(x) = -x^T Q^{\frac{1}{2}} \Big[ I_n - \sum_{k=1}^{m} p_k\, Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \Big] Q^{\frac{1}{2}} x$$
From the inner-product properties $\langle y, y \rangle = \|y\|^2$ and $\langle y, My \rangle \le \bar{\sigma}(M)\, \langle y, y \rangle$, and defining $y(t) = Q^{\frac{1}{2}} x(t)$, the inequality below follows from the last equation:
$$\dot{V}(x) \le -\Big[ 1 - \bar{\sigma}\Big( \sum_{k=1}^{m} p_k\, Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \Big) \Big] \|y\|^2 \quad (4.29)$$
Since the norm term on the right side is strictly positive for nonzero y, a sufficient condition for robust stability is
$$\bar{\sigma}\Big( \sum_{k=1}^{m} p_k\, Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \Big) < 1 \quad (4.30)$$

New result for admissible ||p||_2. Let us define
$$M_p \stackrel{\mathrm{def}}{=} [\,p_1 I_n \mid \cdots \mid p_m I_n\,] \quad (4.31)$$
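The block-row matrix $M_p$ just defined has a convenient closed-form largest singular value: since $M_p M_p^T = \big(\sum_k p_k^2\big) I_n$, it follows that $\bar{\sigma}(M_p) = \|p\|_2$, which is used repeatedly below. A quick numerical confirmation (the parameter values are arbitrary):

```python
import numpy as np

p = np.array([0.7, -1.2, 0.4])
n = 3
# M_p = [p1*I | p2*I | p3*I], cf. (4.31)
Mp = np.hstack([pk * np.eye(n) for pk in p])
sigma_max = np.linalg.svd(Mp, compute_uv=False)[0]
print(sigma_max, np.linalg.norm(p))   # the two values coincide
```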
$$M_Q \stackrel{\mathrm{def}}{=} [\,F_{Q1}, \ldots, F_{Qm}\,]^T \quad (4.32)$$
Then, substituting in (4.30), one obtains
$$\bar{\sigma}\Big( \sum_{k=1}^{m} p_k\, Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \Big) = \bar{\sigma}(M_p M_Q) \le \bar{\sigma}(M_p)\, \bar{\sigma}(M_Q) \quad (4.33)$$
The maximum singular value of $M_p$ is given by:
$$\bar{\sigma}(M_p) = \big[ \max_i \{ \lambda_i(M_p^T M_p) \} \big]^{\frac{1}{2}} = \Big( \sum_{k=1}^{m} p_k^2 \Big)^{\frac{1}{2}} = \|p\|_2 \quad (4.34)$$
Therefore, from (4.30) and (4.33), a sufficient condition for robust stability is $\|p\|_2\, \bar{\sigma}(M_Q) < 1$; equivalently,
$$\|p\|_2 < \frac{1}{\bar{\sigma}(M_Q)} \stackrel{\mathrm{def}}{=} r_{s2}(Q) \quad (4.35)$$
The corresponding stability domain in the parameter space is
$$S_{d2}(Q) \stackrel{\mathrm{def}}{=} \{\, p : \|p\|_2 < r_{s2}(Q) \,\} \quad (4.36)$$

New result for admissible ||p||_∞. Since $p_k^2 \le \|p\|_\infty^2$ for every k,
$$\|p\|_2 \le (m)^{\frac{1}{2}}\, \|p\|_\infty \quad (4.39)$$
where m is the number of parameters of the system. Using (4.39), one obtains from equation (4.33) that $\|p\|_\infty\, (m)^{\frac{1}{2}}\, \bar{\sigma}(M_Q) < 1$ is a sufficient condition for robust stability; equivalently,
$$\|p\|_\infty = \max_k |p_k| < \frac{1}{(m)^{\frac{1}{2}}\, \bar{\sigma}(M_Q)} \stackrel{\mathrm{def}}{=} l_{s\infty}(Q) \quad (4.40)$$
The corresponding stability domain is
$$S_{d\infty}(Q) \stackrel{\mathrm{def}}{=} \{\, p : \|p\|_\infty < l_{s\infty}(Q) \,\} \quad (4.41)$$

New result for admissible weighted ||p||_1. From (4.30), since $\bar{\sigma}\big( \sum_{k=1}^{m} p_k F_{Qk} \big) \le \sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Qk})$, define
$$\|p\|_{1w} \stackrel{\mathrm{def}}{=} \sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Qk}) \quad (4.42)$$
so that a sufficient condition for robust stability is
$$\|p\|_{1w} < 1 \quad (4.43)$$
This condition defines a weighted 1-norm stability domain
in $\mathbb{R}^m$, defined by
$$S_{d1w}(Q) \stackrel{\mathrm{def}}{=} \{\, p : \|p\|_{1w} < 1 \,\} \quad (4.44)$$
The largest possible value of a semi-axis is $|p_k| < \frac{1}{\bar{\sigma}(F_{Qk})}$, $\forall k$. Notice that the weights are parametrized by the Lyapunov matrix Q.

Admissible ||p||_∞. From (4.30), one obtains $\bar{\sigma}\big( \sum_{k=1}^{m} p_k F_{Qk} \big) \le \bar{\sigma}\big( \sum_{k=1}^{m} |p_k|\,|F_{Qk}| \big)$. Now, letting $p_* \stackrel{\mathrm{def}}{=} p_j$, $|p_j| = \max_k |p_k|$, and substituting $p_*$ for $p_k$, $\forall k$, in the last inequality, one obtains the sufficient condition $|p_*|\,\bar{\sigma}\big( \sum_{k=1}^{m} |F_{Qk}| \big) < 1$ or, equivalently,
$$\|p\|_\infty < \frac{1}{\bar{\sigma}\big( \sum_{k=1}^{m} |F_{Qk}| \big)} \stackrel{\mathrm{def}}{=} l_\infty \quad (4.45)$$
which is identical to (4.28). The corresponding stability domain is
$$S_{d\infty}(Q) \stackrel{\mathrm{def}}{=} \{\, p : \|p\|_\infty < l_\infty \,\} \quad (4.46)$$

Comparison of new results to previous results. The new result of equation (4.35) is analogous to the earlier results of equations (4.22) and (4.26). Now, consider the following possible derivation of equation (4.26). The matrix in equation (4.30) can be written as $\sum_{k=1}^{m} p_k Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} = Q^{-\frac{1}{2}} M_p F_Q$, where $M_p$ is given by (4.31) and $F_Q \stackrel{\mathrm{def}}{=} [\,F_1 Q^{-\frac{1}{2}} \ \ldots \ F_m Q^{-\frac{1}{2}}\,]^T$. Therefore, one obtains:
$$\bar{\sigma}\Big( \sum_{k=1}^{m} p_k Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} \Big) \le \bar{\sigma}(M_p)\, \bar{\sigma}(Q^{-\frac{1}{2}})\, \bar{\sigma}(F_Q) \quad (4.47)$$
from which equation (4.26) follows. However, from (4.32) and the definition of $F_Q$ above, it follows that $M_Q = \mathrm{diag}[Q^{-\frac{1}{2}}]\, F_Q$. Therefore, $\bar{\sigma}(M_Q) \le \bar{\sigma}(Q^{-\frac{1}{2}})\, \bar{\sigma}(F_Q)$. Using (4.33) and (4.47), it follows that
$$\bar{\sigma}(M_p)\, \bar{\sigma}(M_Q) \le \bar{\sigma}(M_p)\, \bar{\sigma}(Q^{-\frac{1}{2}})\, \bar{\sigma}(F_Q) \quad (4.48)$$
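The 2-norm bound $r_{s2}(Q) = 1/\bar{\sigma}(M_Q)$ and the comparison with the derivation behind (4.26) are straightforward to evaluate numerically. The sketch below (system data and choice of Q are illustrative assumptions) computes both bounds, confirms that the $M_Q$-based bound is never worse, and spot-checks that parameter vectors inside the bound keep the perturbed system stable:

```python
import numpy as np

def lyap(A, Q):
    # Solve A^T P + P A = -Q by Kronecker vectorization.
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)

def inv_sqrt(Q):
    # Q^{-1/2} for symmetric positive-definite Q via eigendecomposition.
    lam, V = np.linalg.eigh(Q)
    return V @ np.diag(lam ** -0.5) @ V.T

def bounds(A, E_list, Q):
    P = lyap(A, Q)
    Qs = inv_sqrt(Q)
    smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]
    F = [Ek.T @ P + P @ Ek for Ek in E_list]      # F_k, eq. (4.18)
    MQ = np.vstack([Qs @ Fk @ Qs for Fk in F])    # M_Q, eq. (4.32)
    FQ = np.vstack([Fk @ Qs for Fk in F])         # F_Q, used in (4.47)
    r_s2 = 1.0 / smax(MQ)                         # new bound, eq. (4.35)
    r_old = 1.0 / (smax(Qs) * smax(FQ))           # bound underlying (4.26)
    return r_s2, r_old

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Q  = np.array([[2.0, 0.5], [0.5, 1.5]])           # some admissible Q

r_new, r_old = bounds(A, [E1, E2], Q)
print(r_new >= r_old)                             # True by (4.48)

# every p with ||p||_2 < r_new keeps A + p1*E1 + p2*E2 stable
for t in np.linspace(0.0, 2.0 * np.pi, 12):
    p = 0.95 * r_new * np.array([np.cos(t), np.sin(t)])
    Ap = A + p[0] * E1 + p[1] * E2
    assert np.max(np.linalg.eigvals(Ap).real) < 0
```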
Consequently, condition (4.30) is satisfied with less conservatism by $\bar{\sigma}(M_p)\,\bar{\sigma}(M_Q) < 1$, as in the new result (4.35), than by $\bar{\sigma}(M_p)\,\bar{\sigma}(Q^{-\frac{1}{2}})\,\bar{\sigma}(F_Q) < 1$, which is the case in the derivation of (4.26) given above. Similar reasoning applies to the derivation of (4.22).

The new result for the admissible 2-norm of p is superior to the previously available results, in the sense that, if an arbitrary Lyapunov matrix Q is used, equation (4.35) will give a better 2-norm bound on p than either (4.22) or (4.26). Therefore, the new result is 'nonconservative' relative to the others. However, the conservatism of all the results depends on the adequate choice of the Lyapunov matrix Q.

On the other hand, the derivation of the new result for the admissible ∞-norm of p, given in equation (4.40), requires that the inequality (4.33) be used, while the derivation of the result (4.45) does not. Therefore, given a Lyapunov matrix Q, the new result is expected to be more conservative than the previously available result. However, while the latter is given in terms of $\sum_k |F_{Qk}|$, the new ∞-norm result is given in terms of the same matrix function $M_Q$ that appears in the new 2-norm result. Furthermore, it will be shown in the next section that the derivatives of cost functionals with respect to the elements of Q are easier to obtain for a functional based on the new ∞-norm result than for a functional based on the previous result.

4.3.5 Admissible Weighted Stability Domains

In the derivation of the norm bounds (4.35), (4.40) and (4.43), it was implicitly assumed that no 'a priori' information was available about the relative ranges of the individual parameters. This is equivalent to assuming that the largest value that can be taken is the same for all the parameters, that is, $|p_k| \le a$, $\forall k$, $a \stackrel{\mathrm{def}}{=} \max_k a_k$. Consequently, the stability
domains defined in the parameter space by those equations are, respectively, a hypersphere, a hypercube and a hyperrhombus. If information is available on the actual relative ranges of the parameters, the conservatism of the stability domains $S_{d2}$ and $S_{d\infty}$ can be reduced by shaping them such that their relevant dimensions become proportional to the ranges of the parameters. The adequate shape can be obtained by weighting the parameter ranges [51].

Let us rewrite the uncertainty description as $E = \sum_{k=1}^{m} p_k E_k$, where $s_k$, $k = 1, \ldots, m$, are adequately chosen scalars, and define
$$p_k' \stackrel{\mathrm{def}}{=} \frac{p_k}{s_k}, \quad E_k' \stackrel{\mathrm{def}}{=} s_k E_k \quad (4.49)$$
so that
$$E = \sum_{k=1}^{m} p_k' E_k' \quad (4.50)$$
Considering the weighted uncertainty description above, and proceeding as in Section 4.3.4, admissible norms for the $p_k'$, $\forall k$, are derived. The corresponding stability domains, in the weighted parameter space, are given by (4.35), (4.40) and (4.43). The stability domains in the original parameter space are then obtained using (4.49).

2-norm weighted stability domain. Following the same steps of the derivation of equation (4.35), one obtains
$$\Big( \sum_{k=1}^{m} p_k'^2 \Big)^{\frac{1}{2}} = \|p'\|_2 < \frac{1}{\bar{\sigma}(M_Q')} \stackrel{\mathrm{def}}{=} r_{s2}'(Q) \quad (4.51)$$
where $M_Q'$ is obtained by substituting $E_k'$ for $E_k$ in the definition of $M_Q$. The stability domain in the weighted parameter space is given by
$$S_{d2}'(Q) = \{\, p' : \|p'\|_2 < r_{s2}'(Q) \,\} \quad (4.52)$$
To obtain the stability domain in the original parameter space, consider
$$\Big[ \frac{p_1}{s_1} \Big]^2 + \Big[ \frac{p_2}{s_2} \Big]^2 + \cdots + \Big[ \frac{p_m}{s_m} \Big]^2 < [r_{s2}'(Q)]^2 \quad (4.53)$$
This inequality defines a hyperellipsoid with semi-axes $a_k$ given by
$$a_k = s_k\, r_{s2}'(Q), \quad \forall k \quad (4.54)$$

∞-norm weighted stability domain. Proceeding as in the derivation of (4.40), one obtains
$$\|p'\|_\infty < \frac{1}{(m)^{\frac{1}{2}}\, \bar{\sigma}(M_Q')} \stackrel{\mathrm{def}}{=} l_{s\infty}'(Q) \quad (4.55)$$
where $M_Q'$ is as defined above. The stability domain in the weighted parameter space is
$$S_{d\infty}'(Q) \stackrel{\mathrm{def}}{=} \{\, p' : \|p'\|_\infty < l_{s\infty}'(Q) \,\} \quad (4.56)$$
Since
$$\max_k \Big| \frac{p_k}{s_k} \Big| < l_{s\infty}'(Q) \ \Rightarrow\ |p_k| < s_k\, l_{s\infty}'(Q), \ \forall k \quad (4.57)$$
the stability domain in the original parameter space is a hyperrectangle, with semi-sides $l_k$ given by
$$l_k = s_k\, l_{s\infty}'(Q), \quad \forall k \quad (4.58)$$

The choice of weights. The norm bounds for weighted parameters define either regions with equal axes or equal sides, depending on the norm used. It is convenient to obtain stability regions whose relevant dimensions are proportional to the corresponding actual parameter ranges. Let us assume that $p_1$ is the original parameter with the smallest range. Then, one possible choice of the weights is
$$s_k = \frac{a_k}{a_1}, \quad k = 1, \ldots, m \quad (4.59)$$
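The weighting procedure above can be sketched numerically: scaling $E_k' = s_k E_k$ with $s_k$ proportional to the assumed parameter ranges turns the spherical 2-norm bound into the ellipsoidal domain of (4.53). The system data and ranges below are illustrative assumptions:

```python
import numpy as np

def lyap(A, Q):
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)

def r_s2(A, E_list, Q):
    # r_s2(Q) = 1 / sigma_max(M_Q), cf. (4.35) and (4.51)
    lam, V = np.linalg.eigh(Q)
    Qs = V @ np.diag(lam ** -0.5) @ V.T
    P = lyap(A, Q)
    MQ = np.vstack([Qs @ (Ek.T @ P + P @ Ek) @ Qs for Ek in E_list])
    return 1.0 / np.linalg.svd(MQ, compute_uv=False)[0]

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Q  = 2.0 * np.eye(2)

a = np.array([0.25, 1.0])          # assumed parameter ranges (p1 smallest)
s = a / a.min()                    # weights proportional to the ranges, cf. (4.59)
E_w = [sk * Ek for sk, Ek in zip(s, [E1, E2])]

r_w = r_s2(A, E_w, Q)              # weighted bound r'_s2(Q), eq. (4.51)
semi_axes = s * r_w                # ellipsoid semi-axes in original space, eq. (4.54)
print(semi_axes)

# points just inside the ellipsoid keep the perturbed system stable
for t in np.linspace(0.0, 2.0 * np.pi, 16):
    p = 0.95 * semi_axes * np.array([np.cos(t), np.sin(t)])
    Ap = A + p[0] * E1 + p[1] * E2
    assert np.max(np.linalg.eigvals(Ap).real) < 0
```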
4.4 Maximization of Stability Domains

4.4.1 The 'Optimal' Choice of Q

Let us recall the expressions obtained for stability domains in the parameter space:
$$(4.35), (4.36): \quad S_{d2}(Q) = \{\, p : \|p\|_2 < r_{s2}(Q) \,\}, \quad r_{s2}(Q) = \frac{1}{\bar{\sigma}(M_Q)}$$
$$(4.40), (4.41): \quad S_{d\infty}(Q) = \{\, p : \|p\|_\infty < l_{s\infty}(Q) \,\}, \quad l_{s\infty}(Q) = \frac{1}{(m)^{\frac{1}{2}}\, \bar{\sigma}(M_Q)}$$
$$(4.42), (4.44): \quad S_{d1w}(Q) = \{\, p : \|p\|_{1w} < 1 \,\}, \quad \|p\|_{1w} = \sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Qk})$$
In all cases, the size of the stability domain is governed by the functional
$$N(Q) \stackrel{\mathrm{def}}{=} \frac{1}{\bar{\sigma}(M_Q)} \quad (4.60)$$
where $M_Q$ depends on Q both through the solution P of the nominal Lyapunov equation and through $Q^{-\frac{1}{2}}$.
The above equations show that the functional N(Q) is highly nonlinear, and complex enough to preclude a simple analytical solution for the best choice of Q. Moreover, Q must be restricted to $\mathcal{Q}$, the set of n × n symmetric, positive-definite matrices, which means that the eigenvalues of Q are constrained to be strictly positive. A feasible alternative to an analytical solution is to treat the problem of selecting $Q \in \mathcal{Q}$ as a constrained parameter optimization problem, where the real elements of Q are the parameters. In the following, the problem of the computation of 'nonconservative' stability domains in the system parameter space is recast as optimization problems over the set $\mathcal{Q}$. Although the discussion refers to the stability domains $S_{d2}$, $S_{d\infty}$ and $S_{d1w}$, it applies, with the obvious changes, to the weighted domains $S_{d2}'$ and $S_{d\infty}'$.

'Optimal' 2-norm stability domain. The objective to be optimized can be derived from any of the inequalities which give the admissible $\|p\|_2$ as a function of Q. However, it is convenient to choose the least conservative condition, namely the one which yields the largest stability domain for a given Q. As shown in the previous section, the least conservative condition on the 2-norm is given by equation (4.35). Therefore, let us elect that equation as the basis of the optimization procedure. Let us define the objective functional
$$J_2(Q) \stackrel{\mathrm{def}}{=} \bar{\sigma}(M_Q) \quad (4.61)$$
Then the optimized stability domain can be obtained as:
$$S_{d2}^*(Q) = \{\, p : \|p\|_2 < r_{s2}^*(Q) \,\}, \quad r_{s2}^*(Q) = \frac{1}{\bar{\sigma}(M_{Q^*})} \quad (4.62)$$
where $M_{Q^*}$ is determined by $Q^*$, the solution to the constrained optimization problem
$$\min_{Q \in \mathcal{Q}} J_2(Q) = \min_{Q \in \mathcal{Q}} \bar{\sigma}(M_Q) \quad (4.63)$$
This optimization problem has the following main characteristics:

Nonconvexity. Although the functional $J_2$ is convex over the set of matrices $M_Q$, it is nonconvex over $\mathcal{Q}$, due to the multiplication by $Q^{-\frac{1}{2}}$ in the equation of $F_{Qk}$. Therefore, the existence of local extreme points is possible, and the optimization may find only a local 'optimum' instead of a global minimum.

Existence of inequality constraints. The restriction $Q \in \mathcal{Q}$ is equivalent to the requirement $\lambda_i(Q) > 0$, $\forall i$, which can be transformed into a set of inequality constraints, namely $f_i(q_{ij}) < 0$, $i = 1, \ldots, n$. Although the existence of inequality constraints is a factor of complexity, methods are available for the numerical solution of problems with inequality constraints [6]. The implementation requires the computation of gradients of the objective functional and of the constraints with respect to the parameters, namely the elements $q_{ij}$. Analytical expressions for the relevant partial derivatives are given in the next section.

Although the characteristics of the proposed optimization are not ideal, let us emphasize that the selection of Q has two goals:
1. To improve the computed stability domain, relative to the ones given by stability conditions using a fixed Q, like condition (4.23);
2. To obtain a systematic procedure for the selection of Q.
Accomplishment of these goals does not require that the global minimum be reached in the optimization. Should one use a fixed Q, the choice $Q_{ini} = c I_n$, $c > 0$, is a good one due to its simplicity. Then, with this choice as the initial value of Q in the optimization, the
first goal above is ensured whenever Q, at any subsequent step, yields a stability condition less conservative than the one obtained with $Q_{ini}$, regardless of whether a stationary point is reached. Furthermore, since no analytic solution for a nonconservative choice of Q is available, one has to resort to some numerical technique. The optimization problem, together with the $Q_{ini}$ proposed above, constitutes a systematic procedure for the selection of Q which satisfies the second goal; the only factor that depends on the specific problem is the dimension of $I_n$. Indeed, the following two lemmas provide a basis for this reasoning.

Lemma 4.3. Given A, $E_k$, $k = 1, \ldots, m$, then $N(\gamma Q) = N(Q)$, $\forall \gamma \in \mathbb{R}^+$.

Proof. Consider the Lyapunov matrix equation
$$A^T P + P A = -Q$$
where the n × n matrix A is stable by assumption, and Q is an n × n symmetric, positive-definite matrix. The unique, symmetric positive-definite solution for P is obtained from a linear system of equations whose unknowns are the entries of P. The coefficients depend only on the entries of A. Discarding the equations which correspond to the terms below the main diagonal, the system of equations can be written in the form $S_A v_P = q$, where $S_A$ is the nonsingular matrix of coefficients, $v_P$ is the vector of entries of P, and q is the vector of the negatives of the entries of Q. Therefore, $v_P = [S_A]^{-1} q$. Now, letting $Q_1 \stackrel{\mathrm{def}}{=} \gamma Q$, $\gamma \in \mathbb{R}^+$, the matrix Lyapunov equation gives $S_A v_{P_1} = q_1 = \gamma q$. The corresponding solution is $v_{P_1} = [S_A]^{-1} \gamma q = \gamma [S_A]^{-1} q = \gamma v_P$. Therefore, $P_1 = \gamma P$. Substituting $P_1$ for P in definition (4.18), one obtains:
$$F_{1k} = E_k^T P_1 + P_1 E_k = \gamma \big( E_k^T P + P E_k \big) = \gamma F_k, \quad k = 1, \ldots, m$$
Since $Q_1 = \gamma Q \Rightarrow Q_1^{-\frac{1}{2}} = \gamma^{-\frac{1}{2}} Q^{-\frac{1}{2}}$, it follows that
$$Q_1^{-\frac{1}{2}} F_{1k} Q_1^{-\frac{1}{2}} = \frac{1}{\sqrt{\gamma}} Q^{-\frac{1}{2}}\, \gamma F_k\, \frac{1}{\sqrt{\gamma}} Q^{-\frac{1}{2}} = Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}}, \quad \forall k$$
Thus, $M_{Q_1} = M_{\gamma Q} = M_Q \Rightarrow \bar{\sigma}(M_{Q_1}) = \bar{\sigma}(M_Q) \Rightarrow N(Q_1) = N(\gamma Q) = N(Q)$. □

Lemma 4.4. Let $Q = 2 I_n$. Then, $N(2 I_n) = \frac{1}{\bar{\sigma}(P_e^T)}$, where $P_e$ is defined by (4.20).

Proof. With $Q = 2 I_n$, one has $Q^{-\frac{1}{2}} = \frac{1}{\sqrt{2}} I_n$. Substituting for $Q^{-\frac{1}{2}}$ in (4.32), one obtains
$$M_{2I_n} = \big[\, \tfrac{1}{2} F_1, \ldots, \tfrac{1}{2} F_m \,\big]^T$$
Applying the definitions given by (4.18) and (4.19), it follows that $M_{2I_n} = P_e^T$. Therefore:
$$N(2 I_n) = \frac{1}{\bar{\sigma}(M_{2I_n})} = \frac{1}{\bar{\sigma}(P_e^T)} \qquad \Box$$

Lemma 4.3 shows that multiplication of a given Q by a positive scalar does not alter the functional N(Q), and hence does not alter the objective $J_2(Q)$. Furthermore, Lemma 4.4 shows that if the selection $\gamma = 2$ and $Q = I_n$ is made, equation (4.35) becomes equal to (4.23). Therefore, if this selection is used as the starting point for the optimization based on (4.35), the initial value of N(Q) is equal to the bound on $\|p\|_2$ computed from (4.23), which is the best available result that can be obtained without searching for Q. The first of the goals previously enumerated is attained by any 'optimum' Q that yields a smaller value of the objective functional than the starting Q. The nonconservative character of equation (4.35), and the properties given in Lemmas 4.3 and 4.4, lead to a systematic method for the computation of the 'optimal' 2-norm stability domain.
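The scale-invariance property of Lemma 4.3 and the feasibility of a numerical search over Q can both be illustrated. The sketch below uses illustrative system data and a naive projected finite-difference descent (not the constrained routines of [6], and cruder than the dissertation's MATLAB implementation): it checks $J_2(\gamma Q) = J_2(Q)$ and then improves $J_2$ starting from $Q_{ini} = 2I_n$, accepting a step only when the objective decreases, so the result is never worse than the fixed-Q bound (4.23):

```python
import numpy as np

def lyap(A, Q):
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)

def J2(A, E_list, Q):
    # J2(Q) = sigma_max(M_Q), eq. (4.61)
    lam, V = np.linalg.eigh(Q)
    Qs = V @ np.diag(lam ** -0.5) @ V.T
    P = lyap(A, Q)
    MQ = np.vstack([Qs @ (Ek.T @ P + P @ Ek) @ Qs for Ek in E_list])
    return np.linalg.svd(MQ, compute_uv=False)[0]

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Es = [E1, E2]

# Lemma 4.3: J2 is invariant under positive scaling of Q
Q0 = np.array([[2.0, 0.5], [0.5, 1.5]])
print(np.isclose(J2(A, Es, Q0), J2(A, Es, 3.7 * Q0)))   # True

def optimize_Q(A, Es, Q, steps=60, h=1e-6, lr=0.05):
    # Finite-difference descent over the independent entries of symmetric Q,
    # projecting back into the positive-definite cone when needed.
    n = Q.shape[0]
    best = J2(A, Es, Q)
    for _ in range(steps):
        G = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                L = np.zeros((n, n)); L[i, j] = L[j, i] = 1.0
                G[i, j] = G[j, i] = (J2(A, Es, Q + h * L) - best) / h
        Qn = Q - lr * G
        lam_min = np.linalg.eigvalsh(Qn).min()
        if lam_min <= 1e-3:                      # keep Q in the PD cone
            Qn = Qn + (1e-3 - lam_min) * np.eye(n)
        Jn = J2(A, Es, Qn)
        if Jn < best:
            Q, best = Qn, Jn                     # accept improving steps only
        else:
            lr *= 0.5                            # otherwise shrink the step
    return Q, best

Q_ini = 2.0 * np.eye(2)
Q_opt, J_opt = optimize_Q(A, Es, Q_ini)
print(J_opt <= J2(A, Es, Q_ini))                 # never worse than Q_ini = 2I
```

By construction the search is monotone, which is exactly the weaker property the text argues is sufficient: any accepted Q enlarges the computed stability domain relative to the fixed choice.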
Let us emphasize that the usefulness of the optimization procedure does not depend on convergence to a global extremum, or even to a local extremum. For, suppose that the objective is to check whether or not a system is stable against parameter variations in a given interval set. If, at a certain stage of the optimization, a Q is found which yields a stability domain that contains the given interval set, the system is guaranteed to be robustly stable for the expected perturbation, and the optimization can be terminated. In Section 4.5, examples will be shown for which the proposed optimization approach improves available results on allowable parameter norm bounds. Therefore, although $J_2$ is not convex over the set $\mathcal{Q}$, the proposed optimization is of advantage. As a matter of fact, a similar nonconvex optimization has been used in the somewhat different context of robustification of nominally stable static controllers [4].

'Optimal' ∞-norm stability domain. Let us consider the upper bound $l_{s\infty}$ given by (4.40), and define the objective functional
$$J_\infty(Q) \stackrel{\mathrm{def}}{=} (m)^{\frac{1}{2}}\, \bar{\sigma}(M_Q) \quad (4.64)$$
It follows that $J_\infty(Q) = (m)^{\frac{1}{2}} J_2(Q)$. The optimized stability domain based on $\|p\|_\infty$ is:
$$S_{d\infty}^*(Q) = \{\, p : \|p\|_\infty < l_{s\infty}^*(Q) \,\}, \quad l_{s\infty}^*(Q) = \frac{1}{(m)^{\frac{1}{2}}\, \bar{\sigma}(M_{Q^*})} \quad (4.65)$$
where $M_{Q^*}$ is determined by $Q^*$, the solution to the optimization problem (4.63). Evidently, the objective functional could alternatively be based on equation (4.45). In this case, defining
$$\hat{J}_\infty(Q) \stackrel{\mathrm{def}}{=} \bar{\sigma}\Big( \sum_{k=1}^{m} |F_{Qk}| \Big) \quad (4.66)$$
the optimized stability domain based on $\|p\|_\infty$ can be obtained as:
$$\hat{S}_{d\infty}^*(Q) = \{\, p : \|p\|_\infty < \hat{l}_{s\infty}^*(Q) \,\}, \quad \hat{l}_{s\infty}^*(Q) = \frac{1}{\bar{\sigma}\big( \sum_{k=1}^{m} |F_{Q^*k}| \big)} \quad (4.67)$$
where $F_{Q^*k}$ is determined by $Q^*$, the solution to the constrained optimization problem
$$\min_{Q \in \mathcal{Q}} \hat{J}_\infty(Q) = \min_{Q \in \mathcal{Q}} \bar{\sigma}\Big( \sum_{k=1}^{m} |F_{Qk}| \Big) \quad (4.68)$$

'Optimal' 1-norm stability domain. Let us rewrite equation (4.43) as
$$|p_1|\,\bar{\sigma}(F_{Q1}) + \cdots + |p_m|\,\bar{\sigma}(F_{Qm}) < 1$$
The hypervolume defined by this equation is a function of the product of the semi-axes, $V_m = f\big( \prod_{k=1}^{m} \frac{1}{\bar{\sigma}(F_{Qk})} \big)$. Therefore, defining the objective
$$J_1(Q) \stackrel{\mathrm{def}}{=} \prod_{k=1}^{m} \bar{\sigma}(F_{Qk}) \quad (4.69)$$
the optimized stability domain can be obtained as
$$S_{d1w}^*(Q) = \Big\{\, p : \sum_{k=1}^{m} |p_k|\,\bar{\sigma}(F_{Q^*k}) < 1 \,\Big\} \quad (4.70)$$
where $F_{Q^*k}$ is determined by the solution to the optimization problem
$$\min_{Q \in \mathcal{Q}} J_1(Q) = \min_{Q \in \mathcal{Q}} \prod_{k=1}^{m} \bar{\sigma}(F_{Qk}) \quad (4.71)$$

4.4.2 Analytical Gradients

The optimization problems defined in the last section can be numerically solved using iterative methods based on gradient descent procedures [6]. The implementation of the
optimization procedures requires the computation of gradients of the objective functionals and of the constraints in the space of parameters. Since Q is a symmetric matrix, the number of parameters equals the number of independent elements of Q, that is, $n_q = \frac{n(n+1)}{2}$. Let $q \stackrel{\mathrm{def}}{=} [q_{ij}]$, $j \ge i$, denote the vector of independent elements of Q, and let $L^{ij}$ denote the symmetric matrix with unit entries in positions (i, j) and (j, i) and zeros elsewhere, so that Q may be written as
$$Q = Q_\bullet + q_{ij} L^{ij} \quad (4.73)$$
where $Q_\bullet$ collects the terms that do not depend on $q_{ij}$.

Lemma 4.5. The partial derivative of $F_{Qk}$ with respect to $q_{ij}$ is given by
$$\frac{\partial F_{Qk}}{\partial q_{ij}} = N_k^{ij} \quad (4.74)$$
$$N_k^{ij} \stackrel{\mathrm{def}}{=} Q^{-\frac{1}{2}} \Big[ \big( E_k^T P^{ij} + P^{ij} E_k \big) - \tfrac{1}{2} \big( L^{ij} Q^{-1} F_k + F_k Q^{-1} L^{ij} \big) \Big] Q^{-\frac{1}{2}} \quad (4.75)$$
where $P^{ij}$ is the solution of the Lyapunov equation $A^T P^{ij} + P^{ij} A = -L^{ij}$.
Proof. Using (4.73), the nominal Lyapunov equation is
$$A^T P + P A = -(Q_\bullet + q_{ij} L^{ij}), \quad \forall q_{ij}$$
Taking the derivative with respect to $q_{ij}$, one obtains
$$A^T \frac{\partial P}{\partial q_{ij}} + \frac{\partial P}{\partial q_{ij}} A = -\frac{\partial [Q_\bullet + q_{ij} L^{ij}]}{\partial q_{ij}} = -L^{ij}$$
Defining $P^{ij} \stackrel{\mathrm{def}}{=} \frac{\partial P}{\partial q_{ij}}$, it follows that $A^T P^{ij} + P^{ij} A = -L^{ij}$. Now,
$$\frac{\partial F_{Qk}}{\partial q_{ij}} = \frac{\partial Q^{-\frac{1}{2}}}{\partial q_{ij}} F_k Q^{-\frac{1}{2}} + Q^{-\frac{1}{2}} \frac{\partial F_k}{\partial q_{ij}} Q^{-\frac{1}{2}} + Q^{-\frac{1}{2}} F_k \frac{\partial Q^{-\frac{1}{2}}}{\partial q_{ij}} \quad (4.79)$$
with
$$\frac{\partial F_k}{\partial q_{ij}} = E_k^T P^{ij} + P^{ij} E_k \quad (4.80)$$
$$\frac{\partial Q^{-\frac{1}{2}}}{\partial q_{ij}} F_k Q^{-\frac{1}{2}} + Q^{-\frac{1}{2}} F_k \frac{\partial Q^{-\frac{1}{2}}}{\partial q_{ij}} = -\tfrac{1}{2}\, Q^{-\frac{1}{2}} \big( L^{ij} Q^{-1} F_k + F_k Q^{-1} L^{ij} \big) Q^{-\frac{1}{2}} \quad (4.81)$$
Using (4.80) and (4.81) in (4.79), one obtains:
$$\frac{\partial F_{Qk}}{\partial q_{ij}} = Q^{-\frac{1}{2}} \Big[ -\tfrac{1}{2} \big( L^{ij} Q^{-1} F_k + F_k Q^{-1} L^{ij} \big) + \big( E_k^T P^{ij} + P^{ij} E_k \big) \Big] Q^{-\frac{1}{2}}$$
Since $F_k$ and Q are symmetric, the two boundary terms are transposes of each other, and the expression above is symmetric, from which (4.74) follows. □
Partial derivative of $J_1$. The next lemma gives an analytical expression for the partial derivative of the functional $J_1$, defined in (4.69), with respect to the elements $q_{ij}$.

Lemma 4.6. The partial derivative of $J_1$ with respect to $q_{ij}$ is given by
$$\frac{\partial J_1}{\partial q_{ij}} = \sum_{k=1}^{m} \frac{W_k^H Z_k^{ij} W_k}{2\,\bar{\sigma}(F_{Qk})} \prod_{l \ne k} \bar{\sigma}(F_{Ql}) \quad (4.82)$$
where $W_k$ is the eigenvector associated with the largest eigenvalue of $(F_{Qk}^H F_{Qk})$, normalized such that $W_k^H W_k = 1$; $Z_k^{ij}$ is given by
$$Z_k^{ij} \stackrel{\mathrm{def}}{=} N_k^{ij\,H} F_{Qk} + F_{Qk} N_k^{ij} \quad (4.83)$$
and $F_{Qk}$ and $N_k^{ij}$ are defined respectively by (4.21) and (4.75).

Proof. The functional is $J_1 = \prod_{k=1}^{m} \bar{\sigma}(F_{Qk})$; therefore, by the product rule,
$$\frac{\partial J_1}{\partial q_{ij}} = \sum_{k=1}^{m} \frac{\partial \bar{\sigma}(F_{Qk})}{\partial q_{ij}} \prod_{l \ne k} \bar{\sigma}(F_{Ql}) \quad (4.84)$$
Using the result of Lemma 4.5 for $\frac{\partial F_{Qk}}{\partial q_{ij}}$, and applying Lemma 2.1 to $\bar{\sigma}(F_{Qk})$, one obtains
$$\frac{\partial \bar{\sigma}(F_{Qk})}{\partial q_{ij}} = \frac{W_k^H \big( N_k^{ij\,H} F_{Qk} + F_{Qk} N_k^{ij} \big) W_k}{2\,\bar{\sigma}(F_{Qk})} = \frac{W_k^H Z_k^{ij} W_k}{2\,\bar{\sigma}(F_{Qk})}$$
Substituting in (4.84), expression (4.82) follows. □

Partial derivative of $J_2$.

Lemma 4.7. The partial derivative of $J_2 = \bar{\sigma}(M_Q)$ with respect to $q_{ij}$ is given by
$$\frac{\partial J_2}{\partial q_{ij}} = \frac{W^H \Big[ \sum_{k=1}^{m} \big( N_k^{ij\,H} F_{Qk} + F_{Qk} N_k^{ij} \big) \Big] W}{2\,\bar{\sigma}(M_Q)} \quad (4.87)$$
where W is the right singular vector associated with $\bar{\sigma}(M_Q)$, normalized such that $W^H W = 1$.

Proof. Since $M_Q^H M_Q = \sum_{k=1}^{m} F_{Qk}^H F_{Qk}$, Lemma 2.1 gives
$$\frac{\partial \bar{\sigma}(M_Q)}{\partial q_{ij}} = \frac{W^H\, \frac{\partial (M_Q^H M_Q)}{\partial q_{ij}}\, W}{2\,\bar{\sigma}(M_Q)} \quad (4.89)$$
and
$$\frac{\partial (M_Q^H M_Q)}{\partial q_{ij}} = \sum_{k=1}^{m} \Big( \frac{\partial F_{Qk}^H}{\partial q_{ij}} F_{Qk} + F_{Qk} \frac{\partial F_{Qk}}{\partial q_{ij}} \Big) \quad (4.90)$$
Using the expression given in (4.75) for $N_k^{ij}$, the last equation becomes
$$\frac{\partial (M_Q^H M_Q)}{\partial q_{ij}} = \sum_{k=1}^{m} \big( N_k^{ij\,H} F_{Qk} + F_{Qk} N_k^{ij} \big) \quad (4.91)$$
Substituting for $\frac{\partial (M_Q^H M_Q)}{\partial q_{ij}}$ in (4.89), one obtains
$$\frac{\partial \bar{\sigma}(M_Q)}{\partial q_{ij}} = \frac{W^H \Big[ \sum_{k=1}^{m} \big( N_k^{ij\,H} F_{Qk} + F_{Qk} N_k^{ij} \big) \Big] W}{2\,\bar{\sigma}(M_Q)}$$
from which (4.87) follows. □

Partial derivative of $J_\infty$. Since, by definition, $J_\infty = (m)^{\frac{1}{2}} J_2$, the partial derivative $\frac{\partial J_\infty}{\partial q_{ij}}$ can be obtained directly from equation (4.87).

Derivative of $\hat{J}_\infty$. Let us define, for simplicity of notation, $R \stackrel{\mathrm{def}}{=} \sum_{k=1}^{m} |F_{Qk}|$. Then, the objective functional of equation (4.66) becomes
$$\hat{J}_\infty = \bar{\sigma}\Big( \sum_{k=1}^{m} |F_{Qk}| \Big) = \bar{\sigma}(R) \quad (4.92)$$
The following lemma gives an analytical expression for the derivative of $\hat{J}_\infty$ with respect to the elements $q_{ij}$.

Lemma 4.8. The partial derivative of $\hat{J}_\infty$ with respect to $q_{ij}$ is given by
$$\frac{\partial \hat{J}_\infty}{\partial q_{ij}} = \frac{W^H H^{ij} W}{2\,\bar{\sigma}(R)} \quad (4.93)$$
where W is the eigenvector associated with the largest eigenvalue of $(R^H R)$, and $H^{ij}$ is
$$H^{ij} \stackrel{\mathrm{def}}{=} \sum_{k=1}^{m} \sum_{l=1}^{m} \big( \tilde{N}_k^{ij\,H} |F_{Ql}| + |F_{Qk}|\, \tilde{N}_l^{ij} \big) \quad (4.94)$$
with $\tilde{N}_k^{ij}$ obtained from equation (4.75) in Lemma 4.5 by adjusting signs entrywise, making use of the definition:
$$(|A|)_{ij} = A_{ij} \ \text{if} \ A_{ij} \ge 0; \quad (|A|)_{ij} = -A_{ij} \ \text{if} \ A_{ij} < 0, \quad \forall A \quad (4.95)$$
Proof. Using the definition of R above,
$$\frac{\partial \hat{J}_\infty}{\partial q_{ij}} = \frac{\partial \bar{\sigma}(R)}{\partial q_{ij}} \quad (4.96)$$
and from Lemma 2.1, with the substitutions $x \to q_{ij}$ and $M \to R$,
$$\frac{\partial \bar{\sigma}(R)}{\partial q_{ij}} = \frac{W^H\, \frac{\partial (R^H R)}{\partial q_{ij}}\, W}{2\,\bar{\sigma}(R)} \quad (4.97)$$
Since each $F_{Qk}$ is symmetric,
$$R^H R = \Big( \sum_{k=1}^{m} |F_{Qk}| \Big)^H \Big( \sum_{l=1}^{m} |F_{Ql}| \Big) = \sum_{k=1}^{m} \sum_{l=1}^{m} |F_{Qk}|\,|F_{Ql}| \quad (4.98)$$
Therefore,
$$\frac{\partial (R^H R)}{\partial q_{ij}} = \sum_{k=1}^{m} \sum_{l=1}^{m} \frac{\partial \big[ |F_{Qk}|\,|F_{Ql}| \big]}{\partial q_{ij}} \quad (4.99)$$
Now, notice that the derivative of $|F_{Qk}|$ with respect to $q_{ij}$ is obtained from $N_k^{ij}$ by adjusting the sign of each entry according to (4.95), which yields $\tilde{N}_k^{ij}$.
Substituting in (4.99) and using (4.97), one obtains:
$$\frac{\partial \hat{J}_\infty}{\partial q_{ij}} = \frac{W^H \Big[ \sum_{k=1}^{m} \sum_{l=1}^{m} \big( \tilde{N}_k^{ij\,H} |F_{Ql}| + |F_{Qk}|\, \tilde{N}_l^{ij} \big) \Big] W}{2\,\bar{\sigma}(R)}$$
Defining the bracketed term as $H^{ij}$, expression (4.93) follows. □

Partial derivative of constraints. In the optimizations over $\mathcal{Q}$, constraints must be imposed to guarantee the positive definiteness of Q. Since the eigenvalues of Q are real, and defining
$$\lambda(Q) \stackrel{\mathrm{def}}{=} \min_i \{\, \lambda_i(Q) \,\} \quad (4.103)$$
it follows that
$$Q > 0 \iff \lambda_i(Q) > 0,\ \forall i \iff \min_i \{ \lambda_i(Q) \} > 0 \iff \lambda(Q) > 0$$
Defining the constraint function as $J_c \stackrel{\mathrm{def}}{=} \lambda(Q)$, the following lemma applies:

Lemma 4.9. The derivative of the constraint function $J_c$ with respect to the element $q_{ij}$ is given by
$$\frac{\partial J_c}{\partial q_{ij}} = W^H L^{ij} W \quad (4.104)$$
where W is the eigenvector associated with $\lambda(Q)$, normalized such that $W^H W = 1$, and $L^{ij}$ is defined by (4.73).

Proof. Making the substitutions $x \to q_{ij}$, $M^H M \to Q$, $W \to W$ and $\lambda \to \lambda$ in Lemma 2.1, one has that
$$\frac{\partial \lambda(Q)}{\partial q_{ij}} = W^H\, \frac{\partial Q}{\partial q_{ij}}\, W \quad (4.105)$$
From equation (4.73),
$$\frac{\partial Q}{\partial q_{ij}} = \frac{\partial [Q_\bullet + q_{ij} L^{ij}]}{\partial q_{ij}} = L^{ij}$$
Therefore, substituting in (4.105), equation (4.104) follows. □

In the implementation of the proposed optimization problems, the derivatives of the functionals with respect to the elements of Q can be computed by finite differences. However, it is expected that the analytical expressions above allow the derivatives to be computed faster than by finite differences. Comparative results will be given in Section 4.5. Obviously, since all the analytical gradients computed above involve derivatives of the maximum singular value, they apply only if the maximum singular value is distinct.

Next, it will be shown that the technique of similarity scaling may be exploited to improve the stability domain determined by a given Q, and thus can be applied to effect further improvement on available results.

4.4.3 Improvement Through Similarity Scaling

As discussed in Section 4.4.1, similarity scaling is an eigenvalue-preserving transformation which can be applied to reduce the norm of a given matrix. It has been used with this objective in frequency-domain robust stability analysis, namely in the computation of less conservative spectral radius upper bounds; see for example [28] and references therein. The norm of a matrix is not reduced by similarity scaling when the major input and the major output principal directions of the matrix are aligned, because then the spectral radius equals the maximum singular value [29]. This is the case, for example, of symmetric matrices.
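Before proceeding, the derivative formulas of Section 4.4.2 are easy to validate against the finite-difference alternative just mentioned. The sketch below (random illustrative data) checks the generic singular-value rule behind Lemmas 4.5 to 4.8, $\partial\bar{\sigma}(M)/\partial x = W^H [\partial(M^H M)/\partial x] W / (2\bar{\sigma}(M))$, and the constraint gradient $\partial\lambda(Q)/\partial q_{ij} = W^H L^{ij} W$ of Lemma 4.9:

```python
import numpy as np

rng = np.random.default_rng(0)
h = 1e-6

# --- derivative of the maximum singular value (Lemma 2.1 pattern) ---
M = rng.standard_normal((5, 3))
D = rng.standard_normal((5, 3))           # direction dM/dx
U, s, Vt = np.linalg.svd(M)
W = Vt[0]                                  # right singular vector for sigma_max
analytic_sv = W @ (D.T @ M + M.T @ D) @ W / (2 * s[0])
fd_sv = (np.linalg.svd(M + h * D, compute_uv=False)[0] - s[0]) / h
print(abs(analytic_sv - fd_sv) < 1e-3)     # analytic matches finite difference

# --- constraint gradient of Lemma 4.9: d lambda_min(Q) / d q_ij ---
n = 4
B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)                # symmetric positive definite
lam, V = np.linalg.eigh(Q)
w = V[:, 0]                                # unit eigenvector of lambda_min(Q)
i, j = 0, 2
L = np.zeros((n, n)); L[i, j] = L[j, i] = 1.0   # basis matrix L^{ij}, cf. (4.73)
analytic_ev = w @ L @ w
fd_ev = (np.linalg.eigvalsh(Q + h * L)[0] - lam[0]) / h
print(abs(analytic_ev - fd_ev) < 1e-3)     # analytic matches finite difference
```

As the text notes, these formulas are valid only when the extreme singular value (or eigenvalue) is simple, which holds generically for the random data used here.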
Notice that equations (4.24)-(4.25) [61] and (4.26)-(4.28) [51] give stability conditions on the norm of p in terms of symmetric matrices. Let us point out that this is also the case of some previous results [58, 59], where state transformations were used to improve the stability condition. In particular, the matrix $\sum_{k=1}^{m} p_k Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}}$ in equation (4.30) is symmetric. Therefore, a straightforward application of similarity scaling to those matrices would not reduce their norm. However, further manipulation of the appropriate symmetric matrix may still enable the advantageous use of the technique, as shown below.

Let us define $M_L \stackrel{\mathrm{def}}{=} \sum_{k=1}^{m} p_k Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}}$. Using (4.30), robust stability is guaranteed if
$$\bar{\sigma}(M_L) < 1 \quad (4.106)$$
Since $M_L$ is symmetric, its norm is not altered by similarity scaling. Therefore, for any $S \in \mathcal{S}^{n \times n}$, defined in (3.58), one has
$$\bar{\sigma}(M_L) = \bar{\sigma}(S M_L S^{-1}) \quad (4.107)$$
Now, let us define
$$S_d^{mn \times mn} \stackrel{\mathrm{def}}{=} \mathrm{diag}\{\underbrace{S, S, \ldots, S}_{m\ \text{terms}}\} \quad (4.108)$$
Then, the scaled matrix $M_L$ can be expanded as
$$S M_L S^{-1} = S \sum_{k=1}^{m} p_k Q^{-\frac{1}{2}} F_k Q^{-\frac{1}{2}} S^{-1} = M_p\, S_d M_Q S^{-1} = M_p\, M_{QS} \quad (4.109)$$
where the matrices $M_p$ and $M_Q$ are defined respectively by expressions (4.31) and (4.32), and $M_{QS} \stackrel{\mathrm{def}}{=} S_d M_Q S^{-1}$. Therefore, using (4.107), it follows that
$$\bar{\sigma}(M_L) = \bar{\sigma}(M_p M_{QS}) \le \bar{\sigma}(M_p)\, \inf_{S} \bar{\sigma}(M_{QS}) \quad (4.110)$$
from which a sufficient condition for stability, for a fixed Q, is obtained, namely:
$$\|p\|_2 < \frac{1}{\inf_{S} \bar{\sigma}(S_d M_Q S^{-1})} \quad (4.111)$$
In particular, with $Q = 2 I_n$ this improves the bound $\frac{1}{\bar{\sigma}(P_e)}$ of (4.23) to $\frac{1}{\inf_S \bar{\sigma}(S_d P_e^T S^{-1})}$. This equation shows that the 2-norm bound can be improved by similarity scaling. A numerical example of the improvement is given in the next section. Notice that the development above regards Q as a fixed matrix. Obviously, Q can be the solution to the optimization problem given in (4.63), so that it is possible to combine the choice of Q through optimization with similarity scaling. Numerical applications of the proposed method for choosing Q are given in the next section.

4.5 Application of Optimization Over Q

In this section, 2-norm stability bounds on p are computed for some examples found in the literature. In all of them, the results obtained with the selection of Q through the proposed optimization problem are better than the available numerical results.
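A minimal numerical sketch of the scaling refinement (4.111), using illustrative system data and a crude grid search over diagonal scalings $S = \mathrm{diag}(1, s)$ rather than a true infimum:

```python
import numpy as np

def lyap(A, Q):
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)

A  = np.array([[-3.0, -2.0], [1.0, 0.0]])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Q  = 2.0 * np.eye(2)

P  = lyap(A, Q)
Qs = np.eye(2) / np.sqrt(2.0)                 # Q^{-1/2} for Q = 2I
FQ = [Qs @ (Ek.T @ P + P @ Ek) @ Qs for Ek in (E1, E2)]

smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]
baseline = smax(np.vstack(FQ))                # sigma_max(M_Q), no scaling

# crude search: S_d M_Q S^{-1} stacks the blocks S F_Qk S^{-1}, cf. (4.109)
best = baseline
for s in np.linspace(0.2, 5.0, 200):
    S, Si = np.diag([1.0, s]), np.diag([1.0, 1.0 / s])
    best = min(best, smax(np.vstack([S @ F @ Si for F in FQ])))

print(1.0 / best >= 1.0 / baseline)           # scaled 2-norm bound is no worse
```

Since the unscaled case is the starting value of the search, the refined bound can only improve on (or match) the unscaled one, mirroring the infimum in (4.111).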
The starting point for the minimization, as proposed in Section 4.4.1, is $Q_{ini} = 2 I_n$, where n is the dimension of the state vector. In the interest of saving implementation time, since the objective here is to show the feasibility of the proposed optimization, all the implementations were done using standard MATLAB™ functions and available optimization routines. Unless otherwise stated, the necessary derivatives were computed by finite differences.

Example 4.1. Consider A [61], $E_1$, $E_2$ and $Q^*$ below:
$$A = \begin{bmatrix} -3 & -2 \\ 1 & 0 \end{bmatrix};\quad E_1 = \begin{bmatrix} 1 & 0.5 \\ 0 & 0 \end{bmatrix};\quad E_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix};\quad Q^* = \begin{bmatrix} 2.2157 & 0.2472 \\ 0.2472 & 1.7667 \end{bmatrix}$$
Choosing $Q = 2 I_2$ and using (4.23), one obtains the condition $\|p\|_2 < 1.0862$. The stability domain corresponding to the optimal solution $Q^*$ above of the problem (4.63) is $\|p\|_2 < r_{s2}^* = 1.1142$. Therefore, $Q^*$ yields a norm bound which is 2.6% better than the previous result. Applying similarity scaling to $Q^*$, the choice $S = \mathrm{diag}\{1, 1.0668\}$ yields $\|p\|_2 < 1.1225$, thus giving a total improvement of 3.4% over the result obtained with $Q = 2 I_2$.

Example 4.2. Consider the matrices below, from [51]:
$$A = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix};\quad E_1 = \begin{bmatrix} 2 & 0 \\ 3 & 0 \end{bmatrix};\quad E_2 = \begin{bmatrix} 0 & 2 \\ 0 & 1 \end{bmatrix}$$
With $Q = 2 I_2$ and using (4.23), one obtains the condition $\|p\|_2 < 0.3159$. Using $Q_{sa}$ given below, the result computed according to (4.26) [51] is $\|p\|_2 < 0.3378$, which is 6.9% better. Using the optimal solution $Q^*$ of (4.63), also given below, the stability domain is
given by $\|p\|_2 < r_{s2}^* = 0.3486$.
$$Q_{sa} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix};\quad Q^* = \begin{bmatrix} 2.2594 & 1.1549 \\ 1.1549 & 1.6074 \end{bmatrix}$$
Therefore, $Q^*$ yields a norm bound which is 3.2% better than the previous result [51], and thus 10.3% better than the result obtained from $Q = 2 I_2$.

Example 4.3. Consider the following matrices [61]:
$$A = \begin{bmatrix} -2 & 0 & 1 \\ 0 & -3 & 0 \\ 1 & 1 & -4 \end{bmatrix};\quad E_1 = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{bmatrix};\quad E_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
Choosing $Q = 2 I_3$ and using (4.23), one obtains the condition $\|p\|_2 < 1.6515$. Using the optimization approach, the optimal choice of Q is:
$$Q^* = \begin{bmatrix} 1.6337 & 0.1632 & 0.8834 \\ 0.1632 & 2.0260 & 0.0329 \\ 0.8834 & 0.0329 & 2.3095 \end{bmatrix}$$
The corresponding stability domain is given by $\|p\|_2 < r_{s2}^* = 1.7438$. Therefore, $Q^*$ yields a norm bound which is 6.6% better than the result obtained with $Q = 2 I_3$.

Example 4.4. Consider the following matrices, from [58]:
$$A = \begin{bmatrix} -1 & 0.2356 & 0.1246 & -0.2237 \\ -1.7021 & -4.908 & -3.0859 & -3.98 \\ -2.8732 & -2.539 & -4.5369 & -6.408 \\ 0 & 0 & 1 & 0 \end{bmatrix};\quad E_1 = \begin{bmatrix} 0.2277 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
$$E_2 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix};\quad E_3 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0.4960 & 0.0139 & 0.5390 & 0.8060 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
Choosing $Q = 2 I_4$ and using (4.23), one obtains the condition $\|p\|_2 < 1.7016$. Using the optimization approach, the optimal choice of Q is:
$$Q^* = \begin{bmatrix} 1.8696 & 0.2755 & 1.0888 & 1.1738 \\ 0.2755 & 2.2751 & 0.6587 & 1.6441 \\ 1.0888 & 0.6587 & 1.2808 & 1.4210 \\ 1.1738 & 1.6441 & 1.4210 & 2.5403 \end{bmatrix}$$
The corresponding stability domain is given by $\|p\|_2 < r_{s2}^* = 2.3287$. Therefore, $Q^*$ yields a norm bound which is 36.7% better than the result obtained with $Q = 2 I_4$.

In the optimizations required by the four previous examples, gradients were computed by finite differences. Examples 4.1 and 4.2 were also implemented with computation of gradients based on expression (4.74). The corresponding results are compared below.

Example 4.5. Consider again the data matrices of Example 4.1. With finite difference gradients,
Also, the convergence at the initial steps was accelerated by introducing a multiplicative factor bigger than one in the expression of the analytic gradient. With a factor of 7.45, it was verified that in 10 iterations with the analytic gradient, $\bar{\sigma}(M_Q) = 0.897546$, averaging 1,515 flops per iteration. With the finite difference gradient, it took 33 iterations to obtain
3. The formalization of optimization problems for the choice of the best Lyapunov matrix to be used in a given robustness problem.

Additionally, derivations were independently developed for results already presented in the literature. This is the case of the ||p||_2 and weighted ||p||_1 stability conditions, and of parameter weighting for changing the form of the computed stability domain. Also, analytical expressions for the derivatives of the objective functions of all optimization problems were obtained. It is worth mentioning here that a short paper entitled 'On the Computation of Allowable Bounds for Parametric Uncertainty', containing a summary of the results presented in this chapter, has been reviewed and accepted for publication in the Proceedings of the 1991 American Control Conference.

The numerical examples presented in Section 4.5 demonstrate that the choice of Q through the optimization problem (4.63) leads to the computation of a less conservative norm upper bound on the parameter vector. The improvement over existing results can be significant; in the case of Example 4.4, a substantial improvement of 36.7% on the admissible ||p||_2 was obtained. Furthermore, Example 4.1 also demonstrates that similarity scaling may be effective in improving a stability domain computed without optimization over Q.

Although optimization over Q is not proposed in Sezer and Siljak [51], where equation (4.26) is derived, both this equation and (4.22) [4] could be utilized in an optimization procedure. However, they were tried and found to be more conservative than (4.35). Actually, this was expected, since the derivation of (4.35) is potentially less conservative, as shown before. Indeed, it was verified in many cases that the optimization based either on (4.22) or on (4.26) gives no improvement over results obtained from (4.23) with Q = 2I.
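The finite-difference gradients used in the examples above admit a simple generic sketch; the quadratic test objective and step size below are assumptions for checking purposes, not the dissertation's objective function:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Check against a known analytic gradient: f(x) = x^T A x has
# gradient (A + A^T) x.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
f = lambda x: x @ A @ x
x0 = np.array([1.0, -2.0])
assert np.allclose(fd_gradient(f, x0), (A + A.T) @ x0, atol=1e-4)
```

Each call costs 2n evaluations of the objective, which is part of what motivates the analytic gradient expressions such as (4.74).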
Examples 4.5 and 4.6 show that, using the analytical gradients, fast convergence was obtained in the first iterations of the optimization; however, the convergence was not satisfactory after a certain number of iterations. This difficulty can possibly be eliminated by using either curvature information or variable incremental steps in the optimization problem. Since standard optimization routines were used in the computation of the examples above, these options were not tried.

In the next chapter, a new approach to robust stability analysis of state space systems under structured perturbations is proposed. Results obtained with the LDM and results from the new method are compared in Chapter 6.
CHAPTER 5
STABILITY UNDER DIAGONAL PARAMETRIC UNCERTAINTY

5.1 Introduction

In Chapter 4, sufficient conditions for robust stability of state space systems under structured perturbations were obtained with application of the Lyapunov Direct Method. The LDM can be applied in the presence of both time-varying and time-invariant perturbations. The mechanics of the method does not differentiate between these perturbations; the only distinction is that under time-varying perturbations, the derivative of the Lyapunov function must be negative definite at each instant. It has been argued that, if the perturbations are known to be time-invariant, the generality of the LDM is an extra factor of conservatism. In many practical problems with parametric uncertainty, although the parameters can assume any value in a given range, the value assumed at the beginning of the time interval of interest can be considered constant during the whole interval, thus characterizing a time-invariant perturbation.

This chapter proposes a distinct approach to the robust stability problem, which is applicable exclusively to time-invariant perturbations. Necessary and sufficient conditions for robust stability of the perturbed state space system are derived from an equivalent frequency-domain formulation, which takes advantage of a diagonal uncertainty description. Sufficient parameter norm bounds are computed by resorting to the frequency-domain scaling techniques discussed in Chapter 3.
5.2 Diagonal Representation of State Space Perturbations

Let us assume that the perturbations to the state space system depend linearly on an m-dimensional vector of parameters. The perturbed system is ẋ(t) = (A + E)x(t), where the nominal matrix A is assumed asymptotically stable, and the perturbation has the representation given by equation (2.31), namely

    E = Σ_{k=1}^{m} p_k E_k ,   |p_k| ≤ α_k ,  k = 1, ..., m    (5.1)

where E_k, k = 1, ..., m are constant matrices. For the objective of this chapter, it is convenient to decompose the uncertainty as

    E = L D R    (5.2)

where D is a diagonal matrix whose elements are the plant parameters. The following two lemmas and two theorems establish the existence of such a decomposition.

Lemma 5.1 [34]. The matrix product M_{n×n} = Y_{n×p} diag{m_i}_{p×p} Z_{p×n} is equivalent to

    M_{n×n} = Σ_{i=1}^{p} m_i y_i z_i^T    (5.3)

where y_i is the i-th column of Y and z_i^T is the i-th row of Z. □

Lemma 5.2 [34]. Any matrix M of order n and rank r ≤ n can be decomposed as

    M_{n×n} = Y_{n×n} I_{n,r} Z_{n×n}    (5.4)

where Y and Z are nonsingular, I_{n,r} = diag{I_r, 0_{n−r}}, and 0_{n−r} is the null matrix of order (n − r). □
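Lemma 5.1 can be spot-checked numerically; the dimensions and the random test data below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((3, 2))
Z = rng.standard_normal((2, 3))
m = np.array([2.0, -0.5])

# product form versus the dyadic (rank-one) expansion of Lemma 5.1
full = Y @ np.diag(m) @ Z
dyadic = sum(m[i] * np.outer(Y[:, i], Z[i, :]) for i in range(m.size))
assert np.allclose(full, dyadic)
```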
The following theorem addresses the decomposition, and the next one establishes conditions under which D has no repeated diagonal elements.

Theorem 5.1. The perturbation E = Σ_{k=1}^{m} p_k E_k can be decomposed as

    E_{n×n} = L_{n×q} D_{q×q} R_{q×n}    (5.5)

where L_{n×q} and R_{q×n} are constant matrices, and D_{q×q} = diag{p_k}, possibly with repeated elements, in which case q > m.

Proof. By assumption, E = Σ_{k=1}^{m} p_k E_k. Let r_k be the rank of E_k, k = 1, ..., m. By Lemma 5.2, each matrix E_k can be decomposed as E_k = Y_k I_{n,r_k} Z_k, for some matrices Y_k and Z_k. Therefore,

    p_k E_k = p_k Y_k I_{n,r_k} Z_k = Y_k diag{p_k, ..., p_k, 0, ..., 0}_{n×n} Z_k

where p_k appears r_k times. Since the product terms involving the last n − r_k elements of the diagonal matrix are all null, the last expression can be rewritten as

    p_k E_k = Ỹ_k D_k Z̃_k ,  D_k = diag{p_k, ..., p_k}_{r_k×r_k}    (5.6)

where Ỹ_k collects the first r_k columns of Y_k and Z̃_k the first r_k rows of Z_k. Altogether, there are m products of the form (5.6), which can be represented by the following matrix product:

    [ Ỹ_1  Ỹ_2  ...  Ỹ_m ] [ D_1  0    ...  0   ] [ Z̃_1 ]
                            [ 0    D_2  ...  0   ] [ Z̃_2 ]    (5.7)
                            [ ...               ] [ ...  ]
                            [ 0    0    ...  D_m ] [ Z̃_m ]
Computing the product above, one has Σ_{k=1}^{m} Ỹ_k D_k Z̃_k = Σ_{k=1}^{m} p_k E_k = E. Defining the matrices in (5.7) from left to right as L, D and R, it follows that E = LDR. □

Theorem 5.2. The perturbation E = Σ_{k=1}^{m} p_k E_k has a decomposition

    E_{n×n} = L_{n×m} D_{m×m} R_{m×n}    (5.8)

where D_{m×m} = diag{p_1, ..., p_m}, without repeated diagonal elements, if and only if

    rank(E_k) = r_k = 1 ,  k = 1, ..., m

Proof. Sufficiency: Assume r_k = 1, k = 1, ..., m. Then, in equation (5.6), Ỹ_k and Z̃_k are respectively column and row vectors, and D_k = p_k, ∀k, so that in equation (5.7), D = diag{p_1, ..., p_m}.

Necessity: Assume that D is of order m, with nonrepeated diagonal elements. Then, by Lemma 5.1, E = Σ_{k=1}^{m} p_k y_k z_k^T, for some y_k and z_k. Since, by assumption, E = Σ_{k=1}^{m} p_k E_k, one has that E_k = y_k z_k^T. Since E_k can be written as the product of a column vector by a row vector, it has rank 1. Therefore, r_k = 1, k = 1, ..., m. □

Notice that the decomposition of E given by these theorems is not unique, since scalar factors can be factored out of a column L_k and included in the corresponding row R_k^T. A decomposition is obtained by solving the equation

    E_k = L_k I_{n,r_k} R_k^T ,  k = 1, ..., m    (5.9)

for the column vector L_k and the row vector R_k^T, where r_k is the rank of E_k. The following examples illustrate the decomposition.

Example 5.1. Suppose that

    E = [ p_1   p_1       ] = p_1 [ 1  1 ] + p_2 [ 0  0 ]
        [ p_1   p_1 + p_2 ]       [ 1  1 ]       [ 0  1 ]
Since r_1 = r_2 = 1, it is possible to obtain a decomposition with D = diag{p_1, p_2}. Possible solutions of (5.9) for E_1 and E_2 are:

    E_1 = [ 1  1 ] = [ 1 ] [ 1  1 ] = L_1 R_1^T ;   E_2 = [ 0  0 ] = [ 0 ] [ 0  1 ] = L_2 R_2^T
          [ 1  1 ]   [ 1 ]                                [ 0  1 ]   [ 1 ]

so that

    E = L D R = [ 1  0 ] [ p_1  0   ] [ 1  1 ]
                [ 1  1 ] [ 0    p_2 ] [ 0  1 ]

Example 5.2. Consider the case with r_1 = 2 and r_2 = 1, where

    E = [ p_1        p_1       ] = p_1 [ 1  1 ] + p_2 [ 0  0 ]
        [ p_2        p_1 + p_2 ]       [ 0  1 ]       [ 1  1 ]

Possible solutions of (5.9) for E_1 and E_2, and the resulting LDR, are:

    E_1 = [ 1  1 ] = [ 1  0 ] [ 1  1 ] = [ L_11  L_12 ] R_1 ;   E_2 = [ 0  0 ] = [ 0 ] [ 1  1 ] = L_2 R_2^T
          [ 0  1 ]   [ 0  1 ] [ 0  1 ]                                [ 1  1 ]   [ 1 ]

    E = L D R = [ 1  0  0 ] [ p_1  0    0   ] [ 1  1 ]
                [ 0  1  1 ] [ 0    p_1  0   ] [ 0  1 ]
                            [ 0    0    p_2 ] [ 1  1 ]
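The decomposition of Example 5.1 can be verified numerically for arbitrary parameter values; the test values of p_1 and p_2 below are assumptions chosen only for the check:

```python
import numpy as np

p1, p2 = 0.3, -0.7  # arbitrary test values
E = np.array([[p1, p1], [p1, p1 + p2]])  # the E of Example 5.1

L = np.array([[1.0, 0.0], [1.0, 1.0]])
D = np.diag([p1, p2])
R = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.allclose(L @ D @ R, E)
```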
The matrices L and R can be obtained by using equation (5.9) for each E_k. However, at least in the case where r_k = 1, ∀k, the solution can be straightforwardly obtained using the next lemma.

Lemma 5.3. The perturbation E = Σ_{k=1}^{m} p_k E_k can be decomposed as E = L D R, where L ∈ R^{n×m} and R ∈ R^{m×n} are matrices which account for the structure of E, and D = diag{p_k}, using, for each element E_{k,ij}, i, j = 1, ..., n, the operations indicated in Table 5-1 below, where γ_ij is a constant for each k.

Table 5-1. Nonnull elements of L, D, R

    Element                  L_ik (chosen)    D_kk    R_kj
    E_{k,ij} = γ_ij p_k      l_ik             p_k     R_kj = E_{k,ij} / l_ik

Proof. The desired decomposition is E_ij = Σ_{k=1}^{m} E_{k,ij} p_k = (LDR)_ij. Now, since (LDR)_ij = Σ_{k=1}^{m} L_ik p_k R_kj = Σ_{k=1}^{m} L_ik R_kj p_k = Σ_{k=1}^{m} E_{k,ij} p_k, one has that

    E_{k,ij} = L_ik R_kj    (5.10)

which is the result obtained by arbitrarily choosing L_ik and computing R_kj as indicated in the table above. □

Note that each perturbation element E_ij may depend either on one or on more than one parameter. If, for some i, j, the dependence is on more than one parameter, the element E_ij is given by a linear combination of some of the parameters, because of the assumptions on E. In this case, each component E_{k,ij} is considered separately when the algorithm is applied, as indicated by the first column of the table. The example below illustrates the application of the algorithm in Table 5-1 to a case where r_k = 1, k = 1, 2.
Example 5.3. Consider the following perturbation matrix [25], which is a function of two parameters:

    E = p_1 [ 0   0   0   0 ] + p_2 [ 0   0    0   0  ]
            [ 10  0  -10  0 ]       [ 0  -10   0   10 ]
            [ 0   0   0   0 ]       [ 0   0    0   0  ]
            [ 1   0  -1   0 ]       [ 0   1    0  -1  ]

Since E is a 4 × 4 matrix and there are 2 parameters, L ∈ R^{4×2}, R ∈ R^{2×4} and D = diag{p_1, p_2}. Table 5-2 illustrates the choice of the nonnull elements of L and R, obtained through the application of the algorithm in the previous table.

Table 5-2. Nonnull elements of L, R

    Perturbation element    L_ik (chosen)    D_kk         R_kj
    E_{1,21} = 10 p_1       l_21 = 10        D_11 = p_1   r_11 = 1
    E_{2,22} = -10 p_2      l_22 = -10       D_22 = p_2   r_22 = 1
    E_{1,23} = -10 p_1      l_21 = 10        D_11 = p_1   r_13 = -1
    E_{2,24} = 10 p_2       l_22 = -10       D_22 = p_2   r_24 = -1
    E_{1,41} = p_1          l_41 = 1         D_11 = p_1   r_11 = 1
    E_{2,42} = p_2          l_42 = 1         D_22 = p_2   r_22 = 1
    E_{1,43} = -p_1         l_41 = 1         D_11 = p_1   r_13 = -1
    E_{2,44} = -p_2         l_42 = 1         D_22 = p_2   r_24 = -1
A decomposition of E is, therefore:

    E = L D R = [ 0    0   ] [ p_1  0   ] [ 1  0  -1   0 ]
                [ 10  -10  ] [ 0    p_2 ] [ 0  1   0  -1 ]
                [ 0    0   ]
                [ 1    1   ]

Since D has neither complex nor repeated elements, it is a particular case of the block-diagonal class of uncertainty given by (2.24), whose associated block structure has m_C = 0 and m_c = 0, namely K(m) = (k_1, ..., k_m), k_i = 1, ∀i. Thus, a class of admissible perturbations can be defined following the structure specified by (2.25), namely:

    X_{K_R}(δ) = { D : D = diag{p_1, ..., p_m}, p_i ∈ [−α_i, α_i], max_k α_k ≤ δ }    (5.11)

In view of this characterization of the class of allowable perturbations, the questions addressed by stability analysis of ẋ(t) = (A + LDR)x(t) can be formulated as:

1. Given A and E_k, k = 1, ..., m, find δ such that (A + LDR) remains stable, ∀D ∈ X_{K_R}(δ);

2. Given A, E_k and α_k, k = 1, ..., m, check whether or not the system is stable.

These questions are addressed in the following sections.

5.3 Problem Formulation

Stability of the nominal state space system ẋ(t) = Ax(t) concerns the behavior of the time response to a perturbation of the equilibrium point 0. Let x(t_0+) = x_0 be the
perturbation of the equilibrium point. Then, the time response is given by:

    x(t) = e^{At} x_0 ,  t ≥ 0    (5.12)

Clearly, x(t) = 0, ∀t ≥ 0, as long as x_0 = 0. If x_0 ≠ 0, x(t) → 0 exponentially if and only if the eigenvalues of A have strictly negative real parts. Since A is a constant matrix, the Laplace transform can be applied to the system equation with initial condition x_0, resulting in:

    X(s) = (sI_n − A)^{-1} x_0    (5.13)

The matrix function (sI_n − A)^{-1} is the resolvent of A [32]. Asymptotic stability of the nominal matrix A, characterized by Re[λ_i(A)] < 0, ∀i, implies stability of the associated resolvent, characterized by all the poles of the resolvent being in the LHP. Since no mode cancellation occurs in the resolvent matrix, the converse is also true.

With the uncertainty decomposition given by (5.2), the perturbed state space system becomes:

    ẋ(t) = (A + LDR)x(t)    (5.14)

The parameters p_k, k = 1, ..., m may take any value in the real intervals [−α_k, α_k], but the values are assumed to remain constant during the time interval of interest. With this assumption, all matrices in (5.14) are constant, thus allowing the application of the Laplace transform, which yields:

    X(s) = [sI_n − (A + LDR)]^{-1} x_0 ≜ H_p(s) x_0    (5.15)

where H_p(s) is the resolvent matrix of the perturbed system. The next lemma shows that stability of H_p(s) is equivalent to asymptotic stability of the perturbed matrix (A + LDR).
Lemma 5.4. The following statements are equivalent, regarding equations (5.14) and (5.15):

1. (A + LDR) is asymptotically stable;
2. H_p(s) is stable.

Proof. (1) ⟹ (2) Assume that (A + LDR) is Hurwitz. Then, the characteristic polynomial of H_p(s) has all its roots in the open LHP, and hence H_p(s) is stable.

(2) ⟹ (1) Assume H_p(s) is stable, and, without loss of generality, that (A + LDR) has distinct eigenvalues. Since (A + LDR) is square, it has a modal matrix whose columns are eigenvectors, whence linearly independent. Letting W be the modal matrix and Λ = diag{λ_i}, i = 1, ..., n, the perturbed matrix has the characteristic decomposition (A + LDR) = W Λ W^{-1}, and the resolvent matrix has the dyadic expansion:

    [sI_n − (A + LDR)]^{-1} = Σ_{i=1}^{n} w_i v_i^T / (s − λ_i)

where w_i is a column of W and v_i^T is a row of W^{-1}. Since none of the w_i, i = 1, ..., n is null, because they are eigenvectors, none of the terms of the dyadic summation is null. Consequently, no cancellations occur in the resolvent matrix of the perturbed system, and all the eigenvalues of (A + LDR) are poles of H_p(s). Since H_p(s) is stable by assumption, all the eigenvalues of (A + LDR) must lie in the open LHP. Therefore, stability of H_p(s) implies that (A + LDR) is Hurwitz. □

Now, let us obtain an expression for X(s) in the presence of uncertainty. Defining:

    y_Δ ≜ R x(t) ,  u_Δ ≜ D y_Δ    (5.16)
and substituting in (5.14), the following equations are obtained:

    ẋ(t) = A x(t) + L u_Δ ;  x(t_0) = x_0
    y_Δ = R x(t) ;  u_Δ = D y_Δ

Considering u_Δ(t) and x_0 as inputs, the application of the Laplace transform yields:

    X(s) = [sI_n − A]^{-1} x_0 + [sI_n − A]^{-1} L u_Δ(s)    (5.17)

    y_Δ(s) = R X(s) ;  u_Δ(s) = D y_Δ(s)    (5.18)

The diagrams in Figure 5-1 below represent this system.

Figure 5-1. State space system with parametric uncertainty in diagonal form: a) equivalent frequency-domain representation; b) the M-Δ form.

Regarding y_Δ(s) and X(s) as outputs, one obtains the matrix representation:

    [ y_Δ(s) ]   [ R(sI_n − A)^{-1}L    R(sI_n − A)^{-1} ] [ u_Δ(s) ]
    [        ] = [                                       ] [        ]    (5.19)
    [ X(s)   ]   [ (sI_n − A)^{-1}L     (sI_n − A)^{-1}  ] [ x_0    ]

where the partitioned matrix is denoted M, with blocks M_11, M_12, M_21, M_22. From equations (5.18) and (5.19), it follows that

    X(s) = M_22 x_0 + M_21 u_Δ = M_22 x_0 + M_21 D y_Δ    (5.20)
    y_Δ = M_11 u_Δ + M_12 x_0 = [I_m − M_11 D]^{-1} M_12 x_0    (5.21)

Substituting (5.21) in (5.20) yields:

    X(s) = [M_22 + M_21 D (I_m − M_11 D)^{-1} M_12] x_0 ≜ F_u(M, D) x_0    (5.22)

The definitions in (5.16) imply that the signals delivered by the perturbations are regarded as additional inputs to the nominal system. The following lemma demonstrates that this approach does not alter the dynamic behavior of the perturbed system. For simplicity, let N ≜ (sI_n − A) in equation (5.15), which becomes

    X(s) = [I_n − N^{-1} L D R]^{-1} N^{-1} x_0    (5.23)

Lemma 5.5. The linear fractional transformation F_u(M, D) of equation (5.22) is an equivalent representation of H_p(s) of equation (5.23).

Proof. The substitution D → −D in equations (5.23) and (5.22) yields:

    H_p(s) = [I_n + N^{-1} L D R]^{-1} N^{-1}
    F_u(M, D) = M_22 − M_21 D (I_m + M_11 D)^{-1} M_12

Now, using M_11, M_12, M_21, M_22 from (5.19), the last equation becomes

    F_u(M, D) = N^{-1} − N^{-1}LD[I_m + RN^{-1}LD]^{-1}RN^{-1}
              = N^{-1} − N^{-1}LD[I_m − R(LDR + N)^{-1}LD]RN^{-1}
              = N^{-1} − N^{-1}LDRN^{-1} + N^{-1}LDR(LDR + N)^{-1}LDRN^{-1}
              = N^{-1} − N^{-1}EN^{-1} + N^{-1}E(N + E)^{-1}EN^{-1}
              = {I_n − N^{-1}[E − E(N + E)^{-1}E]}N^{-1}
              = {I_n − N^{-1}E[I − (N + E)^{-1}E]}N^{-1}
Now, [I − (N + E)^{-1}E] = [I + N^{-1}E]^{-1}. Substituting in the last equation, it becomes:

    F_u(M, D) = {I_n − N^{-1}E(I + N^{-1}E)^{-1}}N^{-1}
              = {[(I_n + N^{-1}E) − N^{-1}E](I + N^{-1}E)^{-1}}N^{-1}
              = {(I_n + N^{-1}E)^{-1}}N^{-1}
              = H_p(s)  □

Note that the above manipulations do not require invertibility of either E, L or R. Therefore, the equivalence between F_u(M, D) and H_p(s) applies regardless of the ranks of the matrices E, L and R.

The results of Lemmas 5.4 and 5.5 combine to show that (A + LDR) is Hurwitz, and hence ẋ(t) = (A + LDR)x(t) is asymptotically stable, if and only if F_u(M, D) is stable. Therefore, stability of the perturbed state space system can be analyzed through the application of frequency-domain techniques to F_u(M, D).

5.4 Necessary and Sufficient Conditions for Robust Stability

The following theorem states the necessary and sufficient condition for stability of the linear fractional transformation; consequently, it establishes a necessary and sufficient condition for asymptotic stability of (A + LDR).

Theorem 5.3. Consider the system ẋ(t) = (A + LDR)x(t), where A_{n×n} is Hurwitz, L and R are constant matrices, and D ∈ X_{K_R}(δ). Let M_11(s) = R(sI_n − A)^{-1}L. Then, the
system is asymptotically stable if and only if

    ρ_R[M_11(s)D] < 1 ,  ∀s, ∀D ∈ X_{K_R}(δ)    (5.24)

Proof. By Lemmas 5.4 and 5.5, (A + LDR) is asymptotically stable if and only if F_u(M, D) = [M_22 − M_21 D (I_m + M_11 D)^{-1} M_12] is stable, ∀s, ∀D ∈ X_{K_R}(δ). Taking into account that D is a real, diagonal matrix, the derivation which led to equation (3.36) permits the conclusion that F_u(M, D) is stable if and only if

    ρ_R[M_11(s)D] < 1 ,  ∀s, ∀D ∈ X_{K_R}(δ)  □

From equation (5.24), the singular-value stability condition

    σ̄[M_11(s)] δ < 1 ,  ∀s    (5.25)

is obtained; as discussed in Chapter 3, it is only sufficient, because D is structured. Certainly, the μ-function theory can be exploited in the derivation of a necessary and sufficient condition for stability of F_u(M, D). The following theorem states this condition.

Theorem 5.4. Let A, L, D, R and M_11(s) be as previously defined. Then, the perturbed state space system ẋ(t) = (A + LDR)x(t) is asymptotically stable, ∀D ∈ X_{K_R}(δ), if and only if

    sup_s μ[M_11(s)] < 1/δ    (5.26)

Proof. Applying the result of Theorem 3.4, one has that the linear fractional transformation is stable, ∀D ∈ X_{K_R}(δ), if and only if

    μ[M_11(s)] δ < 1, ∀s  ⟺  sup_s μ[M_11(s)] < 1/δ    (5.27)
The last expression comes directly from the definition of the μ-function applied to the fundamental stability condition, namely det[I + M_11(s)D] ≠ 0, ∀s, ∀D. It is shown next that (5.27) is necessary and sufficient for the fundamental condition.

Sufficiency. Assume sup_s μ(M_11) < 1/δ. Then,

    μ(M_11) δ < 1, ∀s  ⟹  μ(M_11) σ̄(D) < 1 ,  ∀s, ∀D ∈ X_{K_R}(δ)

By (3.45), ρ(DM_11) ≤ μ(M_11) σ̄(D) < 1, ∀s, ∀D. By the left inequality in (3.49),

    ρ_R(DM_11) ≤ ρ(DM_11) < 1 ,  ∀s, ∀D

which implies that λ_i(DM_11) ≠ −1, ∀s, ∀D, and det[I_n + M_11(s)D] ≠ 0, ∀s, ∀D. Therefore, sufficiency is proved:

    sup_s μ(M_11) < 1/δ  ⟹  det[I_n + M_11(s)D] ≠ 0 ,  ∀s, ∀D

Necessity. Assume that det[I_n + M_11(s)D] ≠ 0, ∀s, ∀D, and suppose that, for some D° ∈ X_{K_R}(δ), with σ̄(D°) = δ°, sup_s ρ[M_11(s)D°] ≥ 1. Then, ∃ s̄ such that, by (3.45),

    1 ≤ ρ[M_11(s̄)D°] ≤ μ[M_11(s̄)] σ̄(D°) = μ[M_11(s̄)] δ°

By definition of μ(·), μ[M_11(s̄)] δ° ≥ 1 implies det[I + M_11(s̄)D°] = 0, while σ̄(D°) ≤ δ. Therefore, λ_i[M_11(s̄)D°] = −1, for some i, which contradicts the assumption that det[I_n + M_11(s)D] ≠ 0, ∀s, ∀D. Thus, necessity is proved:

    det[I_n + M_11(s)D] ≠ 0, ∀s, ∀D  ⟹  sup_s μ[M_11(s)] < 1/δ  □
This theorem establishes that:

    (A + LDR) asymptotically stable, ∀D ∈ X_{K_R}(δ)  ⟺  μ[M_11(s)] < 1/δ, ∀s    (5.28)

However, in general only upper and lower bounds for the μ-function can be computed. Equation (3.49) shows that ρ[M_11(s)] ≤ μ[M_11(s)] ≤ σ̄[M_11(s)], ∀s. The lower bound is attained when the diagonal perturbation is composed of a single repeated complex scalar, while the upper bound corresponds to a perturbation constituted by a single unstructured complex block. The next lemma shows that, in the case of real structured uncertainty, a lower bound on μ[M_11(s)] is given by the real spectral radius.

Lemma 5.6. Let D ∈ X_{K_R}(δ). Then, ∀s,

    ρ_R[M_11(s)] ≤ μ[M_11(s)]    (5.29)

Proof. To obtain μ[M_11(s)] for the real diagonal uncertainty, one must find

    min { σ̄(D) : det[I + M_11(s)D] = 0 }

Initially, let D = diag{d, ..., d} = d I, with d real. Then, det[I + d M_11(s)] = 0 requires

    d λ_i[M_11(s)] = −1 ,  for some i

Since d ∈ R, the equality can happen if and only if M_11(s) has real eigenvalues. Therefore, letting λ_Ri[M_11(s)] designate a real eigenvalue,

    d λ_Ri[M_11(s)] = −1 ,  for some i
so that

    d = −1 / λ_Rj[M_11(s)] ,  for some j

    |d|_min = 1 / max_j |λ_Rj[M_11(s)]| = 1 / ρ_R[M_11(s)]

Inequality (5.29) follows because, for perturbations not restricted to a single repeated scalar d, it is possible to have det[I + M_11(s)D] = 0 for σ̄(D) < |d|_min. □

The exact computation of μ[M_11(s)] for real parametric perturbations requires a search over the parameter space, which becomes computationally unwieldy as the number of parameters increases. The algorithm proposed by De Gaston and Safonov [10] for the computation of the multivariable stability margin k_m incorporates techniques which reduce the computational requirements of the search. Since k_m and the μ-function are related by k_m = 1/μ, that algorithm can be used to compute parameter bounds such that the stability condition based on the μ-function is satisfied. An application of the algorithm to a four-dimensional system with 3 parameters has been reported [10], and it has also been anticipated that the computation time is expected to increase by a factor of 2 for each new uncertainty added to a given robustness problem. However, no assessment of the performance of the algorithm in the presence of a larger number of parameters has been given.

The conservatism of the singular-value condition given by (5.25) can be reduced through scaling. This is the subject of the next section.
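The unscaled singular-value condition can be sketched numerically. The data below (a 2-by-2 Hurwitz A with the L, R pattern of Example 5.1) are illustrative assumptions, not matrices from the text, and the supremum over s = jω is approximated on a finite grid:

```python
import numpy as np

# Assumed illustrative data: a Hurwitz nominal matrix and the L, R
# pattern of Example 5.1 (not an example taken from the text).
A = np.array([[-2.0, 0.0], [1.0, -3.0]])
L = np.array([[1.0, 0.0], [1.0, 1.0]])
R = np.array([[1.0, 1.0], [0.0, 1.0]])

def M11(w):
    # M11(jw) = R (jw I - A)^{-1} L
    return R @ np.linalg.inv(1j * w * np.eye(2) - A) @ L

# Sufficient condition (5.25): stability holds for all real diagonal D
# with sigma_max(D) < delta = 1 / sup_w sigma_max(M11(jw));
# the sup is approximated on a frequency grid.
grid = np.concatenate(([0.0], np.logspace(-3, 3, 600)))
delta = 1.0 / max(np.linalg.svd(M11(w), compute_uv=False)[0] for w in grid)

# Spot check: every sign pattern of D at 95% of the bound leaves
# A + L D R Hurwitz.
for s1 in (-1, 1):
    for s2 in (-1, 1):
        D = 0.95 * delta * np.diag([float(s1), float(s2)])
        assert np.linalg.eigvals(A + L @ D @ R).real.max() < 0
```

For this data the grid supremum occurs at ω = 0, giving δ of roughly 0.83, while the smallest truly destabilizing equal perturbation p_1 = p_2 is near 0.84, so the sufficient bound happens to be only mildly conservative here.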
5.5 Sufficient Conditions for Robust Stability

5.5.1 Stability Domain in the Parameter Space

In the last section, two frequency-domain necessary and sufficient conditions for stability of the perturbed state space system ẋ(t) = (A + LDR)x(t) were presented, namely the structured singular-value condition given by equation (5.26),

    sup_s μ[M_11(s)] < 1/δ

and the spectral radius condition given by equation (5.24),

    sup_s { sup_D ρ_R[M_11(s)D] } < 1

where D ∈ X_{K_R} is a diagonal matrix containing the real parameters of the system, and M_11(s) = R(sI_n − A)^{-1}L. The parameters belong to symmetric real intervals, namely −α_k ≤ p_k ≤ α_k, by assumption.

The objective of stability analysis is either to check whether the nominal system remains stable for a given set of parametric intervals, or to compute upper bounds on parametric intervals for which a given nominally stable system remains stable. Unfortunately, computation of allowable intervals by directly using the above stability conditions would require a search over the parameter space. The computational burden of this approach increases exponentially with the number of parameters, and would eventually become unfeasible. In this context, it is of interest to acquire stability conditions which, although conservative, are computationally tractable. The scaled singular-value stability conditions introduced in Chapter 3 satisfy these requirements. To simplify references, the relevant sufficient conditions are collected in Table 5-3 below.
As seen in Chapter 3, the similarity scaling conditions of equations (3.57), (3.71) and (3.75) are obtained under the assumption of complex diagonal uncertainty. Such an assumption is not explicitly needed in the derivation of the nonsimilarity conditions. However, the terms σ̄[S_2^{-1} P_Δ S_1^{-1}] and σ̄[S_2π^{-1} P_Δ S_1π^{-1}], which show up respectively in equations (3.61) and (3.74), do not discriminate between diagonal real and complex uncertainties having the same element bounds P_Δ. The application of scaled singular-value stability conditions in the presence of diagonal real uncertainty implies that the uncertainty description is relaxed. The results obtained for D ∈ X_{K_R} could as well be obtained for complex uncertainty in a class X_{K_C}, defined as

    X_{K_C}(δ) ≜ { D_c : D_c = diag{c_k}, c_k = c̄_k e^{jθ_k}, c̄_k ∈ [0, α_k], 0 ≤ θ_k ≤ 2π, ∀k }    (5.30)

Table 5-3. Sufficient stability conditions

    Equation   Stability condition                                                  Scaling
    3.57       inf_S σ̄[S M_Δ(s) S^{-1}] < 1, ∀s                                     Optimal similarity
    3.61       inf_{S1,S2} { σ̄[S_1 M_11(s) S_2] σ̄[S_2^{-1} P_Δ S_1^{-1}] } < 1, ∀s  Optimal nonsimilarity
    3.69       π[M_11 P_Δ] < 1, ∀s                                                   Perron radius
    3.71       σ̄[S_π M_11(s) P_Δ S_π^{-1}] < 1, ∀s                                   Perron similarity
    3.74       σ̄[S_1π M_11(s) S_2π] σ̄[S_2π^{-1} P_Δ S_1π^{-1}] < 1, ∀s               Perron nonsimilarity
    3.75       σ̄[S_o M_11(s) P_Δ S_o^{-1}] < 1, ∀s                                   Osborne similarity

The adoption of the class X_{K_C} means that the domain [−α_k, α_k] ⊂ R of the parameter p_k is replaced by a disc of radius r_k ≤ α_k in the complex plane. Therefore, the scaled singular-value stability conditions are more conservative when the uncertainty is known to be real than they are for the case of complex uncertainty.
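The Osborne similarity scaling appearing in Table 5-3 can be sketched as an iterative balancing of row and column norms; the test matrix below is an arbitrary assumption and the fixed sweep count is a simplification of the usual convergence test:

```python
import numpy as np

def osborne(M, sweeps=50):
    """Diagonal similarity scaling: for each index i, equalize the
    off-diagonal norms of row i and column i of S M S^{-1}."""
    A = M.astype(float).copy()
    d = np.ones(M.shape[0])
    for _ in range(sweeps):
        for i in range(M.shape[0]):
            r = np.linalg.norm(np.delete(A[i, :], i))
            c = np.linalg.norm(np.delete(A[:, i], i))
            if r > 0 and c > 0:
                f = np.sqrt(c / r)   # makes both norms equal sqrt(r*c)
                d[i] *= f
                A[i, :] *= f
                A[:, i] /= f
    return A, d

M = np.array([[0.0, 4.0, 0.0], [1.0, 0.0, 2.0], [0.0, 0.5, 0.0]])
B, d = osborne(M)
assert np.allclose(B, np.diag(d) @ M @ np.diag(1.0 / d))  # similarity holds
assert np.linalg.norm(B, 2) < np.linalg.norm(M, 2)        # norm is reduced
```

For this matrix the balanced form approaches [[0, 2, 0], [2, 0, 1], [0, 1, 0]], whose spectral norm (about 2.24) is well below the unscaled norm of about 4.0, which is exactly the reduction the scaled conditions of Table 5-3 exploit.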
However, this intrinsic conservatism does not bar the use of scaled singular-value conditions, since there is no general, computationally efficient method for obtaining necessary and sufficient stability conditions in the presence of real perturbations. Furthermore, the conservative results obtained can be used for numerical comparisons with other conservative results. Although some other techniques which result in sufficient stability conditions are available, no assessment has been found in the literature of how these conditions compare among themselves. For instance, the widely used Lyapunov direct method yields only sufficient results. As discussed in Chapter 4, the conservatism of these results is too large, unless one resorts to a numerical optimization, which involves nonconvex problems.

Recall that, in Chapter 3, the stability condition using nonsimilarity scaling was derived without assuming a diagonal structure for the uncertainty. In the particular problem discussed in this chapter, the matrix D in the decomposition E = LDR is diagonal; consequently, there is no advantage in using nonsimilarity scaling instead of similarity scaling. Because of this, only similarity scaling will be considered from now on.

As demonstrated by Theorems 5.1 and 5.2, if, for some k, the matrix E_k, which accounts for the structure of the perturbation due to the parameter p_k, has rank r_k > 1, then the matrix D has repeated diagonal elements p_k. In this case, alternative decompositions in the form E = LΔR, where Δ is not diagonal, can be obtained. With such a decomposition, tighter parameter upper bounds are in general computed using nonsimilarity scaling, which has more degrees of freedom than similarity scaling. For a nondiagonal Δ containing the system parameters, however, the relationship between the norm of Δ and the parameters is not clear. On the other hand, a decomposition where D is diagonal has a very favorable property. Once the allowable upper bound on the norm of
P_Δ has been computed, the determination of the stability domain in the parameter space is straightforward, because parameter upper bounds are obtained from the product of the norm upper bound by the diagonal matrix P_Δ.

To explore this property, assume that the actual D is diagonal, satisfying

    |D_ii| ≤ α ,  α > 0 ,  ∀i    (5.31)

Introducing this assumption in the optimal similarity scaling result of equation (3.57), given in the first row of Table 5-3, the following condition is obtained on the quantity α:

    α_S < 1 / inf_S σ̄[S (M_11(s) P_Δ) S^{-1}] ,  ∀s    (5.32)

In a similar manner, the following suboptimal conditions are obtained:

    α_π < 1 / σ̄[S_π (M_11(s) P_Δ) S_π^{-1}] ,  ∀s    (5.33)

where S_π is the Perron scaling for M_11 P_Δ, and

    α_o < 1 / σ̄[S_o (M_11(s) P_Δ) S_o^{-1}] ,  ∀s    (5.34)

where S_o is the Osborne scaling for M_11(s) P_Δ. The quantity α in these expressions is analogous to the multivariable stability margin; however, it is a conservative indicator of stability. The stability domain in the parameter space is obtained by multiplying α by P_Δ. Therefore, the stability domain can be expressed as

    S_dα = { p : |p_k| ≤ α P_Δkk ,  k = 1, ..., m }    (5.35)
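A margin of this kind and the resulting domain can be sketched with the Perron-radius condition of Table 5-3. The data below (Hurwitz A, the L, R pattern of Example 5.1, and the weighting P_Δ) are illustrative assumptions, and the frequency supremum is approximated on a grid:

```python
import numpy as np

# Assumed illustrative data (not from the text): Hurwitz A, the L, R
# pattern of Example 5.1, and an assumed weighting P_Delta.
A = np.array([[-2.0, 0.0], [1.0, -3.0]])
L = np.array([[1.0, 0.0], [1.0, 1.0]])
R = np.array([[1.0, 1.0], [0.0, 1.0]])
P = np.diag([1.0, 0.5])  # P_Delta

def M11(w):
    return R @ np.linalg.inv(1j * w * np.eye(2) - A) @ L

# Perron-radius style condition: alpha < 1 / sup_w rho(|M11(jw)| P_Delta),
# with the sup approximated on a frequency grid.
grid = np.concatenate(([0.0], np.logspace(-3, 3, 600)))
rho = max(np.abs(np.linalg.eigvals(np.abs(M11(w)) @ P)).max() for w in grid)
alpha = 1.0 / rho

# Stability domain (5.35): |p_k| <= alpha * P[k, k]; check its vertices
# at 95% of the computed bound.
for s1 in (-1, 1):
    for s2 in (-1, 1):
        D = 0.95 * alpha * np.diag([s1 * P[0, 0], s2 * P[1, 1]])
        assert np.linalg.eigvals(A + L @ D @ R).real.max() < 0
```

Note how the weighting enters twice: once inside the scaled condition that produces α, and once when α is multiplied back by P_Δ to recover the hyperrectangle in parameter space.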
5.5.2 Procedure for Computation

The computation of a sufficient stability domain in the space of parameters is described in the following procedure.

Procedure 5.1. Computation of the stability domain:

1. Given A, E_k, α_k, k = 1, ..., m, find the matrices L and R of the decomposition E = Σ p_k E_k = LDR, where D = diag{p_k};

2. Compute the associated matrix M_11(s) = R(sI_n − A)^{-1}L;

3. Compute M_Δ(s) = M_11(s) P_Δ; if the matrix P_Δ is not given, assume P_Δ = I;

4. Compute α, using one of the expressions (5.32), (5.33) or (5.34);

5. Compute S_dα using definition (5.35).

An application of this procedure is given in the following section.

5.6 Numerical Application

Example 5.4. Let us consider a unity feedback system, as seen in Figure 2-1(a), where the SISO nominal plant and the controller are modeled, respectively, by the following transfer functions:

    G_0(s) = k_g0 / [s(s + β_10)(s + β_20)] ;  K(s) = (s + z_1)/(s + γ)    (5.36)

The nominal values of the parameters are: k_g0 = 800, β_10 = 4 rad/s, β_20 = 6 rad/s, z_1 = 2 rad/s and γ = 10 rad/s. The plant has three uncertain parameters, namely the gain k_g and the
poles β_1 and β_2, whose uncertain representation is:

    k_gp = k_g0 (1 + p_1) ,  |p_1| ≤ 0.10
    β_1p = β_10 + p_2 ,      |p_2| ≤ 0.20
    β_2p = β_20 + p_3 ,      |p_3| ≤ 0.30

Nondestabilizing parameter upper bounds were computed by De Gaston and Safonov [10]. The algorithm used there applies to frequency-domain models, and yields exact nondestabilizing parameter bounds, thus generating a necessary and sufficient condition for stability. Considering the range 0 ≤ ω ≤ 10 rad/s, the following worst case bounds, corresponding to ω = 8.22 rad/s, were reported:

    |p_1| < 0.344 ,  |p_2| < 0.688 ,  |p_3| < 1.032    (5.37)
Applying Procedure 5.1, the diagonal uncertainty matrix is D = diag{p_1, p_2, p_3}, and the perturbation, whose nonnull entries are −p_3, −p_2 and 800 p_1, is decomposed as E = LDR with constant structure matrices L and R. The transfer matrix M_11(s) associated with the diagonal uncertainty description is:

    M_11(s) = R(sI_4 − A)^{-1}L
            = 1/[s(s+4)(s+6)(s+10) + 800(s+2)] ×
              [ 800(s+2)         800(s+2)         800(s+2)(s+4) ]
              [ s(s+6)(s+10)    −s(s+6)(s+10)     800(s+2)      ]    (5.39)
              [ s(s+10)          s(s+10)         −s(s+4)(s+6)   ]

and the magnitude bounds on the perturbation are given by

    P_Δ = diag{0.1, 0.2, 0.3}

Computation of the optimal similarity scaling bound of equation (5.32), over the same range 0 ≤ ω ≤ 10 rad/s, yields:

    max_ω { inf_S σ̄[S M_Δ(s) S^{-1}] } = 0.4575    (5.40)

with the maximum occurring at the frequency ω = 7.27 rad/s. This result indicates the existence of a stability margin

    α_m = 1 / max_ω { inf_S σ̄[S M_Δ(s) S^{-1}] } = 1/0.4575 = 2.1857    (5.41)

which guarantees that stability is preserved for the parameter variations:

    −0.2186 < p_1 < 0.2186 ;  −0.4371 < p_2 < 0.4371 ;  −0.6557 < p_3 < 0.6557    (5.42)
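A rough numerical cross-check of this example is possible by evaluating (5.39) directly. The entry signs below follow the printed matrix and are realization-dependent, so this is a sketch rather than a definitive transcription; it computes only the unscaled singular-value margin, which is far more conservative than the optimally scaled margin 2.1857 of (5.41):

```python
import numpy as np

# Entries transcribed from (5.39); the signs are realization-dependent
# assumptions. Only entry magnitudes matter for the checks below.
def M11(s):
    d = s * (s + 4) * (s + 6) * (s + 10) + 800 * (s + 2)
    N = np.array([
        [800 * (s + 2),           800 * (s + 2),           800 * (s + 2) * (s + 4)],
        [s * (s + 6) * (s + 10), -s * (s + 6) * (s + 10),  800 * (s + 2)],
        [s * (s + 10),            s * (s + 10),           -s * (s + 4) * (s + 6)],
    ])
    return N / d

P = np.diag([0.1, 0.2, 0.3])  # magnitude bounds P_Delta
grid = np.concatenate(([0.0], np.logspace(-2, 2, 800)))
sup = max(np.linalg.svd(M11(1j * w) @ P, compute_uv=False)[0] for w in grid)
margin = 1.0 / sup

# At w = 0 the (1,3) entry alone contributes 800*2*4/1600 * 0.3 = 1.2,
# so the unscaled margin cannot exceed 1/1.2, roughly 0.83: similarity
# scaling is essential to approach the reported 2.1857.
assert 0 < margin < 0.9
```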
Compared to the exact results in (5.37), the bounds above are seen to be conservative, as already expected. They reach about 63.5% of the true allowable bounds. Additional results for this example, obtained with the LDM and also with the technique of this chapter associated with different scaling techniques, will be given in Chapter 6. Indeed, it will be seen that results obtained with the LDM have almost the same degree of conservatism, but involve a computationally demanding nonconvex optimization.

It is interesting to point out here that, if the original transfer matrix model is manipulated to put the uncertainty in diagonal form, the transfer matrix from the uncertainty outputs to its inputs equals the matrix M_11(s) given in (5.39). This would not necessarily happen if another realization were adopted. For instance, the same numerical results were obtained by applying the procedure above to a different minimal realization.

5.7 Some Extensions of Previous Results

5.7.1 An Alternative Parameter Norm Bound

A suboptimal sufficient stability condition using Perron scaling is given by expression (5.33). Since α_π is analogous to a multiloop stability margin, the allowable stability domain in the space of plant parameters is obtained as in equation (5.35), that is

    S_dα_π = { p : |p_k| ≤ α_π P_Δkk ,  k = 1, ..., m }    (5.43)

In general, S_dα_π defines a hyperrectangle in the parameter space, whose sides are proportional to the elements of P_Δ. Therefore, if no a priori information on parameter bounds is available, P_Δ can be given the role of weighting the parameters, so that stability domains with different ratios among sides can be computed. For example, letting P_Δ = I_m
yields the computation of a hypercube. Notice that the information obtained from the computation of α_π is applied to all the parameters when S_Δ,α_π is computed.

Now, consider a robustness problem with 2 parameters, so that p = [p1 p2]^T, and suppose that for p1 in some range, say β_l ≤ p1 ≤ β_u, stability is not affected by values of p2 in a given range, say for example γ_l ≤ p2 ≤ γ_u. Furthermore, assume that the perturbed matrix becomes unstable for p1 = β_l and p1 = β_u
be written as

max_k { Σ_{j=1..m} |[P_Δ M_π(s)]_kj| } < 1     (5.47)

But multiplying M_π on the left by P_Δ is equivalent to multiplying each of its rows by the term P_Δ,kk = p_k. Therefore, (5.47) is equivalent to

max_k { Σ_{j=1..m} |p_k| |[M_π(s)]_kj| } = max{ |p1| Σ_j |[M_π(s)]_1j| , ... , |p_m| Σ_j |[M_π(s)]_mj| } < 1     (5.48)

Thus, sufficient conditions for stability are given by:

|p_k| < 1 / ( Σ_{j=1..m} |[M_π(s)]_kj| ),  k = 1,...,m,  ∀s     (5.49)

Obviously, optimal similarity scaling can be considered in the derivation above instead of Perron scaling. An example will be shown in the next chapter where the parameter bounds of (5.49) are advantageous.

5.7.2 Inclusion of Unstructured Perturbations

In the treatment of perturbed systems in the previous sections of this chapter, only the possibility of parametric perturbations was considered, which can be accounted for by the model ẋ(t) = (A + Σ_{k=1..m} p_k E_k) x(t). The perturbations to the entries of the matrix A represent perturbations to the coefficients of the differential equations describing the system behavior. This perturbed model cannot account for the effects of neglected high order dynamics. If such effects have been excluded from the nominal model, the order of the actual system is unknown; that is, the actual dimension of the matrix A is unknown in the presence of high order dynamics. The presence of unmodeled high order perturbations is likely to worsen the stability properties of the nominal system, thus changing its robustness to parametric uncertainty. A pertinent question in this context is whether or not a reliable stability domain in the
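The row-sum bounds of (5.49) are straightforward to evaluate on a frequency grid. The following sketch is illustrative only; the matrix M_π and the grid are placeholders, not data taken from the dissertation's examples.

```python
import numpy as np

def rowsum_parameter_bounds(M_of_s, freqs):
    """Sufficient per-parameter bounds |p_k| < 1 / max_s sum_j |[M_pi(s)]_kj|,
    as in (5.49), evaluated over a finite frequency grid."""
    worst = None
    for w in freqs:
        rs = np.abs(M_of_s(1j * w)).sum(axis=1)   # row sums at this frequency
        worst = rs if worst is None else np.maximum(worst, rs)
    return 1.0 / worst                             # one bound per parameter

# illustrative constant M_pi (placeholder data)
M_const = np.array([[0.5, 0.25],
                    [0.1, 0.4]])
bounds = rowsum_parameter_bounds(lambda s: M_const, [0.1, 1.0, 10.0])
# row sums are 0.75 and 0.5, so bounds are [1/0.75, 1/0.5]
```

Unlike a single norm bound, this condition yields a different bound for each parameter, which is the feature exploited in Example 6.11.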
parameter space can be obtained by the technique proposed in the previous sections, despite the presence of unstructured dynamic perturbations. The ensuing development shows that, under certain assumptions, the technique can be applied even in this case.

Since high order perturbations are themselves dynamic systems, let the actual system be represented by the model

[ ẋ(t) ]   ( [ A11  A12 ]   [ E  0 ] ) [ x(t) ]
[ ż(t) ] = ( [ A21  A22 ] + [ 0  0 ] ) [ z(t) ]

where the nominal state vector x(t) is supplemented by z(t), which is assumed to have at most dimension q. Then, with an adequate partition of the matrix A, the block A11 represents the nominal matrix, which is subject to the parametric perturbation E. Therefore, one has the system of equations

ẋ(t) = A11 x + E x + A12 z
ż(t) = A21 x + A22 z

Assuming that x(0) = x0 and z(0) = 0, application of the Laplace transform yields

Z(s) = [sI_z - A22]^(-1) A21 X(s)
X(s) = [sI_x - A11]^(-1) x0 + [sI_x - A11]^(-1) E X(s) + [sI_x - A11]^(-1) A12 Z(s)

from which it follows that

X(s) = [sI_x - A11]^(-1) x0 + [sI_x - A11]^(-1) E X(s)
       + [sI_x - A11]^(-1) A12 [sI_z - A22]^(-1) A21 X(s)     (5.50)

Assuming as before that the parametric uncertainty can be decomposed as E = LDR, and defining

N1 ≜ [sI_x - A11]^(-1);  Δ2 ≜ A12 [sI_z - A22]^(-1) A21
the procedure described in Section 5.2 yields the representation

[ y_Δ1(s) ]   [ R N1 L   R N1   R N1 ] [ u_Δ1(s) ]
[ y_Δ2(s) ] = [ N1 L     N1     N1   ] [ u_Δ2(s) ]     (5.51)
[ X(s)    ]   [ N1 L     N1     N1   ] [ x0      ]

with M denoting the 3x3 block matrix above. Letting Δ ≜ diag{D, Δ2}, one obtains that, in the presence of uncertainty, X(s) is given by the upper linear fractional transformation

X(s) = F_U(M, Δ) = [M22 + M21 Δ (I - M11 Δ)^(-1) M12] x0     (5.52)

where [M11]_{2x2}, [M12]_{2x1}, [M21]_{1x2} and [M22]_{1x1} are adequate block partitions of M. Now, assume that an upper bound on the worst case unstructured perturbation can be estimated as a function of the frequency. Then, with adequate use of weighting, and taking into account the upper bounds on the parameters present in D, a class of admissible uncertainty can be defined as

V = { Δ : Δ = diag{D, Δ2},  σ̄[Δ(s)] ≤ δ̄(s), ∀s }

Conditions for stability of the M-Δ representation, already presented, apply to this case. The fact that Δ now includes a complex block must be taken into account when using scaling to compute less conservative upper bounds for the spectral radius. Optimal similarity scaling can be applied, by conveniently adopting a scaling matrix with a q x q Hermitian block. On the other hand, the use of Perron scaling still has to be investigated.

5.8 Conclusions

The approach proposed in this chapter yields the computation of sufficient conditions for robust stability of state space systems under structured, time-invariant perturbations.
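The upper linear fractional transformation in (5.52) can be coded directly from the block partition of M. A minimal numerical sketch follows, with hypothetical partition sizes and data rather than the interconnection of Section 5.6:

```python
import numpy as np

def upper_lft(M, delta, r):
    """F_U(M, Delta) = M22 + M21 @ Delta @ inv(I - M11 @ Delta) @ M12,
    where M11 is the leading r x r block that closes against Delta."""
    M11, M12 = M[:r, :r], M[:r, r:]
    M21, M22 = M[r:, :r], M[r:, r:]
    I = np.eye(r)
    return M22 + M21 @ delta @ np.linalg.inv(I - M11 @ delta) @ M12

# hypothetical 3x3 interconnection with a 2x2 uncertainty channel
M = np.array([[0.1, 0.2, 1.0],
              [0.0, 0.3, 0.5],
              [0.4, 0.6, 0.9]])
nominal = upper_lft(M, np.zeros((2, 2)), 2)   # Delta = 0 recovers M22
```

Setting Δ = 0 recovers the nominal map M22, which is the algebraic counterpart of requirement that the nominal system be well defined before uncertainty is closed in.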
An equivalent frequency-domain stability problem was formulated for the derivation of stability conditions, whose attainment involves techniques used in several contexts of robustness analysis, such as diagonalization of uncertainty and scaling for conservatism reduction. The example given in Section 5.6 demonstrates the applicability of the proposed technique to the robust stability analysis of perturbed state space systems.

Computation of the sufficient conditions obtained involves a frequency sweep. For each frequency point, an adequate scaling matrix must be computed. If optimal similarity scaling is used, the choice of the scaling matrix involves a smooth optimization problem. The optimization can be avoided by using Perron scaling instead of optimal similarity scaling, although in principle at the cost of additional conservatism. However, numerical experience indicates that the results obtained with Perron scaling are very close to the ones obtained with optimal similarity scaling. Examples will be given in the next chapter.

The derivations in this chapter were independently obtained. Although many references are found in the control literature to the generality of the diagonal representation of uncertainty, no explicit derivation of its application to perturbed state space systems has been found. Fan, Tits and Doyle [18] have introduced a new upper bound for the spectral radius, less conservative than the maximum singular-value bound, and presented a frequency-domain condition for robust stability of state space systems, however without explicitly presenting the derivation of the results. The use of that upper bound in conjunction with the derivation in this chapter should be the object of further investigation.

The subject of the next chapter is a comparative analysis between sufficient parameter upper bounds obtained with the technique of this chapter and with application of the LDM.
CHAPTER 6
COMPARISON OF SUFFICIENT PARAMETER NORM BOUNDS

6.1 Introduction

Sufficient conditions for robust stability of state space systems under structured perturbations were obtained in Chapters 4 and 5 using, respectively, the Lyapunov direct method and a frequency-domain approach. In this chapter, numerical results are given which permit a comparison of the computational cost and relative accuracy of those methods.

In the following tables, 'LDM' designates results obtained with the Lyapunov direct method, while 'DU' designates results from the frequency-domain problem having diagonal uncertainty, as proposed in Chapter 5. In this case, 'OSS' stands for optimal similarity scaling, 'PS' designates Perron scaling, and 'OS' refers to Osborne scaling. Bounds obtained using the Perron radius bound are designated by 'PR'. The parameter norm bounds obtained for DU-OSS in each example were taken as reference and attributed the index 1; thus, the indices attributed to the other parameter bounds indicate how they compare to the parameter norm bound computed using optimal similarity scaling.

Computational requirements are given in number of 'flops', or floating point operations, required by implementations using standard MATLAB™ functions. For LDM results, just the total number of flops is given. This total actually depends on the tolerances and stopping conditions specified in the optimization program, and can change if the tolerances are changed. For DU results, which are computed with a frequency sweep, the number of flops depends on the number of frequency points at which the scaling is computed. The average number of flops
per frequency point is given, as well as the total corresponding to the complete frequency sweep.

The next section includes results obtained for problems discussed in the control literature, all of which have either 2 or 3 parameters. Results for randomly generated cases having 4 and 9 parameters are given subsequently.

6.2 Results for Problems with 2 and 3 Parameters

6.2.1 Examples with 2 Parameters

Example 6.1. For this example, the matrices A, E1 and E2 were given in Example 4.1. Applying the technique of Chapter 5, one obtains D = diag{p1, p2}, and

L = [ 1  1        R = [ 1  0.5
      0  0.8 ];         1  0   ]

The results obtained are given in the following table.

Table 6-1. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.8352    0.8352    0.8632    -                 44,428
DU  OSS       6.36   0.9675    0.9675    1         5,536             38,755
    PS        6.36   0.9675    0.9675    1         1,321             9,247
    OS        6.36   0.9675    0.9675    1         1,069             7,551
    PR        4.54   0.9573    0.9573    0.9894    785               5,496

Applying equation (4.35), the LDM 2-norm bound is ||p||2 ≤ 1.1142. This option demanded 32 iterations of the optimization problem, requiring 22,352 flops. All DU bounds
given in the table are better than the LDM ∞-norm bound. Note the low cost of the Perron and Osborne scaling techniques relative to optimal similarity scaling.

Example 6.2. Let us consider again the matrices A, E1 and E2 given in Example 4.2. Uncertainty diagonalization gives D = diag{p1, p2}, and

L = [ 2  2        R = [ 1  0
      3  1 ];           0  1 ]

Applying equation (4.35), the LDM 2-norm bound is ||p||2 ≤ 0.3486. The computation took 37 iterations of the optimization problem, requiring 25,072 flops. Other results are given in the next table.

Table 6-2. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.25      0.25      1         -                 11,447
DU  OSS       6.36   0.25      0.25      1         5,384             26,922
    PS        6.36   0.25      0.25      1         1,352             6,763
    OS        6.36   0.25      0.25      1         1,106             5,531
    PR        4.54   0.25      0.25      1         798               3,990

For this case, the LDM ∞-norm and all DU results are equal. The computational cost for PR, OS and PS was considerably smaller than for OSS and LDM.
Example 6.3. The matrices A, E1 and E2 are the ones already given in Example 4.3. The diagonal uncertainty description is D = diag{p1, p2}, and

L = [ 1  0        R = [ 1  0  1
      0  1              0  1  0 ]
      1  1 ];

The LDM 2-norm bound, from equation (4.35), is ||p||2 ≤ 1.7438. The computation required 64 iterations of the optimization problem, at a cost of 158,088 flops. Other results are given in the following table.

Table 6-3. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   1.7481    1.7481    0.9989    -                 1,619,735
DU  OSS       6.36   1.7500    1.7500    1         9,307             46,536
    PS        6.36   1.7500    1.7500    1         1,875             9,377
    OS        6.36   1.7500    1.7500    1         8,886             44,429
    PR        4.54   1.7500    1.7500    1         1,309             6,546

The LDM ∞-norm computation was slowly convergent, having required 606 iterations, which demanded 1,619,735 flops. The DU scaled norm bounds are exact and have much smaller computational costs; the costs of the PS and PR computations were particularly low.

Example 6.4. The matrices for this example are taken from [18]; uncertainty diagonalization gives D = diag{p1, p2}, with L and R the corresponding selector matrices.
The LDM 2-norm bound, from equation (4.35), is ||p||2 ≤ 0.9751. The computation took 40 iterations, requiring 27,860 flops. The next table shows other results.

Table 6-4. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.7924    0.7924    0.8660    -                 42,463
DU  OSS       6.36   0.9150    0.9150    1         4,760             47,604
    PS        6.36   0.9150    0.9150    1         1,212             12,121
    OS        6.36   0.9150    0.9150    1         1,007             10,072
    PR        4.54   0.9150    0.9150    1         722               7,217

The exact parameter upper bounds are |p1| < 1; |p2| < 1. Therefore, the bounds obtained with DU reach about 91.5% of the exact value, while the LDM ∞-norm result reaches about 87%. Again, the costs of PR, OS and PS were very low.

Example 6.5. In this example [18], A = diag{-2, -4}, D = diag{p1, p2} and

E1 = [ 49  56       E2 = [ 48  56       L = [  7   8       R = [ 7  8
       84  96 ];           84  98 ];          12  14 ];          6  7 ]

The exact parameter upper bounds are |p1| < 0.0406; |p2| < 0.0812. The LDM 2-norm bound, with parameter weighting, yields as stability region the ellipse with semi-axes [0.0543, 0.1086]. The computation took 100 iterations, requiring 64,608 flops. The following table shows that the results obtained with DU are slightly better than the LDM ∞-norm result, except for OS. Note that OSS, PS and PR achieved exactly the admissible parameter bounds, thus yielding a necessary condition for stability; again, the comparison of computational costs is favorable to PR and PS.
Table 6-5. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.0406    0.0812    1         -                 31,533
DU  OSS       6.36   0.0408    0.0816    1         5,554             38,880
    PS        6.36   0.0408    0.0816    1         1,299             9,091
    OS        6.36   0.0309    0.0618    0.7574    1,055             7,387
    PR        4.54   0.0408    0.0816    1         764               5,352

Example 6.6. The next table shows results for a problem obtained from Example 5.4 by arbitrarily neglecting the influence of the first parameter.

Table 6-6. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.9803    1.4704    0.8573    -                 6,661,789
DU  OSS       6.36   1.1434    1.7151    1         5,874             52,867
    PS        6.36   1.1434    1.7151    1         2,569             23,123
    OS        6.36   1.1434    1.7151    1         2,401             21,613
    PR        4.54   1.1406    1.7108    0.9975    2,045             18,410

The LDM 2-norm bound, with parameter weighting, yields the ellipse with semi-axes [1.176, 1.6764]. The computation was slowly convergent, requiring 7,862,018 flops in 1000 iterations. The DU bounds shown are better than the LDM ∞-norm result; yet, the LDM required a much larger number of flops.
Example 6.7. Consider the nominal system matrix [25]:

A = [  0        1.0000    0        0
      -2.5000  -0.2250    2.5000   0.2250
       0        0         0        1.0000
       0.8450   0.2525   -1.5700  -1.6825 ]

with the matrices E1, E2, L and D as given in Example 5.3. Sufficient results are shown in the next table.

Table 6-7. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.0542    0.0059    0.9360    -                 4,607,098
DU  OSS       6.36   0.0579    0.0063    1         7,502             60,020
    PS        6.36   0.0579    0.0063    1         2,517             20,139
    OS        6.36   0.0579    0.0063    1         2,371             18,966
    PR        4.54   0.0579    0.0063    1         2,023             16,187

The LDM 2-norm bound, with parameter weighting, yields the stability region defined by the ellipse with semi-axes [0.0649, 0.0071]. The computation was slowly convergent; it took 1007 iterations, at a cost of 6,287,756 flops. The DU results shown in the table are slightly better than the LDM ∞-norm result, and have much smaller costs. However, all the results are excessively conservative in this case, since the parameters could be increased, without losing stability, up to |p1| < 0.2485; |p2| < 0.0248. Although no particular reason has been found for the excessive conservatism of the sufficient results, it must be pointed out that the ratio between the two parameter bounds is 10, showing that the parameter ranges were very different in this case.
6.2.2 Examples with 3 Parameters

Example 6.8. Let us consider again the system in Example 4.4. The diagonal representation of uncertainty has D = diag{p1, p2, p3}, with L and R the corresponding selector matrices. Sufficient stability results are given in the following table.

Table 6-8. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    |p3| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   1.2814    1.2814    1.2814    0.8272    -                 1,356,198
DU  OSS       6.36   1.5490    1.5490    1.5490    1         16,680            83,401
    PS        6.36   1.5490    1.5490    1.5490    1         5,086             25,430
    OS        6.36   1.5483    1.5483    1.5483    0.9995    15,228            76,144
    PR        4.54   1.5467    1.5467    1.5467    0.9985    3,322             16,608

All DU bounds are better than the LDM ∞-norm result; OSS and PS gave the best bounds. The cost of the DU computation is much smaller than that of the LDM computation, with a clear advantage for PS. Equation (4.35) yields the LDM 2-norm bound ||p||2 ≤ 2.3295; the corresponding computation demanded 518 iterations, requiring 3,733,200 flops.

Example 6.9. The following table shows parameter bounds computed for the problem already considered in Example 5.4.
The LDM 2-norm bound defines the stability ellipsoid with semi-axes measuring [0.2046, 0.4092, 0.6137]. The cost of this option was very high due to slow convergence, having reached 7,099,131 flops in 996 iterations. As computed in Example 5.4, the OSS result reaches about 66% of the admissible parameter bounds. The table below shows that all DU bounds are practically equal, and that they are better than the LDM ∞-norm bound. Also, it can be seen that the computation of the DU bounds is much less expensive than the LDM computation.

Table 6-9. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    |p3| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.1403    0.2806    0.4209    0.6418    -                 6,825,710
DU  OSS       6.36   0.2186    0.4372    0.6558    1         39,700            317,596
    PS        6.36   0.2186    0.4372    0.6558    1         4,944             39,551
    OS        6.36   0.2186    0.4372    0.6558    1         22,876            183,009
    PR        4.54   0.2184    0.4368    0.6552    0.9998    3,178             25,428

Example 6.10. Let us consider the following problem [18], with matrices A, E1, E2 and E3 as given there, for which one obtains D = diag{p1, p2, p3}, with L and R the corresponding selector matrices.
The exact parameter bounds are |p1| < 1; |p2| < 1; |p3| < 1. Equation (4.35) yields ||p||2 ≤ 0.9385, obtained at a cost of 28,628 flops. The next table shows that the best result was obtained with OSS, which reached about 72% of the exact value; PS and OS gave very close results, at much smaller costs.

Table 6-10. Parameter upper bounds and flops
Method        Eq.    |p1| ≤    |p2| ≤    |p3| ≤    Index     Flops (average)   Flops (total)
LDM ∞-norm    5.45   0.5469    0.5469    0.5469    0.7623    -                 19,026
DU  OSS       6.36   0.7174    0.7174    0.7174    1         22,560            315,847
    PS        6.36   0.7166    0.7166    0.7166    0.9989    3,282             45,498
    OS        6.36   0.7166    0.7166    0.7166    0.9989    17,130            239,827
    PR        4.54   0.6848    0.6848    0.6848    0.9546    1,640             22,959

The next section presents parameter bounds computed for randomly generated perturbed matrices.

6.3 Results for Randomly Generated Matrices

The results presented in this section refer to random matrices having independently perturbed elements. The parameter upper bounds, also random, were limited to 15% of the absolute value of the corresponding perturbed matrix element. In the tables below, the notation 'LDM' designates bounds computed with the Lyapunov direct method, using the ∞-norm and parameter weighting. The bounds computed with the diagonal description of uncertainty are designated by 'OSS', 'PS' and 'PR', meaning
respectively, optimal similarity scaling, Perron scaling and Perron radius. Bounds using Osborne scaling were not computed. In each case, the allowable parameter bounds and the number of flops obtained for OSS were used as reference; the remaining bounds, and the corresponding flops, were transformed into relative values. Tables 6-11 to 6-16 show, respectively, the frequency distributions of the relative parameter bounds and of the relative flops requirements.

6.3.1 Random 2x2 Matrices with 4 Parameters

Independently perturbed elements yield a diagonal uncertainty decomposition where D = diag{p1, p2, p3, p4}, and

L = [ 1  1  0  0        R = [ 1  0
      0  0  1  1 ];           0  1
                              1  0
                              0  1 ]

The next table shows the distribution of relative upper bounds for 65 cases.

Table 6-11. Distribution of relative upper bounds for 4-parameter problems
                        % of OSS bound                        % of cases
Method   [70-80]   [81-90]   [91-96]   [97-99]   100          > 97%
LDM      2         5         3         25        30           84.6
PS       -         -         2         18        45           96.9
PR       13        6         -         5         41           70.8

The second row reveals that, in about 97% of the cases, the bounds obtained with PS reach at least 97% of the bounds obtained with OSS. Furthermore, as shown by the next two tables, the computational cost of PS, as well as the cost of the other alternatives, is well below the cost of OSS.
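The selector structure used for the 4-parameter cases is easy to verify: L selects rows and R selects columns so that L D R reproduces the elementwise perturbation. A small numerical check, assuming the L and R shown above, with arbitrary parameter values:

```python
import numpy as np

# selector matrices for a 2x2 matrix with 4 independently perturbed elements
L = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
R = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, 1]], dtype=float)

p = np.array([0.11, -0.02, 0.05, 0.07])   # arbitrary parameter values
E = L @ np.diag(p) @ R                     # perturbation E = L D R
# E places p1..p4 elementwise: E == [[p1, p2], [p3, p4]]
```

The same pattern scales to the 9-parameter 3x3 cases of Section 6.3.2, where R stacks three identity blocks.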
Table 6-12 condenses the relative flops requirements for 47 of the cases included in the table above. For these cases, the frequency sweep was executed over the complete range of interest.

Table 6-12. Distribution of relative number of flops for 4-parameter cases with complete frequency sweep
                          Ratio to OSS flops                                % of cases
Method   <0.01   (0.01-0.02]   (0.02-0.05]   (0.05-0.1]   (0.1-0.3]   >0.3   < 0.05
LDM      2       11            22            10           2           -      74.5
PS       -       -             17            27           3           -      36.2
PR       34      10            2             1            -           -      97.8

The average number of flops required by OSS in these 47 cases was 1,481,268. The table shows that, in almost all cases, the other techniques require less than 10% of that amount. The results were particularly favorable to PR, with 97.8% of the cases requiring less than 5%. LDM also presents a comparatively low cost.

It was noticed, during the computations of OSS, PS and PR, that the curve of the norm of the scaled matrix versus frequency had only one point of maximum. Although convexity of this function has not been proved, this tendency was assumed to be a general property, and the computation of the bounds was interrupted after the first maximum of the curve had been reached. With this assumption, the average flops requirement for the remaining 18 cases included in Table 6-11 decreased to 779,413. Since no frequency sweep is involved in the computation of the LDM, this assumption does not affect the LDM results. The next table shows that, as expected, the cost of the LDM relative to the cost of OSS increased, with only 33.3% of the cases falling below 5% of the amount required by OSS on the same problem.
Table 6-13. Distribution of relative number of flops for 4-parameter cases with reduced frequency sweep
                          Ratio to OSS flops                                % of cases
Method   <0.01   (0.01-0.02]   (0.02-0.05]   (0.05-0.1]   (0.1-0.3]   >0.3   < 0.05
LDM      -       1             5             3            7           2      33.3
PS       -       1             3             9            5           -      22.2
PR       -       2             12            3            1           -      77.8

Note that, in the majority of the cases, the flops requirements of PS and PR are below 10% of the requirement of OSS.

6.3.2 Random 3x3 Matrices with 9 Parameters

Table 6-14 presents the distribution of relative upper bounds for 30 cases.

Table 6-14. Distribution of relative upper bounds for 9-parameter problems
                        % of OSS bound                        % of cases
Method   [70-80]   [81-90]   [91-96]   [97-99]   100          > 97%
LDM      3         4         5         12        6            58.1
PS       -         -         2         11        18           93.5
PR       2         4         5         2         18           64.5

For these cases, the uncertainty decomposition has D = diag{p1, ..., p9}, and

L = [ 1 1 1 0 0 0 0 0 0        R = [ I3
      0 0 0 1 1 1 0 0 0 ;            I3
      0 0 0 0 0 0 1 1 1 ];           I3 ]
The table above shows that the bounds obtained with PS are very close to the bounds from OSS. As shown by the second row, in 93.5% of the cases they reach at least 97% of the OSS bound. Computational costs are shown in Tables 6-15 and 6-16, respectively for the cases where the complete sweep and the reduced sweep were used.

Table 6-15 summarizes the relative flops requirements of 18 of the cases included in the table above, computed with the complete frequency sweep.

Table 6-15. Distribution of relative number of flops for 9-parameter cases with complete frequency sweep
                          Ratio to OSS flops                                % of cases
Method   <0.01   (0.01-0.02]   (0.02-0.05]   (0.05-0.1]   (0.1-0.3]   >0.3   < 0.05
LDM      1       2             10            5            -           -      72.0
PS       2       6             9             1            -           -      94.4
PR       17      1             -             -            -           -      100.0

The average number of flops required for OSS in these 18 cases was 37,184,972, which is 25 times the average required by the 4-parameter problems. The table shows that, in all cases, the other techniques require less than 10% of the amount required by OSS on the same problem. The results were particularly favorable to PR, with 100% of the cases requiring less than 5%, followed by PS with 94.4% of the cases.

Table 6-16 presents the flops requirements of the remaining 12 cases included in Table 6-14, computed with the reduced frequency sweep. The average flops requirement for these 12 cases was 8,924,840, therefore about 24% of the average requirement with the complete sweep.
That table shows that, while in 72% of the cases the cost of the LDM fell below 5% of the cost of OSS with the complete sweep, only about 17% of the cases fell below 5% when the frequency sweep was reduced.

Table 6-16. Distribution of relative number of flops for 9-parameter cases with reduced frequency sweep
                          Ratio to OSS flops                                % of cases
Method   <0.01   (0.01-0.02]   (0.02-0.05]   (0.05-0.1]   (0.1-0.3]   >0.3   < 0.05
LDM      -       -             2             2            7           1      16.7
PS       1       3             5             3            -           -      75.0
PR       4       4             4             -            -           -      100.0

6.3.3 Complementary Results

Example 6.11. The stability domain computed for Example 6.3, using P_Δ = I2, was given by

|p1| ≤ 1.75,  |p2| ≤ 1.75

It can be easily checked that, for p1 = 1.75 and for a large range of values of p2, the perturbed system matrix has max_i Re[λ_i(A_p)] = 0. On the other hand, applying the conditions in (5.49), one obtains:

|p1| ≤ 1.749,  |p2| ≤ 3.000

Therefore, using the bounds (5.49), it is possible to recover the information about the insensitivity of the critical eigenvalue to variations of p2 in the range [-3.0, 3.0] when p1 is at its upper bound.
The following table shows upper bounds on the parameters, computed using both equations (5.33) and (5.49), for some of the examples previously presented in this chapter.

Table 6-17. Comparison of parameter upper bounds
Row   Example    Expression (5.33)              Expression (5.49)
                 p1       p2       p3           p1       p2       p3
1     Ex. 6.1    0.9675   0.9675   -            1.1462   0.8311   -
2     Ex. 6.2    0.2512   0.2512   -            0.2247   0.2899   -
3     Ex. 6.4    0.9150   0.9150   -            1.4362   0.7582   -
4     Ex. 6.5    0.0408   0.0816   -            0.0519   0.0712   -
5     Ex. 6.9    0.2186   0.4371   0.6557       0.1682   0.6510   0.8359
6     Ex. 6.10   0.7166   0.7166   0.7166       0.9150   0.5690   0.7991

These results reveal that:

1. The parameter upper bounds computed with (5.33) and (5.49) differ in all the examples. In all cases, each equation gives better upper bounds for some of the parameters;

2. For some problems, a less conservative stability domain can be obtained by combining the best bounds from each equation. For example, for the case in row 5, the system is stable for any parameter combination such that

|p1| ≤ 0.2186,  |p2| ≤ 0.6510,  |p3| ≤ 0.8359

3. For other cases, such a combination would lead to instability. It happens that, with the bounds of each equation, the perturbed matrix has one null eigenvalue; that is the case of the example in row 2. Therefore, both results indicate possible forms of the stability domain for this example.
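The combination described in remark 2 is simply an elementwise maximum of the two bound vectors; remark 3 cautions that the combined box must still be re-checked for stability. A tiny illustrative helper, using the row-5 data of Table 6-17:

```python
import numpy as np

b_533 = np.array([0.2186, 0.4371, 0.6557])   # bounds from (5.33), row 5
b_549 = np.array([0.1682, 0.6510, 0.8359])   # bounds from (5.49), row 5

# combined box: the better (larger) bound for each parameter
combined = np.maximum(b_533, b_549)          # [0.2186, 0.6510, 0.8359]
```

For row 5 the combined box is the one quoted in remark 2; for cases such as row 2 the combined box would contain unstable parameter combinations and must be rejected.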
In view of the above results, and given the small cost of the Perron scaling alternative, it seems convenient to combine the results of both equations (5.33) and (5.49), in order to obtain a less conservative stability domain.

6.4 Conclusions

In this chapter, sufficient stability conditions were compared according to two criteria, namely the parameter norm bound and the computational cost in flops.

The tables presented have shown that in none of the examples was the LDM ∞-norm parameter bound better than the best among the DU bounds, namely the bounds obtained with the frequency-domain method of Chapter 5. They were equal in some cases, as in Examples 6.2 and 6.5 and in some of the cases included in Tables 6-11 and 6-14. On the other hand, the LDM result was much more conservative in some other cases, as in Examples 6.9 and 6.10. Among the DU bounds, the result obtained with Perron scaling (PS) was always equal or very close to the result obtained with optimal similarity scaling (OSS).

The absolute conservatism, namely the conservatism relative to the exact admissible parameter norm bound, was variable. The DU bounds in Examples 6.2, 6.3 and 6.5, and also the LDM ∞-norm bound in Example 6.2, are in fact necessary; however, the sufficient bounds were very conservative in Examples 6.7, 6.9 and 6.10.

The computational costs of the LDM results did not show a well defined pattern. The cost of the LDM ∞-norm bound was lower than the OSS cost in Examples 6.2, 6.4, 6.5 and 6.10, and in all the results with 4 and 9 parameters; however, in Examples 6.3, 6.6, 6.7 and 6.9, the LDM cost was much higher. This fact shows that, in some cases, the convergence of the optimization over Q is too slow.
On the other hand, the cost of computing bounds using Perron scaling was always very low relative to the cost of the OSS and LDM bounds; an exception was Example 6.10, where the LDM was less expensive than PS.

Based on the results presented above, and considering both criteria, the frequency-domain approach of Chapter 5, associated with Perron scaling, emerged as the most effective in the computation of sufficient stability conditions for state space systems under structured, time-invariant perturbations.

An interesting remark is that, in the computation of the DU bounds for all the examples, it was observed that the curve of the maximum singular value of the scaled matrix versus frequency presented one well-defined point of maximum. If this could be proved to be a general property, then the computation of DU results would be less expensive, because the computations would be carried out only up to the occurrence of the point of maximum, instead of over the whole frequency range of interest.

In the next chapter, additional results from the LDM and the frequency-domain method are compared, now in the context of the robustification of static controllers.
CHAPTER 7
ITERATIVE CONTROLLER ROBUSTIFICATION

7.1 Introduction

The main objective of this chapter, which is addressed in Section 7.3, is to show that the analysis technique proposed in Chapter 5 is of advantage in the iterative robustification of nominally stabilizing controllers. Before carrying out this objective, the concept of iterative robustification of a controller of fixed structure is reviewed.

The ensuing discussion concerns the output feedback system depicted in Figure 7-1 below, where E_A, E_B and E_C represent additive parametric perturbations to the open-loop system matrices.

Figure 7-1. Output feedback system representation

Let us assume that the open-loop nominal plant and the controller have, respectively, the state space descriptions:
Plant:       ẋ(t) = A0 x(t) + B0 u(t);    y(t) = C0 x(t)     (7.1)
Controller:  ẋ_K(t) = A_K x_K(t) + B_K y(t);    u(t) = C_K x_K(t) + D_K y(t)     (7.2)

Defining x_c(t) ≜ [x(t) x_K(t)]^T, one obtains, for the perturbed closed-loop system, the extended state space representation ẋ_c(t) = (A_c + E) x_c(t); y_c(t) = C_c x_c(t), where

A_c = [ A0 + B0 D_K C0    B0 C_K         C_c = [ C0  0 ]
        B_K C0            A_K    ];

E = [ E_A + B0 D_K E_C + E_B D_K C0 + E_B D_K E_C    E_B C_K     (7.3)
      B_K E_C                                        0       ]

If the control objectives can be met by a static controller K, the closed-loop representation reduces to:

A_c = A0 + B0 K C0;    C_c = C0
E = E_A + B0 K E_C + E_B K C0 + E_B K E_C     (7.4)

The term E in equations (7.3) and (7.4) represents the uncertainty in the closed-loop system. If E_B = E_C = 0, E reduces to the uncertainty in the open-loop dynamic matrix, namely E_A. Otherwise, the closed-loop perturbation depends on the controller.

Suppose that, given a controller, one is interested in analyzing the stability properties of the system in Figure 7-1. The controller is then included in the closed-loop matrix A_c, and the dynamical equation of the perturbed system becomes ẋ_c(t) = (A_c + E) x_c(t). This is just the perturbed system equation used in the analysis methods discussed in Chapters 4 and 5. Therefore, the results obtained there apply directly to the closed-loop system.

Under the assumption of time-invariant perturbations, an equivalent frequency-domain representation of the perturbed system was obtained in Chapter 5, and illustrated by the
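Forming the extended closed-loop matrices of (7.3) is mechanical block assembly, and the static case (7.4) simply drops the controller state. A sketch with hypothetical numerical data (the plant and gain below are illustrative, not taken from the dissertation):

```python
import numpy as np

def closed_loop_dynamic(A0, B0, C0, AK, BK, CK, DK):
    """Extended closed-loop matrix Ac of equation (7.3), dynamic controller."""
    top = np.hstack([A0 + B0 @ DK @ C0, B0 @ CK])
    bot = np.hstack([BK @ C0, AK])
    return np.vstack([top, bot])

def closed_loop_static(A0, B0, C0, K):
    """Closed-loop matrix Ac = A0 + B0 K C0 of equation (7.4)."""
    return A0 + B0 @ K @ C0

# hypothetical second-order plant with output feedback gain K
A0 = np.array([[0.0, 1.0], [-2.0, -3.0]])
B0 = np.array([[0.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
K = np.array([[-4.0]])
Ac = closed_loop_static(A0, B0, C0, K)    # = [[0, 1], [-6, -3]]
```

Once A_c is formed, the robust stability tests of Chapters 4 and 5 apply to A_c + E exactly as they did to the open-loop perturbed matrix.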
block diagram of Figure 5-1(b). In that case, the controller was implicitly included in the interconnection structure M. Now, suppose that the objective is to design a stabilizing controller for the perturbed system. Using the procedure given in Chapter 5, but keeping the controller explicitly represented, the diagram of Figure 7-2 below is obtained. This representation is adequate for design purposes.

Figure 7-2. Standard design problem

Under uncertainty, the system viewed from the controller terminals is given by the upper linear fractional transformation F_U(M, Δ) = [M22 + M21 Δ (I - M11 Δ)^(-1) M12]. Let V_exp be the class of expected uncertainty. K is a robust stabilizing controller for the closed-loop system if and only if the following requirements are satisfied:

1. K stabilizes F_U(M, 0), that is, the nominal closed-loop system;

2. K stabilizes F_U(M, Δ), ∀Δ ∈ V_exp.
The second requirement implies that the stability domain in the parameter space, determined by the choice of the controller, must contain all possible combinations of parameters admitted by the uncertainty class.

The perturbed model in Figure 7-2 is generic. Many control problems can be rearranged to fit that model; performance objectives can be included by taking external inputs and outputs into account. Methods for the derivation of controllers can be classified into two main categories, namely direct synthesis techniques and ad hoc design techniques. Examples of frequency-domain direct synthesis methods, which have received a great deal of attention in the last decade, are the H_\infty design [20, 36], which can handle unstructured uncertainties, and its development, the \mu-synthesis design [13], which accommodates structured uncertainty. Although the design problem is formulated in the frequency domain, state-space methods can be used in the solution [36]. In these methods, control objectives are mathematically converted into objectives involving the minimization of the \infty-norm of a problem-dependent system transfer matrix, namely the minimization of its maximum singular value over the frequency range of interest. The design problem is solved by selecting, out of the set of controllers that stabilize the system, the controller that minimizes the objective.

Synthesis techniques are appealing because of their generality. After the control objectives are mathematically translated, the solution of the design problem follows from well-established mathematical procedures. Indeed, the recent literature contains several examples of the application of such techniques. Nonetheless, restrictions against their use have been pointed out.
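The \infty-norm objective minimized by such synthesis methods can be estimated on a frequency grid: for G(s) = C(sI - A)^{-1}B + D it is the supremum over frequency of the maximum singular value. A minimal sketch (grid-based, hence only a lower estimate of the true supremum; all names are illustrative):

```python
import numpy as np

def hinf_norm_estimate(A, B, C, D, omegas):
    """Grid estimate of the infinity-norm of G(s) = C (sI - A)^{-1} B + D:
    the supremum over frequency of the maximum singular value of G(jw).
    A finite grid is assumed; a fine grid gives a good lower estimate."""
    n = A.shape[0]
    worst = 0.0
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        worst = max(worst, np.linalg.svd(G, compute_uv=False)[0])
    return worst
```

For the first-order lag G(s) = 1/(s + 1), for instance, the peak occurs at zero frequency and the estimate equals 1.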
An important observation is that practical control problems in general require the simultaneous attainment of several, possibly conflicting, objectives; it is not always possible to give the objectives and constraints the mathematical formulation required by synthesis techniques. Practical constraints include, for example, hardware restrictions in industrial control [38]. Furthermore, the designer has no control over the structure of controllers obtained by such techniques as H_\infty and \mu-synthesis, which tend to be of high order. Certain applications require controllers of the smallest possible order, because of reliability constraints [38].

On the other hand, if an ad hoc design technique is adopted, the controller structure is fixed a priori by the designer. The free parameters of the fixed structure are then optimized in order to satisfy the design objectives. This approach permits the adoption of well-tested structures as the basic control structure. Proportional-Integral-Derivative (PID) structures, for instance, have traditionally been used in control. In a recent example [30] of a frequency-domain method for the robust design of multivariable controllers, a PI structure is imposed, and the elements of the controller are chosen such that an objective is minimized. In that case, unstructured uncertainty in a given class is explicitly taken into account in the selection of the controller.

A design approach that can be seen as an ad hoc procedure is the iterative robustification proposed by Bhattacharyya [4]. Given a perturbed system to be robustly stabilized, a nominal stabilizing controller is designed, taking into account all the objectives, including practical ones such as an a priori fixed structure and low-order elements. The closed loop formed with the nominal controller is then analyzed with respect to robust stability and, if necessary, the elements of the controller are modified in order to increase stability robustness. Successive modifications are made until the closed-loop system presents the desired level of robustness. Therefore, the elements of the controller are seen as optimization parameters. Since the controller has a fixed structure, it is preferable to use a controller of the smallest possible order, with the objective of reducing the number of parameters in the controller.

Now, it can be shown that the problem of stabilizing a linear time-invariant system with a fixed-order output feedback controller is equivalent to a stabilization problem with static output feedback, that is, to the stabilization of a system of the form (A + B K C) by the choice of K [4]. Although the general solution to this problem is unknown, there are results on the order of the controller, and there are methods available for the derivation of the nominally stabilizing controller of lowest order [4]. Moreover, considering that practical PID structures are implemented through lead-lag compensators, it is possible to show that a unity feedback system, having in the forward path a linear time-invariant system cascaded with a PID compensator, has the same state representation as the static output feedback stabilization problem. Therefore, the robustification procedure applied to static output feedback controllers can be seen as a valid alternative in cases of practical significance.

The original robustification approach [4] adopted the Lyapunov direct method in the evaluation of the stability domain yielded by a controller. The robustification is obtained through numerical optimization over the matrices Q and K, respectively the Lyapunov matrix and the controller. In the next section, the concept of robustification is explored in connection with the optimization problem proposed in Chapter 4 which, being based on a potentially less conservative stability condition, is expected to give a better assessment of the stability domain.
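The equivalence above reduces the design problem to the static case (A + B K C), and the closed loop with its lumped perturbation (7.4) can be assembled directly. A minimal sketch (all names illustrative):

```python
import numpy as np

def closed_loop_static(A0, B0, C0, K, EA, EB, EC):
    """Nominal closed-loop matrix and lumped perturbation of equation (7.4)
    for a static output feedback controller K:
    A_c = A0 + B0 K C0,  E = EA + B0 K EC + EB K C0 + EB K EC."""
    Ac = A0 + B0 @ K @ C0
    E = EA + B0 @ K @ EC + EB @ K @ C0 + EB @ K @ EC
    return Ac, E

def is_hurwitz(A):
    """Stability test used throughout: all eigenvalues in the open
    left half-plane."""
    return bool(np.linalg.eigvals(A).real.max() < 0.0)
```

Note that when EB = EC = 0 the perturbation reduces to EA, as observed after (7.4).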
7.2 Robustification Associated with Lyapunov Analysis

Consider a nominal plant with state-space description given by (7.1), and assume that a controller has been designed such that the nominal closed loop \dot{x}_c(t) = A_c x_c(t) is stable. Also, assume that the closed-loop uncertainty can be characterized by bounds on parameter ranges, and represented as in equations (7.3) or (7.4). Closed-loop stability robustness can then be assessed through the Lyapunov direct method, as described in Chapter 4. In analysis, the controller is known and is implicitly included in the dynamic matrix of the closed-loop system. Thus, all the quantities r_{s2}(Q), l_{s\infty}(Q), \bar{l}_{s\infty}(Q), r*_{s2}(Q), l*_{s\infty}(Q) and \bar{l}*_{s\infty}(Q), defined respectively by equations (4.36), (4.41), (4.46), (4.62), (4.65) and (4.67), give measures of the stability domain as a function of the Lyapunov matrix Q for a fixed controller K.

Considering the controller as a matrix of decision variables, the stability domain in the space of system parameters becomes a function of both Q and K, and can be evaluated by quantities analogous to the ones used in Chapter 4. The robust stability requirement dictates the choice of the nominally stabilizing controller which yields the largest stability domain. The adequate choice of the elements of the controller can be formulated as an optimization problem over the variables Q and K, as follows.

7.2.1 'Optimal' Stability Domains

2-norm 'optimal' domain. Let us define the objective

    J_2(Q, K) \triangleq \bar{\sigma}[ M_Q(Q, K) ]                                         (7.5)

and let the corresponding optimized stability domain be

    S*_{d2,K}(Q, K) \triangleq { p : ||p||_2 < r*_{s2}(Q, K) };   r*_{s2}(Q, K) = 1 / \bar{\sigma}[ M_Q(Q, K) ]    (7.6)
where M_Q(\cdot) is defined in (4.32) and M_Q(Q, K) is evaluated at the pair (Q*, K*), which is the solution to the optimization problem

    min_{Q,K} J_2(Q, K) = min_{Q,K} \bar{\sigma}[ M_Q(Q, K) ]   subject to A_c stable      (7.7)

The measure r*_{s2}(Q, K) above, which is based on the less conservative equation (4.35) derived in Chapter 4, should be a less conservative indicator of the stability domain than the previous measure [4], which was based on equation (4.22).

\infty-norm 'optimal' domain. Defining

    J_\infty(Q, K) \triangleq \bar{\sigma}[ \sum_{k=1}^{m} F_{Q_k}(Q, K) ]                 (7.8)

the optimized stability domain is given by

    S*_{d\infty,K}(Q, K) \triangleq { p : ||p||_\infty < l*_{s\infty}(Q, K) };   l*_{s\infty}(Q, K) = 1 / \bar{\sigma}[ \sum_{k=1}^{m} F_{Q_k}(Q, K) ]    (7.9)

where F_{Q_k}(\cdot) is defined by (4.28) and is evaluated at the pair (Q*, K*), which is the solution to the optimization problem

    min_{Q,K} J_\infty(Q, K) = min_{Q,K} \bar{\sigma}[ \sum_{k=1}^{m} F_{Q_k}(Q, K) ]   subject to A_c stable    (7.10)

7.2.2 Robustification Procedure

It is reasonable to expect that, in practical design problems, information about the ranges of the plant parameters is available. These ranges determine the minimum stability domain in the parameter space that a controller must generate for the closed-loop system.
The strategy in iterative robustification is to gradually modify the elements of a non-robust controller, keeping its order fixed, until the stability domain yielded by the modified controller is at least as large as the minimum domain determined by the parameter ranges. This strategy requires that the stability domain yielded by the current controller be assessed after each iteration.

The optimization problems formulated in (7.6) and (7.10) can be used to assess and iteratively improve the stability domain. The geometry of the functions involved in those optimization problems, which depend on the variables Q and K, is not simple. A feasible technique of solution is to decompose the joint optimization problem into two separate problems, each with respect to one variable. Each step of the optimization procedure is then constituted by one optimization over each one of the variables.

It was advanced in Chapter 4 that the functions involved in the optimizations are not convex with respect to Q; they are not convex with respect to K either. Therefore, since local extreme points can be present, the assessment of the stability domain can be conservative. However, the optimizations are useful, because nonconvexity does not prevent solutions which increase the computed stability domain from being obtained. Actually, it is not even necessary to run each optimization until convergence; it suffices to optimize until an acceptable improvement in the value of the criterion is achieved, relative to the starting value.

The robustification procedure, alternating optimizations over Q and K, can be implemented through the following algorithm:

Procedure 7.1. Robustification using Lyapunov analysis:

1. Initialization: Consider given:
(a) Nominal system matrices: A_0, B_0, C_0;

(b) Perturbation structures: E_A, E_B, E_C;

(c) Nominal stabilizing controller, K_0;

(d) Parameter bounds: -\alpha_k \le p_k \le \alpha_k, k = 1, ..., m. These bounds determine the minimum stability domain required for the closed-loop system, say either D_{d2} or D_{d\infty}, according to the chosen parameter norm.

2. Step i, i \ge 1, phase 1: For a fixed K,

(a) Compute the system matrix: A_c^{(i)} = f(A_0, B_0, C_0, K^{(i)}), with K^{(1)} = K_0;

(b) Compute the perturbation matrices E_k^{(i)}, k = 1, ..., m. If the closed-loop perturbation does not depend on the controller, these matrices are constant for all steps i;

(c) Obtain the optimal Lyapunov matrix, Q*^{(i)}, by solving either min_Q J_2(Q, K^{(i)}) or min_Q J_\infty(Q, K^{(i)});

(d) Compute the associated stability domain, either S_{d2,K} or S_{d\infty,K};

(e) If the computed stability domain contains the corresponding minimum stability domain, stop: the controller K^{(i)} is robust enough. Else, proceed to the next step.

3. Step i, phase 2: Considering Q = Q*^{(i)},

(a) Obtain the optimal controller, K*^{(i+1)}, by solving either min_K J_2(Q*^{(i)}, K) or min_K J_\infty(Q*^{(i)}, K), subject to A_c stable;

(b) Compute the associated stability domain yielded by K*^{(i+1)}, either S_{d2,K} or S_{d\infty,K};
(c) If the computed stability domain contains the corresponding minimum stability domain, stop: the controller K*^{(i+1)} is robust enough. Else, let i \leftarrow (i + 1), and return to phase 1.

Parameter ranges are usually specified as the real intervals [-\alpha_k, \alpha_k], for k = 1, ..., m, thus determining the hyperrectangle D_{d\infty}. Since the stability domain computed on the basis of equation (7.10), with parameter weighting, is also a hyperrectangle, it is more convenient to apply (7.10) than (7.7). By adequately choosing the weights, the relevant dimensions of the computed stability domain can be made proportional to the parameter ranges.

The number of parameters in the controller depends on its dimension, determined by B_0 and C_0, and on its order. The matrix Q has n(n+1)/2 parameters, where n is the dimension of A_c. Equation (7.7) requires the computation of the maximum singular value of M_Q, which is an mn x n matrix, where m is the number of uncertain system parameters. Equation (7.10), on the other hand, requires the computation of the maximum singular value of an n x n matrix.

The optimization problems can be implemented through gradient descent techniques. The gradients of the objectives with respect to Q and K can be either analytically determined or computed by finite differences. Depending on the implementation, the use of analytically determined gradients is expected to save computational time. Analytic expressions for the partial derivatives of the objectives with respect to the elements of Q and K are given in the next subsection, for the case where K is a static controller. They can be extended to cover the case of dynamic controllers.
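Procedure 7.1 is a block-coordinate (alternating) minimization: each phase optimizes over one variable with the other held fixed, and full convergence within a phase is not required. The scheme can be sketched generically; the objective below is a toy stand-in for J_2 or J_\infty, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def alternate_minimize(J, x0, y0, steps=10):
    """Alternating minimization of J(x, y): phase 1 optimizes the first
    block (the role played by Q in Procedure 7.1) with the second fixed,
    phase 2 optimizes the second block (the role played by K).  In the
    actual procedure each phase would also check whether the computed
    stability domain already covers the required ranges and stop early."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        x = minimize(lambda v: J(v, y), x).x   # phase 1 ("Q" block)
        y = minimize(lambda u: J(x, u), y).x   # phase 2 ("K" block)
    return x, y
```

On a coupled quadratic such as J(x, y) = (x - y)^2 + (x - 1)^2 the iterates approach the joint minimizer x = y = 1 geometrically, mirroring the gradual improvement expected from the robustification steps.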
7.2.3 Derivatives of the Objectives With Respect to K_{ij}

The partial derivatives of the objectives J_2(Q, K) and J_\infty(Q, K) with respect to Q, for fixed K, are the same as derived in Section 4.4. Derivatives with respect to K, for fixed Q, are given below. The next lemma is used in the derivations.

Lemma 7.1. Consider the definitions of F_k and F_{Q_k} given by (4.18) and (4.20), respectively, and the closed-loop perturbed equation A_{cp} = (A_0 + B_0 K C_0) + \sum_{k=1}^{m} p_k E_k. Let P be the solution to the nominal Lyapunov equation for the closed-loop system, namely A_c^T P + P A_c = -Q. Write K as

    K = K_* + K_{ij} X^{ij}                                                               (7.11)

where K_* represents the controller K with the element K_{ij} replaced by 0, and X^{ij} is a matrix with the same dimensions as K, whose elements are null except for the element (i, j), which is equal to 1. Then,

    \partial F_{Q_k} / \partial K_{ij} = Q^{-1/2} [ E_k^T P^{ij} + P^{ij} E_k ] Q^{-1/2} \triangleq F^{ij}_{Q_k}    (7.12)

where P^{ij} satisfies

    A_c^T P^{ij} + P^{ij} A_c = -Q^{ij}                                                   (7.13)

and

    Q^{ij} \triangleq (B_0 X^{ij} C_0)^T P + P B_0 X^{ij} C_0                             (7.14)

where P is the solution to the Lyapunov equation for the nominal closed loop.

Proof. The proof is similar to the proof of Lemma 4.5. Let the Lyapunov equation for the nominal closed loop be A_c^T P + P A_c = -Q. Taking the derivative with respect to K_{ij}, with Q held fixed, one obtains

    (\partial[A_c]/\partial K_{ij})^T P + A_c^T \partial[P]/\partial K_{ij} + \partial[P]/\partial K_{ij} A_c + P \partial[A_c]/\partial K_{ij} = 0
The closed-loop matrix is

    A_c = A_0 + B_0 K C_0 = (A_0 + B_0 K_* C_0) + B_0 K_{ij} X^{ij} C_0

Therefore,

    \partial[A_c]/\partial K_{ij} = B_0 X^{ij} C_0                                        (7.15)

Defining P^{ij} \triangleq \partial[P]/\partial K_{ij} and substituting (7.15) in the derivative equation, one obtains

    A_c^T P^{ij} + P^{ij} A_c = -[ (B_0 X^{ij} C_0)^T P + P B_0 X^{ij} C_0 ] \triangleq -Q^{ij}    (7.16)

Now,

    \partial F_{Q_k} / \partial K_{ij} = \partial[ Q^{-1/2} F_k Q^{-1/2} ] / \partial K_{ij} = Q^{-1/2} \partial[F_k]/\partial K_{ij} Q^{-1/2}    (7.17)

and

    \partial[F_k]/\partial K_{ij} = E_k^T P^{ij} + P^{ij} E_k                              (7.18)

Equation (7.12) follows from substitution of (7.18) in (7.17).

Partial derivatives of J_2(Q, K). The following lemma provides an analytical expression for the partial derivative of the objective functional J_2(Q, K) defined in (7.5) with respect to the elements K_{ij}.

Lemma 7.2. The partial derivative of J_2(Q, K) with respect to K_{ij} is given by

    \partial[J_2(Q, K)]/\partial K_{ij} = (1 / (2 \bar{\sigma}[M_Q])) W^H \sum_{k=1}^{m} [ F_{Q_k}^T F^{ij}_{Q_k} + (F^{ij}_{Q_k})^T F_{Q_k} ] W    (7.19)

where W is the eigenvector associated with the largest eigenvalue of (M_Q^H M_Q), normalized such that W^H W = 1, and F_{Q_k} and F^{ij}_{Q_k} are defined, respectively, by (4.21) and (7.12).
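The construction in Lemma 7.1 amounts to solving one additional Lyapunov equation per controller entry. The numeric sketch below assumes the Lyapunov equation in the form A_c^T P + P A_c = -Q and checks the sensitivity P^{ij} against a central finite difference (all names illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyap_P(A0, B0, C0, K, Q):
    """P solving A_c^T P + P A_c = -Q for A_c = A0 + B0 K C0."""
    Ac = A0 + B0 @ K @ C0
    return solve_continuous_lyapunov(Ac.T, -Q)

def dP_dKij(A0, B0, C0, K, Q, i, j):
    """Sensitivity P^{ij} of Lemma 7.1: differentiate the Lyapunov
    equation (Q held fixed) and solve (7.13) with the right-hand
    side Q^{ij} of (7.14)."""
    Ac = A0 + B0 @ K @ C0
    P = solve_continuous_lyapunov(Ac.T, -Q)
    X = np.zeros_like(K)
    X[i, j] = 1.0                        # elementary direction X^{ij}
    M = B0 @ X @ C0                      # dA_c / dK_ij, equation (7.15)
    Qij = M.T @ P + P @ M                # equation (7.14)
    return solve_continuous_lyapunov(Ac.T, -Qij)   # equation (7.13)
```

For a small stable example the analytic sensitivity matches a finite-difference estimate, which is the consistency check one would run before using (7.19) inside a gradient routine.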
Proof. The proof follows as in the proof of Lemma 4.6. Considering Q fixed, and making the substitution q_{ij} \rightarrow K_{ij}, one obtains

    \partial[M_Q^H M_Q]/\partial K_{ij} = \sum_{k=1}^{m} [ F_{Q_k}^T F^{ij}_{Q_k} + (F^{ij}_{Q_k})^T F_{Q_k} ]    (7.20)

Using equation (4.89), and substituting (7.20) for the derivative of (M_Q^H M_Q), equation (7.19) follows.

Partial derivatives of J_\infty(Q, K). Defining, for fixed Q,

    F_Q(K) \triangleq \sum_{k=1}^{m} F_{Q_k}(Q, K)                                        (7.21)

the objective functional of equation (7.8) becomes J_\infty(Q, K) = \bar{\sigma}[F_Q(K)], and its partial derivative with respect to K_{ij} is given by

    \partial[J_\infty(Q, K)]/\partial K_{ij} = (1 / (2 \bar{\sigma}[F_Q])) W^H [ F_Q^T F^{ij}_Q + (F^{ij}_Q)^T F_Q ] W
with F^{ij}_{Q_k} and F^{ij}_Q obtained from equation (7.12) in Lemma 7.1, combined with the definition (4.95).

Proof. The proof proceeds exactly as the proof of Lemma 4.7, with the substitution q_{ij} \rightarrow K_{ij} and the use of the result (7.12) for \partial F_{Q_k}/\partial K_{ij}.

An adverse aspect of the robustification procedure described above is that the objectives depend on both Q and K. However, Q and K have essentially different roles in the optimization problems. While K is the component of the control system which actually determines the size of the stability domain, Q is only a mathematical parameter. The dependence on Q is a consequence of the fact that, as discussed in Chapter 4, indicators of robust stability derived through the Lyapunov direct method are too conservative unless a search procedure is implemented for choosing the best Q for the Lyapunov equation of the nominal system. Therefore, the effect of the optimization over Q is to improve the analysis tool, and hence the computed stability domain, not the actual stability domain. In the next section, a different approach to the robustification is proposed. The central idea is to apply the analysis technique of Chapter 5 in the computation of the stability domain, thus eliminating the parameter Q from the optimization problems.

7.3 Robustification Associated with Frequency-Domain Analysis

If the plant parameters are time-invariant, robust stability of \dot{x}(t) = (A + E)x(t) can be assessed by the method of Chapter 5, which invokes scaling techniques to reduce the conservatism of the computed stability domain. Recall that E = \sum_{k=1}^{m} p_k E_k was decomposed as E = L D R, where L and R are constant matrices which account for the structure of the perturbation, and D = diag{p_k}.
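For structure matrices E_k of unit rank, the factors L and R of E = L D R can be obtained from a rank-one factorization of each E_k. A sketch under that assumption (names illustrative):

```python
import numpy as np

def ldr_decomposition(E_list):
    """Diagonalize a structured perturbation E = sum_k p_k E_k into
    E = L D R with D = diag{p_k}, assuming each structure matrix E_k
    has rank one, E_k = l_k r_k^T.  L stacks the l_k as columns and
    R stacks the r_k^T as rows."""
    L_cols, R_rows = [], []
    for Ek in E_list:
        U, s, Vt = np.linalg.svd(Ek)
        L_cols.append(U[:, 0] * s[0])   # l_k, scaled so that E_k = l_k r_k^T
        R_rows.append(Vt[0, :])         # r_k^T
    return np.column_stack(L_cols), np.vstack(R_rows)
```

By construction, L @ diag(p) @ R reproduces sum_k p_k E_k for any parameter vector p.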
The necessary and sufficient conditions given by equations (5.24) and (5.26) are not directly computable. By relaxing the uncertainty description, whence admitting complex parameters, and exploiting well-known scaling techniques, sufficient conditions were derived. Specifically, under the assumption that upper bounds on the elements of D are available in the form |D_{ij}| \le \alpha P_{\Delta_{ij}}, \alpha > 0, \forall i, j, the following stability conditions (5.32) and (5.33) on the quantity \alpha were obtained, namely

    \alpha_S < 1 / ( inf_S \bar{\sigma}{ S [M_{11}(s) P_\Delta] S^{-1} } ),   \forall s

    \alpha_\pi < 1 / \bar{\sigma}{ S_\pi [M_{11}(s) P_\Delta] S_\pi^{-1} },   \forall s

As already pointed out, the quantity \alpha is analogous to a multiloop stability margin; it is conservative, because the uncertainty description was relaxed in order to obtain the expressions above.

Let us recall that, when the state-space dynamic equation represents a closed-loop controlled system, M_{11}(s) = R (sI - A_c)^{-1} L, where R and L come from the uncertainty diagonalization. In system stability analysis, the controller is implicitly included in the closed-loop system matrix, namely A_c. Therefore, the above conditions can be written as functions of the controller:

    \alpha_S(K) < 1 / ( inf_S \bar{\sigma}{ S [M_{11}(K, s) P_\Delta] S^{-1} } ),   \forall s    (7.26)

    \alpha_\pi(K) < 1 / \bar{\sigma}{ S_\pi [M_{11}(K, s) P_\Delta] S_\pi^{-1} },   \forall s    (7.27)

7.3.1 'Optimal' Stability Domain

The controller that satisfies the design objectives is in general not unique. From the point of view of stability robustness, the optimal controller is the one which yields the largest stability domain in the parameter space. Using expression (5.35), the stability domain in
terms of \alpha(K) is

    S_{d\alpha,K}(K) = { p : |p_k| < \alpha(K) P_{\Delta_{kk}}, k = 1, ..., m }            (7.28)

where \alpha(K) comes from either (7.26) or (7.27). Notice that, since P_\Delta is diagonal, the stability domain is a hypercube in the parameter space. The largest stability domain is given by the controller which maximizes \alpha(K) or, equivalently, by the controller which minimizes the denominator of either (7.26) or (7.27), depending on which equation is used. Defining optimal stability indicators, for each frequency value, as

    \alpha*_S(K) = max_K \alpha_S(K)                                                      (7.29)

    \alpha*_\pi(K) = max_K \alpha_\pi(K)                                                  (7.30)

and in correspondence defining the functional objectives

    J_S(K) = inf_S \bar{\sigma}{ S [M_{11}(K, s) P_\Delta] S^{-1} }                        (7.31)

    J_\pi(K) = \bar{\sigma}{ S_\pi [M_{11}(K, s) P_\Delta] S_\pi^{-1} }                    (7.32)

the optimal stability domain can be expressed as

    S_{d\alpha,K}(K) = { p : |p_k| < \alpha*(K) P_{\Delta_{kk}}, k = 1, ..., m }           (7.33)

where \alpha*(K) is obtained as the solution of either one of the following optimization problems:

    min_K J_S(K) = min_K { inf_S \bar{\sigma}{ S [M_{11}(K, s) P_\Delta] S^{-1} } }   subject to A_c stable    (7.34)

    min_K J_\pi(K) = min_K \bar{\sigma}{ S_\pi [M_{11}(K, s) P_\Delta] S_\pi^{-1} }   subject to A_c stable    (7.35)
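With a fixed scaling (identity, for simplicity), the margin appearing in (7.28) can be estimated on a frequency grid; optimal or Perron scaling would tighten the estimate. A grid-based, hence approximate, sketch (names illustrative):

```python
import numpy as np

def alpha_bound(Ac, L, R, P_delta, omegas):
    """Grid estimate of a multiloop margin: alpha < 1 / sup_w
    sigma_max( M11(jw) P_delta ), with M11(s) = R (sI - Ac)^{-1} L.
    Identity scaling is used here, so the bound is the conservative
    (unscaled) version of (7.26)-(7.27)."""
    n = Ac.shape[0]
    peak = 0.0
    for w in omegas:
        M11 = R @ np.linalg.solve(1j * w * np.eye(n) - Ac, L)
        peak = max(peak, np.linalg.svd(M11 @ P_delta, compute_uv=False)[0])
    return 1.0 / peak
```

For the scalar loop A_c = -1 with a unit-structure perturbation the peak occurs at zero frequency, giving alpha = 1, which agrees with the exact margin |p| < 1 for -1 + p.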
7.3.2 Robustification Procedure

The optimization problems in (7.34) and (7.35) can be used as the basis of robustification procedures similar to the one discussed in the last subsection. In the case of robustification using Lyapunov analysis, there were two parameter matrices, namely Q and K. Recall that Q is an n x n symmetric matrix, where n is the dimension of the system, having therefore n(n+1)/2 parameters. The objective function, in the case of the 2-norm, involves the maximum singular value of M_Q, which is an nm x n matrix. In the case of the \infty-norm, the maximum singular value of the n x n matrix \sum_{k=1}^{m} F_{Q_k} is required.

Optimization of \alpha_S based on (7.34) also involves two parameter matrices, namely the scaling S and the controller K. Notice that the optimal scaling at each frequency depends on the controller; therefore, what problem (7.34) requires is actually a joint minimization of the objective in S and K. It is easy to see, from the expression of M_{11}(K, s), that the optimization problem is not convex in K, and hence it is not jointly convex in K and S. Nonetheless, for robustification purposes, nonconvexity does not render the optimization useless. Since the procedure is iterative, convergence to the global optimum at each step is not necessary.

The conceptual simplicity of the robustification procedure is an important motivation for its adoption, and it should be matched by operational simplicity. Thus, it is advantageous to adopt the technique of alternating optimizations over K and S. Due to the simplification introduced by this option, derivatives of the objectives with respect to the elements of S and K can be straightforwardly obtained, as shown in the next subsection. Moreover, in the step where K is fixed, say K = \bar{K}, the optimization problem becomes inf_S \bar{\sigma}{ S [M_{11}(\bar{K}, s) P_\Delta] S^{-1} }, which can be shown to be convex in S [12, 40].
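The inner problem inf_S \bar{\sigma}(S M S^{-1}) over positive diagonal scalings can be solved numerically at a single frequency. Parametrizing S = diag(e^x) with the first entry normalized to 1 makes the search smooth and unconstrained; the convexity cited from [12, 40] refers to this kind of parametrization. A sketch (names illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_scaling(M):
    """Minimize sigma_max(S M S^{-1}) over positive diagonal S with the
    first entry fixed to 1.  For diagonal S, (S M S^{-1})_{ij} equals
    (d_i / d_j) M_{ij}, which is exploited below."""
    m = M.shape[0]

    def obj(x):
        d = np.exp(np.concatenate(([0.0], x)))   # S_11 normalized to 1
        return np.linalg.svd((d[:, None] / d[None, :]) * M,
                             compute_uv=False)[0]

    res = minimize(obj, np.zeros(m - 1), method="Nelder-Mead")
    d = np.exp(np.concatenate(([0.0], res.x)))
    return np.diag(d), obj(res.x)
```

For M = [[0, 4], [0.25, 0]], whose unscaled maximum singular value is 4, the optimally scaled value drops to 1, the spectral radius, illustrating how much conservatism the scaling can remove.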
As far as the number of optimization parameters is concerned, S is a diagonal m x m matrix, where m is the number of uncertain system parameters. Since one entry of S can be made unitary, there are (m - 1) parameters in the optimization over S; the number of parameters in the optimization over K depends on the dimensions of B_0 and C_0, and on the order of K.

The robustification procedure, alternating optimizations over S and K, can be based on the following algorithm:

Procedure 7.2. Robustification using optimal similarity scaling:

1. Initialization: Consider given:

(a) Nominal system matrices: A_0, B_0, C_0;

(b) Perturbation structures: E_A, E_B, E_C;

(c) Nominal stabilizing controller, K_0;

(d) Parameter bounds: -\alpha_k \le p_k \le \alpha_k, k = 1, ..., m, which determine the minimum stability domain required for the closed-loop system.

2. Step i, i \ge 1, phase 1: For a fixed K,

(a) Compute the closed-loop system matrix: A_c^{(i)} = f(A_0, B_0, C_0, K^{(i)}), with K^{(1)} = K_0;

(b) Compute the perturbation matrices E_k^{(i)}, k = 1, ..., m. If the closed-loop perturbation does not depend on the controller, these matrices are constant for all steps i;

(c) Obtain the decomposition E = L D R, with L, R constant and D = diag{p_k};

(d) Compute the associated matrix M_{11}(s) = R (sI - A_c)^{-1} L;
(e) Solve the optimization over S in (7.34), which yields S*^{(i)}, and compute \alpha*_S(K^{(i)});

(f) Compute the stability domain S_{d\alpha,K} yielded by K^{(i)};

(g) If the computed stability domain contains the minimum stability domain, stop: the controller K^{(i)} is robust enough. Else, proceed to the next step.

3. Step i, phase 2: Considering S = S*^{(i)},

(a) Solve the optimization over K in (7.34) to obtain the controller K*^{(i+1)};

(b) Compute the associated stability domain S_{d\alpha,K} yielded by K*^{(i+1)};

(c) If the computed stability domain contains the minimum domain, the controller K*^{(i+1)} is robust. Else, let i \leftarrow (i + 1), and return to phase 1.

The advantage of this procedure relative to the procedure which uses Lyapunov analysis is that the optimization over S is convex and unconstrained, while the optimization over Q is constrained and not convex. The numbers of optimization parameters are, respectively, (m - 1) and n(n+1)/2, where n is the system dimension and m the number of uncertain system parameters. Therefore, the number of optimization parameters in S is less than the number in Q whenever m < (n^2 + n + 2)/2.

However, the above procedure still requires optimization over two matrices of parameters, namely S and K. From the computational point of view, it is certainly desirable to avoid the optimization over S, which can be accomplished with a robustification based on the minimization of the objective J_\pi(K) given in (7.35), where Perron scaling is used. The Perron scaling for [M_{11}(K, s) P_\Delta]_+ can be directly computed, and is certainly less expensive than the optimization over S. Although the index J_\pi(\bar{K}), for a given \bar{K}, is
more conservative than the index J_S(\bar{K}), this is not a fundamental issue in the iterative robustification. The following algorithm for robustification using Perron scaling is obtained from a mere simplification of the last procedure:

Procedure 7.3. Robustification using Perron scaling:

1. Initialization: Consider given:

(a) Nominal system matrices: A_0, B_0, C_0;

(b) Perturbation structures: E_A, E_B, E_C;

(c) Nominal stabilizing controller, K_0;

(d) Parameter bounds: -\alpha_k \le p_k \le \alpha_k, k = 1, ..., m, which determine the minimum stability domain required for the closed-loop system.

2. Step i, i \ge 1: For a fixed K,

(a) Compute the closed-loop system matrix: A_c^{(i)} = f(A_0, B_0, C_0, K^{(i)}), with K^{(1)} = K_0;

(b) Compute the perturbation matrices E_k^{(i)}, \forall k. If the closed-loop perturbation does not depend on the controller, these matrices are constant, \forall i;

(c) Obtain the decomposition E = L D R, with L, R constant and D = diag{p_k};

(d) Compute the associated matrix M_{11}(s) = R (sI - A_c)^{-1} L;

(e) Compute the Perron scaling S_\pi for [M_{11}(K, s) P_\Delta]_+;

(f) Compute \alpha_\pi(K^{(i)}) and the domain S_{d\alpha,K} yielded by K^{(i)}, using (7.33);
(g) If the computed stability domain contains the required minimum stability domain, the controller K^{(i)} is robust. Else, solve the optimization problem (7.35), obtaining K*^{(i+1)}; let i \leftarrow (i + 1), and return to step 2.

The optimization problems involved in the robustification can be implemented through gradient descent techniques. Analytic expressions for the derivatives of the objectives with respect to the elements of S and K are given in the next subsection, for the case where K is a static controller.

7.3.3 Derivatives of the Objective Functions

Derivatives of J_S(K) with respect to S_{ii}. Consider a fixed controller, say K = \bar{K}, and let

    M_\alpha = M_{11}(\bar{K}, s) P_\Delta;   M_S = S M_\alpha S^{-1}                     (7.36)

    S = S_* + S_{ii} E_{ii};   S^{-1} = S_*^{-1} + S_{ii}^{-1} E_{ii}                     (7.37)

where S_* is the matrix obtained from S by replacing S_{ii} with zero, and E_{ii} is an elementary matrix whose only nonnull element, in position (i, i), is equal to 1. Pursuing the approach in Latchman [33], one has the following:

Lemma 7.3. The derivative of J_S(K) with respect to the element S_{ii} of the scaling matrix is given by

    \partial[\bar{\sigma}(M_S)] / \partial S_{ii} = (1 / (\bar{\sigma}(M_S) S_{ii})) W^H { M_S^H E_{ii} M_S - [\bar{\sigma}(M_S)]^2 E_{ii} } W    (7.38)

where W is the eigenvector associated with the largest eigenvalue of M_S^H M_S, normalized such that W^H W = 1.
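Expression (7.38) can be checked against a central finite difference. In the sketch below M_\alpha is an arbitrary complex matrix and S a positive diagonal scaling (all names illustrative); W is taken as the right singular vector of M_S, so the two quadratic forms in (7.38) reduce to moduli of single components:

```python
import numpy as np

def dsigma_dSii(M_alpha, s_diag, i):
    """Derivative (7.38): d sigma_max(S M_alpha S^{-1}) / d S_ii
    = (1 / (sigma S_ii)) W^H (M_S^H E_ii M_S - sigma^2 E_ii) W,
    where W^H M_S^H E_ii M_S W = |(M_S W)_i|^2 and
    W^H E_ii W = |W_i|^2."""
    MS = (s_diag[:, None] / s_diag[None, :]) * M_alpha   # S M_alpha S^{-1}
    _, sv, Vh = np.linalg.svd(MS)
    sigma, W = sv[0], Vh[0, :].conj()                    # W^H W = 1
    MSW = MS @ W
    return (abs(MSW[i]) ** 2 - sigma ** 2 * abs(W[i]) ** 2) / (sigma * s_diag[i])
```

The check below compares the formula with a finite-difference estimate on a random complex matrix; agreement presumes the largest singular value is simple, which holds generically.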
Proof. The proof follows the development in [33]. From Lemma 2.1, with the substitutions x \rightarrow S_{ii} and M \rightarrow M_S, one has that

    \partial[\bar{\sigma}(M_S)]/\partial S_{ii} = (1 / (2 \bar{\sigma}(M_S))) W^H \partial[M_S^H M_S]/\partial S_{ii} W    (7.39)

Considering that M_S^H M_S = S^{-1} M_\alpha^H S^2 M_\alpha S^{-1} and that

    \partial[S^2]/\partial S_{ii} = 2 S_{ii} E_{ii};   \partial[S^{-1}]/\partial S_{ii} = -S_{ii}^{-2} E_{ii}

it follows that \partial[M_S]/\partial S_{ii} = E_{ii} M_\alpha S^{-1} - S M_\alpha S^{-1} E_{ii} S^{-1} = S_{ii}^{-1} (E_{ii} M_S - M_S E_{ii}), and therefore

    \partial[M_S^H M_S]/\partial S_{ii} = S_{ii}^{-1} [ 2 M_S^H E_{ii} M_S - E_{ii} M_S^H M_S - M_S^H M_S E_{ii} ]

Substituting in (7.39), and considering that

    M_S^H M_S W = [\bar{\sigma}(M_S)]^2 W;   W^H M_S^H M_S = [\bar{\sigma}(M_S)]^2 W^H

so that W^H E_{ii} M_S^H M_S W = W^H M_S^H M_S E_{ii} W = [\bar{\sigma}(M_S)]^2 W^H E_{ii} W, expression (7.38) follows.

Derivatives of J(K) with respect to K_{ij}. For a fixed scaling matrix, say S = \bar{S}, both the objectives (7.31) and (7.32) can be represented as

    J(K) = \bar{\sigma}[ \bar{S} M_\alpha \bar{S}^{-1} ] = \bar{\sigma}(M_{\bar{S}})       (7.40)

Considering a static controller, and writing K = K_* + K_{ij} X^{ij} as in (7.11), the matrix M_{\bar{S}} corresponding to the closed-loop system is

    M_{\bar{S}} = \bar{S} M_\alpha \bar{S}^{-1} = \bar{S} [ R (sI - A_0 - B_0 K C_0)^{-1} L ] P_\Delta \bar{S}^{-1}    (7.41)
An analytic expression for the derivative of the objective in (7.40) with respect to the elements of the controller is given by the next lemma.

Lemma 7.4. The partial derivative of J(K) with respect to K_{ij} is given by

    \partial[J(K)]/\partial K_{ij} = (1 / (2 \bar{\sigma}(M_{\bar{S}}))) W^H [ V^{ij} + (V^{ij})^H ] W    (7.42)

where W is the eigenvector associated with the largest eigenvalue of M_{\bar{S}}^H M_{\bar{S}}, normalized such that W^H W = 1, and, with X^{ij} as defined in (7.11),

    V^{ij} = M_{\bar{S}}^H \bar{S} R T^{-1} B_0 X^{ij} C_0 T^{-1} L P_\Delta \bar{S}^{-1};   T = (sI - A_0 - B_0 K C_0)

Proof. From Lemma 2.1, one has that

    \partial[\bar{\sigma}(M_{\bar{S}})]/\partial K_{ij} = (1 / (2 \bar{\sigma}(M_{\bar{S}}))) W^H \partial[M_{\bar{S}}^H M_{\bar{S}}]/\partial K_{ij} W    (7.43)

Now,

    \partial[M_{\bar{S}}^H M_{\bar{S}}]/\partial K_{ij} = M_{\bar{S}}^H \partial[M_{\bar{S}}]/\partial K_{ij} + ( \partial[M_{\bar{S}}]/\partial K_{ij} )^H M_{\bar{S}}    (7.44)

where

    \partial[M_{\bar{S}}]/\partial K_{ij} = \bar{S} R \partial[(sI - A_0 - B_0 K C_0)^{-1}]/\partial K_{ij} L P_\Delta \bar{S}^{-1}    (7.45)

Using the decomposition of K and the definition of T, one has T = (sI - A_0 - B_0 K_* C_0 - K_{ij} B_0 X^{ij} C_0), so that \partial[T^{-1}]/\partial K_{ij} = T^{-1} B_0 X^{ij} C_0 T^{-1}. Substituting in (7.45), one obtains \partial[M_{\bar{S}}]/\partial K_{ij} = \bar{S} R T^{-1} B_0 X^{ij} C_0 T^{-1} L P_\Delta \bar{S}^{-1}. Substituting in (7.44), that expression becomes

    \partial[M_{\bar{S}}^H M_{\bar{S}}]/\partial K_{ij} = [ M_{\bar{S}}^H \bar{S} R T^{-1} B_0 X^{ij} C_0 T^{-1} L P_\Delta \bar{S}^{-1} ] + [ M_{\bar{S}}^H \bar{S} R T^{-1} B_0 X^{ij} C_0 T^{-1} L P_\Delta \bar{S}^{-1} ]^H

Defining the bracketed term as V^{ij}, and using it in (7.43), expression (7.42) follows.

The next section illustrates the application of the robustification procedures given in this chapter, and compares their performances.
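Before turning to the application, the Perron scaling of step 2(e) in Procedure 7.3 deserves a sketch. It is built from the right and left Perron eigenvectors of the nonnegative matrix [M_{11} P_\Delta]_+, and needs no search: for a nonnegative matrix A this scaling achieves \bar{\sigma}(S_\pi A S_\pi^{-1}) = \rho(A), the Perron root (names illustrative):

```python
import numpy as np

def perron_scaling(M):
    """Perron scaling for the nonnegative matrix A = |M| (elementwise
    modulus): S_pi = diag( sqrt(y_i / x_i) ), with x and y the right
    and left Perron eigenvectors of A.  The similarity-scaled matrix
    S_pi A S_pi^{-1} then has equal left and right Perron vectors, and
    its maximum singular value equals the Perron root of A."""
    A = np.abs(M)
    eigvals, V = np.linalg.eig(A)
    x = np.abs(V[:, np.argmax(eigvals.real)].real)     # right Perron vector
    eigvals_l, U = np.linalg.eig(A.T)
    y = np.abs(U[:, np.argmax(eigvals_l.real)].real)   # left Perron vector
    return np.diag(np.sqrt(y / x))
```

For A = [[0, 4], [0.25, 0]], whose Perron root is 1, the scaled matrix becomes [[0, 1], [1, 0]] with maximum singular value 1, while the unscaled maximum singular value is 4; this is the directly computable bound used by J_\pi.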
7.4 Application

Consider the linearized model of the dynamics of a VTOL helicopter in the vertical plane [4]:

    \dot{x}(t) = [ -0.0366   0.0271   0.0188  -0.4555
                    0.0482  -1.0100   0.0024  -4.0208
                    0.1002   0.3681  -0.7070   1.4200
                    0        0        1        0       ] x(t)
               + [  0.4422   0.1761
                    3.5446  -7.5922
                   -5.5200   4.4900
                    0        0       ] u(t)

    y(t) = [ 0  1  0  0 ] x(t)

The state variables x_1 to x_4 are, respectively, horizontal velocity (knots), vertical velocity (knots), pitch rate (degrees/sec) and pitch angle (degrees), while the inputs u_1 and u_2 are the collective pitch control and the longitudinal cyclic control. The open-loop eigenvalues, \lambda(A_0) = {0.27578 \pm j0.25758, -0.2325, -2.072667}, show that the plant is unstable. It can be stabilized with output feedback through the algebraic controller

    K_0 = [ -1.635220   1.582236 ]^T                                                      (7.46)

Due to variations in airspeed, both the dynamic and the input matrix are uncertain. While perturbations to some of the matrix elements are negligible, the main changes take place in the elements A_{32}, A_{34} and B_{21}. The perturbation to the closed-loop system is

    E = p_1 [ 0 0 0 0       + p_2 [ 0 0 0 0       + p_3 [ 0 0    0 0
              0 0 0 0               0 0 0 0               0 K(1) 0 0
              0 1 0 0               0 0 0 1               0 0    0 0
              0 0 0 0 ]             0 0 0 0 ]             0 0    0 0 ]

with the three structure matrices denoted E_1, E_2 and E_3, respectively,
where K(1) is the first element of the 2 x 1 controller. At the equilibrium point considered, the uncertain parameters are known to vary in the ranges

    |p_1| \le 0.05;   |p_2| \le 0.01;   |p_3| \le 0.04                                     (7.47)

Therefore, the 2-norm of the parameter vector is bounded above by ||p||_2 \le 0.06481. In the original robustification procedure [4], equation (4.22) was used to compute an upper bound on the allowable 2-norm of the parameter vector. With the choice

    Q = [ 1.3000   0.5800   0.6410   0.3460
          0.5800   1.5200   0.2920   0.9560
          0.6410   0.2920   1.2678   0.0850
          0.3460   0.9560   0.0850   2.2732 ]

the following upper bound on the perturbation vector was reported [4]:

    ||p||_{2max} = 0.02712 < 0.06481

which means that, by that measure, the nominal controller could not be shown to keep the system stable for all perturbations in the admissible class. It was also reported that, after 26 iterations of a robustification procedure based on (4.22), a stabilizing controller

    K* = [ -0.99633989   1.801833665 ]^T

was obtained, which guarantees a positive stability margin, since it yields

    ||p||_{2max} = 0.12947 > 0.06481                                                       (7.48)

Procedure 7.1 of the last section is an implementation of the optimization problem (7.7), which is based on equation (4.35), derived in Chapter 4. Like (4.22), equation (4.35) gives a bound on the 2-norm of the uncertain parameter vector. Procedure 7.1 was applied to the above problem, generating the results shown in Table 7-1 below. The optimization problem was coded using standard MATLAB functions, including optimization routines. For
simplicity of programming, the option of computing gradients by finite differences, offered by the routines, was used. At each optimization step, the number of calls to the optimization routine was arbitrarily limited to 25, since convergence at each step is not fundamental in the robustification procedure.

Table 7-1. Results from Procedure 7.1 with 2-norm bound

Step | Optimization over | sigma_bar(M_Q) | r_s2
  0  |        -          |      8.32      | 0.12019
  1  |        Q          |      3.38      | 0.29586
     |        K          |      2.51      | 0.39841
  2  |        Q          |      2.40      | 0.41667
     |        K          |      2.24      | 0.44643
  3  |        Q          |      2.19      | 0.45662
     |        K          |      2.08      | 0.48077
  4  |        Q          |      2.07      | 0.48309
     |        K          |      2.03      | 0.49261
  5  |        Q          |      2.01      | 0.49751
     |        K          |      2.00      | 0.50000
  6  |        Q          |      1.99      | 0.50251
     |        K          |      1.97      | 0.50761
Number of flops: Total 1,793,967; Average 298,950

Notice that the result in the first row corresponds to Q = 2I. Thus,

r_s2(2I, K0) = 0.12019 > 0.06481
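Two of the numbers above follow from simple arithmetic, which can be verified directly (a sanity check on the transcription, not part of the original procedure): the bound 0.06481 is the 2-norm of the vertex of the box (7.47), and the r_s2 column of Table 7-1 is the reciprocal of the sigma_bar(M_Q) column:

```python
import math

ranges = [0.05, 0.01, 0.04]   # parameter ranges from (7.47)

# Largest 2-norm of an admissible parameter vector, attained at a vertex
p_norm_max = math.sqrt(sum(r * r for r in ranges))
print(round(p_norm_max, 5))   # 0.06481

# r_s2 is the reciprocal of sigma_bar(M_Q), e.g. for the starting point Q = 2I
print(round(1 / 8.32, 5))     # 0.12019
```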
This result shows that the nominal controller given by (7.46) yields a closed-loop system which is robust against the expected class of perturbations; therefore, there is no need for robustification. This result also shows that Q = 2I is a better starting point than the Lyapunov matrix used in Bhattacharyya [4]. In fact, if Q = 2I is used with the stability measure based on (4.22), the result is

r_s2(2I, K0) = 0.115449 > 0.06481

thus indicating robust stability. Notice that this result is slightly worse than the value in row 1 of Table 7-1, thus showing that the stability condition (4.35), derived in this dissertation, is less conservative than (4.22).

Although robust stability is guaranteed by the result of row 1, additional steps of the robustification were performed, in order to observe the performance of the procedure. It can be noted that, as expected, r_s2(Q, K) increases at each step. At the end of the 6th step, the controller is

K(6) = [ -1.174803   2.458954 ]^T   (7.49)

and r_s2(Q(6), K(6)) = 0.50761, which is almost 4 times the initial value. The average number of flops required by the first 6 steps of the procedure was 298,950. If convergence were sought at each step, the number of calls to the optimization routine would not be limited to 25, and the average number of flops would certainly increase.

The only design objective in the robustification procedures described in previous sections is robust closed-loop stability. As the controller is changed, different sets of eigenvalues are obtained. Table 7-2 shows closed-loop eigenvalues resulting from some choices of the controller.
Table 7-2. Closed-loop eigenvalues for different controllers

Controller | lambda(Ac)
K0         | -19.01149;  -0.24413 ± j1.41772;  -0.06289
K* [4]     | -18.39629;  -0.24759 ± j1.25014;  -0.07363
K(6)       | -23.97406;  -0.26839 ± j1.22231;  -0.07583

Clearly, the performance of the closed-loop system depends on the controller. Therefore, the robustification procedure would be more realistic if performance objectives were included. In principle, design objectives that are satisfied by assigning closed-loop eigenvalues to suitable regions of the complex plane can be handled by those procedures. Such additional specifications can be taken into account by the introduction of additional constraints in the optimizations over Q and K. However, the computational effort required would certainly increase.

Procedure 7.1 was also implemented with analysis based on the ∞-norm of the parameter vector. Weighting of parameter ranges, described in Subsection 5.3.5, was used, whence the objective to be optimized is J_s∞(Q, K), given in equation (7.10) with F_Qk replaced by the weighted matrix F~_Qk. Table 7-3 below shows the results obtained in the first step of the procedure.

Table 7-3. Results from Procedure 7.1 with ∞-norm bound

Step | Minimization over | J_s∞(Q, K) | l_s∞(Q, K)
  0  |        -          |   48.895   |  0.02045
  1  |        Q          |   21.257   |  0.04704
     |        K          |   18.910   |  0.05288
Number of flops: 201,506
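The weighted-range bookkeeping behind Table 7-3 can be reproduced in a few lines. The sketch below assumes, as the next paragraph explains, weights proportional to the known ranges with the smallest range (p2) normalized to 1, and the reciprocal relation between the two tabulated columns:

```python
ranges = [0.05, 0.01, 0.04]                   # known ranges (7.47)
weights = [r / min(ranges) for r in ranges]   # -> [5.0, 1.0, 4.0]

for sigma in (48.895, 21.257, 18.910):        # J objective column of Table 7-3
    l_inf = round(1.0 / sigma, 5)             # tabulated l_s_inf value
    domain = [round(l_inf * w, 5) for w in weights]
    print(l_inf, domain)                      # e.g. 0.02045 [0.10225, 0.02045, 0.0818]
```

The three printed domains match the bounds listed after the table.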
Parameter weights were chosen such that the computed stability domain is proportional to the known parameter ranges, given in (7.47), with the smallest weighted range, which corresponds to p2, normalized to 1. Therefore, the computed stability domain is obtained by multiplying the value of l_s∞(Q, K) by 5, 1 and 4, respectively. From the values of l_s∞ in Table 7-3, one obtains:

r_s∞(2I, K(0))    =>  |p1| <= 0.10225,  |p2| <= 0.02045,  |p3| <= 0.08180
r_s∞(Q(1), K(0))  =>  |p1| <= 0.23520,  |p2| <= 0.04704,  |p3| <= 0.18816
r_s∞(Q(1), K(1))  =>  |p1| <= 0.26440,  |p2| <= 0.05288,  |p3| <= 0.21152

The stability domain computed with l_s∞(2I, K(0)) already contains the minimum required stability domain, thus confirming that the nominal controller provides the desired degree of robustness. Furthermore, since this value corresponds to Q_ini, no optimization is needed to check robust stability. The number of flops required to perform one step of this version of Procedure 7.1, with the number of calls to the minimization routine limited to 25, is about 32% less than the average number required by the 2-norm based procedure, which is given in the last row of Table 7-1.

For the application of robustification with the diagonal uncertainty description, the perturbation was decomposed as E = LDR, where:

L(K) = [ 0  0  0    ]    D = [ p1  0   0  ]    R = [ 0  1  0  0 ]
       [ 0  0  K(1) ]        [ 0   p2  0  ]        [ 0  0  0  1 ]
       [ 1  1  0    ]        [ 0   0   p3 ]        [ 0  1  0  0 ]
       [ 0  0  0    ]

Since L depends on the controller, it must be evaluated at each step.
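The decomposition can be checked numerically: L D R must reproduce the structured perturbation p1 E1 + p2 E2 + p3 E3 of Section 7.4. A sketch (the value of K(1) is taken from (7.46); any admissible p works):

```python
import numpy as np

K1 = -1.635220                      # first element of the controller (7.46)
p = np.array([0.05, 0.01, 0.04])    # an admissible parameter vector

# Structure matrices of the closed-loop perturbation (elements A32, A34, B21)
E1 = np.zeros((4, 4)); E1[2, 1] = 1.0
E2 = np.zeros((4, 4)); E2[2, 3] = 1.0
E3 = np.zeros((4, 4)); E3[1, 1] = K1

L = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, K1 ],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
D = np.diag(p)
R = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 0.0]])

E = p[0] * E1 + p[1] * E2 + p[2] * E3
print(np.allclose(L @ D @ R, E))    # the two descriptions coincide
```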
Table 7-4 presents values of alpha_pi(K) obtained for 5 steps of Procedure 7.3. Step 0 concerns only the Perron scaling of the closed loop obtained with the nominal controller, while each subsequent step comprises an evaluation of alpha_pi(K) followed by a constrained optimization of J_pi(K) over the controller elements, with constraints being used to ensure that Ac remains asymptotically stable.

Table 7-4. Results from Procedure 7.3

Step        |     0     |     1     |     2     |     3     |     4     |     5
alpha_pi(K) | 21.319064 | 21.319064 | 21.319065 | 21.474339 | 21.474339 | 21.474340
Number of flops: Total 465,465; Average 77,577

The controller obtained at step 5 was

K(5) = [ -1.63154   1.60444 ]^T

The value of alpha_pi(K) indicates how much the known ranges can be increased without the system becoming unstable for any combination of parameters in the increased parameter domain. Allowable parameter ranges are then obtained by multiplying the known ranges given in (7.47) by alpha_pi(K). Therefore, the stability domains yielded by the initial and the final controllers are, respectively:

alpha_pi(K0)    =>  |p1| <= 1.06595,  |p2| <= 0.21319,  |p3| <= 0.85276
alpha_pi(K(5))  =>  |p1| <= 1.07372,  |p2| <= 0.21474,  |p3| <= 0.85897

The result obtained with K0 confirms that the nominal stabilizing controller is robust against the expected perturbation. Notice that, with optimization over K, alpha_pi(K) has increased only about 0.73% between steps 0 and 5. On the other hand, the stability domain
indicated at step 0 is much larger than the one indicated at step 0 of Table 7-3, which of course depends on the initial choice of Q. This seems to indicate that the apparent effectiveness of the robustification over Q and K is brought about by improvements in the analysis tool obtained through the optimization over Q.

The computational effort required by Procedure 7.3 depends on the number of frequency points used. The above table was computed using 30 frequency points for each value of K; in the optimization step, the number of calls to the minimization routine was limited to 25. Comparing with the previous tables, the average number of flops required to compute alpha_pi(K) is seen to be about 26% of the average number required by r_s2, given in Table 7-1, and about 38% of the number required by the computation of l_s∞, shown in Table 7-3.

In order to have a better assessment of the computational effort required by the stability domain indicators derived in Chapters 5 and 6, they were computed for the nominal closed-loop system. For the indicators requiring optimization, there was no restriction on the number of calls to the minimization routine. The corresponding computed stability domains are given in Table 7-5. Each of the computed stability domains contains the minimum required, thus confirming that the nominal controller provides adequate stability robustness against the expected parameter variation. Moreover, they indicate that the system will remain stable even if the known parameter ranges are increased by a factor of order 20. Notice that all of the results computed with the diagonal uncertainty description indicate a less conservative stability domain than the Lyapunov direct method. The Lyapunov result took 523 steps of the optimization over Q, thus requiring about 6,680 flops per step. Actually, these figures depend on the tolerances used in the optimization routine.
Table 7-5. Stability domains obtained with K0

Method                                    | |p1|max | |p2|max | |p3|max | Number of flops
Lyapunov direct method: weighted ∞-norm   | 0.92165 | 0.18433 | 0.73733 | 3,493,756
Diagonal uncertainty:
  with similarity scaling                 | 1.06596 | 0.21319 | 0.85276 | 291,469
  with Perron radius bound                | 1.06492 | 0.21298 | 0.85193 | 3,190
  with Perron scaling                     | 1.06596 | 0.21319 | 0.85276 | 18,841
  with Osborne scaling                    | 1.06596 | 0.21319 | 0.85276 | 140,370

In the computations with diagonal uncertainty, 10 frequency points were used over a preselected frequency range. The flop counts shown in the table refer to the searches over these 10 points. On the basis of the size of the computed stability region and of the corresponding computational effort, the data in the table show that the most efficient technique was the frequency-domain approach associated with Perron scaling.

7.5 Conclusions

A robustification procedure was proposed in Bhattacharyya [4], which uses the Lyapunov direct method for assessing the allowable 2-norm of the plant parameter vector. That procedure involves a numerical optimization of the stability indicator over the Lyapunov matrix Q and a static controller K. In this chapter that concept was explored, adapting the procedure to the stability conditions derived in Chapter 4, which are
potentially less conservative. Analytical expressions for the partial derivatives of the objective functions with respect to Q and K were derived.

The procedure established in Chapter 4 starts from Q_ini = 2I in the optimizations over Q. In the example given above, this initial value proves to be better than the original one [4], since it permits robustness to be established without need for robustification of the nominal controller.

The main result of this chapter is the reformulation of the robustification procedure in order to avoid the time-consuming optimization over the Lyapunov matrix Q. This is accomplished by substituting the analysis technique developed in Chapter 5 for the analysis based on the Lyapunov direct method. The optimization is therefore done over the elements of the controller only. Computational experience indicates that results from the technique of Chapter 5 are at least as good as the results from the Lyapunov method, and in general better. For the example solved above, this is confirmed by the results with a fixed controller, shown in Table 7-5.

Although a constant controller was considered in this section, the robustification can be used with dynamic controllers of fixed structure. Also, its scope can be extended through the inclusion of additional constraints, in order to accommodate performance specifications.
CHAPTER 8
NECESSARY STABILITY DOMAIN IN THE PARAMETER SPACE

8.1 Introduction

Let us recall the perturbed state space model of previous chapters,

x'(t) = A_p x(t) = (A + E) x(t);  E = Σ_{k=1}^m p_k E_k,  |p_k| <= α_k, ∀k   (8.1)

where A is asymptotically stable, p = [p1, ..., pm]^T is a vector of real plant parameters, and E_k, k = 1, ..., m, are constant matrices which account for the structure of the perturbation due to the parameter p_k. Uncertainty satisfying the description above belongs to the structured class defined by (2.31), namely

E_α = { E : E = Σ_k p_k E_k, |p_k| <= α_k, ∀k }

The set of upper bounds α_k determines a domain in the parameter space, say P_α, defined by P_α = {p : |p_k| <= α_k}. The fundamental question concerning the stability of the system (8.1) involves determining the largest values of α_k, k = 1, ..., m, say α̂_k, for which A_p remains stable; in other words, it involves determining the largest stability domain in the space of parameters, say Ŝ_d. Another pertinent goal is to determine the worst case parameter, defined as the destabilizing parameter vector of smallest norm.

The analysis methods discussed in Chapters 4 and 5 permit the computation of lower bounds for α̂_k, in general conservative. The set of lower bounds, which will be denoted ᾱ_k, k = 1, ..., m, determines a sufficient stability domain in the space of system parameters, say S_d. Those methods, however, do not indicate which is the worst case perturbation.
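The discussion that follows turns on the fact that stability of finitely many members of a matrix family does not imply stability of the whole family. A hypothetical 2x2 illustration (not the example of [1] cited below): two stable matrices whose midpoint is unstable.

```python
import numpy as np

# Two stable triangular matrices, each with eigenvalues {-1, -1}
A1 = np.array([[-1.0, 4.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [4.0, -1.0]])
M = 0.5 * (A1 + A2)   # midpoint of the segment joining them

print(np.linalg.eigvals(A1).real.max())  # stable: largest real part is -1
print(np.linalg.eigvals(A2).real.max())  # stable: largest real part is -1
print(np.linalg.eigvals(M).real.max())   # unstable: eigenvalues of M are {1, -3}
```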
From the definition (8.1), the matrix A_p can be seen as the result of a mapping from R^m into R^{n x n}. Given that the parameters are assumed to be independent, and since each parameter may affect more than one entry of A, the family of all possible matrices A_p corresponding to E_α constitutes a polytope in R^{n x n}, namely the convex hull of a finite number of points in R^{n x n}, which are the vertices of the polytope.

Robust stability of the system (8.1) is equivalent to all the eigenvalues of A_p having strictly negative real parts, for all perturbations in the class E_α. Unfortunately, a necessary and sufficient analytic condition on the class E_α which guarantees the stability of all members of the polytope has yet to be found. Regarding polytopes of polynomials, it has been shown [2] that, in order to check stability of the polytope, it suffices to check the stability of the exposed edges. The idea of checking stability of a polytope by checking the exposed edges is appealing, because checking the exposed edges requires only a finite number of one-dimensional searches. However, that stability result for polytopes of polynomials does not carry over to polytopes of matrices. An example is available [1] where a set of stable vertex matrices, whose exposed edges are also stable, contains an unstable convex combination of the vertices.

The implication of the remarks above for the stability of the system (8.1), in the presence of perturbations allowed by the set α_k, is that the stability of the family of perturbed matrices cannot be inferred from the stability of the matrices corresponding to the vertices of P_α, and not even from the stability of the matrices corresponding to the exposed edges of this domain.

It has been observed that, in many robustness problems fitting model (8.1) discussed in the literature, a simple technique can be used to obtain a necessary stability domain in the parameter space, starting from a known sufficient domain defined by conservative
∞-norm bounds on p; the necessary domain is obtained through an extension in which the relevant dimensions are kept proportional to the corresponding dimensions of the sufficient domain. Furthermore, the necessary domain thus obtained coincides with necessary domains computed elsewhere. In the next section, a conjecture is presented which justifies the extension of the sufficient domain in the cases studied, whose necessary domains were 'a priori' known. It must be pointed out that all the examples studied have at most 3 parameters, and the uncertainty structure is such that E_k has rank 1, for all k. If that conjecture is somehow proved, then the technique could be used to determine necessary stability domains in similar cases where only the sufficient domain is known.

8.2 Characterization of a Necessary Stability Domain

8.2.1 Definitions

Let us initially define relevant quantities for the ensuing development. Given a set of conservative parameter bounds, ᾱ_k, the corresponding sufficient stability domain is the hyperrectangle

S_d = { p : |p_k| <= ᾱ_k, k = 1, ..., m }   (8.2)

Similarly, to a set of exact parameter bounds, α̂_k, corresponds a necessary stability domain:

Ŝ_d = { p : |p_k| <= α̂_k, k = 1, ..., m }   (8.3)

The boundary of these domains will be designated, respectively, by B(S_d) and B(Ŝ_d); V designates the set of vertices of a domain under consideration, that is,

V = { v_j, j = 1, ..., 2^m }   (8.4)
Assuming that the domains have proportional relevant dimensions, the sufficient and the necessary parameter upper bounds are related through

α̂_k = c ᾱ_k,  k = 1, ..., m;  c >= 1   (8.5)

Since S_d ⊆ Ŝ_d, let us define the complement of S_d relative to Ŝ_d as

ΔS_d = { p : ᾱ_k <= |p_k| <= α̂_k, ∀k }   (8.6)

Given a set α of parameter upper bounds, and the corresponding set of perturbation structures E_k, let the family of perturbed system matrices be:

Ω_α = { A_p : A_p = (A + E), E = Σ_{k=1}^m p_k E_k, |p_k| <= α_k, ∀k }   (8.7)

Given a particular perturbed matrix, let

Rλ_p = max_i { Re[λ_i(A_p)] }   (8.8)

and, given a family of perturbed matrices, let

R̂λ_p = max_{A_p ∈ Ω_α} { Rλ_p }   (8.9)

For a given parameter domain, π* denotes the set of parameter values belonging to its boundary which lead to perturbed matrices having Rλ_p = R̂λ_p. Also, for a given set of parameter bounds, let r_k^{(j)} be the derivative of Rλ_p of the perturbed matrix corresponding to the j-th vertex of the associated parameter domain with respect to the parameter p_k, that is,

r_k^{(j)} = ∂[Rλ_p]/∂p_k |_{p = v_j}   (8.10)

A critical point p_cr of a given domain is characterized as:

p_cr = { p : p ∈ π*, r_k points outwards, ∀k }   (8.11)
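Definitions (8.8) and (8.9) are easy to exercise on a toy system. The sketch below uses an illustrative upper-triangular example, deliberately chosen so that Rλ_p is monotone in each parameter and the maximum over the box is attained at a vertex (which, as noted above, is not true in general):

```python
import itertools
import numpy as np

A  = np.array([[-1.0, 0.5], [0.0, -2.0]])   # stable nominal matrix
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])     # perturbation structures
E2 = np.array([[0.0, 0.0], [0.0, 1.0]])
alpha = [0.5, 0.5]                          # bounds alpha_k

def Rlam(p):
    """Rlam_p of (8.8): largest real part of the eigenvalues of A_p."""
    Ap = A + p[0] * E1 + p[1] * E2
    return np.linalg.eigvals(Ap).real.max()

# A_p stays triangular, so Rlam_p = max(-1 + p1, -2 + p2) and the
# maximum over the box is attained at the vertex (alpha1, alpha2).
Rhat = max(Rlam(v) for v in itertools.product(*[(-a, a) for a in alpha]))
print(Rhat)   # -0.5 < 0: by (8.12), the whole box is stable here
```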
Using these definitions, one has that, given a set α_k of parameter bounds, the perturbed system given by (8.1) is asymptotically stable if and only if

R̂λ_p < 0   (8.12)

8.2.2 Extension of a Sufficient Domain

Assume that a sufficient stability domain S_d is given, and let r̄_k^{(j)}, j = 1, ..., 2^m, be the derivatives r_k^{(j)}, as defined above, computed at its vertices. The signs of the derivatives can be used to determine directions in the space of parameters in which Rλ_p increases. Figure 8-1 below illustrates the representation of a 2-dimensional parameter domain, where each of the coordinate axes represents the subdomain of one independent parameter. The dashed square represents the known sufficient domain. At vertex v_j, j = 1, ..., 4, the direction of the arrow parallel to e_k, k = 1, 2, corresponds to the sign of the derivative with respect to p_k.

Figure 8-1. Extension of a conservative domain
Now, suppose that, starting from a point on the boundary of S_d and searching in the space of parameters, one can find a point which leads to Rλ_p = 0. This point determines a domain larger than the sufficient domain, say S'_d, whose boundary contains at least one value of p leading to the imminence of instability; therefore, S'_d is a candidate necessary stability domain. One such domain can be represented as the exterior square in Figure 8-1. The candidate domain can be claimed to be the necessary domain if the complement ΔS'_d of S_d relative to S'_d does not contain any point p such that Rλ_p > 0.

Observation of the signs of corresponding derivatives of Rλ_p, at the vertices of the sufficient domain and at the vertices of the necessary domain obtained by extension, for many cases, has led to the following conjecture:

Conjecture 8.1. Assume that a sufficient stability domain S_d, a candidate necessary domain S'_d, and the derivatives of Rλ_p at their vertices, respectively r̄_k^{(j)} and r'_k^{(j)}, are given. Then, no parameter p leading to instability exists in the complement ΔS'_d if

sign(r̄_k^{(j)}) = sign(r'_k^{(j)}),  ∀k, ∀j   (8.13)

In the next section, a procedure is given for the extension of a known sufficient domain. It will be shown later that the procedure leads to the actual necessary domain in several stability problems obtained from the literature.

8.3 Computation of the Necessary Stability Domain

8.3.1 Procedure for Computation

Suppose that, for a perturbed system satisfying the model (8.1), A and E_k, k = 1, ..., m, are given, and that the application of a conservative analysis technique yields a set ᾱ_k of
parameter bounds, which defines the sufficient stability domain S_d. Furthermore, assume that Conjecture 8.1 applies to the problem. Then, the following procedure permits the computation of a necessary stability domain, Ŝ_d. Additionally, the worst case parameter vector p_w is obtained, where p_w is the smallest norm parameter vector which drives the system to the imminence of instability.

Procedure 8.1. Computation of Ŝ_d:

1. Given S_d, identify the critical point p_cr ∈ B(S_d);

2. Starting at p_cr, follow a path in the space of parameters, in the direction of increasing Rλ_p, until instability is reached at some p' ∈ R^m. If p_cr is one of the vertices of S_d, search along the diagonal through p_cr (see Procedure 8.2);

3. Let S'_d be the domain defined in the space of parameters by the components p'_k of p'; thus, S'_d is a candidate for Ŝ_d;

4. Determine the set π* belonging to the boundary B(S'_d);

5. If p' ∈ π*, then Ŝ_d = S'_d, and p_w is the smallest norm vector belonging to π* ⊆ B(S'_d). The procedure is over;

6. If p' ∉ π*, then Rλ_p is larger at some other point of B(S'_d) than it is at p', which implies instability at that point, since Rλ_p = 0 at p'. Then, determine the critical point p'_cr ∈ B(S'_d), and from this point follow a path in the direction of decreasing Rλ_p, until stability is regained at a point, which becomes the current point p'; repeat from step 3.
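Step 2 must follow a direction of increasing Rλ_p. An analytical gradient is derived in the next subsection; meanwhile, a central finite-difference approximation already illustrates the idea (toy triangular system, hypothetical data):

```python
import numpy as np

A  = np.array([[-1.0, 0.5], [0.0, -2.0]])
Es = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0]])]

def Rlam(p):
    Ap = A + sum(pk * Ek for pk, Ek in zip(p, Es))
    return np.linalg.eigvals(Ap).real.max()

def grad_fd(p, h=1e-6):
    """Central finite-difference gradient of Rlam_p at p."""
    g = np.zeros(len(p))
    for k in range(len(p)):
        e = np.zeros(len(p)); e[k] = h
        g[k] = (Rlam(p + e) - Rlam(p - e)) / (2 * h)
    return g

# Here the rightmost eigenvalue near the origin is -1 + p1, so the
# direction of increasing Rlam_p is (1, 0):
print(grad_fd(np.array([0.0, 0.0])))
```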
Notice that the determination of the critical point belonging to the boundary of a given domain involves at most a bidimensional search in the parameter space, since only exposed faces of the domain must be taken into account.

In order to conclude whether or not Conjecture 8.1 applies to the problem under study, the derivatives r_k^{(j)} must be computed. Derivatives are also needed in order to follow a path in the direction of increasing Rλ_p, in the extension of the sufficient domain. Analytical expressions for the derivatives are given below.

8.3.2 Derivatives of Rλ_p with respect to p_k

For simplicity of notation, let J = Rλ_p. The derivative of J with respect to the k-th component of p is given by the following lemma:

Lemma 8.1. For J as defined above,

[ ∂J/∂p_1 ]           1        [ w_i^H E_1 w_j ]
[ ∂J/∂p_2 ]  = Re{ ---------   [ w_i^H E_2 w_j ]  }   (8.14)
[   ...   ]        w_i^H w_j   [      ...      ]
[ ∂J/∂p_m ]                    [ w_i^H E_m w_j ]

where w_i^H, w_j are respectively the left and right eigenvectors associated with the eigenvalue of (A + E) having the largest real part.

Proof. Let λ_j(A_p) be the eigenvalue of (A + E) having the largest real part. From the definition of J,

∂J/∂p_k = ∂{ Re[λ_j(A_p)] } / ∂p_k

For a complex eigenvalue λ_j(A_p) = a + jb, one has that

∂λ_j/∂p_k = ∂[a + jb]/∂p_k = ∂a/∂p_k + j ∂b/∂p_k = Re[∂λ_j/∂p_k] + j Im[∂λ_j/∂p_k]
Therefore, ∂J/∂p_k = Re{ ∂[λ_j(A + E)]/∂p_k }. Using the result of Lemma 2.1, with the substitutions M = (A + E) and x = p_k, one obtains

∂[λ_j(A + E)]/∂p_k = ( w_i^H [∂(A + E)/∂p_k] w_j ) / ( w_i^H w_j )

where w_i^H, w_j are the left and right eigenvectors of (A + E) corresponding to λ_j. Since ∂(A + E)/∂p_k = E_k, it follows that

∂J/∂p_k = Re{ ( w_i^H E_k w_j ) / ( w_i^H w_j ) }   (8.15)

Expression (8.14) is just the matrix form of this equation, for k = 1, ..., m.

8.3.3 Determination of π*

In order to determine the set of points which maximize Rλ_p, all the exposed faces of a domain must be considered. The points can be determined through a conveniently designed grid search. However, taking into account the directions of the derivatives at the vertices of the domain, it may be possible to form an idea of the shape of the surface Rλ_p over the subdomain given by a face. If a set of adequate starting points in a face can be determined, the set of points which maximize Rλ_p in this face can be obtained through an ascending gradient procedure. The next lemma addresses the implementation of such a procedure.

Lemma 8.2. Consider a face of the parameter domain defined by the parameters p_j, p_l, with j, l ∈ [1, m] and |p_k| <= α_k, k = j, l. Let d_p be a path defined in the face such that

d_p = d_pj + d_pl;  ||d_p|| = c

where c is a small constant, and the components d_pk are taken in the direction of increasing J. Then, points that maximize J = Rλ_p in the face can be determined by the following procedure:
Procedure 8.2. Maximization of Rλ_p:

While ∂J/∂p_k ≠ 0, k = j, l, and |p_k| < α_k for some k = j, l, repeat, for i >= 1:

1. Update the perturbed matrix: A_p^{(i)} = A_p^{(i-1)} + E^{(i-1)}; A_p^{(0)} = A_0; E^{(0)} = 0;

2. Compute ∂J/∂p_k, k = j, l;

3. Compute the path d_p: d_p^{(i)} = d_pj^{(i)} + d_pl^{(i)};

4. Update the parameters: p_k^{(i)} = p_k^{(i-1)} + d_pk^{(i)}; p_k^{(0)} = 0, k = j, l;

5. Update the perturbation: If, for some q = j, l, p_q^{(i)} < -α_q or p_q^{(i)} > α_q, a bound was violated. Then,

(a) Reduce the dimension of the search, fixing p_q = -α_q or p_q = α_q, depending on the bound that was violated;

(b) Update the perturbation: E^{(i)} = d_pk^{(i)} E_k, k = j, l; k ≠ q;

(c) Update the perturbed matrix: A_p^{(i)} = A_p^{(i)} + p_q E_q.

If no bound was violated, E^{(i)} = Σ_k d_pk^{(i)} E_k, k = j, l;

6. Let i = i + 1, and go to step 1.

Proof. First, observe that the stopping conditions of the algorithm represent, respectively, the following situations:

1. A stationary value of Rλ(A_p) with respect to p_k, k = j, l, has been attained;
2. The limits of all parameters have been reached at the i-th step: p^{(i)} = vertex.

Note that the updating in step 5(b) gives rise to the possibility of a combination of the above stopping conditions, namely a stationary point relative to one parameter occurring while the other parameter is at its bound.

The occurrence of either one of the stopping conditions configures the attainment of a point that maximizes Rλ_p. Occurrence of condition 1 at step (i) implies that no further increase in Rλ_p is obtained by taking additional steps, while occurrence of condition 2 implies that the parameters cannot be augmented beyond their values at step (i). The iterative process of increasing the parameters as specified will eventually lead to one of the stopping conditions. By definition, the increment d_p leads to an increase in the value of the objective Rλ_p. Therefore, when either one of the stopping conditions occurs, a local maximum has been attained.

Note that the matrix A_0, referred to in step 1 of the last procedure, is not necessarily the nominal system matrix, which corresponds to the origin in the space of plant parameters. A_0 represents instead the nominal matrix plus the perturbation corresponding to the starting point in the parameter space.

In Lemma 8.2, the path d_p = d_pj + d_pl is chosen such that it increases the value of the objective Rλ_p. This specification does not define the values of the components d_pj and d_pl. However, the additional constraint ||d_p|| = c, where c is a 'small' constant, permits posing the problem as a parameter optimization problem. Letting ΔJ be the change due to a step in a given face of the parameter space, the problem of choosing the step that maximizes J can be posed as
max_{d_pk, k = j, l}  ΔJ   (8.16)

subject to  Σ_k (d_pk)² = c²,  k = j, l   (8.17)

Treating d_pj as a decision variable, and d_pl as a state variable, one obtains the optimal choice

d_pj = c / (1 + η_lj²)^{1/2};  d_pl = η_lj d_pj   (8.18)

where η_lj = g_l / g_j, and g_k, k = j, l, is the partial derivative of J with respect to p_k. The restriction 'c small' is needed in order to avoid large errors in the first order Taylor expansion of ΔJ, which underlies the above results.

In Procedure 8.1, a search in the parameter space is required in order to determine the necessary stability domain. That search can be done using an algorithm like the one in Procedure 8.2, with obvious adaptations. In this case, the second stopping condition is not used, and the dimension of the space is not limited to 2. If the objective is the search for the point of instability which defines a candidate necessary domain, the procedure can be improved by using a variable step instead of a fixed step c. The step can be treated as an additional parameter, and its length chosen such that convergence is improved.

The analysis methods discussed in Chapters 4 and 5 yield sufficient domains, which can be used as the starting point for the application of Procedure 8.1. Particularly, the technique of uncertainty diagonalization associated with Perron scaling is of advantage, due to its low computational cost. Numerical results obtained with the application of Procedure 8.1 to some robustness problems are given in the next section.
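One ascent step can be sketched numerically, combining the eigenvector formula (8.15) for the gradient with the normalized step (8.18), written equivalently as d_p = c g / ||g||. The matrices below are illustrative, not from the text; for a real matrix the left eigenvector is taken from A^T, and the example is chosen so that the rightmost eigenvalue is real and simple:

```python
import numpy as np

A  = np.array([[-1.0, 2.0], [0.5, -3.0]])   # rightmost eigenvalue -2 + sqrt(2), simple
Es = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0]])]

def grad_Rlam(Ap, Es):
    """Gradient of the rightmost eigenvalue's real part, eq. (8.15):
    dJ/dp_k = Re[ w E_k v / (w v) ] with right (v) and left (w) eigenvectors."""
    lam, V = np.linalg.eig(Ap)
    i = np.argmax(lam.real)
    mu, W = np.linalg.eig(Ap.T)             # eigenvectors of A^T give left eigenvectors
    j = np.argmin(np.abs(mu - lam[i]))
    v, w = V[:, i], W[:, j]
    return np.array([(w @ Ek @ v / (w @ v)).real for Ek in Es])

g = grad_Rlam(A, Es)

# Central finite differences agree with the analytical gradient:
h = 1e-6
fd = np.array([(np.linalg.eigvals(A + h * Ek).real.max()
                - np.linalg.eigvals(A - h * Ek).real.max()) / (2 * h) for Ek in Es])
print(g, fd)

# One step of length c in the ascent direction, eq. (8.18):
c = 0.01
d_p = c * g / np.linalg.norm(g)
```

Since E1 + E2 = I here, the two gradient components must sum to 1 (a uniform shift of A shifts every eigenvalue by the same amount), which gives a built-in consistency check.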
8.4 Applications

In the following examples, the sufficient stability domain was computed using the frequency-domain technique of Chapter 5, associated with Perron scaling. The examples were chosen because either the domain yielded by that technique is a necessary domain, or a necessary domain had been reported in the literature. The search for boundary points that maximize Rλ_p was done according to Procedure 8.2, implemented using standard MATLAB™ functions.

Example 8.1. Let us consider the data in Example 6.1. As shown in Chapter 6, the diagonal uncertainty description associated with Perron scaling yields the sufficient stability domain S_d = {p : |p1| <= 0.9675; |p2| <= 0.9675}. Computing derivatives at the vertices, one obtains the result shown in Table 8-1.

Table 8-1. Signs at sufficient domain

Vertex |   V1   |   V2   |   V3   |   V4
Rλ     | -0.624 | -0.119 | -1.500 | -0.532

Comparing the signs of the parameter and derivative components at the vertices, it can be seen that at vertices V2 and V4 all the derivatives point outwards. Selecting initial points near the edge V1-V3, one obtains that p_cr = V2. Expanding the parameter domain, according to Procedure 8.1, yields p'_cr = [-1.25, 1.25]^T. Therefore, a candidate necessary stability domain is S'_d = {p : |p1| <= 1.25; |p2| <= 1.25}. Derivatives at the vertices of this domain are given in Table 8-2.
Comparing the data in both tables, it can be seen that no change of derivative signs at the vertices has occurred, except that some of the derivatives are null at the candidate domain. Searching over the boundaries of the candidate domain, one obtains Rλ_p = 0, which occurs at all points of the edge V2-V4. Therefore, one concludes that

Ŝ_d = {p : |p1| <= 1.25; |p2| <= 1.25};  p_w = [0, 1.25]^T   (8.19)

This result shows that the dimensions of the sufficient domain reach 77% of the corresponding dimensions of the necessary domain.

Table 8-2. Signs at candidate necessary domain

Vertex |   V1   | V2 |   V3   | V4
Rλ     | -0.556 | 0  | -1.500 | 0

To obtain the necessary domain, 50,165 flops were needed, distributed among the computation of the sufficient domain (9,247), determination of derivatives at its vertices (1,575), checking for the worst case at the boundary (14,410), expansion (1,536), computation of derivatives at the vertices of the candidate domain (1,178) and checking for the worst case at its boundary (22,219).

Example 8.2. Let us consider the data in Example 6.2. As shown in Chapter 6, the diagonal uncertainty description associated with Perron scaling yields the sufficient stability domain S_d = {p : |p1| <= 1.75; |p2| <= 1.75}.
Computing derivatives at the vertices, one obtains the result shown in Table 8-3. Since Rλ_p = 0 at vertices V3 and V4, the initial domain is already necessary. One must still check for the worst case at the boundary. Checking the boundary shows that Rλ_p = 0 at any point of the edge V3-V4. Therefore,

Ŝ_d = {p : |p1| <= 1.75; |p2| <= 1.75};  p_w = [1.75, 0]^T   (8.20)

Table 8-3. Signs at necessary domain

Vertex |  V1   |  V2   | V3 | V4
Rλ     | -1.82 | -1.25 | 0  | 0

Computation of the necessary domain took 30,987 flops, distributed among the computation of the sufficient domain (9,377), determination of derivatives at its vertices (1,753), and checking for the worst case at the boundary (19,767).

Example 8.3. Consider the data in Example 6.9. It was shown in Chapter 6 that the diagonal uncertainty description associated with Perron scaling yields the sufficient stability domain S_d = {p : |p1| <= 0.2186; |p2| <= 0.4372; |p3| <= 0.6558}. Computing derivatives at the vertices, one obtains the result shown in Table 8-4. Comparing the signs of the parameter and derivative components at the vertices, it can be seen that all the derivatives point outwards at vertices V4 and V5. Selecting arbitrary initial points for each face, one verifies that the searches for p_w converge to either V4 or V5; therefore, p_cr = V5. Expanding the parameter domain yields p'_cr = [0.344, 0.688, 1.032]^T.
Table 8-4. Signs at sufficient domain

  Vertex        V1     V2     V3     V4     V5     V6     V7     V8
  Sign (p, d)   + + +  + + +  + + +  + + +  + + +  + + +  + + +
  Rλ_p         -1.05  -1.55  -1.42  -1.69  -0.39  -0.86  -0.74  -1.23

Therefore, a candidate necessary stability domain is S'_d = {p : |p_1| ≤ 0.344; |p_2| ≤ 0.688; |p_3| ≤ 1.032}. The derivatives at the vertices of this domain are given in Table 8-5.

Table 8-5. Signs at candidate necessary domain

  Vertex        V1     V2     V3     V4     V5     V6     V7     V8
  Sign (p, d)   + + +  + + +  + + +  + + +  + + +  + + +  + + +  +
  Rλ_p         -1.03  -1.77  -1.62  -1.57   0     -0.74  -0.54  -1.30

Comparing the data in the two tables, it can be seen that a change of derivative component signs has occurred at vertex V2; however, the derivative still points outwards. Searching over the boundaries of the candidate domain, one obtains Rλ_p = 0, which occurs at vertex V5. Therefore, one concludes that

  S_d = {p : |p_1| ≤ 0.344; |p_2| ≤ 0.688; |p_3| ≤ 1.032};  p_w = [0.344, 0.688, 1.032]^T    (8.21)

which matches the result obtained in [10].
Computation of the necessary domain took 522,122 flops, distributed among computation of the sufficient domain (39,551), determination of the derivatives at its vertices (26,178), checking for the worst case at the boundary (267,149), expansion (162,724), and computation of the derivatives at the vertices of the candidate domain (26,530). The boundary of the candidate domain was not checked, because the worst case obtained agrees with the previously available result.

The next tables summarize necessary stability domains for additional 2- and 3-parameter problems, obtained from the sufficient domains computed in Chapter 6. Table 8-6 contains the results for problems with 2 parameters. Note that the initial domains for the cases in rows 1 and 3 were already necessary. On the other hand, note that for the example in row 4 the sufficient domain was very conservative.

Table 8-6. Necessary domains for problems with 2 parameters

            Sufficient domain           Necessary domain             p_w
  Example   |p_1| ≤  |p_2| ≤  Index    |p_1| ≤  |p_2| ≤  Index    p_1      p_2
  Ex. 6.2   0.2500   0.2500   1        0.2500   0.2500   1        0.25     0.25
  Ex. 6.4   0.9150   0.9150   0.92     1.0000   1.0000   1        0        1
  Ex. 6.5   0.0408   0.0816   1        0.0408   0.0816   1        0.0408   0.0816
  Ex. 6.7   0.0579   0.0063   0.23     0.2485   0.0248   1        0.2485   0.0248

Results for additional 3-parameter problems are shown in Table 8-7. The table shows that, for both examples, the sufficient domain was indeed conservative. The column concerning Example 6.10 shows that the worst perturbation for that problem is given by any point of the face defined by p_1 = 0; p_2 = 1; p_3 = 0.

Table 8-7. Necessary domains for problems with 3 parameters

                           Ex. 6.8                             Ex. 6.10
  Domain       |p_1| ≤  |p_2| ≤  |p_3| ≤  Index    |p_1| ≤  |p_2| ≤  |p_3| ≤  Index
  Sufficient   1.5490   1.5490   1.5490   0.86     0.7166   0.7166   0.7166   0.72
  Necessary    1.7910   1.7910   1.7910   1        1.000    1.000    1.000    1
  p_w          1.7910   1.7910   1.7910            0        1        0

In all the examples given above, Procedures 8-1 and 8-2 lead to the necessary stability domain and to the worst-case parameter.

8.5 Conclusions

It has been observed, in several cases of systems under structured uncertainty, that by starting at selected points on the boundary of a known sufficient stability domain and following a trajectory in the space of parameters, chosen such that the largest real part among the eigenvalues of the perturbed matrix increases, it was possible to determine a necessary stability domain, which matched the necessary domain already computed elsewhere by other means. Additionally, it was possible to determine the worst-case perturbation for each problem, which in general had not been previously obtained. Moreover, it was observed that, in all cases, the derivatives with respect to the parameter p_k of Rλ_p, defined in (8.8), have the same sign at corresponding vertices of the sufficient domain and of the derived necessary domain.
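The trajectory-following idea just described can be sketched as a steepest-ascent iteration on Rλ_p: starting from a point of the sufficient domain, move in the direction that most increases the largest real part of the eigenvalues until it reaches zero; the terminal point approximates the worst-case parameter p_w. This is a minimal sketch with finite-difference gradients and hypothetical data, not Procedure 8-1 or 8-2 itself:

```python
import numpy as np

def rlam(A, E_list, p):
    # largest real part among the eigenvalues of the perturbed matrix
    Ap = A + sum(pk * Ek for pk, Ek in zip(p, E_list))
    return np.max(np.linalg.eigvals(Ap).real)

def follow_to_boundary(A, E_list, p0, step=0.05, h=1e-6, tol=1e-4, max_iter=10000):
    # Ascend Rlambda_p from p0 until it reaches zero (the stability boundary)
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = rlam(A, E_list, p)
        if r >= -tol:
            return p, r
        g = np.zeros(len(p))
        for k in range(len(p)):
            dp = np.zeros(len(p))
            dp[k] = h
            g[k] = (rlam(A, E_list, p + dp) - rlam(A, E_list, p - dp)) / (2.0 * h)
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-12:
            break                       # flat direction: the search stalls
        p = p + step * g / gnorm        # step along the steepest increase
    return p, rlam(A, E_list, p)

# Hypothetical data: eigenvalues are {p - 1, -2}, so instability is reached at p = 1
A = np.diag([-1.0, -2.0])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
p_w, r_w = follow_to_boundary(A, [E1], [0.5])
```

In the examples of this chapter the searches of this kind were restricted to vertices, edges and faces, which is why they remain at most 2-dimensional.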
Since in all cases the necessary domain computed by extension of the sufficient domain always matched an already known domain, it was concluded that no instability point was contained in the domain extensions. A conjecture was then made that the equality of the signs of the derivatives at corresponding vertices could be an indicator of the nonexistence of instability points inside such extensions.

The advantage of the procedure proposed in this chapter is that it does not require an extensive search over the parameter domain; all searches are at most 2-dimensional. At this stage, however, the results presented in this chapter must be considered with reserve: since only a finite number of problems was studied, further research is necessary before a generalization of the conjecture can be attempted. More general problems, with larger numbers of parameters, must be considered. Nevertheless, in view of the examples, the idea of using the information conveyed by the derivatives of Rλ_p at the vertices of a domain seems to be of value.
CHAPTER 9
CONCLUSION

9.1 Summary

This dissertation concentrated on the assessment of the robust stability properties of systems under parametric uncertainty. As motivation, a brief historical review of uncertainty treatment was presented in the Introduction.

Chapter 2 reviewed system models and presented a discussion of uncertainty treatment, classification and representation, both in transfer matrix and in state space models. The diagonal representation of uncertainty in interconnected frequency-domain systems was considered, and a simple procedure was given for transforming a frequency-domain system with multiple uncertainties into a system with diagonal uncertainty.

Chapter 3 reviewed methods for the stability analysis of transfer matrix and state space models. Topics considered include the application of the Lyapunov direct method to state space models, the generalized Nyquist stability criterion, and the derivation of spectral radius and singular-value conditions for robust stability. The structured singular-value condition was also reviewed. The application of frequency-domain scaling techniques as a tool for reducing the conservatism of maximum singular-value stability conditions was given special attention; the review included optimal similarity scaling, optimal nonsimilarity scaling, and Perron scaling.

Chapter 4 considered the application of the Lyapunov direct method to state space systems subject to structured perturbations. The uncertainty representation adopted assumes
that the perturbation depends linearly on a vector of parameters, p. This model admits the treatment of cases in which one parameter affects more than one element of the system matrix. Although the use of the Lyapunov method in connection with this uncertainty representation is not new, and some results were available in the literature, all derivations presented in Chapter 4 were independently obtained.

The Lyapunov method yields only sufficient conditions, and is therefore potentially conservative even for systems without uncertainty. It was shown in Theorem 4.1 that, besides this inherent conservatism, other causes of conservatism exist when the method is applied to perturbed systems. To obtain this result, concepts of principal directions alignment [33] were used, in what is believed to be an original application.

A review of available stability results on ||p|| preceded explicit derivations of admissible bounds on ||p||_2, ||p||_∞ and ||p||_1, where the Lyapunov matrix Q was considered a free parameter. Complementary derivations included expressions for stability domains in the parameter space and parameter weighting to change the form of the computed stability domain. A new stability condition based on ||p||_2 was derived, namely condition (4.35). It is potentially less conservative than previously available expressions; indeed, the improvement in the admissible ||p||_2 can be considerable, having reached about 36% in one of the examples.

The tightness of the stability results depends on a convenient choice of the Lyapunov matrix Q for the nominal system, but no systematic method is available for this choice; adopting Q = 2I [61] leads to a ready-to-use stability condition, which however can be too conservative. It was shown that the choice of Q can be formulated as an optimization problem in the space of n × n symmetric, positive-definite matrices, where n is the number of states.
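A simplified form of such a Lyapunov-based bound (a sketch in the spirit of Chapter 4, not condition (4.35) itself) follows from V = x'Px with A'P + PA = -Q: the perturbed system remains stable while Σ_k |p_k| σ̄(E_k'P + P E_k) < λ_min(Q), and the Cauchy–Schwarz inequality turns this into an admissible bound on ||p||_2. A small-system sketch with hypothetical data, solving the Lyapunov equation by vectorization:

```python
import numpy as np

def lyap_solve(A, Q):
    # Solve A'P + PA = -Q by vectorization:
    # (I kron A' + A' kron I) vec(P) = -vec(Q), for small n only
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order='F')).reshape(n, n, order='F')
    return 0.5 * (P + P.T)              # symmetrize against round-off

def p2_bound(A, E_list, Q):
    # Sufficient condition: sum_k |p_k| * sbar(E_k'P + P E_k) < lmin(Q);
    # Cauchy-Schwarz then gives an admissible bound on the 2-norm of p
    P = lyap_solve(A, Q)
    s = np.array([np.linalg.norm(Ek.T @ P + P @ Ek, 2) for Ek in E_list])
    return np.min(np.linalg.eigvalsh(Q)) / np.linalg.norm(s)

# Hypothetical data: the exact margin here is |p_1| < 1, and the bound attains it
A = np.diag([-1.0, -2.0])
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
bound = p2_bound(A, [E1], Q=2.0 * np.eye(2))   # the Q = 2I choice of [61]
```

Replacing the fixed choice Q = 2I by an optimization over the symmetric positive-definite matrices, as described above, amounts to maximizing this bound over Q.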
The objective to be optimized is derived from one of the stability conditions obtained, involving either ||p||_2, ||p||_∞ or ||p||_1. A systematic procedure was obtained by adopting Q_ini = 2I
as the starting point, since then the initial output equals the previously available result [61]. The optimization, although not convex, is useful: it can either be run until convergence (to a possibly local extremum) occurs, or until the output is better than a given threshold, without waiting for convergence. Numerical experience showed that Q = 2I is not the best choice, since the optimization always found a matrix yielding less conservative parameter upper bounds. Additionally, it was shown that similarity scaling can be applied in connection with the derivation of the stability conditions, and it is therefore a potential tool for reducing the conservatism when Q is not optimally chosen [61]. Such a use of similarity scaling has not been referred to in the literature.

The implementation of the optimizations over Q, using a gradient descent technique, requires the derivatives of the objectives and constraints with respect to the elements of Q. Analytic expressions were obtained for all the partial derivatives involved, based on available results on the derivatives of eigenvalues [22] and of singular values [33]. They were implemented for a 2-norm condition and worked very well. Besides the new results presented, the material in Chapter 4 constitutes a fairly complete survey of all the aspects involved in the application of the Lyapunov direct method to systems under structured uncertainty.

Chapter 5 formulated an alternative method for the assessment of the robust stability of state space systems under structured, time-invariant perturbations. The first step was to decompose the perturbation E as E = LDR, where L and R are constant matrices which account for the structure of the perturbation due to each parameter p_k, and D = diag(p_1, ..., p_m). A necessary and sufficient condition for the multiplicity of each parameter to be 1 in D was given.
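For the special case in which every structure matrix E_k has rank one, E_k = l_k r_k', the decomposition E(p) = L D R with D = diag(p_1, ..., p_m) can be built directly from a singular-value decomposition of each E_k. This is an illustrative construction under that rank-one assumption, with hypothetical structure matrices, not the general procedure of Chapter 5:

```python
import numpy as np

def ldr_decompose(E_list, tol=1e-10):
    # Build L and R such that sum_k p_k E_k = L diag(p) R, assuming every
    # per-parameter structure matrix E_k has rank one (multiplicity 1 in D)
    cols, rows = [], []
    for Ek in E_list:
        U, s, Vt = np.linalg.svd(Ek)
        if np.sum(s > tol) != 1:
            raise ValueError("each E_k must have rank one for this sketch")
        cols.append(s[0] * U[:, 0])     # l_k
        rows.append(Vt[0, :])           # r_k transpose
    return np.column_stack(cols), np.vstack(rows)   # L is n x m, R is m x n

# Hypothetical rank-one structure matrices
E1 = np.array([[1.0, 2.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [3.0, 0.0]])
L, R = ldr_decompose([E1, E2])
p = np.array([0.3, -0.7])
E_check = L @ np.diag(p) @ R    # reproduces E(p) = p_1 E1 + p_2 E2
```

With L and R fixed, the whole parametric perturbation is concentrated in the real diagonal matrix D, which is what makes the equivalent frequency-domain representation possible.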
Furthermore, by treating the uncertainty inputs and outputs as additional external variables and manipulating the state space description, it was possible to arrive at an equivalent frequency-domain representation of the perturbed system, in which the uncertainty is the real diagonal matrix D. The result obtained seems to be of particular interest, since the perturbation in the equivalent frequency-domain representation comes directly from the real perturbation in the state space model, rather than from variations in real parameters such as gains and time constants in frequency-domain models. The stability of the frequency-domain representation was shown to be equivalent to the asymptotic stability of the original perturbed state space system.

Necessary and sufficient conditions for robust stability were readily derived from the frequency-domain representation, in terms of ρ(MD) and ρ(M), where M is the interconnection structure of the system after the perturbation has been diagonalized. Sufficient stability conditions, in the form of singular-value upper bounds on the spectral radius, were obtained by resorting to scaling techniques. Conditions using optimal similarity scaling and Perron scaling were given. Computation of the sufficient conditions requires a frequency sweep over ω, with the scaling matrix being computed at each frequency point. If optimal similarity scaling is used, the choice of the scaling matrix at each frequency point is made through a smooth optimization problem. The optimization can be avoided by using Perron scaling instead of optimal similarity scaling. The use of nonsimilarity Perron scaling has been explored [33] in frequency-domain stability analysis. In the context of diagonal uncertainty, nonsimilarity and similarity scaling are equivalent.
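The Perron scaling step can be sketched as follows: at a given frequency, take the elementwise-magnitude matrix |M|, compute its right and left Perron eigenvectors x and y, and scale with d_i = sqrt(y_i / x_i); the quantity σ̄(D M D⁻¹) is then an easily computed upper bound on the spectral radius, with no optimization involved. The sketch below assumes |M| is irreducible (so the Perron eigenvectors are strictly positive) and uses a hypothetical interconnection matrix:

```python
import numpy as np

def perron_scaling_bound(M):
    # Perron scaling: d_i = sqrt(y_i / x_i), with x and y the right and left
    # Perron eigenvectors of |M| (elementwise magnitudes).  Assumes |M| is
    # irreducible, so that x and y have strictly positive entries.
    Mm = np.abs(M)
    wr, vr = np.linalg.eig(Mm)
    wl, vl = np.linalg.eig(Mm.T)
    x = np.abs(vr[:, np.argmax(wr.real)])
    y = np.abs(vl[:, np.argmax(wl.real)])
    d = np.sqrt(y / x)
    DMDi = (d[:, None] * M) / d[None, :]     # D M D^{-1} without forming D
    return d, np.linalg.norm(DMDi, 2)        # sbar(D M D^{-1})

# Hypothetical interconnection matrix at one frequency point
M = np.array([[0.0, 2.0], [0.5, 0.0]])
d, bound = perron_scaling_bound(M)
```

For this M the bound equals the Perron root ρ(|M|) = 1; in general the Perron-scaled singular value is cheap to compute, which is the computational advantage observed throughout Chapter 6.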
Although the condition using Perron scaling is theoretically more conservative, the numerical experience obtained in this work indicates that its results were always very close to the ones obtained using optimal similarity scaling, with the advantage of
a much smaller computational cost. No assessment of the comparative performance of Perron scaling and optimal scaling in connection with the diagonal uncertainty description has been found in the literature; some comparative results were provided in Chapter 6.

All the derivations required by the transformation used in Chapter 5 were independently obtained. It must be pointed out that a publication was recently found [18] in which the same perturbed state space model was used and a frequency-domain stability condition was presented; however, the derivation was not explicitly given.

Two additional points can be mentioned. First, the structured uncertainty commonly used in perturbed state space models accommodates only parametric uncertainty, whereas the frequency-domain stability problem with diagonal uncertainty admits unstructured perturbations. An approach has been suggested for including, in the state space model, unstructured perturbations coming from neglected dynamics. Second, a stability condition using Perron scaling associated with the induced ∞-norm, instead of the induced 2-norm, was derived. Although it is strictly a simple application of the definition of the induced ∞-norm, a similar condition has not been found in the literature. Numerical results shown in Chapter 6 indicated that this condition, which has a low computational cost, can be used to improve the assessment of the stability domain in the parameter space.

The goal of Chapter 6 was a comparison between the sufficient stability conditions obtained with the Lyapunov direct method (LDM) and with the frequency-domain method proposed in Chapter 5, according to two factors, namely the allowable parameter upper bound and the computational cost. It can be said of the numerical results described in the chapter that:

1. It was verified that in some cases the computed norm bounds were equal or very close to the actual allowable parameter norm bound. In other cases, however, they were
considerably conservative, reaching between 60% and 80% of the actual bound. One case of extreme conservatism was found, where all the results reached only 23% of the allowable bound;

2. The method of Chapter 5 in general yielded less conservative parameter norm bounds than the ones obtained with the optimization associated with the LDM; there were cases in which the LDM gave the same bounds, but no example was found where the LDM gave better bounds;

3. Among the results obtained with the frequency-domain method, the bound computed from the condition that uses Perron scaling was always equal or very close to the corresponding bound computed from the condition that uses optimal similarity scaling; however, the computational cost of Perron scaling was very low compared to the cost of optimal similarity scaling;

4. Based on the numerical results obtained, and considering both factors, namely cost and allowable parameter norm bound, the frequency-domain method associated with Perron scaling emerged as a better alternative for robust stability assessment than the LDM when the perturbation is time-invariant.

Chapter 7 associated the stability analysis techniques proposed in Chapter 5 with the technique of improving the robustness of a closed-loop system through optimization of a nominally stabilizing controller of fixed order. As initially proposed [4], a 2-norm parameter bound obtained from Lyapunov analysis was used in the assessment of the current stability domain in the parameter space. The assessment is improved by optimizing the stability condition over Q and the controller K.
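The robustification loop of Chapter 7 can be illustrated with a much-simplified sketch: finite-difference descent over the entries of a static gain K. The sketch below descends the nominal spectral abscissa of A + BKC rather than a robust-stability bound, only to keep it self-contained; the plant matrices are hypothetical:

```python
import numpy as np

def spectral_abscissa(A):
    # largest real part among the eigenvalues (negative means stable)
    return np.max(np.linalg.eigvals(A).real)

def robustify_static_gain(A, B, C, K0, step=0.1, h=1e-5, iters=200):
    # Finite-difference gradient descent on the spectral abscissa of
    # A + B K C over the entries of a static output-feedback gain K.
    # Chapter 7 descends a robust-stability condition instead.
    K = np.array(K0, dtype=float)
    for _ in range(iters):
        G = np.zeros_like(K)
        for idx in np.ndindex(K.shape):
            dK = np.zeros_like(K)
            dK[idx] = h
            G[idx] = (spectral_abscissa(A + B @ (K + dK) @ C)
                      - spectral_abscissa(A + B @ (K - dK) @ C)) / (2.0 * h)
        K = K - step * G            # push the eigenvalues further left
    return K

# Hypothetical plant: unstable open loop, a scalar gain closes the loop
A = np.array([[1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
K = robustify_static_gain(A, B, C, K0=[[0.0]])
```

Replacing the abscissa objective by one of the sufficient conditions of Chapter 4 or 5 (optimized jointly over Q, or over the scaling S, and K) yields the robustification procedure discussed in the text.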
The procedure was adapted to use the Lyapunov conditions derived in Chapter 4, based either on the parameter 2-norm or on the ∞-norm. It was shown that the optimization problems proposed in that chapter were more effective in the solution of one case presented in Bhattacharyya [4] than the routine originally used. In order to take advantage of better computational properties, the robustification procedure was changed so that the frequency-domain stability conditions of Chapter 5 could be used. Using the stability condition based on optimal similarity scaling, there are still two parameters in the optimization problem, namely the controller K and the scaling matrix S; an advantage is that the optimization over S is convex. Furthermore, in terms of computational effort, an even better alternative is to use the stability condition based on Perron scaling, because then the only remaining parameter in the optimization is K.

The stability conditions obtained in Chapters 4 and 5 are only sufficient. The goal of the approach presented in Chapter 8 was to define a computationally simple procedure which, starting from those sufficient conditions, would lead to necessary stability conditions on the norm of the parameter vector. To be useful, such a procedure has to avoid an extensive search in the space of parameters. A simple procedure was defined which has led to the exact stability domain in the parameter space, obtained through the extension of an available sufficient domain; the results are known to be exact because they match results already known. Based on these matching results, and on the observation that the derivatives of Rλ_p with respect to the parameters did not change sign when the sufficient domain was extended, it was conjectured that this property of the derivatives is a sufficient condition for the nonexistence of instability points inside the domain extension. Applying the procedure to some examples obtained from the control literature, the exact stability domain and the worst-case perturbation were obtained.
9.2 Directions for Further Work

9.2.1 Improvement of Sufficient Stability Conditions

On the basis of the numerical results obtained, the main conclusion of this work is that, in the presence of structured, time-invariant perturbations depending linearly on a set of parameters, the frequency-domain technique of Chapter 5, associated with Perron scaling, provides a better assessment of the robust stability of state space systems than the widely used Lyapunov direct method. Indeed, that alternative provided the best results among all the alternatives considered.

Recall that the uncertainty description obtained with the technique of Chapter 5 is diagonal, with real parameters. Now suppose that, as suggested in Chapter 5, a complex norm-bounded block is added to the diagonal description to represent neglected dynamic effects, resulting in a mixed uncertainty matrix. Different block sizes do not preclude the use of optimal similarity scaling, and thus the corresponding stability condition can still be computed. The use of Perron scaling in this case, if it applies, would bring computational savings; its applicability here is an appropriate subject for further investigation.

The sufficient conditions derived in Chapter 5 utilize the maximum singular value of a scaled matrix as an upper bound on the spectral radius. A recent paper [18] has presented a new upper bound which is less than or equal to the optimally scaled singular value. Examples were presented for which the new bound has permitted the computation of the exact stability condition. Some of these cases were used in Chapter 6, namely in Examples 6.4, 6.5 and 6.10. Only in the case of Example 6.5 did the scaled singular-value bounds of Chapter 5 yield the same value as the new bound [18]. Although those examples have
already shown that the new bound is advantageous from the point of view of conservatism, no information was found about its computational requirements. Therefore, it is worth investigating the inclusion of the new bound in the technique of Chapter 5, in place of the singular-value bounds.

It was observed that, in all the problems involving the minimization over frequency of the maximum singular value of a scaled matrix, the curve of norm versus frequency presented only one maximum. However, the scaling matrix is actually frequency-dependent, because at each frequency point the matrix being scaled is different. Because the scaling is frequency-dependent, it is not straightforward to prove that the function always has a well-defined maximum. This possibility should nevertheless be investigated, since confirmation would bring computational savings: the frequency sweep could then be carried out only until the maximum is reached, instead of over the whole frequency range of interest.

Concerning the stability conditions derived from the Lyapunov direct method, the numerical optimization for the choice of the best Q is not the ideal solution from the point of view of stability analysis, since it involves computational costs. However, to date no general, nonconservative analytic solution is available. For instance, numerical optimization for choosing Q is the method proposed in a recent paper by Leal and Gibson [35]. Moreover, that paper introduces the concept of first-order Lyapunov analysis, in which the Lyapunov function varies linearly with the perturbations; the use of the first-order Lyapunov function is proposed as a less conservative technique. It would be interesting to investigate how the new concept would improve the stability conditions of Chapter 4, and how they would then compare to the frequency-domain conditions of Chapter 5.
Concerning the numerical optimization approach for the choice of Q, used in Chapter 4, there is still room for improvement in computational aspects. Although analytic expressions
for all the derivatives had been obtained, they were not completely explored, since standard routines with computation of gradients by finite differences were used.

Finally, the ideas and results presented in Chapter 8 certainly deserve further attention and have the potential to produce good results. There, a simple procedure was devised for obtaining a necessary stability domain in the parameter space, starting from a sufficient domain. The data collected led to a conjecture which would guarantee that, under certain conditions, an intensive search in the extended domain can be avoided. However, all the problems dealt with in Chapter 8 have few parameters, at most 3, and the perturbation due to each parameter had rank 1. They were not chosen this simple on purpose; they just happen to be the problems for which an exact stability domain was already known. The first task, as additional work on this subject, is to consider problems with more parameters and more general perturbation structures. For these problems, a direct search in the space of parameters would give the exact domain, or at least very good approximations. The proposed technique would then be applied and the results compared, with a view to the generalization of the conjecture made in this work.

Stability analysis techniques are an important tool in the context of controller synthesis and design, since stability of the control system is a fundamental objective. Practical design problems, in addition to the stability requirement, include performance requirements; due to uncertainties in the models, all the requirements must be robustly satisfied. In the following, it is examined how the robust stability analysis techniques discussed in this dissertation relate to some design and synthesis approaches whose objectives include both robust stability and robust performance.
9.2.2 Synthesis and Design Aspects

Synthesis

Frequency-domain synthesis techniques have received a great deal of attention in the last decade; unlike state space synthesis techniques, they can directly handle systems with plant uncertainty. In many practical control problems including stability and performance robustness specifications, the objective can be put in the form [20, 36]

  min_K ||M(P, K)||_∞    (9.1)

where M is the transfer matrix of an interconnection structure which depends on the plant P and on the controller K; the functional dependence is determined by the control problem, particularly by the performance and robustness objectives. Nominal stability is ensured by choosing the controller K out of the set of all stabilizing controllers of M; this can be obtained by taking into account the so-called Youla parametrization [53]. This formulation, however, is useful in the presence of unstructured uncertainty. For the case of structured, norm-bounded perturbations, the μ-synthesis technique [12] is used. In this case, the control objective is formulated as

  min_{K, S} ||S M(P, K) S^{-1}||_∞    (9.2)

where M, K and P are as before, and S is a scaling matrix. The problem is convex over either K or S individually, but is not jointly convex over K and S. For fixed S, this problem reduces to the previous H_∞ problem.

Although the emphasis of this dissertation was placed on state space systems under structured uncertainty, the technique of Chapter 5 has shown that an equivalent frequency-domain stability problem can be obtained, where the uncertainty shows up as a diagonal
real matrix. The formulation in Chapter 5 implicitly included the controller in the closed-loop dynamic matrix, but it can be applied keeping the controller explicitly represented, as appropriate for design purposes. In this case, a perturbed representation as in Figure 7-2 is obtained. Thus, it is possible to restate, in the form of (9.2), a robust stability design problem originally given in state space form. However, it has been pointed out [15] that the advantage of using the μ-approach is that it permits input-output performance objectives to be reformulated as stability problems under unstructured perturbations. Therefore, advantage can be taken of the flexibility of the equivalent frequency-domain problem if the performance objectives admit a frequency-domain interpretation. For instance, there has been some effort towards the representation of time-domain performance objectives, like step-response characteristics, in frequency-domain control problems [5]. These considerations delineate a possible way of using the frequency-domain formulation of Chapter 5 in design problems, starting from state space specifications.

Design Perspectives

The iterative robustification discussed in Chapter 7 is a design technique in which a robust controller is obtained by numerically optimizing the elements of a static, nominally stabilizing controller. The numerical optimization approach makes sense in a context where a large number of possibly conflicting design objectives and constraints makes it difficult to include robustness specifications in the design of the controller. Although considering static controllers may seem restrictive, it can be shown [4] that any stability problem with a controller of fixed order can be transformed into a stability problem under output feedback through a static controller. Moreover, it is possible to show that
a unity feedback system having a cascaded PID compensator in the forward loop, a structure largely used in frequency-domain design, can be given a state representation in terms of output feedback through a static controller. Thus, there exist problems where the robustification approach to the derivation of a robust controller makes sense. In order for the design to become practical, performance objectives must be included.

The robustification using the frequency-domain stability condition, as discussed in Chapter 7, involves a constrained optimization where the constraints are used to ensure closed-loop stability, that is, to guarantee that the eigenvalues of the closed-loop dynamic matrix have strictly negative real parts. These constraints are equivalent to defining the left half complex plane as the admissible region for the eigenvalues. Now, some time-domain performance objectives can be posed in terms of specific admissible regions in the complex plane. Therefore, these performance objectives could be integrated into the robustification approach by adopting constraints which represent the intersection of the regions admitted by the stability and performance objectives. This seems to be the most interesting aspect related to the robustification procedure.

Another potentially fruitful direction stems from the work reported in Chapter 8. The objectives there were to obtain the exact stability region in the space of plant parameters and to determine the worst-case parametric perturbation to a state space nominal system. It has been noticed in some 2-parameter problems that the stability region is asymmetric relative to the origin; moreover, in many cases the stability limit relative to one parameter is much bigger in one direction than in the other. In order to ensure robust stability, the controller must be such that, for all the perturbations in a given class, that is, for any parameter inside a given region around the origin
in the space of parameters, the eigenvalues of the perturbed system matrix have strictly negative real parts. Given a nominal controller, the worst-case perturbation indicates how far the nominal matrix is from the instability region, and in what direction the instability region is close to the origin in the parameter space. Now, if a new controller is defined as K = (K_0 + ΔK), where ΔK is a term such that the eigenvalues of the nominal closed-loop system are pushed away from the closest boundary of the instability region, the new closed-loop system is likely to be more robust than the nominal closed-loop system, for uncertainties in the same class. In order to obtain ΔK, a simple procedure like Procedure 8.2 could be used. Obviously, the success of such a procedure will depend on the structure of the controller and on the structure of the perturbation; also, ΔK must be such that the performance of the nominal system is affected as little as possible.
REFERENCES

1. Barmish, B.R., M. Fu and S. Saleh, 'Stability of a Polytope of Matrices: Counterexamples.' IEEE Trans. Autom. Control, vol. AC-33, 569-572 (1988).

2. Bartlet, A.C., C.V. Hollot and H. Lin, 'Root Locations of an Entire Polytope of Polynomials: It Suffices to Check the Edges.' Proc. ACC-87 (1987).

3. Bauer, F.L., 'Optimally Scaled Matrices.' Numerische Mathematik, vol. 5, 73-87 (1963).

4. Bhattacharyya, S.P., Robust Stabilization Against Structured Perturbations. Lecture Notes in Control and Information Sciences, vol. 99, Springer-Verlag, Heidelberg (1987).

5. Boyd, S.P., V. Balakrishnan, and others, 'A New CAD Method and Associated Architectures for Linear Controllers.' IEEE Trans. Autom. Control, vol. AC-33, 268-283 (1988).

6. Bryson, A.E., Jr. and Y.-C. Ho, Applied Optimal Control, Hemisphere, Washington, D.C. (1975).

7. Daniel, R.W. and B. Kouvaritakis, 'The Choice and Use of Normal Approximations to Transfer-function Matrices of Multivariable Control Systems.' Int. J. Control, vol. 37, 1121-1133 (1983).

8. Daniel, R.W. and B. Kouvaritakis, 'Analysis and Design of Linear Multivariable Feedback Systems in the Presence of Additive Perturbations.' Int. J. Control, vol. 39, 551-580 (1984).

9. Daniel, R.W. and B. Kouvaritakis, 'A New Robust Stability Criterion for Linear and Nonlinear Multivariable Feedback Systems.' Int. J. Control, vol. 41, 1349-1364 (1985).

10. De Gaston, R.R.E. and M.G. Safonov, 'Exact Computation of the Multiloop Stability Margin.' IEEE Trans. Autom. Control, vol. 33, 156-171 (1988).

11. Desoer, C.A. and M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, Inc., New York, NY (1975).

12. Doyle, J.C., 'Analysis of Feedback Systems with Structured Uncertainties.' IEEE Proc. Pt. D, vol. 6, 242-250 (1982).

13. Doyle, J.C., 'Structured Uncertainty in Control System Design.' Proc. 24th CDC, 260-265 (1985).
PAGE 238
231 14. Doyle, J.C. and G. Stein, Â‘Multivariable Feedback Design: Concepts for a Classical/Modern Approach.Â’ IEEE Trans. Autom. Control, vol. AC26, 416 (1981). 15. Doyle, J.C., J.E.Wall and G. Stein, Â‘Performance and Robustness Analysis for Structured Uncertainty.Â’ Proc. 2\ th CDC, 629636 (1982). 16. Eslami, M., and D.L.Russel, Â‘On stability with Large Parameter Variations: Stemming from the Direct Method of Lyapunov.Â’ IEEE Trans. Autom. Control, vol AC25, 12311234 (1980). 17. Fan, M.K.H., A. L. Tits and J.C.Doyle, Â‘Robustness in the Presence of Joint Parametric Uncertainty and Unmodeled Dynamics.Â’ Proc. ACC88, 11951200 (1988). 18. Fan, M.K.H., A. L. Tits and J.C.Doyle, Â‘Robustness in the Presence of Parametric Uncertainty and Unmodeled Dynamics.Â’ in Advances in Computing and Control. Lecture Notes in Control and Information Sciences, vol. 130, SpringerVerlag, Heidelberg (1989). 19. Foo, Y.K., Robustness of Multivariable Feedback Systems: Analysis and Optimal Design, D Phil Thesis, University of Oxford, Oxford (1985). 20. Francis, Bruce A., A Course in Control Theory. Lecture Notes in Control and Information Sciences, vol. 88, SpringerVerlag, Heidelberg (1987). 21. Freudenberg, J.S. and D.P. Looze, Frequency Domain Properties of Scalar and Multivariable Feedback Systems. Lecture Notes in Control and Information Sciences, vol. 104, SpringerVerlag, Heidelberg (1988). 22. Golub, G.H. and C.F. van Loan, Matrix Computations, The Johns Hopkins U. P., Baltimore, (1983). 23. Henrici, P., Â‘Bounds for Iterates, Inverses, Spectral Variation and Fields of Values of Nonnormal Matrices.Â’ Numerische Mathematik, vol. 4, 2440 (1962). 24. Hinrichsen. D. and A.J. Pritchard, Â‘New Robustness Results for Linear Systems under Real Perturbations.Â’ Proc. Â‘27 th CDC , 13751379 (1988). 25. Hinrichsen, D., B. Kelb and A. Linnemann, Â‘An Algorithm for the Computation of the Structured Complex Stability Radius.Â’ Automatica, vol. 25, 771775 (1989). 26. Hsu, CIE, and CT. 
Chen, Â‘A Proof of the Stability of Multivariable Feedback Systems.Â’ Proc. of the IEEE, vol. 56, 20612062 (1968). 27. Kharitonov, V.L., Â‘Asymptotic Stability of an Equilibrium Position of a Family of Systems of Linear Differential Equations. 'Differential Equations, vol. 14, 14831485 (1979). 28. Kouvaritakis, B., and H. Latchman, Â‘Singularvalue and Eigenvalue Techniques in the Analysis of Systems with Structured Perturbations. Int. J. Control, vol. 41, 13811412 (1985).
PAGE 239
232 29. Kouvaritakis, B., and H. Latchman, Â‘Necessary and Sufficient Stability Criterion for Systems with Structured Uncertainties: the Major Principal Direction Alignment Principle. 'Int. J. Control , vol. 42, 575598 (1985). 30. Kouvaritakis, B. and M.S.Trimboli, Â‘Robust Multivariable Feedback Design.Â’ Int. J. Control , vol. 50, 13271377 (1989). 31. Kwakernaak, H., Â‘Uncertainty Models and the Design of Robust Control Systems,Â’ in Uncertainty and Control. Lecture Notes in Control and Information Sciences , vol. 70, SpringerVerlag, Berlin, (1985). 32. Kwakernaak, H. and R. Sivan, Linear Optimal Control Systems , WileyInterscience, New York, NY (1972). 33. Latchman, H.A., Frequency Response Methods for Uncertain Multivariable Systems D Phil Thesis, University of Oxford, Oxford (1986). 34. Layton, J.M., Multivariable Control Theory, Peter Peregrinus Ltd., Stevenage, England, (1976). 35. Leal, M.A. and J.S. Gibson, Â‘A FirstOrder Lyapunov Robustness Method for Linear Systems with Uncertain Parameters.Â’ IEEE Trans. Autom. Control , vol. AC35, 10681070 (1990). 36. MacFarlane, D.C. and K. Glover, Robust Controller Design Using Normalized Coprime Factor Plant Descriptions. Lecture Notes in Control and Information Sciences, vol. 138, SpringerVerlag, Ileidelber (1989). 37. MacFarlane, A.G.J. and I. Postlethwaite, Â‘The Generalized Nyquist Stability Criterion and Multivariate RootLoci.Â’ Int. J. Control, \ ol. 25, 81127 (1977). 38. Maciejowski, J.M. Multivariable Feedback Design, AddisonWesley Publ. Co., New York, NY (1989). 39. Morari, M. and E. Zafiriou, Robust Process Control, PrenticeHall, Englewood Cliffs, N.J. (1989). 40. Norris, Robert J., Analysis of Multivariable Control Systems in the Presence of Structured Uncertainties, Ph.D. dissertation, University of Florida, Gainesville (1990). 41. Packard, A., M.K.H.Fan and J.C. Doyle, Â‘A Power Method for the Structured Singular Value.Â’ Proc. 27 th CDC, 21322137 (1988). 42. Patel, R.V., and M. 
Toda, Â‘Robustness of Linear Quadratic State Feedback Designs in the Presence of System Uncertainty./Â£Â’Â£'Â£' Trans. Autom. Control, vol. AC22, 945949 (1977). 43. Osborne, E.E., Â‘On PreConditioning of Matrices.Â’, J. Assoc. Comp. Machinery , vol. 7, 338345 (1960). 44. Qiu, L. and E.J. Davison, Â‘New Perturbation Bounds for the Robust Stability of Linear State Space Models.Â’ Proc. 25 th CDC, 751755 (1986).
PAGE 240
233 45. Qiu, L. and E.J. Davison, Â‘A New Method for the Stability Robustness Determination of State Space Models with Real Perturbations.Â’ Proc. 21 th CDC , 538543 (1988). 46. Safonov, M.G., Â‘Tight Bounds on the Response of Multivariable Systems with Component Uncertainty.Â’Proc. of the Sixteenth Annual Allerton Conference , 451460 (1978). 47. Safonov, M.G., Â‘Stability Margins of Diagonally Perturbed Multivariable Feedback Systems.Â’ IEE Proc., vol. Pt D129, 251256 (1982). 48. Safonov, M.G. and M.Athans, Â‘Gain and Phase Margin for Multiloop LQG Regulators.Â’ IEEE Trans. Autom. Control, vol. AC22, 173179 (1977). 49. Safonov, M. and J.C. Doyle, Â‘Minimizing Conservativeness of Robustness Singular Values.Â’ in Multivariable Control: New Concepts and Tolls (Tzafestas, S.G., ed.), 197207, Dordrecht: Reidel (1984). 50. Sage, A. P. Linear Systems Control, Matrix Publishers, Inc., Champagne, IL (1978). 51. Sezer, M.E. and D.D.Siljak, Â‘A Note on Robust Stability Bounds.Â’ IEEE Trans. Autom. Control, vol AC34, 12121214 (1989). 52. Vidyasagar, M., Nonlinear Systems Analysis, PrenticeHall, Englewood Cliffs, NJ (1978). 53. Vidyasagar, M., Control Systems Synthesis, M.I.T. Press, Cambridge, MA (1985). 54. Wilkinson, J.H., The algebraic Eigenvalue Problem, Clarendon Press, Oxford (1965). 55. Yedavalli, R.K., Â‘Improved Measures of Stability Robustness for Linear State Space Models.Â’ IEEE Trans. Autom. Control, vol. AC30, 577579 (1985). 56. Yedavalli, R.K., Â‘Perturbation Bounds for Robust Stability in Linear State Space Models.Â’/. J. Control, vol. 42, 15071517 (1985). 57. Yedavalli, R.K., Â‘On Measures of Stability Robustness for Linear Uncertain Systems.Â’ In Robustness in Identification and Control. Ed. Milanese, M., R.Tempo and A.Vicino, Plenum Press, New York, NY (1989). 58. Yedavalli, R. K., and Z. Liang, Â‘Reduced Conservatism in Time Domain Stability Robustness by State Transformation: Application to Aircraft Control.Â’ AIAA Proc.. 467472 (1985). 59. 
Yedavalli, R.K. and Z. Liang, Â‘Reduced Conservatism in Stability Robustness Bounds by State Transformation. 'IEEE Trans. Autom. Control, vol. AC31, 863866 (1986). 60. Zames, G., Â‘Feedback and Optimal Sensitivity: Model Reference Transformations, Multiplicative Seminorms and Approximate Inverses. 'IEEE Trans. Autom. Control, vol. AC26, 301320 (1981). 61. Zhou, K. and P.P. Khargonekar, Â‘Stability Robustness Bounds for Linear StateSpace Models with Structured Uncertainty.Â’/E'EÂ’E' Trans. Autom. Control, vol. AC32, 621623 (1987).
BIOGRAPHICAL SKETCH José Alvaro Letra was born on July 2, 1950, in Tupã, state of São Paulo, Brazil. In 1969 he entered the Academia Militar das Agulhas Negras (Brazilian Army Military Academy), from which he graduated in 1972 as an ordnance officer. After five years on duty, he entered the Instituto Militar de Engenharia (Military Engineering Institute, Rio de Janeiro, Brazil), obtaining a Bachelor of Science degree in electrical engineering in 1980. From 1981 to 1984 he was commissioned as an industrial maintenance engineer, and at the end of this term he was selected for two years of graduate study at the Instituto Militar de Engenharia, receiving a master's degree in electrical engineering in 1987. After a further one-year commission at an Army research center, he was chosen to pursue three years of study at the University of Florida, on a scholarship granted by the Exército Brasileiro (Brazilian Army) and the CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico (National Council for Scientific and Technological Development, Brazil).
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Haniph A. Latchman, Chairman Assistant Professor of Electrical Engineering I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Thomas E. Bullock Professor of Electrical Engineering I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Jacob Hammer Associate Professor of Electrical Engineering
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. A. Antonio Arroyo Associate Professor of Electrical Engineering I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Spyros A. Svoronos Associate Professor of Chemical Engineering This dissertation was submitted to the Graduate Faculty of the College of Engineering and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy. May 1991 Winfred M. Phillips Dean, College of Engineering Madelyn M. Lockhart Dean, Graduate School

