Citation

## Material Information

Title:
Robust stability analysis of systems under parametric uncertainty
Creator:
Publication Date:
Language:
English
Physical Description:
vii, 234 leaves : ill. ; 29 cm.

## Subjects

Subjects / Keywords:
Conservatism ( jstor )
Eigenvalues ( jstor )
Mathematical independent variables ( jstor )
Mathematical robustness ( jstor )
Mathematical vectors ( jstor )
Matrices ( jstor )
Parametric models ( jstor )
Polynomials ( jstor )
Scalars ( jstor )
Sufficient conditions ( jstor )
Control theory ( lcsh )
Dissertations, Academic -- Electrical Engineering -- UF
Electrical Engineering thesis Ph. D
Lyapunov functions ( lcsh )
Stability ( lcsh )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

## Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1991.
Bibliography:
Includes bibliographical references (leaves 230-233).
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:

## Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Resource Identifier:
026242054 ( ALEPH )
25046242 ( OCLC )

Full Text

ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY

By

JOSÉ ALVARO LETRA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

1991

To

Carmen Lucia

and

ACKNOWLEDGMENTS

I am profoundly indebted to my advisor and supervisory committee chairman, Dr. Haniph A. Latchman, for his guidance, permanent support and encouragement during my three years at the University of Florida. Despite his several other responsibilities, Dr. Latchman always found time to discuss my work and give me his insightful orientation.

I wish to thank the professors who served on my committee, Dr. Thomas E. Bullock, Dr. J. Hammer, Dr. A. Antonio Arroyo and Dr. Spyros A. Svoronos, for their willingness to discuss and advise my work, and for the high level of consideration with which I was always treated.

I wish to thank Dr. G. Basile, my first committee chairman, for his help and advice.

I am indebted to the EE Graduate Coordinator, Dr. Leon W. Couch, and his staff, for all their assistance. In particular, I have to thank Mrs. Greta Sbrocco, who always provided helpful guidance on administrative matters.

It was a privilege to work closely with my former fellow student, Dr. Robert J. Norris, whose valuable encouragement and help I now acknowledge. I also wish to thank Dr. Julio S. Dolce da Silva, of the Brazilian Army, for his help with my enrollment and adaptation to the University.

I am grateful to the Exército Brasileiro (Brazilian Army) for granting me the opportunity to come to the University of Florida to further pursue my studies, and to CNPq, the Conselho Nacional de Desenvolvimento Científico e Tecnológico (National Council for Scientific and Technological Development, Brazil), for the scholarship I was granted.

TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ........................................................... iii

ABSTRACT ........................................................................ vi

CHAPTERS

1 INTRODUCTION ........................................................... 1

1.1 Dissertation Objective ................................................... 1
1.2 Brief History of Uncertainty Treatment ............................... 2
1.3 Structure of the Dissertation ............................................. 9
1.4 Notation ............................................................... 11

2 NOMINAL MODELS AND UNCERTAINTY REPRESENTATION ......... 16

2.1 Nominal Models and Definitions ........................................ 16
2.2 Uncertainty Representation ............................................. 20
2.3 Conclusions ............................................................ 38

3 STABILITY ANALYSIS OF LINEAR SYSTEMS ........................... 39

3.1 Introduction ............................................................ 39
3.2 Stability of State Space Systems ........................................ 39
3.3 Stability of Transfer Matrix Models ..................................... 45
3.4 Frequency-Domain Scaling Techniques .................................. 63
3.5 Conclusions ............................................................ 72

4 LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED
UNCERTAINTY ........................................................ 73

4.1 Introduction ............................................................ 73
4.2 Dependence of Conservatism on Perturbation Structure ................. 76
4.3 Stability Under Structured Uncertainty ................................. 82
4.4 Maximization of Stability Domains ...................................... 92
4.5 Application of Optimization Over Q ................................... 109
4.6 Conclusions ........................................................... 113

5 STABILITY UNDER DIAGONAL PARAMETRIC UNCERTAINTY ...... 115

5.1 Introduction ........................................................... 115
5.2 Diagonal Representation of State Space Perturbations .................. 116
5.3 Problem Formulation .................................................. 122
5.4 Necessary and Sufficient Conditions for Robust Stability ............... 127
5.5 Sufficient Conditions for Robust Stability .............................. 132
5.6 Numerical Application ................................................. 136
5.7 Some Extensions of Previous Results ................................... 139
5.8 Conclusions ........................................................... 143

6 COMPARISON OF SUFFICIENT PARAMETER NORM BOUNDS ....... 145

6.1 Introduction ........................................................... 145
6.2 Results for Problems with 2 and 3 Parameters ......................... 146
6.3 Results for Randomly Generated Matrices ............................. 154
6.4 Conclusions ........................................................... 161

7 ITERATIVE CONTROLLER ROBUSTIFICATION ....................... 163

7.1 Introduction ........................................................... 163
7.2 Robustification Associated to Lyapunov Analysis ....................... 169
7.3 Robustification Associated to Frequency-Domain Analysis .............. 169
7.4 Application ............................................................ 187
7.5 Conclusion ............................................................ 195

8 NECESSARY STABILITY DOMAIN IN THE PARAMETER SPACE ..... 197

8.1 Introduction ........................................................... 197
8.2 Characterization of a Necessary Stability Domain ...................... 199
8.3 Computation of the Necessary Stability Domain ........................ 202
8.4 Applications ........................................................... 209
8.5 Conclusions ........................................................... 214

9 CONCLUSION ............................................................ 216

9.1 Summary ............................................................. 216
9.2 Directions for Future Work ............................................ 223

REFERENCES .................................................................... 230

BIOGRAPHICAL SKETCH ....................................................... 234

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY

By

JOSÉ ALVARO LETRA

May 1991

Chairman: Dr. Haniph A. Latchman
Major Department: Electrical Engineering

In the analysis of stability properties of control systems, the uncertainty in mathematical models must be taken into account. The main sources of uncertainty are high-order dynamic phenomena of the physical system that are neglected in the model, and variations in system parameters. The subject of this work is the assessment of stability of linear control systems in the presence of parametric uncertainty.

State space and frequency-domain models and uncertainty representation are reviewed, as well as general conditions for nominal and robust stability. Also reviewed are scaling techniques used for reducing the degree of conservatism of frequency-domain stability conditions, including optimal similarity scaling, optimal non-similarity scaling and Perron scaling.

In particular, the perturbed state space model ẋ(t) = (A + E)x(t) is studied. The nominal matrix A is assumed asymptotically stable, and the perturbation E is of the form E = Σ_{k=1}^m p_k E_k, where p is an m-dimensional vector of system parameters and the E_k, k = 1, ..., m, are constant matrices. The application of the Lyapunov Direct Method for obtaining conditions on the norm of p which are sufficient for robust stability is discussed in detail. A new stability condition on || p ||_2 is given, which is potentially less conservative than available results. The problem of the choice of the Lyapunov matrix which yields less conservative stability conditions is formalized as a constrained numerical optimization problem.

For the case of time-invariant uncertainty, an equivalent frequency-domain stability problem is formulated, where the perturbation is a real, diagonal matrix obtained directly from the state space perturbation. Sufficient stability conditions are derived from the equivalent formulation, and scaling techniques are used, in order to reduce conservatism.

Comparison of numerical results obtained for several problems indicates that, for time-invariant uncertainty, the frequency-domain approach, associated with Perron scaling, constitutes an alternative with better performance than the Lyapunov Direct Method. The frequency-domain approach and corresponding stability conditions are also shown to be advantageous in iterative optimization of static feedback controllers of fixed order.

Additionally, a procedure is suggested for obtaining a necessary stability domain in the space of plant parameters, starting from a known sufficient domain.

Finally, the integration of the stability analysis techniques into robust controller design is discussed.

CHAPTER 1
INTRODUCTION

1.1 Dissertation Objective

Although many different methods and techniques are nowadays employed, the majority of the current literature on control systems analysis and design shares at least two common aspects:

- Focus is placed on multivariable systems;

- Uncertainty in system models is explicitly taken into account.

These aspects frame the present dissertation. The specific subject is the assessment of robust stability properties of systems under parametric uncertainty, which finds motivation in the following considerations.

Control systems are designed to meet some performance specifications. Although the formulation of performance specifications depends on the approach used, it always requires that some quantitative indices be satisfied by the system response, which of course imposes constraints on the dynamic behavior of the system.

However, it only makes sense to discuss the quantitative behavior of a control system if its stability can be assured. Otherwise, the dynamic behavior can be expected to blow up under some admissible operating condition, thus rendering the system useless. Stability, therefore, emerges as a fundamental requirement.

Control design relies on mathematical modeling of the controlled system. Unfortunately, there always exists a degree of uncertainty between the model and the modeled system, which must be taken into account. The existence of uncertainty gives rise to the requirement of robustness, namely the aptitude of a control system for retaining the desired behavior in spite of the uncertainty.

Design methods depend on analysis techniques in order to assess system properties, including robust stability. Techniques for robust stability analysis rely on uncertainty representation, which is dictated by several factors, mainly the causes of uncertainty and the available information on uncertainty structure. Variations in system parameters are the source of an important category of perturbations, which is particularly suited to representation in state space models.

Motivated by these facts, this dissertation addresses the problem of robust stability analysis in the presence of parametric perturbations. The perturbation will be assumed to depend linearly on a vector of parameters, thus admitting the practically important case in which one parameter affects several entries of the system matrices in the state space representation. This model has been used in several recent works in stability analysis.
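The parametric perturbation model described above can be sketched numerically. The matrices below are illustrative assumptions chosen only to show the structure E(p) = Σ p_k E_k, in which a single parameter may enter several entries of the state matrix at once:

```python
# Toy sketch of the perturbation model x'(t) = (A + E(p)) x(t),
# with E(p) = p1*E1 + p2*E2. A, E1, E2 are assumed example matrices.
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])        # nominal matrix, asymptotically stable
E1 = np.array([[1.0, 0.0],
               [1.0, 0.0]])         # parameter p1 affects two entries of A
E2 = np.array([[0.0, 1.0],
               [0.0, 1.0]])         # parameter p2 also affects two entries

def perturbed(p):
    """Perturbed dynamic matrix A_p = A + sum_k p_k E_k."""
    return A + p[0] * E1 + p[1] * E2

def is_stable(M):
    """Asymptotic stability: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))

stable_nominal = is_stable(perturbed([0.0, 0.0]))
stable_perturbed = is_stable(perturbed([0.5, 0.5]))
print(stable_nominal, stable_perturbed)
```

Robust stability analysis asks for conditions on the norm of p guaranteeing that every matrix in such a family remains stable, rather than checking parameter combinations one at a time.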

The development of the subject is outlined in Section 1.3. Before this, a brief historical summary of the treatment of uncertainty in control theory is given.

1.2 Brief History of Uncertainty Treatment

The need for control systems has long been felt in the process of technological development. Examples of the use of control systems date back four thousand years [50]. Noteworthy is the fact that feedback principles are found even in those early examples. Among the several advantages that the feedback principle brings to control systems is the ability to cope effectively with disturbances and system uncertainty [31].


Important events in feedback history are registered by Sage [50]. Among them are the invention of the mechanical fly-ball governor by James Watt in 1788, which was developed from early windmill regulators, and the analysis of feedback control systems published in 1868 by Maxwell.

In 1927, the concept of feedback was introduced by Black in the design of amplifiers for long distance telephone lines; his pioneering work is contained in the paper 'Stabilized Feedback Amplifiers', published in 1934. Although robust to uncertainties caused by nonlinearity and other factors, the feedback amplifier presented unwanted oscillations. The theoretical study of this phenomenon led to the development of the regeneration theory by Nyquist, whose work was published in 1932. The Nyquist criterion, which derives closed-loop stability characteristics from open-loop information, would constitute a fundamental technique for frequency-domain stability analysis.

Ensuing developments of frequency-domain concepts originated from the work of Bode in network analysis and amplifier design (1945), which demonstrated the existence of constraints in the manipulation of the frequency response of linear time-invariant systems; from the Nichols transformation of the Nyquist diagram; and from the root locus technique of Evans.

The set of those techniques constitutes what became known as the classical approach to analysis and design of Single-Input, Single-Output (SISO) systems. In the classical approach, the issue of coping with uncertainty is indirectly addressed, by providing the system with enough gain and phase margins. These margins ensure that unwanted effects of uncertainty will not disrupt stability.

In the late '50s, problems of a more complex nature, mainly originated by the control and guidance of missiles and space vehicles, came into the consideration of control engineers and theoreticians, and dominated development in the field. The already well-known set of classical tools was not adequate to deal with the essentially multivariable nature of the incoming control problems. The number of degrees of freedom inherent to multivariable systems, and the complex relationship between open-loop and closed-loop properties in those systems, mainly due to interaction, which has no counterpart in SISO systems, often preclude the use of the simple techniques developed for scalar systems [21]. In this context, and because the digital computer was already available, the decade of the '60s saw a marked tendency towards the use of optimization techniques in the solution of control problems. The design objectives in such techniques were mathematically treated and transformed into a cost function to be minimized.

Thus, the approach to control problems shifted from the frequency-domain to state space. Indeed, the state space was well suited for describing multivariable systems, and powerful techniques were developed for handling optimal control problems. Feedback emerged as a convenient property of solutions to optimal problems [31]. Linear Quadratic State Feedback (LQSF) appeared as a robust solution to control problems, relying however on exact measurements of the states; on the other hand, the possibility of very accurate models for the applications then sought caused the question of uncertainty to receive comparatively less attention than in the classical frequency-domain approach.

The state space formulation and the control techniques it brought about, however, did not achieve acceptance in all fields of applied control, particularly in industrial control. Different reasons have been presented for this fact: only approximate models are available for many industrial processes; plants have components which deteriorate due to continued use; long-formed habits of dealing with classical techniques make industrial engineers reluctant to adopt the sophisticated mathematical treatment required by optimal control. The Linear Quadratic Gaussian (LQG) theory, developed in the late '60s, can handle external disturbances modelled as Gaussian noise, and preserve the optimality of solutions, but the LQG controller is not robust against plant uncertainty, an important limitation in such industrial applications.

The decade of the '70s witnessed a renewed effort in control theory. The first phase in the process involved efforts towards the generalization of classical SISO frequency-domain techniques to multivariable systems. One example of the resulting analysis and design techniques is the Inverse Nyquist Array (INA) method of Rosenbrock (1974), which sought to eliminate the influence of interaction and then apply scalar techniques to the independent loops. Another is the Characteristic Locus Method of MacFarlane and Postlethwaite [37], which introduces a generalization of the Nyquist stability criterion based on the eigenloci of the transfer function matrix, and produces necessary and sufficient conditions for stability. The resulting generalized Nyquist plots are used in multivariable design in the same fashion that the Nyquist plot is in the scalar case. The original formulation, however, applies to the case of exactly known models. Since the eigenloci are sensitive to perturbations in the transfer matrix, the original formulation had limitations in the context of robust stability. Later developments have extended the generalized Nyquist criterion to uncertain systems, through the computation of inclusion bands for the perturbed eigenloci. Sufficient inclusion bands are obtained with the normal approximations method [8], and necessary and sufficient inclusion bands with the E-contours method [9].

Another side of that effort, which continued through the '80s, sought a deeper understanding of the structure and properties of multivariable systems, with a renewed interest in robustness aspects.


Safonov [48, 46] proposed an explicit representation where perturbations in multi-loop systems assume the form of a diagonal perturbation matrix, therefore a structured representation. This representation was later used in the definition of a measure of stability margin for multivariable systems [47].

Doyle and Stein [14] developed the use of maximum singular values to obtain bounds on the perturbations to multivariable systems, with perturbations modeled as norm-bounded but otherwise unconstrained, having therefore an unstructured representation.

In 1976, a parametrization of all stabilizing controllers of a particular system was presented by Youla and coworkers. Zames [60] proposed a scalar design technique which minimizes the effects of external disturbances while ensuring closed-loop stability; performance was measured in terms of the ∞-norm. This work is considered one of the foundations of what, associated with the Youla parametrization, has become known as H∞ control. Several multivariable problems, like sensitivity minimization and robustness to additive perturbations, can be expressed as H∞ control problems, that is, problems where the goal is the minimization, in the frequency-domain, of the norm of a transfer matrix. This approach permits the synthesis of a controller which minimizes an objective function, which in general is used to express some performance requirement, while ensuring the stability of the solution by restricting the controller to belong to the set of all stabilizing controllers. However, controllers derived through this approach tend to be of high order, requiring a posteriori order reduction.

Although an unstructured uncertainty representation yields a more tractable mathematical problem, it may lead to conservative stability results. Often, some information about the structure of the perturbation is available, and should be used in order to produce tighter results. The work of Doyle [12] gave new dimension to the diagonal perturbation problem pioneered by Safonov, when he argued that model uncertainty can be very effectively posed in terms of block-diagonal norm-bounded perturbations. He developed a new analysis tool, namely the μ-function, which constitutes a necessary and sufficient mathematical condition for robust stability of transfer matrix models.

The computation of this new robustness measure presents considerable difficulty for general structured uncertainties. An upper bound presented by Doyle involves the minimization, over the space of diagonal similarity scaling matrices, of the norm of the scaled system matrix; this upper bound actually equals μ when there are at most three complex blocks in the diagonal uncertainty representation. For the case of more blocks, or when the perturbation has real components, the upper bound is a conservative estimate of μ. For design purposes under structured uncertainty, Doyle has formulated what has become known as the 'μ-synthesis' method. In this approach, the cost function to be minimized is the ∞-norm of a similarity-scaled transfer matrix involving a controller chosen out of the set of all stabilizing controllers. The parameters are the controller itself and the scaling matrix.
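The diagonal-scaling upper bound just described can be illustrated with a small numerical sketch. The matrix M and the general-purpose optimizer below are assumptions for illustration only, not Doyle's actual computational scheme; the point is the sandwich ρ(M) ≤ μ(M) ≤ inf_D σ̄(D M D⁻¹) ≤ σ̄(M):

```python
# Sketch: tighten the singular-value upper bound on mu by minimizing
# sigma_max(D M D^-1) over nonsingular diagonal scalings D.
import numpy as np
from scipy.optimize import minimize

M = np.array([[1.0, 2.0, 0.5],     # arbitrary illustrative system matrix
              [0.2, 1.5, 1.0],
              [0.8, 0.1, 2.0]])

def scaled_norm(log_d, M):
    # Parameterize D = diag(exp(log_d)) so D is always nonsingular;
    # (D M D^-1)_ij = d_i * M_ij / d_j.
    d = np.exp(log_d)
    return np.linalg.norm(M * d[:, None] / d[None, :], 2)

res = minimize(scaled_norm, x0=np.zeros(3), args=(M,), method="Nelder-Mead")
upper = min(res.fun, scaled_norm(np.zeros(3), M))  # never worse than unscaled
lower = np.max(np.abs(np.linalg.eigvals(M)))       # spectral radius <= mu(M)
print(lower, upper, np.linalg.norm(M, 2))
```

Since similarity scaling preserves the eigenvalues of M, the spectral radius is a lower bound on every scaled norm, which is why the minimization can only close, never invert, the gap.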

The formulations by Doyle, as well as previous work by Safonov, introduced the use of frequency-domain scaling in control problems, as a tool for the derivation of less conservative sufficient stability results, in connection with the block-diagonal uncertainty problem.

Other models of uncertainty, as well as different forms of scaling, have been proposed. For instance, in Latchman's work [33], the highly structured element-by-element-bounded uncertainty model is explored, and new, less conservative stability conditions are obtained with the introduction of non-similarity scaling. For the case of element-by-element-bounded complex perturbations, it has been shown [33] that, if the maximum singular value of the optimally scaled system matrix remains distinct, μ is attained, regardless of the number of elements in the perturbation matrix. Relationships between similarity scaling and non-similarity scaling have been derived [40], and used as a tool for decreasing the cost of the computation of the μ-function for complex perturbations.

The block-diagonal formulation of uncertainties admits complex as well as real perturbations. Real perturbations in frequency-domain models have been employed, for example, to represent uncertainty in gains [10, 38] and in poles [10] of a transfer function. In this dissertation, a perturbed state space system is given a frequency-domain representation having real diagonal uncertainty, which is derived directly from the state space real uncertainty. For problems involving real uncertainty, results derived with the μ-function approach are usually only sufficient. The derivation of tighter results for the case of real uncertainty is an active area of research [17, 18]; a new upper bound for μ, tighter than the singular-value bound, has recently been introduced [18].

Besides the cited developments in analysis of perturbed transfer matrix models, the analysis of perturbed state space models received a great deal of consideration in the last decade. Two basic approaches can be recognized in the analysis of state space models: the Kharitonov approach and the Lyapunov approach.

The approach spurred by the work of Kharitonov [27] deals with robust stability of control systems through stability analysis of characteristic polynomials having perturbed coefficients. Although the original work considered the case of independent coefficient perturbations, new results [2] have since extended the approach to the case of polytopes of polynomials. Basically, this extension permits the assessment of stability of a whole polytopic family by analyzing stability properties of its exposed edge polynomials.
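Kharitonov's original result for independent coefficient perturbations can be sketched in a few lines; the coefficient intervals below are an assumed example. The family of polynomials with coefficients anywhere in the intervals is Hurwitz-stable if and only if four specific vertex polynomials are:

```python
# Sketch of Kharitonov's test for an interval polynomial family.
# Coefficient intervals are illustrative; coefficients in ascending powers.
import numpy as np

lo = np.array([1.0, 3.0, 2.5, 1.0])   # lower bounds for s^0 .. s^3
hi = np.array([2.0, 4.0, 3.5, 1.0])   # upper bounds for s^0 .. s^3

def kharitonov_polys(lo, hi):
    # The four vertex polynomials follow the periodic low/low/high/high
    # selection pattern and its cyclic shifts.
    patterns = [(0, 0, 1, 1), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0)]
    return [np.array([hi[k] if pat[k % 4] else lo[k] for k in range(len(lo))])
            for pat in patterns]

def is_hurwitz(c_ascending):
    # Check stability by direct root computation (np.roots wants
    # descending powers, hence the reversal).
    return bool(np.all(np.roots(c_ascending[::-1]).real < 0))

stable = all(is_hurwitz(p) for p in kharitonov_polys(lo, hi))
print("family robustly stable:", stable)
```

For this example every vertex polynomial is Hurwitz, so the entire interval family is robustly stable; testing an infinite family thus reduces to four point tests.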

The Lyapunov approach to robust stability analysis stemmed from the original work on stability by Lyapunov, published in Russian in 1892, which has a French translation dating from 1949. The Lyapunov Direct Method (LDM) yields a sufficient condition for stability; stability assessment, however, depends on the construction of a suitable Lyapunov function for the system under investigation. In the case of linear, time-invariant systems, a quadratic function of the state is used as Lyapunov function. The condition for robust stability can then be posed in terms of the positive-definiteness of a certain matrix. Although only sufficient, the approach has been used in robust stability analysis in a great number of recent works [4, 16, 42, 51, 56, 59, 61]. In particular, this method has been used in connection with structured perturbations depending linearly on a vector of parameters [4, 51, 61]. This uncertainty representation, on the other hand, has also been used apart from the Lyapunov approach [18].

Additional stability analysis methods for state space systems are the stability radius method [24], and the methods of Qiu and Davison [44, 45]; tensor products are used in the latter.

1.3 Structure of the Dissertation

This dissertation is organized into nine chapters, the first of which contains this Introduction. The next two chapters present a review of basic concepts, while the main part of the work is presented in Chapters 4 through 8. Chapter 9 contains the Conclusion.

Specifically, nominal and perturbed system models are reviewed in Chapter 2. Special attention is given to uncertainty representation in state space and transfer matrix models, with emphasis placed on diagonal representation of uncertainty in interconnected frequency-domain models.

The focus of Chapter 3 is on stability conditions. The review includes the Lyapunov Direct Method, the Generalized Nyquist Criterion, spectral radius conditions for stability, and spectral radius upper bounds given by the singular value and the structured singular value.


Chapter 4 concentrates on the assessment of robust stability of state space systems in the presence of structured perturbations which depend linearly on a vector of parameters. The application of the Lyapunov Direct Method is thoroughly discussed, including a qualitative study of the sources of conservatism under perturbation, a review of available results, the derivation of admissible parameter norms and the use of parameter weighting for shaping the form of the computed stability domain. A new condition on the 2-norm of the vector of parameters, which is potentially less conservative than available conditions, is presented, and similarity scaling is explored in the reduction of conservatism of available results. Finally, the choice of an adequate Lyapunov matrix is cast as an optimization problem.

An alternative approach to the assessment of robust stability of state space systems, under time-invariant perturbations linearly dependent on a vector of parameters, is proposed in Chapter 5. Working directly with the perturbed state equations, and exploring diagonalization of uncertainty, an equivalent frequency-domain problem is formulated, from which sufficient stability conditions are derived. The formulation is such that the uncertainty matrix which appears in the equivalent frequency-domain problem is derived directly from the real perturbation to the state space model. The derivation was independently undertaken, and has not been explicitly found in the literature. Conservatism of the stability conditions is reduced through the use of scaling techniques; besides the well-known optimal similarity scaling, conditions are obtained in terms of Perron scaling.

Chapter 6 compares numerical results obtained with the LDM of Chapter 4 and the frequency-domain method proposed in Chapter 5. Results obtained from the frequency-domain method were in general less conservative than results from the LDM; they were always at least as good as the LDM results. In particular, it is shown that the stability condition that uses Perron scaling has low computational cost and produces results with the same level of conservatism as results obtained with optimal similarity scaling.

In Chapter 7, the frequency-domain approach is explored in the analysis step of an iterative controller robustification technique, similar to that proposed by Bhattacharyya [4]. The alternative approach has computational advantages, mainly when Perron scaling is used, because it then permits the elimination of parameters in the resulting optimization problem.

Both the methods discussed in Chapters 4 and 5 yield sufficient stability domains in the space of plant parameters. In Chapter 8, a technique is presented for the computation of a necessary domain, starting from an available sufficient domain. An extensive search in the parameter space, which would be infeasible for a large number of parameters, is avoided on the basis of a conjecture, which has worked well in all problems considered.

Finally, Chapter 9 presents a summary of results and suggestions for further work.

1.4 Notation

The following notational convention will be adopted in this document, unless otherwise explicitly stated. Additional symbols will be defined as required.

Ao : Nominal dynamic matrix (open-loop)
Ac : Nominal dynamic matrix (closed-loop)
Ap : Perturbed dynamic matrix
D : Diagonal form of real perturbation matrix
Dc : Diagonal form of perturbation with complex scalars
E : Error matrix (parametric perturbation)
EA : Parametric perturbation to the matrix A
Ek : Perturbation due to the kth parameter
FU(M, Δ) : Upper linear fractional transformation
FL(M, K) : Lower linear fractional transformation
Go(s) : Nominal plant transfer matrix
H(s) : Open-loop transfer matrix
In : Identity matrix of order n
J : Objective function in optimization problems
K : Controller
L : Left matrix in the decomposition E = LDR
M ∈ R^{n×m} : Real n × m matrix
M ∈ C^{n×m} : n × m matrix with complex elements
Mij : Element at ith row and jth column of M
M^H : Complex conjugate transpose of M
M+ : Matrix of the complex magnitudes of elements of M
P : Solution to the Lyapunov matrix equation
PΔ : Matrix of upper-bounds on elements of Δ
Q : Lyapunov matrix
Qo : Nominal compensated transfer matrix
R : The right matrix in the decomposition E = LDR
RΛ(Ap) : Largest real part of λi(Ap), for fixed E
R̄Λ(Ap) : Largest real part of λi(Ap), for E in a class
S : Similarity scaling matrix
Sp : Perron scaling matrix
So : Osborne scaling matrix
Sd : Stability domain
Sdp(Q) : Stability domain, function of Q, in the norm || · ||p
Sd,μK(K) : Stability domain, function of K, based on the measure μK
T(s) : Closed-loop transfer matrix
Ue : Unitary matrix
W : Matrix of right eigenvectors
dp : Change in parameter p
jR : Imaginary axis of the complex plane
km : Multiloop stability margin
k̂m : Conservative assessment of km
rs∞(Q) : Stability bound on || p ||∞
p ∈ R^m : m-dimensional parameter vector
pw : Worst-case parameter combination
rs2(Q) : Stability bound on || p ||2
sk : Weight applied to the kth parameter
s : Complex frequency
u ∈ R^m : Input vector
x ∈ R^n : State vector
y ∈ R^p : Output vector
x̄M (x̲M) : Major (minor) output principal direction of M
ȳM (y̲M) : Major (minor) input principal direction of M
C : Field of complex numbers
C^{m×m} : Space of complex m × m matrices
Du : Class of frequency-dependent, unstructured uncertainties
Ds : Class of frequency-dependent, structured uncertainties
Eu : Class of unstructured real uncertainties
Es : Class of structured real uncertainties
Q : Set of symmetric, positive-definite Q ∈ R^{n×n}
SK : Class of scaling matrices related to the block-structure K
XK : Class of block-diagonal structured uncertainties
R : Field of real numbers
R+ : Set of non-negative real numbers
R^{n×n} : Space of n × n matrices with elements in R
Δ(s) : Frequency-dependent perturbation
ΔM(s) : Frequency-dependent perturbation to M
αk : Bound on the range of the kth parameter
α : Measure of stability margin
δ(s) : Upper-bound on the norm of Δ(s)
ε : Small quantity in general
λi(M) : ith eigenvalue of M
μ(M) : Structured singular-value of M
π(M) : Perron radius of M
πw : Set of worst-case parameters
ρ(M) : Spectral radius of M
ρR(M) : Real spectral radius of M
σi(M) : ith singular-value of M
σ̄(M) : Maximum singular-value of M
σ̲(M) : Minimum singular-value of M
φ : Characteristic polynomial
∂ : Partial derivative
|x| : Complex magnitude of x
det[M] : Determinant of square M
|| x ||p : p-norm of vector x
|| M ||p : Matrix norm induced by the p-norm
|| M ||F : Frobenius norm of M
∀ : For all
■ : End of proof
□ : End of statement given without proof
o : End of example
inf, sup : Infimum, supremum
max, min : Maximum, minimum
DU : Diagonal Uncertain
LDM : Lyapunov Direct Method
GNC : Generalized Nyquist Criterion
MIMO : Multi-Input, Multi-Output
OS : Osborne Scaling
OSS : Optimal Similarity Scaling
PR : Perron Radius
PS : Perron Scaling
SISO : Single-Input, Single-Output
SSV : Structured Singular-Value

CHAPTER 2
NOMINAL MODELS AND UNCERTAINTY REPRESENTATION

2.1 Nominal Models and Definitions

This section introduces basic definitions and models of linear time-invariant systems. Let us consider the unity feedback system with cascade compensation, represented in Figure 2-1. The multi-input, multi-output block Go represents the physical system or process under investigation, which is generically designated as the plant.

Figure 2-1. Unity feedback system: a) Closed-loop system; b) Uncompensated nominal plant

The subscript o designates the nominal model of the plant, namely a mathematical representation where the relationships among the quantities involved are exactly known. Unless otherwise stated, nominal models will be regarded as linear and time-invariant. The cascade connection of plant and compensator defines the open-loop compensated plant, denoted by Qo = GoK.


Many dynamic systems of engineering significance can be described by a linear differential equation relating the input r(t) and its derivatives to the output y(t) and its derivatives. However, this representation is not the most convenient to deal with. Representations that have become standard in control systems theory are the state space model and the transfer matrix model.

State space model. A differential equation of order n with constant coefficients, involving m inputs, p outputs and their derivatives, can be put in the state variable form:

ẋ(t) = Ax(t) + Bu(t)    (2.1)
y(t) = Cx(t) + Du(t)    (2.2)

where x(t) ∈ R^n is the state vector and A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and D ∈ R^{p×m} are constant matrices.

A generic state space model is often designated by the quadruple [A, B, C, D]. Unless otherwise stated, open-loop plants are assumed to be purely dynamic, thus having a representation of the form [AG, BG, CG, 0]. A dynamic controller is represented by the quadruple [AK, BK, CK, DK], which reduces to DK in the case of a purely algebraic controller. To the closed-loop system corresponds the quadruple [Ac, Bc, Cc, Dc], whose components are easily obtained from the state space descriptions of plant and controller.

Transfer matrix model. The nominal transfer matrix may be obtained via the application of the Laplace transform to the state space equations, under the assumption of null initial conditions. The transfer matrix is then given by:

H(s) = C(sI - A)^{-1}B + D    (2.3)

where the term (sI - A)^{-1} is the resolvent of the matrix A.
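As a numerical illustration of (2.3), the resolvent-based evaluation of H(s) at a given frequency can be sketched as follows (this sketch is not part of the original text; the second-order example matrices are hypothetical):

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    resolvent = np.linalg.inv(s * np.eye(n) - A)  # (sI - A)^{-1}
    return C @ resolvent @ B + D

# Hypothetical example: H(s) = 1 / (s^2 + 3s + 2) in controllable form
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

H0 = transfer_matrix(A, B, C, D, 0.0)  # DC gain: 1/2
```

For MIMO models the same code applies unchanged; H(s) is then a p × m matrix.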


Let Go(s) and K(s) be the transfer matrices of plant and compensator, respectively. The transfer matrix of the closed-loop unity feedback system, which can be obtained by algebraic manipulation of blocks, is:

T(s) = [(I + GoK)^{-1}GoK](s)    (2.4)

Note that, in view of the dimensions of the matrices in the state space model, Go(s) ∈ C^{p×m}. Consequently, K(s) ∈ C^{m×p} and T(s) ∈ C^{p×p}. Of course, T(s) can be obtained by applying (2.3) to the quadruple [Ac, Bc, Cc, Dc].

Characteristic decomposition. A complex, square matrix M ∈ C^{n×n} with distinct eigenvalues has the characteristic decomposition:

M = WΛW^{-1}    (2.5)

where Λ = diag{λi}, i = 1,...,n, contains the eigenvalues of M. The columns of W are linearly independent eigenvectors of M, arranged in correspondence with the eigenvalues. Matrices with non-distinct eigenvalues have analogous decompositions, where Λ assumes a non-diagonal Jordan form.

The spectral radius and the real spectral radius of M are, respectively,

ρ(M) = max_i |λi(M)|    (2.6)
ρR(M) = max_i λR,i(M)    (2.7)

where λR,i(M) is a real eigenvalue of M. It is easy to show that

ρR(M) ≤ ρ(M)    (2.8)

The spectral radius has an important role in stability analysis, as will be seen in the following chapters.
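The two radii, and the inequality (2.8), can be illustrated numerically. The following sketch is not from the original text; the example matrix is hypothetical, chosen with eigenvalues 1 and -3:

```python
import numpy as np

def spectral_radius(M):
    """rho(M) = max_i |lambda_i(M)|."""
    return max(abs(np.linalg.eigvals(M)))

def real_spectral_radius(M):
    """rho_R(M) = largest real eigenvalue of M (assumes at least one exists)."""
    lam = np.linalg.eigvals(M)
    return max(l.real for l in lam if abs(l.imag) < 1e-9)

M = np.array([[1.0, 0.0], [0.0, -3.0]])  # eigenvalues: 1 and -3
rho = spectral_radius(M)        # 3, set by the eigenvalue -3
rho_R = real_spectral_radius(M) # 1, the largest real eigenvalue
```

Note that ρR looks at signed real eigenvalues, while ρ looks at magnitudes, so ρR(M) = 1 < ρ(M) = 3 here.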


Singular-value decomposition. A complex matrix M ∈ C^{n×n} has the singular-value decomposition

M = XΣY^H    (2.9)

where Σ = diag{σi}, σi ∈ R+, i = 1,...,n, arranged in decreasing order, and Y and X are unitary matrices that contain respectively the right and left singular-vectors of M, arranged in corresponding order with the singular-values.

The right singular-vectors yM are called input principal directions, while the left singular-vectors xM are called output principal directions. The largest and the smallest singular-values are of fundamental importance in stability and performance analysis. They are called respectively maximum singular-value and minimum singular-value, and denoted by

σ̄(M) = σ1(M),    σ̲(M) = σn(M)    (2.10)

The principal directions corresponding to the maximum singular-value receive the qualifier major, while minor is attributed to the principal directions corresponding to the minimum singular-value. They are denoted by ȳM, x̄M and y̲M, x̲M, respectively.

The singular-value decomposition extends to non-square matrices M ∈ C^{m×n}. In this case, X and Y are matrices of different dimensions, and Σ has a number q = min{m, n} of non-null singular-values.
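A short numerical sketch (not in the original text; the matrix is a hypothetical random complex matrix) illustrates the decomposition and the eigenvalue-magnitude bounds discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Singular values returned in decreasing order: sigma[0] = sigma_max
sigma = np.linalg.svd(M, compute_uv=False)
eig_mags = np.abs(np.linalg.eigvals(M))

# Every eigenvalue magnitude lies between the min and max singular values
ok = bool(np.all(sigma[-1] - 1e-10 <= eig_mags)
          and np.all(eig_mags <= sigma[0] + 1e-10))
```

Here `sigma[0]` and `sigma[-1]` play the roles of σ̄(M) and σ̲(M) in (2.10).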

The minimum and the maximum singular-values constitute respectively lower and upper bounds for the magnitudes of the eigenvalues, that is,

σ̲(M) ≤ |λi(M)| ≤ σ̄(M), ∀i    (2.11)

Derivatives of the maximum singular-value will be needed in Chapters 4 and 5. The following lemma advances an analytic expression for the derivative of [σ̄(M)]^2 with respect to a generic variable x.


Lemma 2.1. Let M ∈ C^{n×n}, and assume that the entries of M depend on a variable x. Then, the derivative of [σ̄(M)]^2 with respect to x is given by

d/dx [σ̄(M)]^2 = W^H [d/dx (M^H M)] W    (2.12)

where W is the normalized eigenvector of M^H M associated with its largest eigenvalue.

Proof [33]. For any matrix A, let λ̄(A) = max_i λi(A). Then, it follows from the definition of the maximum singular-value that

[σ̄(M)]^2 = max_i λi(M^H M) = λ̄(M^H M)

Let W be the normalized eigenvector associated with λ̄(M^H M). Then, (M^H M)W = λ̄W. The derivative of this expression with respect to x is given by:

d/dx [M^H M] W + (M^H M) dW/dx = (dλ̄/dx) W + λ̄ dW/dx

Multiplying on the left by W^H, and considering that W^H (M^H M) = λ̄W^H and W^H W = 1, one has

W^H [d/dx (M^H M)] W = dλ̄/dx

from which (2.12) follows. ■
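The formula of Lemma 2.1 can be checked against a finite-difference approximation. The sketch below is not from the original text; it uses a hypothetical real matrix family M(x) = M0 + x M1, for which d(M^T M)/dx = M1^T M + M^T M1:

```python
import numpy as np

rng = np.random.default_rng(1)
M0 = rng.standard_normal((3, 3))
M1 = rng.standard_normal((3, 3))

def M(x):
    return M0 + x * M1

def sigmax_sq(x):
    """[sigma_max(M(x))]^2."""
    return np.linalg.svd(M(x), compute_uv=False)[0] ** 2

x0 = 0.3
Mx = M(x0)
dMHM = M1.T @ Mx + Mx.T @ M1            # d/dx (M^H M) at x0 (real case)
_, V = np.linalg.eigh(Mx.T @ Mx)
W = V[:, -1]                             # eigenvector of the largest eigenvalue
analytic = W @ dMHM @ W                  # Lemma 2.1, right-hand side of (2.12)

h = 1e-6                                 # central finite difference
numeric = (sigmax_sq(x0 + h) - sigmax_sq(x0 - h)) / (2 * h)
```

For a generic random M0, M1 the largest eigenvalue of M^T M is simple, so the derivative exists and the two values agree to finite-difference accuracy.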

In robust stability analysis, nominal models must be complemented by a description of the uncertainty they are subject to. Uncertainty representation is discussed in the next section.

2.2 Uncertainty Representation

2.2.1 Causes and Classification of Uncertainty A mathematical model is intended to represent the most significant characteristics of the modelled system. Between the model and the true system there always exists an error, which is called uncertainty in control theory.


Two broad categories of modeling error sources can be identified, namely unmodeled dynamics and variations of plant parameters. The objective of this section is to discuss uncertainty representations, with particular attention to the case of parametric uncertainty.

The modeling process is guided by the conflicting requirements of fidelity to the plant dynamics and tractability. As a result of the necessary compromises with respect to these conflicts, some secondary dynamic phenomena may be left unmodelled, or may receive simplified representation.

On the other hand, a model might adequately represent the plant dynamics under given conditions, yet might not be able to capture variations suffered by the plant during its life span, or even during an operation cycle.

Changes in the properties of physical components, which affect the plant, are normally expected and in some cases cannot be eliminated. For example, due to a compromise between precision and production costs, the technical specifications of almost all serially produced industrial components allow variations of properties around the nominal value. Other factors also contribute to changes in properties; among them are aging of the components, hysteresis cycles and environmental conditions.

An example of a plant with uncertainty due to both simplifications and neglected dynamics is the chemical batch reactor discussed in [39]. In that case, a truly nonlinear process is linearized at an operating point, a simplification advised by tractability. The dynamics of the resulting equation are uncertain due to neglected nonlinear effects and to unknown or neglected high-frequency temperature-dependent effects.

In order to improve the assessment of stability and performance characteristics of control systems, some sort of mathematical description of the uncertainty associated with a given nominal model is needed. This description is called the uncertainty representation.


In a fairly general sense, the true modeled object can be represented in terms of its nominal model So and of the modeling error E by the following relationship:

Sp = Π(So, E)    (2.13)

where Sp designates the object obtained when So is perturbed by E, and Π(·) describes how the error relates to the nominal model. The object Sp may represent either the plant or an interconnected system which includes the plant as one component. If, for example, the modeled object is a plant, (2.13) becomes Gp = Π(Go, E).

An admissible error set is called a perturbation class. Given the perturbation class, the relationship Π(·) determines a family of objects around the nominal model; this family is a set that includes a member which is closer to the true modeled object than the nominal model is.

The relationship Π(·) is determined by the uncertainty description chosen. A mathematical description of uncertainty must satisfy the following requirements [19, 33]:

(i) Simplicity: the description should be such that the model is tractable;

(ii) Accuracy: the uncertainty class should be such that it allows only perturbations that really can occur;

The quality of results obtained from the analysis of perturbed models depends, to some extent, on the uncertainty representation. The following are relevant factors in uncertainty representation:

(i) Nature of the model. The uncertainty representation must follow the nature of the nominal model. For example, if a linearized model is constructed for a system described by a nonlinear input-output relationship, the error can be adequately represented by the difference between the true output vector and the output vector of the model.

When the system is represented by a MIMO transfer matrix model, the uncertainty is represented by a dimensionally compatible transfer matrix. If a state space model is used, the uncertainty is represented by dimensionally compatible real perturbations to the quadruple [A, B, C, D].

(ii) Type of the error. The uncertainty may assume either the form of an absolute error or the form of a relative error. In the former case, the uncertainty is represented as an additive term, while in the latter it appears in multiplicative form.

(iii) Structure of the uncertainty. This is the most important characteristic of the uncertainty representation. It is related to the knowledge and assumptions made about the mechanisms that generate the uncertainty.

If nothing is known about particular causes of uncertainty, or if it is not practical to consider sources of uncertainty individually, the unstructured representation is used. The effects of all, possibly several, sources are lumped together and represented as if caused by only one source. The error is characterized by a norm upper bound, say || E || ≤ ε, but is otherwise unconstrained. The norm upper bound completely characterizes a class of unstructured perturbations.

When the mechanisms that give rise to uncertainty are known, it is useful, although not required, to adopt a structured representation. It is in general possible to identify at least some of the causes of uncertainty [14], whence it is in general possible to use at least a partially structured representation of uncertainty.

An interconnected system whose components are uncertain presents multiple perturbation 'blocks', which can be of different dimensions. Looking at the whole system, the uncertainty has a structure defined by the positions of the blocks. An unstructured representation could be used to cover the various scattered 'blocks'. However, this approach would be conservative, because the norm-bounded but otherwise unconstrained class of unstructured uncertainties would admit perturbations which do not satisfy the known block structure.

In the following, the general principles given above are applied to uncertainty representation in frequency-domain and state space models.

2.2.2 Representation of Uncertainty in Transfer Matrix Models

Unstructured plant uncertainty

Let us assume that the nominal and the perturbed plants are represented by transfer matrix models, respectively Go(s) ∈ C^{p×m} and Gp(s) ∈ C^{p×m}, and let Δ(s) represent the uncertainty. The argument 's' may be dropped if the dependence on s is clear from the context.

In the unstructured representation, the class of admissible perturbations is characterized by a frequency-dependent norm bound; usually the norm of choice is the induced 2-norm, which coincides with the maximum singular-value. An unstructured class which admits all possible Δ in a ball of radius δ(s) in C^{p×m} is defined as:

Du = {Δ(s) ∈ C^{p×m} : σ̄[Δ(s)] ≤ δ(s) ∈ R+, ∀s}    (2.14)

Additive representation. If the unstructured uncertainty is meant to account for an absolute error in the nominal model, the representation assumes the following additive form, illustrated by Figure 2-2 (a):

Gp = Go + ΔA,  ΔA ∈ Du    (2.15)


Multiplicative representation. This representation accounts for relative errors in the model. It is well suited when the nominal plant has input or output uncertainty. The perturbed model becomes, for each of these cases, respectively:

Gp = Go(Im + ΔI), ΔI ∈ Du;    Gp = (Ip + ΔO)Go, ΔO ∈ Du    (2.16)

When both input and output uncertainty are present, as shown by Figure 2-2 (b), the above expressions combine to give Gp = (Ip + ΔO)Go(Im + ΔI).

Figure 2-2. Plant uncertainty representation: a) Additive representation; b) Multiplicative representation

Brief analysis. The unstructured representation does not discriminate among sources of uncertainty. Neglected dynamics, which usually contribute high-frequency error components, and parametric variations are considered together.

This representation certainly satisfies the simplicity requirement. However, the maximum singular-value, used to characterize the class of allowable perturbations, depends on the whole matrix and does not account for the magnitudes or phases of individual elements or for submatrix structure. Consequently, the accuracy requirement may not be attained, because the class Du admits perturbations which cannot physically occur.

From the point of view of accuracy, it is preferable to use structured representations. Yet, even when some of the plant error components can be represented in structured form, there exist high-frequency components that require unstructured representation [14].


It is interesting to note that additive and multiplicative representations of plant uncertainty lead to different expressions for the perturbation of compensated plants. Regarding Figure 2-1 (a), when the additive representation is used, the perturbed compensated plant is given by Qp = (Go + ΔA)K = GoK + ΔAK = Qo + ΔAK, while in the case of output multiplicative uncertainty representation, the perturbed compensated plant is Qp = (Ip + ΔO)GoK = (Ip + ΔO)Qo. Therefore, with the multiplicative representation, the relative error in the compensated plant is the same as in the nominal plant, while with the additive representation the absolute error changes.

Structured plant uncertainty

Structured representations are adopted when it is possible to identify the causes of uncertainty, so that their effects can be linked to specific entries of the transfer matrix. Since individual sources of uncertainty are independently considered, the structured representation is more accurate.

Element-by-element-bounded perturbations. This highly structured representation can be used when frequency-dependent norm bounds for the uncertainty in each element of the nominal transfer matrix are available. The class is characterized by magnitude bounds and unconstrained element phases, and is defined as [33]:

Ds = {Δ(s) ∈ C^{p×m} : |Δij| ≤ Pij ∈ R+, arg(Δij) = θij, 0 ≤ θij ≤ 2π, ∀s}    (2.17)

It has been shown [33] that the class Ds defined above is a proper subset of the class Du given by (2.14). The perturbed plant under element-by-element-bounded additive uncertainty is:

Gp = Go + ΔA,  ΔA ∈ Ds    (2.18)


This structured class admits all perturbations whose element (i,j) belongs to a ball of radius Pij around the nominal element Go(i,j), ∀i ≤ p, ∀j ≤ m. Cases where some elements of the nominal system are exactly known are covered by setting to zero the corresponding elements of P.

Since the matrix of upper bounds, namely P, is a nonnegative matrix, this representation permits the use of results from Perron-Frobenius theory in robust stability analysis. Also useful in stability analysis is the result of the following lemma.

Lemma 2.2. For any Δ ∈ Ds and P ∈ R^{p×m} such that |Δij| ≤ Pij,

σ̄(Δ+) ≤ σ̄(P)

Proof. For any real matrix A ∈ R^{p×m} and vector x ∈ R^m,

σ̄(A) = || A ||i2 = sup_{||x||=1} || Ax ||2 = sup_{||x||=1} [Σi (Σj Aij xj)^2]^{1/2}

Therefore,

σ̄(Δ+) = sup_{||x||=1} [Σi (Σj Δ+ij xj)^2]^{1/2}    and    σ̄(P) = sup_{||x||=1} [Σi (Σj Pij xj)^2]^{1/2}

Since Δ+ij ≥ 0, the supremum will occur for some x such that xj ≥ 0, ∀j; let x̄ be the value of x which maximizes the expression for σ̄(Δ+). Now,

x̄j ≥ 0, Δ+ij ≥ 0, Pij ≥ 0 and Pij ≥ Δ+ij  imply  Pij x̄j ≥ Δ+ij x̄j, ∀(i,j)

Therefore:

σ̄(Δ+) = [Σi (Σj Δ+ij x̄j)^2]^{1/2} ≤ sup_{||x||=1} [Σi (Σj Pij xj)^2]^{1/2} = σ̄(P) ■

This proof is an alternative to the original proof [33]. It is known that, ∀Δ ∈ Ds, σ̄(Δ) ≤ σ̄(Δ+). Therefore, using the result of the lemma, for any perturbation in the element-by-element-bounded class one has that σ̄(Δ) ≤ σ̄(Δ+) ≤ σ̄(P).
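The chain of inequalities σ̄(Δ) ≤ σ̄(Δ+) ≤ σ̄(P) can be verified numerically. The sketch below is not from the original text; it draws a hypothetical random member of the class Ds for a given bound matrix P:

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.5, 1.0], [0.2, 0.8]])         # elementwise magnitude bounds

# A random member of D_s: |Delta_ij| <= P_ij, phase unconstrained
phase = rng.uniform(0, 2 * np.pi, P.shape)
mag = rng.uniform(0, 1, P.shape) * P
Delta = mag * np.exp(1j * phase)

sv_max = lambda X: np.linalg.svd(X, compute_uv=False)[0]
s_D = sv_max(Delta)          # sigma_max(Delta)
s_Dp = sv_max(np.abs(Delta)) # sigma_max(Delta+), the matrix of magnitudes
s_P = sv_max(P)              # sigma_max(P)
```

Running this for any random draw respects s_D ≤ s_Dp ≤ s_P, as the lemma guarantees.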

Uncertainty representation in interconnected systems

When uncertain systems are connected together, the resulting larger system has scattered simultaneous perturbations. Although the individual perturbations may be unstructured, the perturbation in the overall system presents a structure, because the relative positions of the system components are known.

The system represented in Figure 2-3 illustrates such a case; although it has only two perturbations, the following discussion applies in general.

Figure 2-3. Uncertain unity feedback system

Additive unstructured representation. A possible approach to the derivation of an uncertainty representation for this system is to obtain the perturbation of the compensated open-loop transfer matrix in terms of ΔI and ΔO. The perturbed open-loop compensated matrix is:

Qp = (Ip + ΔO)Go(Im + ΔI)K = GoK + (ΔO GoK + ΔO Go ΔI K + Go ΔI K)

Qp = Qo + ΔA    (2.19)

Therefore, the uncertainty can be written as an additive perturbation to the open-loop transfer matrix. This approach, however, is inadequate for two reasons.

The first reason is that, in order to render this formulation useful, it is necessary to compute or estimate a norm bound for the perturbation ΔA. Although this can possibly be done for simple systems, it might become very cumbersome in the case of complex systems. The second and most important reason is that the additive unstructured representation does not carry information about the structure of the perturbation in the interconnected system.

Additive block-diagonal representation. An alternative approach, which takes into account the structure of uncertainty, is the block-diagonal representation.

It derives from the technique introduced by Safonov and Athans [48] for dealing with systems involving simultaneous perturbations in the context of the LQG regulator problem, therefore in time-domain analysis. The essence of the technique is to rearrange the system in such a way that the perturbations are isolated in a block-diagonal matrix.

The technique was explored by Safonov [46] in the derivation of 'conic sector conditions' for stability of MIMO systems, and by Doyle [12] in the derivation of necessary and sufficient conditions for stability under structured perturbations.

A diagonal representation of simultaneous perturbations can be obtained for any system, regardless of the dimensionality of each particular perturbation. Both parameter dependent additive perturbations and actuator and/or measurement uncertainties, represented respectively as input and output perturbations, can be handled [39]. Let us consider its application to the system in Figure 2-3.

The loops involving the perturbations ΔI and ΔO can be regarded as additional system loops, through which the nominal system and the perturbations exchange signals. The nominal feedback loop provides a signal to the ith perturbation through the output yΔi, and receives a signal through the input uΔi. The perturbations may be isolated in a block-diagonal structure through the following simple procedure:

Procedure 2.1. Diagonalization of uncertainty in frequency-domain systems:

1. Suppose the additional system loops are open, as in Figure 2-4 (a);

2. Compute the transfer function from each system input to each system output. Inputs and outputs now include the nominal input vector r and the nominal output vector y, as well as the perturbation outputs uΔi and perturbation inputs yΔi;

3. Arrange the transfer functions in matrix form. This step will generate the representation in Figure 2-4 (b), which is referred to as the 'M - Δ' form of the perturbed system.

Figure 2-4. Block diagonal representation: a) Open perturbation loops; b) The M - Δ form

The perturbation in Figure 2-4 (b) is Δ = diag(ΔI, ΔO), therefore a block-diagonal structure; yΔ and uΔ are vectors containing the uncertainty inputs and outputs, respectively. The transfer matrix M(s) is called the nominal interconnection structure. The (1,1)-submatrix relates the collective output of the uncertainties to their collective inputs, while the (2,2)-submatrix is the nominal transfer matrix from r to y. For the system in Figure 2-3, the relation between uΔ and yΔ is:

[yΔ1]   [ (I + KGo)^{-1}KGo    (I + KGo)^{-1}K    ] [uΔ1]
[yΔ2] = [ -(I + GoK)^{-1}Go    (I + GoK)^{-1}GoK  ] [uΔ2] ,   with  [uΔ1; uΔ2] = diag(ΔI, ΔO) [yΔ1; yΔ2]

Note that the dimension of the square submatrix M11 depends on the number of simultaneous perturbations. Therefore, even a SISO system subjected to simultaneous perturbations is characterized by a MIMO nominal interconnection structure.

Partitioning the interconnection structure according to the dimensions of inputs and outputs, the system can be represented as:

[yΔ]   [ M11  M12 ] [uΔ]
[ y ] = [ M21  M22 ] [ r ]    (2.20)

From the partition and Figure 2-4 (b), the following relations are obtained:

uΔ = ΔyΔ;    yΔ = M11 uΔ + M12 r;    y = M22 r + M21 uΔ

Manipulating these equations, one obtains:

y = [M22 + M21 Δ(I - M11 Δ)^{-1} M12] r    (2.21)

Thus, the transfer matrix from r to y is given by an upper linear fractional transformation of the uncertainty, namely:

FU(M, Δ) = M22 + M21 Δ(I - M11 Δ)^{-1} M12    (2.22)

A block diagram representation of the LFT is shown in Figure 2-5 below. The expression Δ(I - M11 Δ)^{-1} represents a feedback loop, with Δ in the direct path and M11 in the feedback path. If Δ = 0, then FU(M, Δ) simplifies to the nominal transfer matrix from r to y, namely M22 = (I + GoK)^{-1}GoK.


Figure 2-5. Block diagram representation of Fu(M, A)
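The upper LFT (2.22) is straightforward to evaluate numerically. The sketch below is not from the original text; the scalar partition values are hypothetical, chosen so that the formula can be checked by hand:

```python
import numpy as np

def f_u(M11, M12, M21, M22, Delta):
    """Upper linear fractional transformation F_U(M, Delta), eq. (2.22)."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.inv(np.eye(n) - M11 @ Delta) @ M12

M11 = np.array([[0.5]]); M12 = np.array([[1.0]])
M21 = np.array([[1.0]]); M22 = np.array([[2.0]])

nominal = f_u(M11, M12, M21, M22, np.zeros((1, 1)))     # Delta = 0 recovers M22
perturbed = f_u(M11, M12, M21, M22, np.array([[0.5]]))  # 2 + 0.5/(1 - 0.25)
```

As noted above, Δ = 0 returns the nominal transfer matrix M22; a nonzero Δ closes the feedback loop through M11.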

The general case of block diagonal representation. The technique applied to the simple example above extends to systems having a larger set of localized perturbations.

Uncertainties originating from unmodeled dynamics assume the form of norm-bounded, full complex blocks of different dimensions. On the other hand, uncertainty coming from parametric variations assumes the form of real perturbations, which can be repeated. Additionally, fictitious repeated complex scalar perturbations can be used to reformulate a robust performance problem as a robust stability problem [15].

Therefore, in the most general case, the final block diagonal structure will show (possibly repeated) real scalars, (possibly repeated) complex scalars and full complex blocks of different dimensions.

To account for the correct dimensionality of blocks in the diagonal formulation, a block structure of indices is defined [17]. Assume that M ∈ C^{n×m}, and consider the triple (mr, mc, mC) of nonnegative integers such that mr + mc + mC = r ≤ n, and define the block structure K associated with M by:

K(mr, mc, mC) = (k1, ..., kmr, kmr+1, ..., kmr+mc, kmr+mc+1, ..., kmr+mc+mC)    (2.23)

where, for compatibility of dimensions, Σ_{i=1}^{r} ki = m. Given K, a family of associated m × m block diagonal perturbations is defined by:

XK = {Δ = bl diag(δ1^r Ik1, ..., δmr^r Ikmr, δ1^c Ikmr+1, ..., δmc^c Ikmr+mc, Δ1, ..., ΔmC)}    (2.24)

where δi^r ∈ R, δj^c ∈ C and Δl ∈ C^{kmr+mc+l × kmr+mc+l}. As required by the dimension of M, XK ⊂ C^{m×m}. Each δi^r Iki represents a repeated real scalar, each δj^c Ikj represents a repeated complex scalar, and each Δl represents a full complex block.

The general form can be particularized through the convenient choice of indices. For example, if there is no parametric uncertainty, mr = 0. In the case of purely real perturbations, the adequate setting is mc = 0 and mC = 0.

A class of allowable perturbations, having block sizes determined by the block structure, is defined from (2.24) by specifying an upper bound on the norm:

XK(δ) = {Δ : Δ ∈ XK, σ̄(Δ) ≤ δ ∈ R+}    (2.25)
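The assembly of a perturbation from the block structure (2.24) can be sketched as follows. This helper and its example values are hypothetical, not from the original text; it builds Δ = bl diag(repeated real scalars, repeated complex scalars, full blocks):

```python
import numpy as np

def block_structure_perturbation(delta_r, k_r, delta_c, k_c, full_blocks):
    """Assemble a block-diagonal Delta from repeated real scalars delta_r
    (with repetition counts k_r), repeated complex scalars delta_c (counts
    k_c), and full complex blocks."""
    blocks = [d * np.eye(k) for d, k in zip(delta_r, k_r)]
    blocks += [d * np.eye(k, dtype=complex) for d, k in zip(delta_c, k_c)]
    blocks += list(full_blocks)
    m = sum(b.shape[0] for b in blocks)
    Delta = np.zeros((m, m), dtype=complex)
    i = 0
    for b in blocks:
        k = b.shape[0]
        Delta[i:i + k, i:i + k] = b
        i += k
    return Delta

# One real scalar repeated twice, one complex scalar, one full 2x2 block
Delta = block_structure_perturbation(
    [0.3], [2], [0.5j], [1], [np.array([[0.1, 0.2], [0.0, 0.4]])])
```

Here the block structure is K = (2, 1, 2), so Δ is 5 × 5, with zeros everywhere outside the diagonal blocks.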

2.2.3 Representation of Uncertainty in State Space Models

Let us now assume that the nominal plant is described by a state space model. The dynamics of the physical process are captured by the matrix A. Since A has a fixed dimension in the state space model, the dynamical order of the process is well determined. Thus, uncertainty caused by neglected high-order dynamics cannot be taken into account in the usual state space model.

On the other hand, the state space model is well suited to the representation of parametric uncertainty. Variations in system parameters are represented as perturbations in the elements of the real matrices that define the model. The perturbations can be collected in the error matrix E, so that the perturbed matrix is represented by

Mp = M + E    (2.26)

where M can be either one of the real matrices in the state space representation. Particular forms of E are discussed below.

Unstructured uncertainty

As in frequency-domain models, the class of unstructured errors is characterized by a norm upper-bound:

Eu = {E : || E || ≤ ε}    (2.27)

and the perturbed matrix is

Mp = (M + E),  E ∈ Eu    (2.28)

This representation is adequate when several indistinguishable uncertainties exist in the system, which otherwise has a well defined order. However, since it is in general possible to identify at least some of the uncertainty sources, more realistic representations are needed in order to account for the structure.

Structured uncertainty

Independent variations of elements. This representation is used when the elements of a real matrix change inside known real intervals, independently of each other. The admissible class of uncertainty can be defined by placing an upper bound on the largest interval:

ESI = {E : |Eij| ≤ εij; max_{i,j} εij = ε}    (2.29)

The perturbed matrix takes the form of an interval matrix:

Mp = (M + E),  E ∈ ESI    (2.30)

If only ε is known, this representation can be used with the error matrix elementwise bounded by the matrix P = εU, where U(i,j) = 1, i,j = 1,...,n [58]. If some of the entries of M are exactly known, the corresponding entries of U are set to zero, thus accommodating the extra information on the error structure.

Dependent variations of elements. This case differs from the previous one in that it admits correlated variations among the entries of M. This situation actually arises in practical cases. For example, consider the case of an open-loop state space model in which the output matrix has some uncertain entries, due to variations in a physical parameter that affects the output gain. If an output feedback controller is used, the dynamic matrix of the closed-loop system is likely to have several uncertain entries. However, the variations in these entries are not free, since they depend on the same physical parameter.

A convenient representation for such cases is to obtain the error matrix in terms of the physical parameters. Suppose that an m-dimensional vector of parameters can be identified, and assume that the dependence of M on each parameter is linear. This assumption is not too restrictive, since it is possible to redefine nonlinear combinations of physical parameters such that the assumption is satisfied. The perturbation class can be characterized as:

ESD = {E : E = Σ_{k=1}^{m} pk Ek,  |pk| ≤ αk,  k = 1,...,m}    (2.31)

Each Ek is a constant matrix which expresses the structural dependence of M on the parameter pk. Such a representation has been largely used in stability analysis [4, 51, 61]. The perturbed matrix is represented by:

Mp = (M + E),  E ∈ ESD    (2.32)

Notice that Mp = M + Σ_{k=1}^{m} pk Ek is (affinely) linear in the parameters.
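Building a member of the class (2.31) is a direct sum of scaled structure matrices. The following sketch is not from the original text; the matrices M, E1, E2 and the parameter values are hypothetical:

```python
import numpy as np

def perturbed_matrix(M, E_list, p, alpha):
    """M_p = M + sum_k p_k E_k for the class E_SD, checking |p_k| <= alpha_k."""
    assert all(abs(pk) <= ak for pk, ak in zip(p, alpha)), \
        "parameter vector outside the admissible class"
    return M + sum(pk * Ek for pk, Ek in zip(p, E_list))

# Hypothetical two-parameter example
M = np.array([[-2.0, -1.0], [1.0, -1.0]])
E1 = np.array([[-1.0, 0.0], [0.0, 0.0]])   # structure of parameter p1
E2 = np.array([[0.0, 0.0], [0.0, -1.0]])   # structure of parameter p2
Mp = perturbed_matrix(M, [E1, E2], [0.1, -0.05], [0.2, 0.1])
```

The affine dependence on p is what later allows stability domains to be characterized directly in parameter space.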

The following example illustrates the use of this representation of parametric uncertainty.

Example 2.1 Consider the circuit diagram represented in Figure 2-6.

Figure 2-6. Elementary electric circuit

Let the input be u(t) = vi(t) and the output be y(t) = vo(t). Then, one has:

ẋ1 = -(1/(R1 C)) x1 - (1/C) x2 + (1/(R1 C)) u
ẋ2 = (1/L) x1 - (R2/L) x2

y = [ 0  R2 ] x

Assume that R1, R2 are uncertain, and that the components are rated at L = 1 H, C = 1 F, R1o = 0.5 Ω, R2o = 1 Ω. The nominal matrices are:

A = [ -2  -1 ]    B = [ 2 ]    C = [ 0  1 ]
    [  1  -1 ]        [ 0 ]

Given that R1, R2 are uncertain, the terms they affect can be written as:

1/(R1 C) = 1/(R1o C) + δ(1/(R1 C)) = 2 + p1
R2/L = R2o/L + δ(R2/L) = 1 + p2
where δ(·) represents the unknown variation. Therefore, the perturbed open-loop model is given by:

ẋ = ( [ -2  -1 ]      [ -1  0 ]      [ 0   0 ] )     ( [ 2 ]      [ 1 ] )
    ( [  1  -1 ] + p1 [  0  0 ] + p2 [ 0  -1 ] ) x + ( [ 0 ] + p1 [ 0 ] ) u

where the p1 and p2 terms in the dynamic matrix define EA, and the p1 term in the input matrix defines EB;

y = ( [ 0  1 ] + p2 [ 0  1 ] ) x

where the p2 term defines EC.

Thus, uncertainties in the physical parameters R1, R2 are reflected in the state space model as uncertain input and output gains, plus uncertainties in the dynamic matrix A. Assuming that an output feedback controller K = -1 is used, one has Ac = (A + BKC), where

BKC = [ 0  -(2 + p1 + 2p2 + p1p2) ]
      [ 0   0                     ]

Defining p3 = p1p2, the closed-loop perturbed matrix becomes:

ẋ = ( [ -2  -3 ]      [ -1  -1 ]      [ 0  -2 ]      [ 0  -1 ] )
    ( [  1  -1 ] + p1 [  0   0 ] + p2 [ 0  -1 ] + p3 [ 0   0 ] ) x

Now, let p \stackrel{\rm def}{=} [p_1 \; p_2 \; p_3]^T. The objective of stability analysis is to find the largest parameter ranges such that the perturbed system remains stable, and to characterize the


allowable intervals [-a_k, a_k]. Alternatively, assume that the parameter ranges are known. For example, assume that the variations in R_1, R_2 are within \pm 10\% of the rated value. Then, the parameters are in the ranges:

p_1 \in [-0.202, 0.202]; \quad p_2 \in [-0.100, 0.100]; \quad p_3 \in [-0.020, 0.020]

In this case, the objective of stability analysis is to check whether or not the system remains stable for all possible combination of parameters in the hypercube defined by these ranges.

0
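As an illustration, this combination check can be sketched numerically. The following minimal Python sketch uses the closed-loop matrices of Example 2.1 as interpreted here, samples a coarse grid of the parameter box, and treats p_3 = p_1 p_2 as the dependent parameter it is, rather than a free third variable.

```python
import numpy as np

# Closed-loop perturbed matrix of Example 2.1 (as interpreted here):
# A_c(p) = A0 + p1*E1 + p2*E2 + p3*E3, with the dependent parameter p3 = p1*p2.
A0 = np.array([[-2.0, -3.0], [1.0, -1.0]])
E1 = np.array([[-1.0, -1.0], [0.0, 0.0]])
E2 = np.array([[0.0, -2.0], [0.0, -1.0]])
E3 = np.array([[0.0, -1.0], [0.0, 0.0]])

def is_hurwitz(A):
    """True when every eigenvalue lies strictly in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Grid the box p1 in [-0.202, 0.202], p2 in [-0.100, 0.100]; p3 = p1*p2 is
# dependent, so it is not sampled as a free third parameter.
stable = all(
    is_hurwitz(A0 + p1 * E1 + p2 * E2 + (p1 * p2) * E3)
    for p1 in np.linspace(-0.202, 0.202, 21)
    for p2 in np.linspace(-0.100, 0.100, 21)
)
print(stable)
```

A grid check of this kind is only a necessary screen, of course; the methods developed in later chapters give guaranteed answers over the whole hypercube.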

2.3 Conclusions

This chapter puts together basic concepts concerning system models and uncertainty representation, which will be relevant for subsequent development.

Since the objective of this dissertation is the study of robust stability under parametric uncertainty, the state space model will have an important role in following chapters. Also very useful will be the uncertainty description given by (2.31), which accommodates practical cases of parametric uncertainty, as demonstrated by Example 2.1.

In Chapter 5, the problem will be given a frequency-domain treatment, and the diagonalization of uncertainty will be employed. Although the diagonalization technique has been used for some years, no explicit derivation has been found. For this reason, indications found in the literature were put together in Procedure 2.1, and the steps leading to the linear fractional transformation (2.22) were completely worked out.

The review of fundamental concepts will continue in the next chapter with a summary of stability conditions.

CHAPTER 3
STABILITY ANALYSIS OF LINEAR SYSTEMS

3.1 Introduction

Stability of control systems is a fundamental requirement, which must be ensured prior to any other. This chapter presents a review of stability conditions and stability analysis techniques applicable to linear systems.

Both state space and transfer matrix models are considered; in each case, nominal stability and robust stability under additive perturbations are addressed.

3.2 Stability of State Space Systems

3.2.1 Nominal Stability Condition

Let us consider the linear, time-invariant system

\dot{x}(t) = A x(t)    (3.1)

This model can be interpreted as the representation of either an unforced system or a system under a fixed, known input [52]. The following theorem gives a necessary and sufficient condition for asymptotic stability:

Theorem 3.1 [52]. The equilibrium point 0 of (3.1) is asymptotically stable if and only if all the characteristic values of A have strictly negative real parts, that is,

\lim_{t \to \infty} x(t) = 0 \iff \mathrm{Re}[\lambda_i(A)] < 0, \; \forall i    (3.2)

An asymptotically stable linear system is globally asymptotically stable, because x(t) \to 0 independently of the initial state x(t_0).

Equation (3.2) states that asymptotic stability depends on the eigenvalues of A. However, it is not necessary to compute the eigenvalues in order to check stability. The Routh-Hurwitz criterion gives a necessary and sufficient condition for stability based on the coefficients of the characteristic polynomial. Furthermore, the Lyapunov direct method permits sufficient conditions for stability to be derived from a matrix function involving A.

Nominal stability assessment through the Lyapunov Direct Method. The stability properties of the equilibrium point x(t) = 0 of the system \dot{x}(t) = Ax(t) can be determined through the Lyapunov direct method, which does not require the computation of the characteristic polynomial.

According to Lyapunov theory, a sufficient condition for global asymptotic stability of the equilibrium point x(t) = 0 is the existence of a scalar positive definite function of x, say V(x), having a negative definite time-derivative \dot{V}(x) [52]. For LTI systems, the natural choice of a Lyapunov function candidate is the quadratic function

V(x) = x^T P x    (3.3)

where P is a real symmetric matrix. As long as P is positive definite, the scalar function V(x) is positive definite. The time derivative of the quadratic function is given by:

\dot{V}(x) = \dot{x}^T P x + x^T P \dot{x} = x^T (A^T P + P A) x \stackrel{\rm def}{=} -x^T Q x    (3.4)

from which the matrix Lyapunov equation, relating the matrices A, P and Q, is obtained:

(A^T P + P A) = -Q    (3.5)

Global asymptotic stability of the equilibrium point x = 0 of \dot{x}(t) = A x(t) is ensured if, for a given A, it is possible to find symmetric positive definite matrices P and Q satisfying


equation (3.5). This is because, if such P and Q exist, V(x) is a scalar positive definite function whose time-derivative is negative definite. On the other hand, if there exists a positive definite Q such that the corresponding P is negative definite, the equilibrium point is unstable.

The following theorem formalizes the relationship between the asymptotic stability of A and the matrix Lyapunov equation.

Theorem 3.2 [52]. The following statements are equivalent, \forall A \in R^{n \times n}:

1. All eigenvalues of A have strictly negative real parts;

2. For every positive definite Q \in R^{n \times n}, the equation (3.5) has a unique, positive definite solution for P;

3. There exists some positive definite matrix Q \in R^{n \times n} such that the equation (3.5) has a unique, positive definite solution for P.

This theorem provides a computational device for assessing stability without computing the eigenvalues of A. Choose any positive definite Q and solve (3.5) for P: if the solution exists and is unique and positive definite, then A is asymptotically stable. If there is no solution, or if the solution is either not unique or not positive definite, then A is not asymptotically stable.
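This computational device translates directly into a few lines of numerical code. The sketch below, which assumes SciPy's continuous-time Lyapunov solver and an illustrative matrix A, chooses Q = I and tests the definiteness of the resulting P.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A, Q=None):
    """Theorem 3.2 as a test: pick Q > 0, solve A^T P + P A = -Q, and
    check that the solution P is symmetric positive definite."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)                       # any positive definite Q works
    P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q
    P = (P + P.T) / 2                       # symmetrize numerical residue
    return bool(np.all(np.linalg.eigvalsh(P) > 0))

A = np.array([[-2.0, -3.0], [1.0, -1.0]])   # illustrative stable matrix
print(lyapunov_stable(A))                    # True: A is Hurwitz
```

Note that the solver fails (or returns an indefinite P) precisely in the situations Theorem 3.2 flags as non-asymptotically-stable, e.g. when some \lambda_i(A) + \lambda_j(A) = 0 the equation has no unique solution.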

3.2.2 Assessment of Robust Stability

Robust stability assessment through the Lyapunov Direct Method

Let us consider the perturbed state equation

\dot{x}(t) = A_p x(t) = (A + E)\, x(t)    (3.6)

where the nominal matrix A is asymptotically stable. Since A is stable, the matrix Lyapunov equation for the nominal system, namely A^T P + P A = -Q, has a unique, positive definite solution P for every positive definite matrix Q; let P_o be the solution corresponding to some positive definite Q_o.

Now, let V_p(x) = x^T P x, where P is symmetric and positive definite, be a Lyapunov function candidate for the perturbed system (3.6). The time derivative of V_p(x) is:

\dot{V}_p(x) = \dot{x}^T P x + x^T P \dot{x} = [(A + E)x]^T P x + x^T P [(A + E)x] = x^T [(A^T P + P A) + (E^T P + P E)] x

Let us choose P = P_o, the positive definite matrix defined above. Then, the last equation becomes

\dot{V}_p(x) = -x^T [Q_o - (E^T P_o + P_o E)] x \stackrel{\rm def}{=} -x^T Q_p x    (3.7)

According to Theorem 3.2, since P_o is positive definite, A_p is asymptotically stable if Q_p is positive definite. Therefore, the robust stability analysis problem becomes that of finding conditions on E which ensure the positive definiteness of Q_p. Clearly, the conditions that can be derived depend on the description of the uncertainty E.

Although stability conditions obtained from the Lyapunov direct method are only sufficient, a positive feature of the method is that it can be applied with virtually all uncertainty descriptions, including time-varying and nonlinear uncertainties.

In Chapter 4, a detailed treatment of stability conditions according to the Lyapunov direct method will be given, for the case of E belonging to the class ESD defined by (2.31).
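For the class \mathcal{E}_{SD}, the positive definiteness test on Q_p can already be sketched numerically. The matrices and bounds below are illustrative, not data from the text; the useful observation is that \lambda_{\min}(Q_p) is a concave function of the parameters (Q_p is affine in them), so checking the vertices of the parameter box suffices for the whole box.

```python
import numpy as np
from itertools import product
from scipy.linalg import solve_continuous_lyapunov

# Sketch of the Lyapunov-based robustness test for E = sum_k p_k E_k with
# |p_k| <= a_k (the class E_SD of (2.31)); matrices and bounds are illustrative.
A  = np.array([[-2.0, -3.0], [1.0, -1.0]])       # stable nominal matrix
Ek = [np.array([[-1.0, -1.0], [0.0, 0.0]]),
      np.array([[ 0.0, -2.0], [0.0, -1.0]])]
a  = [0.2, 0.1]                                   # assumed parameter bounds

Qo = np.eye(2)
Po = solve_continuous_lyapunov(A.T, -Qo)          # A^T Po + Po A = -Qo

def Qp(E):
    """Q_p = Qo - (E^T Po + Po E), as in equation (3.7)."""
    return Qo - (E.T @ Po + Po @ E)

# lambda_min(Q_p) is concave in p, so positive definiteness at every vertex
# of the parameter box implies positive definiteness over the entire box.
robust = all(
    np.linalg.eigvalsh(Qp(sum(s * ak * Eki
                              for s, ak, Eki in zip(signs, a, Ek))))[0] > 0
    for signs in product([-1.0, 1.0], repeat=len(Ek))
)
print(robust)
```

The vertex argument is what makes this sketch a guaranteed (though still only sufficient) test, in contrast with a plain grid search.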

Other results

A Perron radius stability bound [44]. Sufficient conditions for (A + E) to be asymptotically stable are that A be stable and that (A + E) have no eigenvalues on the imaginary axis of the complex plane, for all E in an admissible class. It can be shown that (A + E) has no eigenvalue on the imaginary axis if there exists a non-singular matrix R \in R^{n \times n} such that

\| R E (j\omega I_n - A)^{-1} R^{-1} \| < 1, \quad \forall \omega \ge 0, \; \forall E    (3.8)

Assume that the uncertainty can be decomposed as E = S_1 \Delta_E S_2, where S_1 \in R^{n \times p} and S_2 \in R^{q \times n} are known constant matrices which account for the structure, and the matrix \Delta_E \in R^{p \times q}, p \le n, q \le n, contains the perturbation factors. Using condition (3.8), with the further assumption that |\Delta_{E,ij}| \le \epsilon_{ij}\, \epsilon, \; \epsilon_{ij} \ge 0, \; \epsilon > 0, where \epsilon is unknown, the following sufficient robust stability condition can be obtained [44]:

\epsilon < \frac{1}{\sup_{\omega \ge 0} \pi\!\left[\, |S_2 (j\omega I - A)^{-1} S_1|\, U \,\right]}    (3.9)

where U = [\epsilon_{ij}], and \pi(\cdot) is the Perron eigenvalue.

The advantage of a condition based on the Perron eigenvalue is that it is easily computable; however, it can be too conservative. It will be shown in Chapter 5 that a less conservative robust stability condition can be obtained by explicitly using Perron scaling. The relevant concepts of Perron theory are reviewed in Section 3.4 ahead.
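The easy computability claimed for (3.9) can be illustrated with a short frequency sweep; A, S_1, S_2 and U below are placeholders chosen for the sketch, not data from the dissertation, and the supremum is approximated on a finite grid.

```python
import numpy as np

# Numerical sketch of the Perron-radius bound (3.9); all data are illustrative.
A  = np.array([[-2.0, -3.0], [1.0, -1.0]])   # stable nominal matrix
S1 = np.eye(2)                               # structure: E = S1 * Delta_E * S2
S2 = np.eye(2)
U  = np.ones((2, 2))                         # relative bounds eps_ij

def perron(M):
    """Perron eigenvalue of a nonnegative matrix = its spectral radius."""
    return max(abs(np.linalg.eigvals(M)))

# Approximate the supremum over w >= 0 on a logarithmic frequency grid.
omegas = np.logspace(-3, 3, 400)
worst = max(
    perron(np.abs(S2 @ np.linalg.inv(1j * w * np.eye(2) - A) @ S1) @ U)
    for w in omegas
)
eps_bound = 1.0 / worst      # (3.9): stability is guaranteed for eps < eps_bound
print(eps_bound > 0)
```

Each frequency point costs one matrix inversion and one eigenvalue computation, which is the sense in which the bound is "easily computable"; the price, as noted above, is conservatism.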

Stability radius condition [24]. The objective of the stability radius method is to compute the distance from the stable matrix A to the set of unstable matrices of the same dimensions. The distance is measured by the smallest norm of a destabilizing matrix, namely the smallest norm of E such that (A + E) has a purely imaginary eigenvalue.

Considering the decomposition

\dot{x}(t) = (A + E)x(t) = (A + B D C)x(t)    (3.10)

where A \in R^{n \times n} is stable, B \in R^{n \times m} and C \in R^{p \times n} are known constant matrices which define the uncertainty structure, and D \in R^{m \times p} is a matrix of unknown factors, the stability radius of A is:

r_R(A; B, C) = \inf_D \{ \|D\| : (A + B D C) \text{ unstable} \}    (3.11)

An analytical expression for the real stability radius has been obtained [24], but the computation is too complex, even for unstructured perturbations. In the case of structured perturbations of rank 1, namely when either only one row or only one column of A is perturbed by each factor, the computational burden of the analytical expression is considerably simplified.

Letting G(s) \stackrel{\rm def}{=} C(sI - A)^{-1}B, and defining as G_R(j\omega) and G_I(j\omega), respectively, the real and imaginary parts of G(j\omega), and as \Omega and \bar{\Omega}, respectively, the set of frequency points for which G_I(j\omega) = 0 and its complement in R, the real stability radius for the case of rank 1 perturbations is given by:

r_R(A; B, C) = \min\left\{ \left[\sup_{\omega \in \Omega} \|G(j\omega)\|\right]^{-1}, \; \left[\sup_{\omega \in \bar{\Omega}} \left( \|G_R(j\omega)\|^2 - \frac{\langle G_R(j\omega), G_I(j\omega)\rangle^2}{\|G_I(j\omega)\|^2} \right)^{1/2} \right]^{-1} \right\}    (3.12)

Therefore, in the case of rank one perturbations the computation of the real stability radius involves a one-dimensional optimization problem. If only one entry of A is under perturbation, then D and the associated G(s) become scalars; the second term on the right side of (3.12) becomes infinite, and the real stability radius is easily computable.

3.3 Stability of Transfer Matrix Models

3.3.1 Nominal Stability Analysis

Input-output stability

A linear system is Bounded-Input, Bounded-Output (BIBO) stable if an input bounded in magnitude always produces a bounded output. Let H(s) be a matrix whose elements are proper rational functions of s. H(s) can be written as

H(s) = \frac{N(s)}{d(s)} = \frac{N(s)}{\prod_{i=1}^{d_d}(s - p_i)}    (3.13)

where d_d is the degree of the denominator polynomial d(s), which is given by the least common denominator of all (non-identically zero) minors of H(s) [39]. The transfer matrix, which was assumed proper, is stable if all poles p_i are in the open LHP. If p_j = 0 for some j, then (marginal) stability requires that the multiplicity of the pole p_j = 0 be 1.

Under the assumption that each element is a proper rational function of s, the transfer matrix possesses a state space realization [A, B, C, D], such that the transfer matrix relates to the state space realization by H(s) = C(sI - A)^{-1}B + D. Although the transfer matrix representation of a system is unique, the state space realization is not.

This transfer matrix can be rewritten as

C(sI - A)^{-1}B + D = \frac{Z(s)}{\det(sI - A)} = \frac{Z(s)}{\prod_{i=1}^{n}[s - \lambda_i(A)]}    (3.14)

If there are no cancellations of terms of the form [s - \lambda_i(A)] between the denominator and all the elements of the numerator in (3.14), then (3.13) and (3.14) are equivalent; the pole polynomial d(s) of the transfer matrix and the characteristic polynomial \det(sI - A) are the same. In this case, input-output stability is equivalent to the asymptotic stability of the dynamic matrix A.


A necessary and sufficient condition for non-cancellation of system poles in (3.14) is that the state space realization [A, B, C, D] be a minimal realization of the dynamic system, that is, that it be state controllable and observable.

Internal stability of closed-loop systems

Asymptotic stability of closed-loop systems, like the feedback system shown in Figure 2-1 (a), is equivalent to the internal stability of the loop [53]. A closed-loop LTI system is internally stable if any two points of the loop are connected through an exponentially stable transfer matrix [38].

Let K(s) in Figure 2-1 (a) be a stabilizing compensator for G_o(s), and let r_d designate an external signal placed at the plant input. The vector [y, u]^T, formed by the outputs of plant and compensator, is related to the vector [r, r_d]^T of their inputs by:

\begin{bmatrix} y \\ u \end{bmatrix} = H(G_o, K) \begin{bmatrix} r \\ r_d \end{bmatrix}, \qquad H(G_o, K) = \begin{bmatrix} (I + G_o K)^{-1} G_o K & (I + G_o K)^{-1} G_o \\ (I + K G_o)^{-1} K & -(I + K G_o)^{-1} K G_o \end{bmatrix}    (3.15)

Therefore, internal stability of the unity feedback system with cascade compensation is equivalent to the stability of the four transfer matrices in H(G_o, K). The characteristic polynomial of each of these matrices must be checked in order to assess the internal stability of the closed-loop system.
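For a scalar example, the four transfer functions in (3.15) share a common closed-loop denominator when there is no unstable pole-zero cancellation, so the check collapses to one polynomial. The sketch below assumes the toy data G_o(s) = 1/(s-1) and K(s) = 3, for which 1 + K G_o = (s+2)/(s-1) and every entry of H(G_o, K) has its pole at s = -2.

```python
import numpy as np

# Toy internal-stability check for the scalar case of (3.15), with the
# assumed example G_o = 1/(s-1) and static compensator K = 3.
num_G, den_G = [1.0], [1.0, -1.0]        # G_o = 1/(s-1)
k = 3.0

# Common closed-loop denominator: den_G + k*num_G (the zeros of
# (1 + K G_o) * den_G, i.e. the closed-loop characteristic polynomial).
den_cl = np.polyadd(den_G, k * np.array(num_G))
poles = np.roots(den_cl)
internally_stable = bool(np.all(poles.real < 0))
print(internally_stable)
```

With an unstable cancellation between plant and compensator this shortcut would hide an unstable hidden mode, which is exactly why all four entries of H(G_o, K), built from minimal realizations, must be examined in general.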

Also, it can be shown that external stability and internal stability of the closed-loop system are equivalent if the state space representations of the plant and controller are stabilizable and detectable [53].

Note that, if the compensator K is already known to be stable, then the stability of (I + G_o K)^{-1} G_o is necessary and sufficient for the stability of H(G_o, K).

The term (I + G_o K)^{-1} G_o represents the transfer matrix of a feedback loop, with G_o in the forward path and K in the feedback path. This loop can be represented in state space form by F_c = [A_c, B_c, C_c, D_c]. Stability of the feedback loop depends on the pole polynomial of its transfer matrix; therefore, it depends on the characteristic polynomial of A_c. The following result relates the characteristic polynomial of A_c to the characteristic polynomials of A_G and A_K.

Assume that G_o(s) and K(s) are proper transfer functions having respectively minimal realizations [A_G, B_G, C_G, D_G] and [A_K, B_K, C_K, D_K], and define the return-difference operator as

F(s) = [I + K(s)G_o(s)]    (3.16)

further assuming that

\det[F(\infty)] = \det[I + K(\infty)G_o(\infty)] = \det[I + D_K D_G] \ne 0

Let \phi_c be the closed-loop characteristic polynomial. Then [26]:

\phi_c = \det(sI - A_c) = \det(sI - A_G)\,\det(sI - A_K)\, \frac{\det[F(s)]}{\det[F(\infty)]} = \prod_{i=1}^{n_g}[s - \lambda_i(A_G)] \prod_{i=1}^{n_k}[s - \lambda_i(A_K)]\, \frac{\det[F(s)]}{\det[F(\infty)]}    (3.17)

The important fact revealed by this equation is that, when A_G and A_K are Hurwitz, the matrix A_c is Hurwitz if and only if all the zeros of \det[I + K(s)G_o(s)] have negative real parts. It is important to notice [11] that, if cancellations of terms [s - \lambda_i(\cdot)] occur between the left and the right side of equation (3.17), the zeros of \det[I + K(s)G_o(s)] are a proper subset of the closed-loop eigenvalues.
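The relation (3.17) is easy to verify numerically on a toy example. The sketch below assumes G_o(s) = 1/(s+1) and K(s) = 2/(s+3), for which the closed-loop eigenvalues must coincide with the zeros of \det[I + K(s)G_o(s)] = (s^2 + 4s + 5)/((s+1)(s+3)), both realizations being minimal.

```python
import numpy as np

# Check of (3.17) on assumed toy data: G_o(s) = 1/(s+1), K(s) = 2/(s+3).
AG, BG, CG = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
AK, BK, CK = np.array([[-3.0]]), np.array([[1.0]]), np.array([[2.0]])

# Closed-loop dynamic matrix of the unity feedback loop (D_G = D_K = 0):
# plant driven by the compensator output, compensator driven by r - y.
Ac = np.block([[AG, BG @ CK], [-BK @ CG, AK]])
cl_eigs = np.sort_complex(np.linalg.eigvals(Ac))

# Zeros of det[I + K(s)G_o(s)], i.e. the roots of s^2 + 4s + 5.
rd_zeros = np.sort_complex(np.roots([1.0, 4.0, 5.0]))
print(np.allclose(cl_eigs, rd_zeros))   # True: both sets are {-2 +/- j}
```

Since A_G and A_K are both Hurwitz here, (3.17) predicts exactly this agreement, with no cancellations of [s - \lambda_i(\cdot)] terms.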


Assuming that G_o(s) and K(s) are stable, equation (3.17) shows that a necessary and sufficient condition for stability of the feedback loop is that, for all s such that Re(s) \ge 0,

\det[I + K G_o(s)] \ne 0 \iff \prod_i \lambda_i[I + K G_o(s)] \ne 0 \iff 1 + \lambda_i[K G_o(s)] \ne 0, \; \forall i \iff \lambda_i[K G_o(s)] \ne -1, \; \forall i \; \Longleftarrow \; \rho[K G_o(s)] < 1 \; \Longleftarrow \; \bar{\sigma}[K G_o(s)] < 1    (3.18)

Thus, small loop gain is a sufficient condition for stability of a feedback loop. Internal stability of a feedback loop can alternatively be checked through the Nyquist criterion, which is reviewed next.

Nyquist stability criterion

The Nyquist stability test permits the assessment of closed-loop stability without requiring the solution of the closed-loop characteristic polynomial. Due to its graphical character, it is very appealing in computer-aided analysis and design environments.

Let us initially discuss the case of a scalar system. Suppose that plant and controller in Figure 2-1 (a) are scalar transfer functions. Let q_o(s) = g_o(s)k(s) = n(s)/d(s), and let f(s) represent the return difference transfer function. Then,

f(s) = 1 + q_o(s) = \frac{n(s) + d(s)}{d(s)}    (3.19)

It can be easily verified that

f(s) = \frac{\phi_c(s)}{\phi_o(s)}    (3.20)

where \phi_o(s), \phi_c(s) designate respectively the open-loop and the closed-loop characteristic polynomials; let p_o, p_c be their respective numbers of unstable roots. Closed-loop stability analysis requires the determination of the number p_c; for closed-loop stability, p_c must be zero. The Nyquist criterion obtains p_c from the knowledge of p_o and the application of the principle of the argument to equation (3.20). Let n_c be the number of clockwise encirclements of the origin by the map of the standard Nyquist contour under f(s). Equivalently, n_c corresponds to the number of clockwise encirclements of the critical point (-1, j0) by the map of the contour under q_o(s).
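The encirclement count n_c can be obtained numerically from the accumulated phase of f(j\omega) along the imaginary axis, since for a strictly proper loop the semicircle at infinity maps to a single point. The open-loop stable q_o(s) = 10/((s+1)(s+2)) below is an assumed example, not one from the text.

```python
import numpy as np

# Sketch of the scalar Nyquist test: count net encirclements of the origin
# by f(jw) = 1 + q_o(jw) from the accumulated (unwrapped) phase.
def qo(s):
    return 10.0 / ((s + 1.0) * (s + 2.0))    # assumed open-loop stable example

w = np.concatenate([-np.logspace(4, -4, 4000), [0.0], np.logspace(-4, 4, 4000)])
f = 1.0 + qo(1j * w)                          # return difference on the contour

# Net counterclockwise winding = total phase change / 2*pi; the clockwise
# count n_c is its negative, and p_c = n_c + p_o with p_o = 0 here.
phase = np.unwrap(np.angle(f))
n_c = -round((phase[-1] - phase[0]) / (2.0 * np.pi))
p_c = n_c + 0
print(p_c == 0)                               # closed loop stable
```

Here the closed-loop characteristic polynomial is s^2 + 3s + 12, so zero encirclements and p_c = 0 are expected.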
Since n_c corresponds to the difference between the numbers of unstable roots of the numerator and of the denominator of f(s), which are respectively p_c and p_o, the following relationship is satisfied:

p_c = n_c + p_o    (3.21)

The closed-loop system is stable if and only if p_c = 0 or, equivalently, if and only if n_c = -p_o; that is, if and only if the map of the Nyquist contour by q_o(s) encircles the critical point, in the anticlockwise direction, a number of times equal to the number of unstable roots of \phi_o.

Now, consider the case in which G_o(s), K(s) in Figure 2-1 (a) are MIMO transfer matrices. Let \phi_o, \phi_c be respectively the open-loop and the closed-loop characteristic polynomials, and consider the return difference operator defined by equation (3.16). Defining \hat{c} \stackrel{\rm def}{=} \det[I + K(\infty)G_o(\infty)], equation (3.17) shows that

\phi_c = \phi_o\, \frac{\det[F(s)]}{\hat{c}}    (3.22)

The Nyquist criterion has been generalized and extended to the case of MIMO systems [37]. In that extension, the fundamental data are the number of unstable roots of \phi_o and the number of encirclements of the origin by the characteristic loci of F(s), which is the same as the number of encirclements of the critical point by the characteristic loci of Q_o. The characteristic loci of F(s) are the maps of the Nyquist contour under the characteristic values of F(s).

Let f_i(s), q_i(s) be the characteristic values of F(s), Q_o(s), respectively, and recall that Q_o \in C^{p \times p}. The characteristic values q_i(s) are the solutions of the characteristic equation V(q, s) \stackrel{\rm def}{=} \det[q(s)I - Q_o(s)] = 0. In general, the characteristic equation can be factored as a product of irreducible polynomials, V(q, s) = V_1(q, s) \cdots V_l(q, s). Each polynomial V_i is a polynomial of order n_i in q_i, with coefficients a_{ij}(s), j = 1, \dots, n_i, such that:

V_i(q, s) = q_i^{n_i}(s) + a_{i1}(s)\, q_i^{n_i - 1}(s) + \dots + a_{i n_i}(s) = 0    (3.23)

where the condition \sum_{i=1}^{l} n_i = p is satisfied. The algebraic functions q_i, i = 1, \dots, l, defined through equation (3.23), are the characteristic functions of Q_o(s).
Each algebraic function q_i(s) is defined on a Riemann surface R_i, constituted by n_i copies of the complex plane, joined together in such a way that q_i(s) is single-valued on R_i. Except at the branch points (through which the sheets of the Riemann surface are pieced together), q_i(s) is constituted by n_i analytic, distinct branches. The characteristic values of Q_o(s) are obtained as the set of branches of the characteristic functions q_i(s), i = 1, \dots, l.

The generalized Nyquist criterion arises from the application of the generalized principle of the argument [37] to equation (3.22), which can be rewritten as:

\prod_{i=1}^{p} f_i(s) = \prod_{i=1}^{p} [1 + q_i(s)] = \hat{c}\, \frac{\phi_c(s)}{\phi_o(s)}    (3.24)

The map of the standard Nyquist contour under the characteristic values of Q_o(s) generates a set of closed curves, which constitute the characteristic loci of Q_o(s). The number of encirclements of the critical point by the characteristic loci of Q_o(s), together with the number of unstable roots of \phi_o(s), is used to assess closed-loop stability. The generalized criterion is formally stated as follows:

Generalized Nyquist criterion. Let n_o be the number of clockwise encirclements of the critical point by the characteristic loci of the open-loop transfer matrix Q_o(s), let p_c and p_o be the numbers of unstable roots of \phi_c and \phi_o, respectively, and assume that there are no hidden open-loop unstable modes. Then, the closed-loop system under unity feedback is stable if and only if no eigenlocus passes through the critical point -1 + j0 and

p_c = n_o + p_o    (3.25)

Since p_c = 0 is the condition for closed-loop stability, it is required that n_o = -p_o, which means that the characteristic loci of Q_o(s) must encircle the critical point p_o times in the anticlockwise direction.

3.3.2 Robust Stability Analysis of Transfer Matrices

Generalized Nyquist Criterion under uncertainty

In the presence of additive perturbations belonging to a given class, the open-loop matrix becomes Q_p = Q_o + \Delta_A.
Let us assume that the nominal system Q_o is stable under unity feedback, and that the perturbation \Delta_A is such that Q_o and Q_p have the same number of Right Half Plane (RHP) poles. The generalized Nyquist criterion states that the nominal system is stable under unity feedback if and only if the net number of anticlockwise encirclements of the critical point by the characteristic loci of Q_o(s) equals p_o, the number of RHP poles of Q_o(s). Therefore, the assumption of nominal closed-loop stability is equivalent to assuming the correct number of encirclements of the critical point by the nominal eigenloci.

Now, under the assumption that Q_o and Q_p have the same number of RHP poles, the perturbed closed-loop system remains stable as long as the net number of encirclements of the critical point does not change under perturbation. A change in the number of encirclements occurs if and only if there is a non-null net number of crossings of the critical point by the perturbed eigenloci. The following theorem formally states these considerations.

Theorem 3.3 [19]. Let the unity feedback system of Figure 2-1 (a) be closed-loop stable. Assume the presence of additive perturbations, belonging to a given class, such that Q_o and Q_p have the same number of RHP poles. Then, the perturbed system remains stable under unity feedback, for all perturbations in the given class, if and only if

n_{op} = n_o    (3.26)

where n_{op} and n_o are respectively the numbers of encirclements of the critical point by the perturbed and the nominal characteristic loci.

Two remarks are in order here. First, the assumption that Q_o and Q_p have the same number of RHP poles requires that the perturbation itself be stable. Also, if the controller K(s) is an open-loop stabilizing controller for G_o(s), then n_o = 0; but n_{op} = 0 if and only if the controller open-loop stabilizes the perturbed plant G_p(s), for all perturbations in the allowable class.
Second, the application of the Nyquist criterion requires graphical displays of eigenloci; however, the perturbed eigenloci are not known. Fortunately, there exist methods for determining regions in the complex plane which include the eigenvalues of a perturbed complex matrix. Computed on a point-by-point basis as the complex frequency describes the Nyquist contour, each region containing one eigenvalue generates an inclusion band in the complex plane which contains one perturbed eigenlocus. Thus, the perturbed eigenloci are contained in the set of bands described in the complex plane by the set of inclusion regions. If the open-loop compensated plant Q_o is stable, the stability requirement (3.26) is equivalent to the requirement that the critical point not belong to the set of inclusion bands. Therefore, in robust stability analysis the generalized Nyquist criterion is applied to the inclusion bands. The size of the inclusion regions depends on the construction method and on the norm upper bound on the uncertainty class. Methods of computation of inclusion regions are next briefly reviewed.

Condition number method. Let \Delta_A \in D_U, defined in equation (2.14), Q_p = Q_o + \Delta_A, and assume that Q_o has the characteristic decomposition Q_o = W \Lambda_o W^{-1}. Then, it can be shown that [54]

| \lambda_i(Q_p) - \lambda_i(Q_o) | \le \kappa_W\, \delta, \; \forall i    (3.27)

where \kappa_W is the condition number of the eigenvector matrix W. The quantity \kappa_W \delta gives the radius of regions in the complex plane, centered at the nominal eigenvalues, which include the perturbed eigenvalues for all perturbations in the class characterized by \bar{\sigma}[\Delta_A(s)] \le \delta(s). The inclusion regions defined by (3.27) are easily computable, but the method has disadvantages. If the condition number of the nominal matrix is large, the radius is large, and the computed inclusion regions may be very conservative.
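The bound (3.27) can be spot-checked numerically at a single frequency point; Q_o and \delta below are illustrative, and the random perturbations are only a sanity check of the guaranteed inclusion, not a proof of it.

```python
import numpy as np

# Sketch of the condition-number inclusion regions (3.27): every perturbed
# eigenvalue lies within distance kappa_W * delta of some nominal eigenvalue.
Q0 = np.array([[0.0, 1.0], [-2.0, -3.0]])     # illustrative nominal matrix
delta = 0.1                                    # norm bound on Delta_A

lam, W = np.linalg.eig(Q0)
kappa_W = np.linalg.cond(W)                    # condition number of eigenvectors
radius = kappa_W * delta

# Monte-Carlo spot check with random perturbations of spectral norm delta.
rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    D = rng.standard_normal((2, 2))
    D *= delta / np.linalg.svd(D, compute_uv=False)[0]   # scale to norm delta
    for mu in np.linalg.eigvals(Q0 + D):
        ok &= bool(np.min(np.abs(mu - lam)) <= radius + 1e-12)
print(ok)
```

Sweeping such a computation along the Nyquist contour is what generates the inclusion bands described above.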
Also, if the eigenvectors of the nominal matrix are too skewed, the condition number can be very sensitive to small perturbations, and thus unfavorable for computations.

Normal approximations method. Let us consider again the open-loop perturbed matrix, namely Q_p(s) = Q_o(s) + \Delta_A, where Q_o(s) \in C^{m \times m}. Using the rectangular decomposition technique, Q_o(s) can be decomposed into the sum of two normal matrices, one hermitian and one skew-hermitian. The method of normal approximations to perturbed matrices [7, 8] consists of the substitution of the nominal matrix by the hermitian part of a rectangular decomposition. The skew-hermitian part is considered an approximation error, and included in the perturbation.

Let Q_n and E_Q be respectively the hermitian and the skew-hermitian parts of the decomposition of Q_o. The rectangular decomposition is chosen such that the norm of E_Q is minimized. Assuming that E_Q is characterized by a norm upper bound, say \bar{\sigma}[E_Q(s)] \le \epsilon(s), \forall s, the perturbed matrix can be written as Q_p = Q_n + (E_Q + \Delta_A), where (E_Q + \Delta_A) represents the total perturbation to the normal matrix Q_n. The application of the condition number method to Q_p yields:

| \lambda_i(Q_p) - \lambda_i(Q_n) | \le \kappa_W (\delta + \epsilon) = (\delta + \epsilon), \; \forall i    (3.28)

since the normal matrix Q_n has condition number \kappa_W = 1. By adequately choosing the normal approximation, the radius (\delta + \epsilon) given by the last equation can be made smaller than the radius given by (3.27), thus reducing the conservatism of the inclusion region.

Inclusion regions determined by normal approximation can be made tighter by taking their intersection with the region determined in the complex plane by the numerical range of the matrix Q_p. The numerical range of Q_o \in C^{p \times p} is given by [23]:

N(Q_o) = \left\{ z \in C : z = \frac{x^* Q_o x}{x^* x}, \; 0 \ne x \in C^p \right\}

The numerical range of Q_p, which obviously includes the eigenvalues, is contained in the region of the complex plane determined when the numerical range of Q_o is extended by \delta in all directions.
That is,

N(Q_p) = \left\{ z \in C : z = \frac{x^* (Q_o + \Delta_A) x}{x^* x}, \; 0 \ne x \in C^p \right\} = \left\{ z \in C : z = \frac{x^* Q_o x}{x^* x} + \frac{x^* \Delta_A x}{x^* x}, \; 0 \ne x \in C^p \right\} \subseteq N(Q_o) \uplus \delta    (3.29)

where \uplus means the extension in all directions. Since the perturbed eigenvalues are included in the regions defined by both equations (3.27) and (3.29), they are included in their intersections. Hence, tighter inclusion bands are obtained by the computation of those intersections as the complex frequency describes the standard Nyquist contour. The regions given by the intersections are still not tight, in the sense that they may include points which cannot be made eigenvalues of the perturbed system for any of the perturbations in the allowable class. A method which yields tight inclusion regions for the case of unstructured perturbations is summarized next.

E-contours method. Let z \in C be an eigenvalue of the perturbed open-loop matrix. Then \det[(Q_o + \Delta_A) - zI_p] = 0, which means that (Q_o + \Delta_A - zI_p) loses rank; therefore \underline{\sigma}(Q_o + \Delta_A - zI_p) = \underline{\sigma}[(Q_o - zI_p) + \Delta_A] = 0. The inequality

\underline{\sigma}[(Q_o - zI_p) + \Delta_A] \ge \underline{\sigma}(Q_o - zI_p) - \bar{\sigma}(\Delta_A)    (3.30)

permits the derivation of the following result [9, 33]:

Lemma 3.1. Assume that z \in C and \Delta_A \in D_U. Then,

1. If \underline{\sigma}(Q_o - zI_p) > \delta, then z cannot be an eigenvalue of Q_p, for any \Delta_A;

2. If \underline{\sigma}(Q_o - zI_p) \le \delta, there always exists \Delta_A such that z is an eigenvalue of Q_p.

This lemma leads to an algorithm for the computation of the E-contour inclusion regions. Letting \lambda_{oi}, i = 1, \dots, p, be the nominal eigenvalues, the E-contours are the loci of the 'first' solution for z of the equations

\underline{\sigma}(Q_o - zI_p) = \underline{\sigma}[Q_o - (\lambda_{oi} + \rho e^{j\theta})I_p] = \delta    (3.31)

as \rho is increased from 0, with 0 \le \theta < 2\pi. It can be shown [9] that the contours so constructed always form closed curves, and that the perturbed eigenvalues are contained in the union of the contours. Plotted as functions of the frequency, the contours sweep bands to which the generalized Nyquist criterion is applied.
Singular-value condition for stability under unstructured uncertainty

Let us consider again the unity feedback system of Figure 2-1 (a), assuming that K is a stabilizing controller for the nominal system. Furthermore, let us assume that the plant is subject to additive unstructured uncertainty \Delta_A belonging to the class D_U. The presence of the controller in the forward path of the feedback loop changes the open-loop perturbation. In order to assess robust stability, one may consider the perturbed open-loop compensated system, given by:

Q_p(s) = [G_o(s) + \Delta_A(s)]K(s) = G_o K(s) + \Delta_A K(s) \stackrel{\rm def}{=} Q_o(s) + \Delta_Q(s)    (3.32)

Notice that the resultant perturbation is \Delta_Q(s). In order to characterize the class containing the uncertainty in the compensated open-loop plant, the norm upper bound \bar{\sigma}[\Delta_A K(s)] must be obtained. It may happen that, due to the controller structure, the upper bound is too large, thus causing the uncertainty description to be unacceptable.

Using the method described in Chapter 2, the system can be rearranged so that the uncertainty becomes an additive term to the closed-loop system, as in the M - \Delta representation of Figure 2-4 (b). The nominal interconnection structure is given by:

\begin{bmatrix} y_\Delta \\ y \end{bmatrix} = M \begin{bmatrix} u_\Delta \\ r \end{bmatrix}, \qquad M = \begin{bmatrix} -(I + G_o K)^{-1} & (I + G_o K)^{-1} \\ (I + G_o K)^{-1} & (I + G_o K)^{-1} G_o K \end{bmatrix}    (3.33)

The transfer matrix from r to y, in the presence of uncertainty, is given by the linear fractional transformation F_u(M, \Delta) = M_{22} + M_{21} \Delta_Q (I - M_{11} \Delta_Q)^{-1} M_{12}, which is represented by the block-diagram in Figure 2-5. Equation (3.33) above shows that the transfer functions M_{12}, M_{21} and M_{22} are stable, since they depend only on the nominal system, which is by assumption stabilized by K(s). Therefore, the stability of the linear fractional transformation depends only on the transfer matrix \Delta_Q (I - M_{11} \Delta_Q)^{-1}, which represents a feedback loop with \Delta_Q(s) in the forward path and M_{11}(s) in the feedback path.
Let [A_{11}, B_{11}, C_{11}, D_{11}] be a minimal state space realization of M_{11}(s), and let us assume that the perturbation \Delta_Q(s), which is itself a dynamic system, has a minimal realization [A_\Delta, B_\Delta, C_\Delta, D_\Delta]. Using equation (3.17), the characteristic polynomial of the feedback loop is given by:

\phi = \prod_i [s - \lambda_i(A_{11})] \prod_i [s - \lambda_i(A_\Delta)]\, \frac{\det[I + M_{11}\Delta_Q(s)]}{\det[I + M_{11}\Delta_Q(\infty)]}    (3.34)

Therefore, if the perturbation \Delta_Q(s) is stable, the stability of the feedback loop involving the perturbation can be derived from the zeros of \det[I + M_{11}\Delta_Q(s)]. Stability of \Delta_Q(s) is a requirement stronger than the requirement of stability of \Delta_A(s).

Alternatively, the perturbed system can be rearranged so that the original perturbation \Delta_A(s) becomes the additive perturbation to the closed-loop system. In this case, the nominal interconnection structure becomes:

\begin{bmatrix} y_\Delta \\ y \end{bmatrix} = M \begin{bmatrix} u_\Delta \\ r \end{bmatrix}, \qquad M = \begin{bmatrix} -(I + K G_o)^{-1} K & (I + K G_o)^{-1} K \\ (I + G_o K)^{-1} & (I + G_o K)^{-1} G_o K \end{bmatrix}    (3.35)

Under the assumptions that the controller K(s) stabilizes G_o and that the controller itself is stable, the transfer matrices M_{12}, M_{21} and M_{22} are stable; thus the stability of the transfer matrix from r to y depends on the feedback loop \Delta_A (I - M_{11}\Delta_A)^{-1}. Furthermore, M_{11}(s) itself is stable. If the perturbation \Delta_A(s) is stable, then the zeros of the closed-loop characteristic polynomial of the feedback loop are in the Left Half Plane (LHP) if and only if the zeros of the return difference matrix are in the LHP. Therefore, the perturbed system is stable, \forall s : Re(s) \ge 0 and \forall \Delta_A \in D_U, if and only if

\det[I + M_{11}\Delta_A(s)] \ne 0 \iff \prod_i \lambda_i[I + M_{11}\Delta_A(s)] \ne 0 \iff \lambda_i[M_{11}\Delta_A(s)] \ne -1, \; \forall i \; \Longleftarrow \; \rho[M_{11}\Delta_A(s)] < 1    (3.36)

Recall that the spectral radius condition for nominal stability, given by equation (3.18), is only sufficient. The last inequality, however, shows that, in the presence of unstructured uncertainty, the spectral radius condition is necessary and sufficient. Necessity is obtained from the phase freedom of the elements of the unstructured perturbation, together with the possibility of scaling the perturbations, so that \Delta'_A = \epsilon \Delta_A, \epsilon \in [0, 1], is obtained from \Delta_A. For suppose that \rho[M_{11}\Delta_A(s)] \ge 1, for some perturbation in the allowable class and some s. Then, by changing only the phase of the perturbation elements and scaling by multiplication by \epsilon, it is possible to obtain a perturbation, say \tilde{\Delta}_A, such that \det[I + M_{11}\tilde{\Delta}_A(s)] = 0, for some s.

It is always possible to find a perturbation in the allowable unstructured class, say \Delta'_A(s), which satisfies

\rho[M_{11}\Delta'_A(s)] = \bar{\sigma}[M_{11}\Delta'_A(s)] = \bar{\sigma}[M_{11}(s)]\, \bar{\sigma}[\Delta'_A(s)]    (3.37)

Therefore, the necessary and sufficient condition for robust stability, \forall \Delta_A \in D_U, is:

\bar{\sigma}[M_{11}(s)]\, \bar{\sigma}[\Delta_A(s)] < 1, \; \forall s \iff \bar{\sigma}[M_{11}(s)] < \frac{1}{\bar{\sigma}[\Delta_A(s)]}, \; \forall s    (3.38)

Stability under structured perturbations

Let us consider the M - \Delta form of a perturbed system, represented in Figure 2-4 (b), and assume that \Delta \in X_C(\delta), defined by equation (2.25), and that the associated block structure has k_r = k_{mc} = 0. This is the case of a perturbation composed of complex blocks, which emerges naturally when the diagonalization technique is applied to an interconnected system whose subsystems are subject to unstructured uncertainty.
Applying the same reasoning used above leads to a necessary and sufficient stability condition in terms of the spectral radius, namely

ρ[M₁₁Δ(s)] < 1, ∀Δ ∈ X_K(δ)

However, this perturbation class does not admit all perturbations with norm less than δ, but only those which satisfy both the norm constraint and the block structure, whence the inequality chain

ρ[M₁₁Δ(s)] ≤ σ̄[M₁₁Δ(s)] ≤ σ̄[M₁₁(s)] σ̄[Δ(s)]   (3.39)

in general does not hold with strict equality for any member of the admissible perturbation class. Consequently, the singular-value stability condition, namely

σ̄[M₁₁(s)] < 1 / σ̄[Δ(s)],  Δ ∈ X_K(δ), ∀s   (3.40)

is only sufficient. The conservatism of this condition can be arbitrarily large, since it may happen that no perturbation having the required structure destabilizes the system, even when (3.40) is violated. Spectral radius preserving transformations have been widely used to scale the relevant matrices so that the gap between the spectral radius and the maximum singular value is reduced, thus reducing the conservatism of the stability condition obtained from (3.39). Scaling techniques are reviewed in Section 3.4. Next, two tighter criteria for stability under structured perturbations are reviewed.

Structured singular-value stability condition. Given a matrix M and the associated block structure K, the structured singular value of M, or μ-function, is defined by [12]:

μ(M) ≝ 1 / min_{Δ ∈ X_K(δ)} { σ̄[Δ(s)] : det[I − MΔ(s)] = 0 }   (3.41)

if there is Δ ∈ X_K(δ) such that det[I − MΔ] = 0; if there is no such Δ, then μ(M) = 0. The following theorem states the necessary and sufficient condition for stability of the M − Δ representation, in terms of the μ-function.

Theorem 3.4 [13].
The system M − Δ is stable, ∀Δ ∈ X_K(δ), if and only if:

μ[M₁₁(s)] δ(s) < 1, ∀s  ⟺  μ[M₁₁(s)] < 1 / δ(s), ∀s   (3.42)

If the perturbations are weighted such that σ̄[Δ(s)] ≤ 1, ∀s, and the frequency dependent weight is included in M, the above result asserts that:

stability  ⟺  sup_s μ[M₁₁(s)] < 1   (3.43)

The tightness of the above stability condition stems directly from the definition of the μ-function: μ(M) is defined on the basis of a destabilizing perturbation having the required structure. However, although it clearly addresses the robust stability problem, the definition is not of much help from a computational point of view. Actually, the computation of the exact value of μ(M) can be done only in special cases. Usually only upper and lower bounds are computable, even for the purely complex case, namely when m_r = 0 in the block structure [41]. The computation is especially demanding in the mixed case, namely when m_r ≠ 0.

Computation of bounds for μ(M) relies on a set of properties of the μ-function, proved by Doyle [12], the most important of which are given below:

μ(αM) = |α| μ(M), ∀M ∈ C^{m×m}, ∀ scalar α   (3.44)
μ(M₁M₂) ≤ σ̄(M₁) μ(M₂), ∀M₁, M₂ ∈ C^{m×m}   (3.45)
If m̄ = 1, m_c = 1:  μ(M) = σ̄(M), ∀M ∈ C^{m×m}   (3.46)
If m̄ = 1, m_s = 1:  μ(M) = ρ(M), ∀M ∈ C^{m×m}   (3.47)

The equality in (3.46) is attained in the case of one single full complex block of any size, since the conditions imply m_r = 0 and m_s = 0. On the other hand, (3.47) concerns the case of one complex scalar, since the conditions mean that m_r = 0 and m_c = 0.

From the computational point of view, the following property is fundamental. Let U_K ≝ {U : U is unitary} with the same block-diagonal structure as X_K, and let

S_K ≝ {S : S = diag{sᵢIᵢ}, sᵢ ∈ R₊}   (3.48)

the set of real positive diagonal matrices with blocks having the dimension of the corresponding block in X_K. Then, ∀M ∈ C^{m×m},

sup_{U ∈ U_K} ρ(UM) ≤ μ(M) ≤ inf_{S ∈ S_K} σ̄(SMS⁻¹)   (3.49)

It has been shown [12, 15] that the left inequality of (3.49) is actually always an equality; however, the optimization problem involved is not convex, which may lead to the existence of local maxima.

On the other hand, it has been proved [49] that the optimization problem involved in the right inequality of (3.49) is always convex, and hence has only global minima, as a consequence of the fact that σ̄(e^D M e^{−D}) is convex in D. Since S has m̄ elements, one of which can be fixed, the minimization is done over (m̄ − 1) variables, no matter what the sizes of the blocks are. Equality is always attained on the right side of (3.49) when there are 3 or fewer non-repeated blocks in the block-diagonal perturbation, regardless of the dimension of the blocks. For more than 3 blocks, the lower and upper bounds in (3.49) usually stay within 5% of each other, and almost always within 15% [38].

Furthermore, it has been shown [29] that, for the case of complex perturbations, the right inequality holds with equality, regardless of the number of elements in the perturbation, provided that the 'inf' in (3.49) occurs at a stationary point of σ̄(SMS⁻¹) with respect to the elements of the scaling S. This case occurs when there is no cusping of σ̄(SMS⁻¹).
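The two bounds in (3.49) can be explored numerically. In the hedged sketch below (illustrative only, not from the text; numpy and scipy assumed), the block structure is taken as three 1 × 1 complex blocks, so the scaling set reduces to positive diagonal matrices with one element fixed; ρ(M) serves as a crude lower bound, obtained by taking U = I on the left side of (3.49):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def scaled_norm(logd):
    # One scaling element can be fixed to 1: only (m-1) free variables.
    d = np.concatenate(([1.0], np.exp(logd)))
    return np.linalg.norm(np.diag(d) @ M @ np.diag(1.0 / d), 2)

upper = minimize(scaled_norm, np.zeros(2), method="Nelder-Mead").fun
lower = np.abs(np.linalg.eigvals(M)).max()   # rho(M) = rho(I*M) <= mu(M)

assert lower <= upper + 1e-8                 # the bounds bracket mu(M)
assert upper <= np.linalg.norm(M, 2) + 1e-8  # scaling can only help
```

The optimization over the logarithms of the scaling elements reflects the convexity result quoted above; a general-purpose minimizer is used here purely for illustration.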

Multivariable stability margin. Consider the perturbed M − Δ form where Δ ∈ X_K(δ) is

Δ = diag{δ₁, ..., δ_m}

The multivariable stability margin of the MIMO structure M is defined as follows [10]:

k_m ≝ min_Δ { k ∈ [0, ∞) : det[I − kΔM] = 0 }   (3.50)


Let Dᵢ be the known domain of the parameter δᵢ and let the actual perturbation be Δ_act ∈ X_K(δ). Then, the perturbed system is stable if and only if Δ_act,ᵢ ∈ k_m Dᵢ, ∀i.

Therefore, given a set of parameter ranges, if k_m > 1, it indicates how much the ranges can be extended without the system becoming unstable for any combination of parameters inside the extended domain. Conversely, k_m < 1 indicates how much the ranges must be shrunk so that the system can withstand all perturbations in the given class.

An algorithm for the computation of the multivariable stability margin, which can also be applied to the case of purely real uncertainty, has been given by De Gaston and Safonov [10]. The algorithm avoids a burdensome search over the parameter space by exploiting the mapping theorem due to Zadeh and Desoer.
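The defining equation (3.50) can be evaluated directly for small problems. The sketch below (mine, for illustration) is a naive sweep over the vertices of the unit parameter box, not the De Gaston–Safonov algorithm; restricting attention to vertex perturbations is, in general, justified only under the assumptions of the mapping theorem. For a fixed diagonal Δ, det(I − kΔM) = 0 exactly when 1/k is a nonzero eigenvalue of ΔM:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) * 0.4   # interconnection matrix (arbitrary)

def min_k(Delta):
    # det(I - k*Delta@M) = 0  <=>  1/k is an eigenvalue of Delta@M, so the
    # smallest positive real k is 1 / (largest real positive eigenvalue)
    eigs = np.linalg.eigvals(Delta @ M)
    real_pos = eigs.real[(np.abs(eigs.imag) < 1e-10) & (eigs.real > 1e-12)]
    return np.inf if real_pos.size == 0 else 1.0 / real_pos.max()

# Naive sweep over the vertices of the unit parameter box; Delta = +/-I
# guarantees at least one vertex yields a real positive eigenvalue.
vertices = list(product([-1.0, 1.0], repeat=3))
km = min(min_k(np.diag(v)) for v in vertices)
vstar = min(vertices, key=lambda v: min_k(np.diag(v)))

assert np.isfinite(km) and km > 0
# at k = km the determinant condition of (3.50) is met at some vertex
eigs = np.linalg.eigvals(np.diag(vstar) @ M)
assert np.abs(eigs - 1.0 / km).min() < 1e-9
```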

3.4 Frequency-Domain Scaling Techniques

The fundamental condition for robust stability of the M − Δ representation is given by equation (3.36), namely ρ[M₁₁Δ(s)] < 1, ∀s. Equation (3.38) shows that, if the perturbation belongs to an unstructured class characterized by the norm upper bound δ(s), a necessary and sufficient condition for stability is

σ̄[M₁₁(s)] < 1 / δ(s), ∀s   (3.51)

The sufficiency of the condition comes from the inequality

ρ[M₁₁Δ(s)] ≤ σ̄[M₁₁Δ(s)] ≤ σ̄[M₁₁(s)] σ̄[Δ(s)] ≤ σ̄[M₁₁(s)] δ(s)   (3.52)

which applies in general. Necessity arises because, since the only constraint posed on the unstructured class is the norm bound, it is always possible to find a member of the class for which all the above inequalities become equalities.


If a structured uncertainty class is considered, constraints are posed on both the norm and the structure of the admissible perturbations. Under these constraints, it is not possible to guarantee that (3.52) holds with strict equality for some member of the class. Consequently, if the uncertainty is structured, a singular-value condition in the form of (3.51) is in general only sufficient.

In fact, it has been shown [29] that the worst case perturbation Δ(s), namely the one for which ρ[M₁₁Δ(s)] = σ̄[M₁₁Δ(s)] = σ̄[M₁₁(s)] σ̄[Δ(s)] = σ̄[M₁₁(s)] δ, is characterized by having its output and input major principal directions aligned with the input and output major principal directions of M₁₁(s), respectively. This is a rather stringent requirement that may not be satisfied by perturbations in a structured class.

Structured representations of uncertainties occur very often. For example, a block-diagonal structured representation is the outcome of the technique discussed in Chapter 2 for rearranging interconnected systems such that simultaneous perturbations are isolated. Furthermore, when estimation and/or identification techniques are used to obtain a frequency response model of a plant, confidence bounds are generated for each element of the transfer matrix. The uncertainty in the frequency-domain nominal model is then naturally represented by the structured class of element-by-element bounded perturbations.

Given the frequent occurrence of structured perturbations, the potential conservatism of the singular-value condition is a substantial limitation. An effective way to reduce the conservatism of the singular-value stability condition is to pre-condition the matrices involved, in such a way that the spectral radius is preserved while the gap between the spectral radius and the maximum singular value is reduced. Scaling techniques for pre-conditioning the relevant matrices are reviewed next.

3.4.1 Similarity Scaling

The advantageous application of similarity scaling in robust stability analysis was first reported in the context of the block diagonal uncertainty problem [15, 47]. Let us review this case.

Consider the M − Δ perturbed representation, and let Δ be a member of the structured class X_K(δ) defined by (2.25), with the further assumption that Δ has no real elements. Applying condition (3.36), stability is guaranteed, ∀Δ ∈ X_K(δ), if and only if

sup_Δ ρ[M₁₁Δ(s)] < 1, ∀s   (3.53)

A well known property of nonsingular similarity transformations is that they preserve the eigenvalues of the transformed matrix. Therefore, for any S ∈ S_K defined by (3.48), the spectral radius and the maximum singular value of [M₁₁Δ(s)] are related by:

sup_Δ ρ[SM₁₁Δ(s)S⁻¹] = sup_Δ ρ[M₁₁Δ(s)] ≤ sup_Δ σ̄[SM₁₁Δ(s)S⁻¹], ∀s

Letting S range over the set S_K, one has that

sup_Δ ρ[M₁₁Δ(s)] ≤ inf_S { sup_Δ σ̄[SM₁₁Δ(s)S⁻¹] }, ∀s   (3.54)

Let Δ°(s) ∈ X_K(δ) be the worst case perturbation, which is characterized by

ρ[M₁₁Δ°(s)] = max_Δ ρ[M₁₁Δ(s)]

It has been shown [12] that, in the case of purely complex perturbations, there exists a worst case perturbation in which each element is on the boundary of its domain in C. Therefore, the worst case perturbation can be decomposed as

Δ°(s) = P_Δ U_θ   (3.55)

where P_Δ is a diagonal real matrix containing the known upper bounds on the norms of the complex blocks, and U_θ ∈ U_K, the set of unitary matrices having the same block structure as X_K. Substitution in (3.54) gives:

sup_{U_θ} ρ[M₁₁(s)P_Δ U_θ] ≤ inf_S { sup_{U_θ} σ̄[SM₁₁(s)P_Δ U_θ S⁻¹] }, ∀s

Observing that U_θ and S⁻¹ commute, because by definition they satisfy the same block-diagonal structure, and that the spectral norm is invariant under multiplication by a unitary matrix, the last equation can be written as:

sup_{U_θ} ρ[M₁₁(s)P_Δ U_θ] ≤ inf_S σ̄[SM₁₁(s)P_Δ S⁻¹], ∀s

Defining M_a(s) ≝ M₁₁(s)P_Δ, the above inequality becomes:

ρ[M_a(s)] ≤ sup_{U_θ} ρ[M_a(s)U_θ] ≤ inf_S σ̄[SM_a(s)S⁻¹], ∀s   (3.56)

Therefore, a sufficient condition for stability of the M − Δ representation, under block-diagonal structured uncertainty, is

inf_S σ̄[SM_a(s)S⁻¹] < 1, ∀s   (3.57)
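The effect of similarity scaling on the gap between the spectral radius and the maximum singular value can be made concrete with a deliberately badly scaled 2 × 2 example (an illustrative sketch of mine, assuming numpy and scipy): the unscaled singular-value test would demand σ̄(M_a) = 10 < 1 and fail, while the optimally scaled test recovers the spectral radius 0.1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

Ma = np.array([[0.0, 10.0],
               [0.001, 0.0]])          # badly scaled loop matrix

rho = np.abs(np.linalg.eigvals(Ma)).max()      # spectral radius = 0.1
unscaled = np.linalg.norm(Ma, 2)               # sigma_max = 10.0

def scaled(d):
    # similarity scaling S = diag(1, e^d) preserves the eigenvalues
    return np.linalg.norm(np.diag([1.0, np.exp(d)]) @ Ma
                          @ np.diag([1.0, np.exp(-d)]), 2)

best = minimize_scalar(scaled, bounds=(-10.0, 10.0), method="bounded").fun

assert np.isclose(rho, 0.1) and np.isclose(unscaled, 10.0)
assert np.isclose(best, rho, atol=1e-3)        # the gap closes to rho(Ma)
```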

3.4.2 Non-Similarity Scaling

In the derivation above, the commutation property between the block-diagonal phase matrix U_θ and the scaling S⁻¹ was invoked to swap their positions, thus allowing the phase matrix to be discarded in the term involving the spectral norm. This property could not be used if the perturbations had a more general structure than the block-diagonal form. This is the case of the element-by-element bounded perturbations in the class D_S, defined by (2.17). However, this case can be handled by the technique of non-similarity scaling [28, 33].


Let us consider the M − Δ representation, assuming that M₁₁ ∈ C^{m×m} and that the allowable uncertainty class is D_S defined in (2.17). Then, the perturbation Δ(s) is a full matrix satisfying Δ⁺ ≤ P_Δ, for some P_Δ ∈ R^{m×m}. Now, let

S ≝ {S : S = diag{s₁, ..., s_m}, sᵢ ∈ R₊, ∀i}   (3.58)

Considering S₁, S₂ ∈ S, one has that

ρ[M₁₁Δ(s)] ≤ sup_Δ ρ[M₁₁Δ(s)] ≤ sup_Δ σ̄[S₁M₁₁(s)S₂ · S₂⁻¹Δ(s)S₁⁻¹]

Letting S₁ and S₂ range over S, the above relationship becomes

ρ[M₁₁Δ(s)] ≤ sup_Δ ρ[M₁₁Δ(s)] ≤ inf_{S₁,S₂} { σ̄[S₁M₁₁(s)S₂] · sup_Δ σ̄[S₂⁻¹Δ(s)S₁⁻¹] }

Now, for any A ∈ C^{m×m} such that A⁺ ≤ P ∈ R^{m×m}, one has [29]:

σ̄[A] ≤ σ̄(A⁺) ≤ σ̄(P)   (3.59)

In view of these inequalities, the right term of the previous inequality becomes:

ρ[M₁₁Δ(s)] ≤ inf_{S₁,S₂} { σ̄[S₁M₁₁(s)S₂] σ̄[S₂⁻¹P_ΔS₁⁻¹] }   (3.60)

Therefore, a sufficient condition for stability under all Δ ∈ D_S is:

inf_{S₁,S₂} { σ̄[S₁M₁₁(s)S₂] σ̄[S₂⁻¹P_ΔS₁⁻¹] } < 1, ∀s   (3.61)

The presence of two scaling matrices, S₁ and S₂, with S₂ ≠ S₁⁻¹, characterizes non-similarity scaling. Note that in the application of the similarity scaling technique, complex perturbations are explicitly assumed, which allows the consideration of the worst case given by (3.55). In the application of non-similarity scaling, the upper bound matrix P_Δ implicitly admits complex perturbations.

3.4.3 Suboptimal Scaling

Both stability conditions (3.57) and (3.61) are optimal in the sense that the norm of the scaled matrix is minimized over the set of scaling matrices. However, consider a fixed S̃ ∈ S. The following inequalities follow from equation (3.56), under the assumption of complex perturbations:

ρ[M₁₁P_Δ] ≤ sup_{U_θ} ρ[M₁₁(s)P_ΔU_θ] ≤ inf_S σ̄[SM₁₁(s)P_ΔS⁻¹] ≤ σ̄[S̃M₁₁(s)P_ΔS̃⁻¹]   (3.62)

In the same way, for S̃₁, S̃₂ ∈ S, equation (3.60) yields:

ρ[M₁₁Δ(s)] ≤ inf_{S₁,S₂} { σ̄[S₁M₁₁(s)S₂] σ̄[S₂⁻¹P_ΔS₁⁻¹] } ≤ σ̄[S̃₁M₁₁(s)S̃₂] σ̄[S̃₂⁻¹P_ΔS̃₁⁻¹]   (3.63)

If the similarity scaling S̃, or the non-similarity scaling pair S̃₁ and S̃₂, is chosen according to some criterion, equations (3.62) and (3.63) can be used to obtain sufficient stability conditions. Although more conservative, these conditions save computation time, since they do not require a search over S. Two techniques for the choice of suboptimal scalings are discussed below.

Perron scaling. Let us review some results related to the theory of non-negative matrices, the first of which is the Perron theorem.

Theorem 3.5 (Perron). A (real) irreducible non-negative square matrix A has an eigenvalue of multiplicity one equal to its spectral radius, and no other eigenvalue is larger in absolute value. Corresponding to this eigenvalue, there exist a right and a left eigenvector which have only positive components.
The eigenvalue of A which equals the spectral radius is called the Perron eigenvalue and denoted by π(A). The associated eigenvectors are the right and left Perron eigenvectors.

Lemma 3.2 [3, 29]. For any A ∈ C^{m×m} and S ∈ S,

inf_S σ̄(SA⁺S⁻¹) = π(A⁺)   (3.64)

The minimizing scaling S ≝ S_π, called the Perron scaling, is given by S_π = [Y_A X_A⁻¹]^{1/2}, where Y_A and X_A are diagonal matrices containing respectively the elements of the left and of the right Perron eigenvectors of A⁺.

Lemma 3.3 [3, 28]. Given matrices A and B of compatible dimensions, with all Aᵢⱼ and Bᵢⱼ ∈ R₊, and S₁ and S₂ ∈ S, then

inf_{S₁,S₂} { σ̄(S₁AS₂) σ̄(S₂⁻¹BS₁⁻¹) } = π(AB)   (3.65)

The scaling defined in this lemma is called Perron S₁,S₂ scaling [28]. The optimal pair of scaling matrices, for which equality is obtained in (3.65), is determined by [3, 28]:

S₁,π = [Y_AB X_AB⁻¹]^{1/2};   S₂,π = [X_BA Y_BA⁻¹]^{1/2}   (3.66)

where X_AB and Y_AB are diagonal matrices whose elements are respectively the entries of the right and of the left Perron eigenvectors of (AB). X_BA and Y_BA are defined in a similar manner, regarding (BA).

Lemma 3.4 [28]. Let A and B be complex, with compatible dimensions. Then, for S₁ and S₂ ∈ S,

inf_{S₁,S₂} { σ̄(S₁AS₂) σ̄(S₂⁻¹BS₁⁻¹) } ≤ π(A⁺B⁺)   (3.67)

where A⁺ and B⁺ are matrices whose elements are the magnitudes of the elements of A and B, respectively.

Let us return to the problem of robust stability under structured perturbations characterized by [Δ(s)]⁺ ≤ P_Δ, ∀s.
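Lemma 3.2 can be verified numerically (an illustrative sketch of mine, assuming numpy; the matrix and seed are arbitrary): building the Perron scaling S_π from the left and right Perron eigenvectors of a positive matrix A⁺ brings σ̄(S_π A⁺ S_π⁻¹) down to the Perron eigenvalue π(A⁺):

```python
import numpy as np

rng = np.random.default_rng(3)
Aplus = rng.random((4, 4)) + 0.1        # strictly positive, hence irreducible

w, V = np.linalg.eig(Aplus)
k = np.argmax(w.real)
pi = w[k].real                          # Perron eigenvalue = spectral radius
x = np.abs(V[:, k])                     # right Perron eigenvector
w2, W = np.linalg.eig(Aplus.T)
y = np.abs(W[:, np.argmax(w2.real)])    # left Perron eigenvector

# Perron scaling S_pi = (Y X^{-1})^{1/2} of Lemma 3.2
S = np.diag(np.sqrt(y / x))
Sinv = np.diag(np.sqrt(x / y))

assert np.isclose(pi, np.abs(w).max())
# the infimum of Lemma 3.2 is attained: sigma_max(S A+ S^-1) = pi(A+)
assert np.isclose(np.linalg.norm(S @ Aplus @ Sinv, 2), pi)
```

The equality holds because the scaled matrix has the same positive vector as both right and left Perron eigenvector, which forces its maximum singular value to coincide with its spectral radius.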
Using equation (3.59), the following inequalities apply:

ρ[M₁₁Δ] ≤ σ̄[M₁₁Δ] ≤ σ̄[(M₁₁Δ)⁺] ≤ σ̄[M₁₁⁺P_Δ], ∀s

Using similarity scaling, and applying Lemma 3.2, one has that

ρ[M₁₁Δ] ≤ inf_S σ̄[S(M₁₁⁺P_Δ)S⁻¹] = π(M₁₁⁺P_Δ)   (3.68)

Therefore, the Perron radius can be used to obtain a sufficient condition for stability, namely:

π(M₁₁⁺P_Δ) < 1, ∀s   (3.69)

Now, consider the Perron scaling for M₁₁⁺P_Δ, given by

S_π = [Y_{M₁₁⁺P_Δ} (X_{M₁₁⁺P_Δ})⁻¹]^{1/2}   (3.70)

Substituting S_π for S̃ in equation (3.62) gives

ρ[M₁₁(s)P_Δ] ≤ σ̄[S_π M₁₁(s)P_Δ S_π⁻¹]

Therefore, using the Perron scaling for (M₁₁⁺P_Δ), a stability condition less conservative than (3.69) can be obtained, namely

σ̄[S_π M₁₁(s)P_Δ S_π⁻¹] < 1, ∀s   (3.71)

A non-similarity scaling condition can be derived in the same fashion. Since

ρ[M₁₁Δ] ≤ σ̄[M₁₁Δ] ≤ σ̄[(M₁₁)⁺Δ⁺] ≤ σ̄[M₁₁⁺P_Δ]

the application of non-similarity scaling and Lemma 3.4 results in the inequalities

ρ[M₁₁Δ] ≤ inf_{S₁,S₂} { σ̄[S₁M₁₁⁺S₂] σ̄[S₂⁻¹P_ΔS₁⁻¹] } ≤ π(M₁₁⁺P_Δ)   (3.72)

from which condition (3.69) can also be obtained. The Perron scaling for (M₁₁⁺P_Δ) is

S₁,π = [Y_{M₁₁⁺P_Δ} (X_{M₁₁⁺P_Δ})⁻¹]^{1/2};   S₂,π = [X_{P_ΔM₁₁⁺} (Y_{P_ΔM₁₁⁺})⁻¹]^{1/2}   (3.73)

Substituting S₁,π and S₂,π for S̃₁ and S̃₂ in equation (3.63) gives

ρ[M₁₁(s)Δ] ≤ σ̄[S₁,π M₁₁(s) S₂,π] σ̄[S₂,π⁻¹ P_Δ S₁,π⁻¹]

Thus, a sufficient condition for robust stability, based on explicit non-similarity Perron scaling, is:

σ̄[S₁,π M₁₁(s) S₂,π] σ̄[S₂,π⁻¹ P_Δ S₁,π⁻¹] < 1, ∀s   (3.74)

Osborne scaling. Osborne's scaling process [43] comprises an iterative procedure to find the scaling which minimizes the Frobenius norm of an irreducible matrix A ∈ C^{m×m}, defined as

‖A‖_F ≝ [Σᵢⱼ |Aᵢⱼ|²]^{1/2}

Let S_o be the scaling obtained from Osborne's iterative process applied to the matrix [M₁₁(s)P_Δ]. A stability condition analogous to (3.71) can be obtained using S_o, namely

σ̄[S_o M₁₁(s) P_Δ S_o⁻¹] < 1, ∀s   (3.75)

3.5 Conclusions

This chapter summarizes robust stability conditions and techniques that will be employed in the next chapters. One important topic is the application of the Lyapunov direct method under uncertainty.
This method will be explored in Chapter 4, and the sufficient condition obtained in Section 3.2.2 will be studied in detail. Also important is the notion that singular-value stability conditions are only sufficient in the presence of structured uncertainty, and that the conservatism of singular-value conditions can be reduced through scaling. These concepts will have significant roles in Chapter 5, where an alternative frequency-domain approach is proposed for the assessment of robust stability of state space systems under structured uncertainty. Although the generalized Nyquist criterion and its extension to systems under perturbation will not be applied in the next chapters, the review undertaken above is justified because this technique is a relatively recent generalization to MIMO systems of a classical tool in the frequency-domain analysis of SISO systems, and can have a prominent role in computer-aided analysis and design environments.

CHAPTER 4
LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY

4.1 Introduction

The objective of this chapter is to obtain conditions for robust stability of linear state space systems under structured uncertainty, using the Lyapunov direct method. Although Lyapunov theory yields only sufficient conditions for stability, it can be applied to a wide class of dynamic systems, including nonlinear and time-varying systems. The difficulty generally associated with the application of the Lyapunov direct method is that it requires the construction of a suitable Lyapunov function. In the case of linear systems, this difficulty is not present, since an immediate choice is a quadratic function of the form V(t, x) = x(t)ᵀP(t)x(t), where P(t) is a symmetric matrix. Furthermore, in the case of time-invariant linear systems, the negative definiteness of the derivative of the function V(x) = x(t)ᵀPx(t), which depends only on P, can be checked through the Lyapunov matrix equation, given by (3.5).
This property extends to the analysis of linear systems whose matrix A is uncertain. In this situation, however, besides the inherently sufficient nature of the stability condition, there is an additional cause for conservatism, as illustrated by the following case [42].

Let us consider the application of the Lyapunov indirect method to a nonlinear system. After linearization around an equilibrium point, the linearized system can be viewed as a perturbed linear system, where the perturbation is the linearization error, namely the neglected higher order terms. Let the perturbed model be ẋ(t) = A_m x(t) + B_m u(t) + f[x(t), u(t)], where A_m and B_m describe the linear part and f is a nonlinear vector function. A nominally stabilizing linear quadratic state feedback (LQSF) control yields the closed loop

ẋ(t) = (A_m − B_m R⁻¹B_mᵀP)x(t) + f[x(t)] ≝ A_c x(t) + f[x(t)]

which is stable for f = 0. Let V(x) = xᵀPx be a Lyapunov function candidate, where P comes from the solution to the Riccati equation associated with the LQSF problem. Then, the derivative is V̇(x) = xᵀ(A_cᵀP + PA_c)x + 2fᵀ(x)Px. The following robust stability condition can be derived [42]:

‖f(x)‖₂ / ‖x‖₂ < [1 + α/κ(P)] / [2 κ(D⁻¹) σ̄(P)], ∀x ∈ Rⁿ

where D = Q + PB_mR⁻¹B_mᵀP, κ(·) is the spectral condition number, and α is a parameter in the Riccati equation.

This case exemplifies two facts about the use of Lyapunov theory in robust stability analysis. First, the problem of nominal stability analysis of a nonlinear system can be approached by robust stability analysis of the corresponding linearized system. Second, and more important for the objectives of this chapter, stability conditions obtained from the application of the direct method generally involve some function of the norm of the perturbation. Consequently, the method cannot discriminate between real and complex uncertainties having the same norm bound.
If the uncertainty is known to be real, and the stability result is given in the form of a norm bound on the perturbation, a larger class of perturbations is implicitly admitted, namely the class of complex perturbations with the same norm bound. Therefore, the result is not tight.

The Lyapunov direct method can handle time-varying perturbations as well, in which case V̇(x) is required to be negative definite at each instant t. In the case of nominally time-varying systems, the use of the Lyapunov matrix equation is precluded. However, if the system matrix can be decomposed into a constant part plus a time-varying part, this case can also be handled, by regarding the time-varying part as a perturbation to the time-invariant part, and requiring negative definiteness of V̇(x) at each instant t.

Examples of the application of the Lyapunov direct method to systems under unstructured and under structured perturbations are available in the literature. For instance, assuming Q = 2I, the following robustness condition can be derived for the system ẋ(t) = [A + E(t)]x(t), where E(t) is a time-varying unstructured perturbation [57]:

σ̄[E(t)] < 1 / σ̄(P)

where P is the solution to the Lyapunov matrix equation.

The application of the method in the presence of structured perturbations can be illustrated by the case below [55, 56]. A bound on the magnitude of each perturbation element is given, namely |Eᵢⱼ(t)| ≤ Ēᵢⱼ, ∀t, with max_{i,j} Ēᵢⱼ ≝ Ē. Using Q = 2I, the following condition for robust stability can be derived:

Ē < 1 / σ̄[(P_m U_n)_s]

where (P_m U_n)_s is the symmetric part of the matrix P_m U_n, P_m contains the magnitudes of the elements of P, and U_n is such that U_n,ij = 1, ∀i, j.

In the next section, a link between perturbation structure and conservatism of the stability condition is investigated.
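The unstructured bound quoted above can be exercised numerically. The sketch below is my illustration (numpy and scipy assumed; the nominal matrix is an arbitrary choice): with Q = 2I and P solving the nominal Lyapunov equation, every perturbation with σ̄(E) below 1/σ̄(P) keeps the perturbed −V̇ matrix positive definite, and hence (A + E) stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])             # nominal Hurwitz matrix (arbitrary)
Q = 2.0 * np.eye(2)                     # the common choice Q = 2I
P = solve_continuous_lyapunov(A.T, -Q)  # A'P + PA = -Q
P = 0.5 * (P + P.T)                     # enforce exact symmetry
bound = 1.0 / np.linalg.norm(P, 2)      # admissible sigma_max(E)

rng = np.random.default_rng(4)
for _ in range(100):
    E = rng.standard_normal((2, 2))
    E *= 0.99 * bound / np.linalg.norm(E, 2)
    Qp = Q - (E.T @ P + P @ E)          # the perturbed -Vdot matrix
    assert np.linalg.eigvalsh(Qp).min() > 0          # V still decreases
    assert np.linalg.eigvals(A + E).real.max() < 0   # hence A + E stable
```

Note that the bound makes no distinction between real and complex E of the same norm, which is exactly the source of conservatism discussed above.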
4.2 Dependence of Conservatism on Perturbation Structure

This section points out a cause for conservatism in the application of the LDM under structured uncertainty, which is inherent to the mechanics of the application and related to the choice of the Lyapunov matrix.

Recall that, according to Theorem 3.2, the system dynamic matrix is asymptotically stable if and only if there exists some positive definite symmetric matrix Q such that the matrix Lyapunov equation AᵀP + PA = −Q has a unique, positive definite solution P. It is important to keep in mind that the theorem does not guarantee that, picking a positive definite P, the corresponding Q is positive definite. Now, consider the following lemma:

Lemma 4.1. Given a real symmetric positive-definite matrix P, the set of systems ẋ(t) = Ax(t) for which V(x) = xᵀPx is a Lyapunov function is a convex set.

Proof. Let

M_PD ≝ {M : M is symmetric, positive definite}   (4.1)
A(P) ≝ {A : AᵀP + PA = −Q; P, Q ∈ M_PD}   (4.2)

Then, for A₁ and A₂ ∈ A(P) and P ∈ M_PD, one has A₁ᵀP + PA₁ = −Q₁, Q₁ ∈ M_PD, and A₂ᵀP + PA₂ = −Q₂, Q₂ ∈ M_PD. Taking α₁, α₂ ∈ R₊ such that α₁ + α₂ = 1, and defining A₃ = α₁A₁ + α₂A₂, one has:

A₃ᵀP + PA₃ = [α₁A₁ + (1 − α₁)A₂]ᵀP + P[α₁A₁ + (1 − α₁)A₂]
           = α₁[A₁ᵀP + PA₁] + (1 − α₁)[A₂ᵀP + PA₂]
           = α₁(−Q₁) + (1 − α₁)(−Q₂)
           ≝ −Q₃, with Q₃ ∈ M_PD

Therefore, A₃ ∈ A(P), which shows that A(P) is a convex set.

Let us now turn the attention to the matrix A_p = (A + E), where A is Hurwitz and E is some perturbation in the admissible class. Define A₁ = A, and A₂ = A + γĒ, γ ∈ R₊, further assuming that A₂ is also Hurwitz and that, for a given P, the function V(x) = xᵀPx is a Lyapunov function for both ẋ(t) = A₁x(t) and ẋ(t) = A₂x(t). Letting A₃ be a convex combination of A₁ and A₂, one has:

A₃ = α₁A₁ + (1 − α₁)A₂ = A₁ + α₂γĒ ≝ A₁ + βĒ

where β ∈ [0, γ]. According to the preceding lemma, V(x) = xᵀPx is a Lyapunov function for ẋ(t) = A₃x(t). Now, suppose that A₄ = A + ζĒ, ζ > γ.
Even if A₄ is Hurwitz, it may happen that V(x) = xᵀPx is not a Lyapunov function for ẋ(t) = A₄x(t). Since the choice of Q determines P, it also determines the size of the convex set of system equations for which V(x) = xᵀPx is a Lyapunov function. Therefore, the conservatism of a computed stability condition will be reduced if Q is selected such that the corresponding P yields the largest possible convex set A(P). However, notice that in the above lemma a fixed perturbation Ē is taken into account, while in a robust stability problem one deals with an admissible class of perturbations. The question of selecting Q such that the corresponding P generates a Lyapunov function for the largest possible set of perturbed systems, for any perturbation in the admissible class, does not have a straightforward analytic solution; possibly it has no analytic solution at all.

It was seen in Chapter 3 that choosing the Lyapunov function candidate V(x) = xᵀP_o x for the perturbed system ẋ(t) = (A + E)x(t) leads to the derivative equation (3.7), namely

V̇_p(x) = −xᵀ[Q_o − (EᵀP_o + P_oE)]x ≝ −xᵀQ_p x

where Q_o and P_o are respectively the choice of Lyapunov matrix for the nominal system and the solution of the nominal Lyapunov equation. A sufficient condition for stability of the perturbed system is the positive definiteness of Q_p. Defining, for simplicity,

F ≝ EᵀP_o + P_oE   (4.3)

robust stability requires positive definiteness of Q_p = (Q_o − F). Since both Q_o and F are real symmetric matrices, one has:

(Q_o − F) positive definite ⟺ min_i {Re[λᵢ(Q_o − F)]} > 0 ⟺ σ_min(Q_o − F) > 0   (4.4)
⟸ σ_min(Q_o) − σ̄(F) > 0   (4.5)

in view of the inequality

σ_min(Q_o − F) ≥ σ_min(Q_o) − σ̄(F)   (4.6)

Since the analysis objective is to find explicit conditions on E, equation (4.4) is not useful, and the only alternative is to apply (4.5). Obviously, this condition is not tight, since, as shown by (4.6), it may be possible that σ_min(Q_o − F) > 0 even if σ_min(Q_o) − σ̄(F) < 0.
Therefore, the closer (4.6) is to strict equality, the tighter (4.5) is. The following theorem gives necessary and sufficient conditions on Q_o and F for equality to be attained in (4.6). For simplicity, the subscript of Q_o will be dropped.

Theorem 4.1. Given Q, F ∈ R^{m×m}, then σ_min(Q − F) = σ_min(Q) − σ̄(F) if and only if the following conditions hold:

x̄_F = e^{jβ} x_Q   (4.7)
ȳ_F = e^{jβ} y_Q   (4.8)

where β is arbitrary; overbars denote major principal directions and underlined symbols (written here as plain x_Q, y_Q) denote minor principal directions.

The first of these conditions requires that the major output principal direction of F and the minor output principal direction of Q be aligned. The second requires alignment between the major input principal direction of F and the minor input principal direction of Q.

The proof of this theorem is derived from a similar proof [30], and is given after the following lemma, which establishes necessary and sufficient conditions for alignment between the relevant principal directions of Q, F and (Q − F).

Lemma 4.2. Given Q, F ∈ R^{m×m}, then σ_min(Q − F) = σ_min(Q) − σ̄(F) if and only if the following conditions hold:

y_{Q−F} = e^{jθ} ȳ_F   (4.9)
y_{Q−F} = e^{jψ} y_Q   (4.10)
x_{Q−F} = e^{jθ} x̄_F   (4.11)
x_{Q−F} = e^{jψ} x_Q   (4.12)

where θ and ψ are arbitrary.

Proof. Sufficiency: (4.9) to (4.12) ⟹ σ_min(Q − F) = σ_min(Q) − σ̄(F). Assume conditions (4.9) to (4.12) are true, and consider the input y_{Q−F} applied to [Q − F]. Then,

[Q − F]y_{Q−F} = Qy_{Q−F} − Fy_{Q−F} = e^{jψ}Qy_Q − e^{jθ}Fȳ_F   by (4.10), (4.9)

Applying the relationships σ_min(M)x_M = My_M and σ̄(M)x̄_M = Mȳ_M, ∀M, to the last equation, it becomes:

[Q − F]y_{Q−F} = e^{jψ}σ_min(Q)x_Q − e^{jθ}σ̄(F)x̄_F
             = σ_min(Q)x_{Q−F} − σ̄(F)x_{Q−F}   by (4.12), (4.11)
             = [σ_min(Q) − σ̄(F)]x_{Q−F}

The last equation implies that σ_min(Q − F) = σ_min(Q) − σ̄(F), which proves sufficiency.

Necessity: σ_min(Q − F) = σ_min(Q) − σ̄(F) ⟹ (4.9) to (4.12). Assume σ_min(Q − F) = σ_min(Q) − σ̄(F). Now, ∀z ∈ Rⁿ, (Q − F)z = Qz − Fz. For z = y_{Q−F}, this expression becomes (Q − F)y_{Q−F} = σ_min(Q − F)x_{Q−F} = Qy_{Q−F} − Fy_{Q−F}.
Given the assumption above, σ_min(Q)x_{Q−F} − σ̄(F)x_{Q−F} = Qy_{Q−F} − Fy_{Q−F}, which is equivalent to

σ_min(Q)x_{Q−F} = Qy_{Q−F}   (4.13)
σ̄(F)x_{Q−F} = Fy_{Q−F}   (4.14)

Equation (4.13) means that, since Q applied to y_{Q−F} produces a magnification σ_min(Q), y_{Q−F} and y_Q must be aligned, that is, y_{Q−F} = e^{jψ}y_Q, for arbitrary ψ, which is (4.10). Now,

σ_min(Q)x_{Q−F} = Qy_{Q−F} = Qe^{jψ}y_Q = e^{jψ}Qy_Q = e^{jψ}σ_min(Q)x_Q

Therefore, x_{Q−F} = e^{jψ}x_Q, which is (4.12). Similarly, equation (4.14) shows that, since F applied to y_{Q−F} produces a magnification σ̄(F), y_{Q−F} and ȳ_F must be aligned, that is, y_{Q−F} = e^{jθ}ȳ_F, for arbitrary θ, which is (4.9). Since σ̄(F)x_{Q−F} = Fy_{Q−F} = Fe^{jθ}ȳ_F = e^{jθ}σ̄(F)x̄_F, it follows that x_{Q−F} = e^{jθ}x̄_F, which is (4.11).

Proof of Theorem 4.1. Necessity: σ_min(Q − F) = σ_min(Q) − σ̄(F) ⟹ (4.7) and (4.8). Rewriting (4.9) as ȳ_F = e^{−jθ}y_{Q−F} and using (4.10), one gets ȳ_F = e^{−jθ}e^{jψ}y_Q, and letting β = ψ − θ, one obtains ȳ_F = e^{jβ}y_Q, which is (4.8). Similarly, rewriting (4.11) as x̄_F = e^{−jθ}x_{Q−F} and using (4.12), one gets x̄_F = e^{−jθ}e^{jψ}x_Q, and using the definition of β, one obtains x̄_F = e^{jβ}x_Q, which is (4.7). Therefore, necessity is proved.

Sufficiency: (4.7) and (4.8) ⟹ σ_min(Q − F) = σ_min(Q) − σ̄(F). Assume (4.7) and (4.8) and consider the input y_Q applied to (Q − F). Then,

[Q − F]y_Q = Qy_Q − Fy_Q
          = σ_min(Q)x_Q − e^{−jβ}Fȳ_F,   by (4.8)
          = σ_min(Q)x_Q − e^{−jβ}σ̄(F)x̄_F
          = σ_min(Q)x_Q − e^{−jβ}e^{jβ}σ̄(F)x_Q,   by (4.7)
          = [σ_min(Q) − σ̄(F)]x_Q

which, combined with (4.6), gives σ_min(Q − F) = σ_min(Q) − σ̄(F) and proves sufficiency.

In the present case F is defined by (4.3), and Q is the nominal Lyapunov matrix, satisfying AᵀP + PA = −Q. Therefore, necessary and sufficient conditions for equality in (4.6) are:

x̄_{(EᵀP + PE)} = x_{[−(AᵀP + PA)]},   ȳ_{(EᵀP + PE)} = y_{[−(AᵀP + PA)]}

The expressions above have a qualitative significance. They show that (4.6) holds with equality, for an allowable perturbation class, if and only if the class includes a perturbation for which the alignment conditions are attained. If the existence of such a perturbation were guaranteed, the use of (4.5) in place of (4.4) would not introduce conservatism.
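Inequality (4.6) and the alignment conditions of Theorem 4.1 can be checked numerically. The sketch below is my illustration (numpy assumed; matrices are arbitrary): random symmetric pairs always satisfy (4.6), and equality is attained when the major principal direction of F is placed along the minor principal direction of Q:

```python
import numpy as np

rng = np.random.default_rng(5)
for _ in range(200):                    # inequality (4.6) for random pairs
    B = rng.standard_normal((3, 3))
    Q = B @ B.T + 0.5 * np.eye(3)       # symmetric positive definite
    C = rng.standard_normal((3, 3))
    F = 0.5 * (C + C.T)                 # symmetric perturbation term
    lhs = np.linalg.svd(Q - F, compute_uv=False).min()
    rhs = np.linalg.svd(Q, compute_uv=False).min() - np.linalg.norm(F, 2)
    assert lhs >= rhs - 1e-10

# Equality when the major principal direction of F lies along the minor
# principal direction of Q (the alignment conditions (4.7)-(4.8)):
Q = np.diag([3.0, 2.0, 1.0])            # minor direction: e3
F = 0.5 * np.outer([0, 0, 1.0], [0, 0, 1.0])   # major direction: e3
lhs = np.linalg.svd(Q - F, compute_uv=False).min()
assert np.isclose(lhs, 1.0 - 0.5)       # sigma_min(Q) - sigma_max(F)
```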
However, it is not evident whether or not the above expressions can be helpful in the choice of the Lyapunov matrix Q. The conservatism of (4.6) would be eliminated if Q were such that the resulting P leads to the attainment of the alignment conditions. However, it cannot be guaranteed that the Lyapunov function constructed with such P would be a Lyapunov function for a larger set of perturbed system than the function obtained with some other P. 82 This section shows that the choice of the nominal Lyapunov matrix has an important role in determining the conservatism of robust stability conditions. In the next section, the problem of the choice of Q is addressed, in the context of structured perturbations. 4.3 Stability Under Structured Uncertainty 4.3.1 Uncertainty Description In this section, the uncertainty class E E ï¿½SD defined in (2.31) is adopted. Uncertainty in this class can be represented as E = "k"=I Pk Ek, where Ek, k = 1,..., mn, is a constant matrix which accounts for the structure of the perturbation due to the parameter Pk. Without loss of generality, a symmetric range about the origin is assumed for each parameter, namely Pk E (-ak, ak), Vk. This description is well suited to the representation of real world systems uncertainty, since it accounts for the possibility that changes in one physical parameter may affect several entries of the matrix A. However, it requires that the perturbation to each element of A be linear in the parameters, and thus may require parameter redefinitions. That description has already been used in robust stability analysis of state space systems [4, 51, 61]. 4.3.2 Sufficient Condition for Robust Stability Let p = [PlP2,.. . ,pm]T be a vector containing the system parameters, an let us define MdefME n, =xn { M : KlXf Re[Ai(M)] < 0, Vi } (4.15) where either K - R or I - C, according to the context, and m Sd=~ f PE Rm : (A + :Z pk Ek) E . 
(4.16)

Then, S_d represents the stability domain in the space of system parameters.

Given the nominal system model and the parametric uncertainty description, the objective of robust stability analysis is to determine the stability domain in the space of parameters, which is usually specified by an admissible upper bound on some norm of p. The Lyapunov Direct Method has been used in robust stability analysis by several authors [4, 16, 42, 51, 55, 56, 58, 59, 61]. Particularly, the uncertainty description above has also been adopted [4, 51, 61]. Introducing that uncertainty description in (3.7), the equation of the derivative of the Lyapunov function becomes:

  \dot{V}(x) = -x^T [ Q_c - \sum_{k=1}^{m} p_k (E_k^T P_c + P_c E_k) ] x    (4.17)

where Q_c and P_c are respectively the Lyapunov matrix for the nominal system and the corresponding solution of the Lyapunov equation. Therefore, positive definiteness of the matrix [ Q_c - \sum_{k=1}^{m} p_k (E_k^T P_c + P_c E_k) ] is a sufficient condition for asymptotic stability of (A + E). In order to obtain the stability domain, an explicit condition on some norm of p must be derived. A derivation of stability domains is presented in Section 4.3.4. Before this, some available results are reviewed.

4.3.3 Available Results for Admissible || p ||

For simplicity, the subscript will be dropped in the notation of Q_c and P_c; Q and P will denote the nominal matrices. Let us define:

  F_k := E_k^T P + P E_k,  k = 1, ..., m    (4.18)
  p := [p_1 | ... | p_m]^T    (4.20)
  F_{Qk} := Q^{-1/2} F_k Q^{-1/2}    (4.21)

The following norm bound [4] gives a condition for robust stability:

  || p ||_2 = [ \sum_{k=1}^{m} p_k^2 ]^{1/2} < \underline{\sigma}(Q) / [ \sum_{k=1}^{m} [\bar{\sigma}(F_k)]^2 ]^{1/2},  Q a free parameter    (4.22)

Notice that both the numerator and the denominator depend on Q, which is treated as a free parameter. Results for a fixed Q have been reported. Using Q = 2 I_n, the following conditions can be derived [61]:

  || p ||_2 < 2 / [ \sum_{k=1}^{m} [\bar{\sigma}(F_k)]^2 ]^{1/2}    (4.23)
  \sum_{k=1}^{m} |p_k| \bar{\sigma}(F_k) < 2    (4.24)
  |p_j| < 2 / \sum_{k=1}^{m} \bar{\sigma}(F_k),  j = 1, ..., m
(4.25)

The choice of Q = 2 I_n has been justified [59] on the basis that it maximizes the ratio \underline{\sigma}(Q) / \bar{\sigma}(Q). Fixing Q yields ready-to-use analytic expressions for bounds on p; however, in view of the facts pointed out in the last section, it is a potentially conservative option. Actually, it has been acknowledged [61] that a state transformation [58, 59] can be applied to the system description, in order that improved results are obtained with Q = 2 I_n for the transformed system. Yet there is no systematic method for choosing the adequate state transformation.

The following stability conditions have also been reported [51]:

  || p ||_2 < 1 / [ \bar{\sigma}(Q^{-1/2}) \bar{\sigma}(F_Q) ]    (4.26)
  \sum_{k=1}^{m} |p_k| \bar{\sigma}(F_{Qk}) < 1    (4.27)
  || p ||_\infty = max_k |p_k| < 1 / \bar{\sigma}( \sum_{k=1}^{m} |F_{Qk}| )    (4.28)

where F_Q := [ (F_1 Q^{-1/2})^T | ... | (F_m Q^{-1/2})^T ]^T and |F_{Qk}| denotes the matrix of the absolute values of the entries of F_{Qk}. It has been shown through examples [51] that less conservative stability conditions can be obtained from these expressions with a choice of Q other than Q = 2 I_n. Furthermore, it has been argued [51] that regarding Q as a free parameter inherently incorporates the degree of freedom brought about by a state transformation [58, 59]. However, no analytical method has been proposed for the choice of Q.

Note that, since Q is a free parameter in (4.22) and in (4.26) to (4.28), and no analytical method is available for the selection of Q, some sort of search over the space of n x n symmetric, positive-definite matrices is implicitly required. In the following, a derivation of stability conditions on norms of p, which was independently developed, is explicitly presented, and the corresponding stability domains in the parameter space are defined.
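As an illustration of how the fixed-Q conditions reviewed above are applied, the following sketch solves the nominal Lyapunov equation for Q = 2 I_n and tests a condition of the type of (4.24). The system and structure matrices are an arbitrary two-parameter example, not taken from the dissertation; the constant 2 on the right-hand side follows from \underline{\sigma}(Q) = 2 for this normalization.

```python
# Fixed-Q sufficient condition: with A^T P + P A = -2I and F_k = E_k^T P + P E_k,
# sum_k |p_k| smax(F_k) < 2 implies asymptotic stability of A + sum_k p_k E_k.
# Arbitrary illustrative matrices.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [ 0.0, -1.0]])                  # nominal, asymptotically stable
Es = [np.array([[1.0, 0.0], [1.0, 0.0]]),     # one parameter hits several entries
      np.array([[0.0, 0.0], [0.0, 1.0]])]

Q = 2.0 * np.eye(2)
# solve_continuous_lyapunov solves a X + X a^H = q; take a = A^T, q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
Fs = [Ek.T @ P + P @ Ek for Ek in Es]
smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]

def condition_424(p):
    """Condition of the type of (4.24): sum_k |p_k| smax(F_k) < smin(Q) = 2."""
    return sum(abs(pk) * smax(Fk) for pk, Fk in zip(p, Fs)) < 2.0

def stable(p):
    M = A + sum(pk * Ek for pk, Ek in zip(p, Es))
    return bool(np.all(np.linalg.eigvals(M).real < 0))

# The condition is sufficient: wherever it holds, the perturbed system is stable.
rng = np.random.default_rng(1)
for p in rng.uniform(-2, 2, size=(300, 2)):
    if condition_424(p):
        assert stable(p)
```

The converse need not hold: parameter vectors violating the condition may still give a stable perturbed matrix, which is precisely the conservatism discussed in the text.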
4.3.4 Derivation of Admissible || p ||

Using the definition of F_k in (4.18), equation (4.17) can be rewritten as:

  \dot{V}(x) = - x^T Q^{1/2} [ I - \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} ] Q^{1/2} x

From the inner-product properties < y, y > = || y ||^2 and < y, M y > <= \bar{\sigma}(M) < y, y >, and defining y(t) = Q^{1/2} x(t), the inequality below follows from the last equation:

  \dot{V}(x) <= - [ 1 - \bar{\sigma}( \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} ) ] || y ||^2    (4.29)

Since || y ||^2 > 0 for nonzero y, a sufficient condition for robust stability is

  \bar{\sigma}( \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} ) < 1    (4.30)

New result for admissible || p ||_2. Let us define

  M_p := [ p_1 I_n | ... | p_m I_n ]    (4.31)
  M_Q := [ F_{Q1}^T | ... | F_{Qm}^T ]^T    (4.32)

Then, substituting in (4.30), one obtains

  \bar{\sigma}( \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} ) = \bar{\sigma}(M_p M_Q) <= \bar{\sigma}(M_p) \bar{\sigma}(M_Q)    (4.33)

The maximum singular value of M_p is given by:

  \bar{\sigma}(M_p) = [ max_i { \lambda_i(M_p M_p^T) } ]^{1/2} = [ \sum_{k=1}^{m} p_k^2 ]^{1/2}    (4.34)

Substituting in (4.33), one obtains that the robust stability condition (4.30) is satisfied whenever

  || p ||_2 < 1 / \bar{\sigma}(M_Q) := r_{s2}(Q)    (4.35)

The corresponding stability domain in the parameter space is given by

  S_{d2}(Q) = { p : || p ||_2 < r_{s2}(Q) }    (4.36)

The computed stability domain S_{d2} is a hypersphere of radius r_{s2} in R^m. Given A and E_k, k = 1, ..., m, the induced 2-norm of M_Q, and consequently the radius r_{s2}, is parametrized by the Lyapunov matrix Q.

A related result for admissible || p ||_\infty. Considering the matrix M_p defined in (4.31), the following inequality applies:

  \bar{\sigma}(M_p) = \bar{\sigma}([ p_1 I_n | ... | p_m I_n ]) <= \bar{\sigma}([ |p_1| I_n | ... | |p_m| I_n ])    (4.37)

Now, let us define

  p_* := p_j : |p_j| = max_k |p_k|    (4.38)

Substituting p_* for p_k, \forall k, in (4.37), one obtains

  \bar{\sigma}(M_p) <= \bar{\sigma}([ |p_*| I_n | ... | |p_*| I_n ]) = |p_*| (m)^{1/2}    (4.39)

where m is the number of parameters of the system. Using (4.39), one obtains from equation (4.33) that |p_*| (m)^{1/2} \bar{\sigma}(M_Q) < 1 is a sufficient condition for robust stability; equivalently,

  |p_*| = || p ||_\infty < 1 / [ (m)^{1/2} \bar{\sigma}(M_Q) ] := l_{s\infty}(Q)    (4.40)

Notice that, in view of definition (4.35), l_{s\infty} = (m)^{-1/2} r_{s2}. Therefore, the derivation of admissible || p ||_\infty
above leads to the smallest \infty-norm upper bound that can be directly obtained from || p ||_2, since || p ||_2 <= (m)^{1/2} || p ||_\infty. The corresponding stability domain is:

  S_{d\infty}(Q) := { p : || p ||_\infty < l_{s\infty}(Q) }    (4.41)

The stability domain is a hypercube in R^m, with semiside given by l_{s\infty}(Q), therefore parametrized by the matrix Q through M_Q.

The 2-norm and the \infty-norm conditions derived above differ from the corresponding previous results summarized in the last section. For future reference, a derivation of results (4.27) and (4.28) is now presented.

Admissible || p ||_{1w}. From the robust stability condition of equation (4.30), one obtains

  \bar{\sigma}( \sum_{k=1}^{m} p_k F_{Qk} ) <= \sum_{k=1}^{m} |p_k| \bar{\sigma}(F_{Qk})    (4.42)

Now, let w = [ \bar{\sigma}(F_{Q1}), ..., \bar{\sigma}(F_{Qm}) ]. Then, substituting in the right term of the above inequality, one obtains that a sufficient condition for robust stability is

  \sum_{k=1}^{m} |w_k p_k| < 1    (4.43)

This condition, which is the same as (4.27), is given in terms of a weighted 1-norm of p, with the kth weight given by \bar{\sigma}(F_{Qk}). The corresponding stability domain is a hyperrhombus in R^m, defined by

  S_{d1w}(Q) := { p : || p ||_{1w} < 1 }    (4.44)

The largest possible value of a semiaxis is |p_k| < 1/w_k, \forall k. Notice that the weights are parametrized by the Lyapunov matrix Q.

Admissible || p ||_\infty. From (4.30), one obtains \bar{\sigma}( \sum_{k=1}^{m} p_k F_{Qk} ) <= \bar{\sigma}( \sum_{k=1}^{m} |p_k| |F_{Qk}| ). Now, letting p_* := p_j : |p_j| = max_k |p_k| and substituting p_* for p_k, \forall k, in the last inequality, one obtains the sufficient condition |p_*| \bar{\sigma}( \sum_{k=1}^{m} |F_{Qk}| ) < 1 or, equivalently,

  || p ||_\infty < 1 / \bar{\sigma}( \sum_{k=1}^{m} |F_{Qk}| ) := \bar{l}_{s\infty}(Q)    (4.45)

which is identical to (4.28). The corresponding stability domain is

  \bar{S}_{d\infty}(Q) := { p : || p ||_\infty < \bar{l}_{s\infty}(Q) }    (4.46)

Comparison of new results to previous results. The new result of equation (4.35) is analogous to the earlier results of equations (4.22) and (4.26). Now, consider the following possible derivation of equation (4.26). The matrix in equation (4.30) can be written as

  \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} = Q^{-1/2} M_p F_Q

where M_p is given by (4.31) and F_Q := [ (F_1 Q^{-1/2})^T | ... | (F_m Q^{-1/2})^T ]^T.
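The factorization just stated can be verified numerically; the following sketch (with arbitrary illustrative matrices, not from the text) also checks the identity \bar{\sigma}(M_p) = || p ||_2 of (4.34).

```python
# Check that the matrix in (4.30) factors as Q^{-1/2} M_p F_Q, with
# M_p = [p_1 I | ... | p_m I] from (4.31) and F_Q the stack of F_k Q^{-1/2},
# and that smax(M_p) = ||p||_2.  Arbitrary illustrative matrices.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

rng = np.random.default_rng(2)
n, m = 3, 2
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # comfortably stable
Es = [rng.standard_normal((n, n)) for _ in range(m)]
p = rng.uniform(-1, 1, m)

B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)                  # a generic symmetric PD choice
P = solve_continuous_lyapunov(A.T, -Q)       # A^T P + P A = -Q
Qih = np.linalg.inv(sqrtm(Q).real)           # Q^{-1/2}

Fs = [Ek.T @ P + P @ Ek for Ek in Es]
S = sum(pk * Qih @ Fk @ Qih for pk, Fk in zip(p, Fs))     # matrix in (4.30)

M_p = np.hstack([pk * np.eye(n) for pk in p])             # (4.31), n x mn
F_Q = np.vstack([Fk @ Qih for Fk in Fs])                  # mn x n stack

assert np.allclose(S, Qih @ M_p @ F_Q)                    # factorization holds

smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]
assert np.isclose(smax(M_p), np.linalg.norm(p))           # (4.34)
```

The identity in (4.34) follows because M_p M_p^T = (sum_k p_k^2) I_n, so the only eigenvalue of M_p M_p^T is || p ||_2^2.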
Therefore, one obtains:

  \bar{\sigma}( \sum_{k=1}^{m} p_k Q^{-1/2} F_k Q^{-1/2} ) <= \bar{\sigma}(M_p) \bar{\sigma}(Q^{-1/2}) \bar{\sigma}(F_Q)    (4.47)

from which equation (4.26) follows. However, from (4.32) and the definition of F_Q above, it follows that M_Q = diag[Q^{-1/2}] F_Q. Therefore, \bar{\sigma}(M_Q) <= \bar{\sigma}(Q^{-1/2}) \bar{\sigma}(F_Q). Using (4.33) and (4.47), it follows that

  \bar{\sigma}(M_p) \bar{\sigma}(M_Q) <= \bar{\sigma}(M_p) \bar{\sigma}(Q^{-1/2}) \bar{\sigma}(F_Q)    (4.48)

Consequently, condition (4.30) is satisfied with less conservatism by \bar{\sigma}(M_p) \bar{\sigma}(M_Q) < 1, as in the new result (4.35), than by \bar{\sigma}(M_p) \bar{\sigma}(Q^{-1/2}) \bar{\sigma}(F_Q) < 1, which is the case in the derivation of (4.26) given above. Similar reasoning can be applied relative to the derivation of (4.22).

The new result for the admissible 2-norm of p is superior to previously available results, in the sense that, if an arbitrary Lyapunov matrix Q is used, equation (4.35) will give a better 2-norm bound on p than either (4.22) or (4.26). Therefore, the new result is 'nonconservative' relative to the others. However, the conservatism of all the results depends on the adequate choice of the Lyapunov matrix Q.

On the other hand, the derivation of the new result for the admissible \infty-norm of p, given in equation (4.40), requires that the inequality (4.33) be used, while the derivation of the previous result (4.45) does not. Therefore, given a Lyapunov matrix Q, the new result is expected to be more conservative than the previously available result. However, while the latter is given in terms of \sum_k |F_{Qk}|, the new \infty-norm result is given in terms of the same matrix function M_Q that appears in the new 2-norm result. Furthermore, it will be shown in the next section that the derivatives of cost functionals relative to the elements of Q are easier to obtain for a functional based on the new \infty-norm result than for a functional based on the previous result.

4.3.5 Admissible Weighted Stability Domains

In the derivation of the norm bounds (4.35), (4.40) and (4.43), it was implicitly assumed that no 'a priori' information was available about the relative range of the individual parameters.
This is equivalent to assuming that the largest value that can be taken is the same for all the parameters, that is, |p_k| < \bar{\alpha}, \forall k, with \bar{\alpha} = max_k \alpha_k. Consequently, the stability domains defined in the parameter space by those equations are, respectively, a hypersphere, a hypercube and a hyperrhombus.

If information is available on the actual relative range of the parameters, the conservatism of the stability domains S_{d2} and S_{d\infty} can be reduced, by shaping them such that their relevant dimensions become proportional to the ranges of the parameters. The adequate shape can be obtained by weighting the parameter ranges [51]. Let us rewrite the uncertainty description as E = \sum_{k=1}^{m} p_k E_k = \sum_{k=1}^{m} (p_k / s_k)(s_k E_k), where s_k, k = 1, ..., m, are adequately chosen scalars, and define

  p'_k := p_k / s_k,  E'_k := s_k E_k    (4.49)

so that

  E = \sum_{k=1}^{m} p'_k E'_k    (4.50)

Considering the weighted uncertainty description above, and proceeding as in Section 4.3.4, admissible norms for p'_k, \forall k, are derived. The corresponding stability domains, in the weighted parameter space, are given by (4.35), (4.40) and (4.43). The stability domains in the original parameter space are then obtained using (4.49).

2-norm weighted stability domain. Following the same steps of the derivation of equation (4.35), one obtains

  || p' ||_2 = [ \sum_{k=1}^{m} (p'_k)^2 ]^{1/2} < 1 / \bar{\sigma}(M'_Q) := r'_{s2}(Q)    (4.51)

where M'_Q is obtained by substituting E'_k for E_k in the definition of M_Q. The stability domain in the weighted parameter space is given by

  S'_{d2}(Q) = { p' : || p' ||_2 < r'_{s2}(Q) }    (4.52)

To obtain the stability domain in the original parameter space, consider

  [p_1 / s_1]^2 + [p_2 / s_2]^2 + ... + [p_m / s_m]^2 < [r'_{s2}(Q)]^2    (4.53)

This inequality defines a hyperellipsoid with semi-axes a_k given by

  a_k = s_k r'_{s2}(Q), \forall k    (4.54)

\infty-norm weighted stability domain. Proceeding as in the derivation of (4.40), one obtains

  || p' ||_\infty < 1 / [ (m)^{1/2} \bar{\sigma}(M'_Q) ] := l'_{s\infty}(Q)    (4.55)

where M'_Q is as defined above.
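The reweighting (4.49)-(4.50) can be sketched numerically as follows; the matrices and ranges are arbitrary, and r stands in for a computed radius such as r'_{s2}(Q).

```python
# Sketch of the reweighting: p'_k = p_k / s_k and E'_k = s_k E_k represent the
# same perturbation E, while a 2-norm ball on p' is the hyperellipsoid (4.53)
# in the original parameters.  Arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(5)
Es = [rng.standard_normal((2, 2)) for _ in range(2)]
p = np.array([0.30, -0.02])
s = np.array([1.0, 0.1])        # p_2 assumed to have a ten times smaller range

p_w = p / s                                    # weighted parameters (4.49)
Es_w = [sk * Ek for sk, Ek in zip(s, Es)]      # weighted structure matrices

E   = sum(pk * Ek for pk, Ek in zip(p, Es))
E_w = sum(pk * Ek for pk, Ek in zip(p_w, Es_w))
assert np.allclose(E, E_w)                     # perturbation unchanged (4.50)

# ||p'||_2 < r is exactly the ellipsoid sum_k (p_k / s_k)^2 < r^2 of (4.53),
# whose semi-axis along the k-th original parameter axis is s_k * r.
r = 0.5                                        # placeholder for r'_s2(Q)
assert (np.linalg.norm(p_w) < r) == bool(np.sum((p / s) ** 2) < r ** 2)
```

Because the perturbation E itself is unchanged, any stability condition derived for the weighted description translates directly into a (reshaped) domain for the original parameters.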
The stability domain in the weighted parameter space is

  S'_{d\infty}(Q) := { p' : || p' ||_\infty < l'_{s\infty}(Q) }    (4.56)

Since

  |p_k / s_k| < l'_{s\infty}(Q), \forall k  ==>  |p_k| < s_k l'_{s\infty}(Q), \forall k    (4.57)

the stability domain in the original parameter space is a hyperrectangle, with semisides l_k given by

  l_k = s_k l'_{s\infty}(Q), \forall k    (4.58)

The choice of weights. The norm bounds for weighted parameters define either regions with equal axes or equal sides, depending on the norm used. It is convenient to obtain stability regions whose relevant dimension is proportional to the corresponding actual parameter range. Let us assume that p_i is the original parameter with the smallest range. Then, one possible choice of the weights is

  s_k = \alpha_k / \alpha_i, \forall k    (4.59)

4.4 Maximization of Stability Domains

4.4.1 The 'Optimal' Choice of Q

Let us recall the expressions obtained for stability domains in the parameter space:

  (4.35), (4.36):  S_{d2}(Q) = { p : || p ||_2 < r_{s2}(Q) };  r_{s2}(Q) = 1 / \bar{\sigma}(M_Q)
  (4.40), (4.41):  S_{d\infty}(Q) = { p : || p ||_\infty < l_{s\infty}(Q) };  l_{s\infty}(Q) = 1 / [ (m)^{1/2} \bar{\sigma}(M_Q) ]
  (4.43), (4.44):  S_{d1w}(Q) = { p : || p ||_{1w} < 1 };  || p ||_{1w} = \sum_{k=1}^{m} |p_k| \bar{\sigma}(F_{Qk})

As previously discussed, the size of the stability domains depends on the choice of the Lyapunov matrix Q. The best choice of Q, namely the one which yields the largest computed stability domain, is problem-dependent, since it is affected by both the system matrix A and the matrices E_k. For instance, let us recall the equations relating Q to M_Q. Given A and E_k, k = 1, ..., m, and a chosen Q, it follows that:

  Q, A :  A^T P + P A = -Q  gives P
  P, E_k :  E_k^T P + P E_k = F_k
  Q, F_k :  Q^{-1/2} F_k Q^{-1/2} = F_{Qk}
  [ F_{Q1}^T | ... | F_{Qm}^T ]^T = M_Q

Notice that M_Q, which is uniquely determined by Q, is an mn x n real matrix, where m symmetric n x n blocks are stacked. Defining the set

  \mathcal{Q} = { Q : Q \in R^{n x n}, symmetric, positive definite }

the quantity r_{s2}(Q) defined by (4.35) relates to Q through the real functional:

  N : \mathcal{Q} --> R,  Q |--> 1 / \bar{\sigma}(M_Q) = r_{s2}(Q)    (4.60)

The above equations show that the functional N(Q) is highly nonlinear, and complex enough to preclude a simple analytical solution for the best choice of Q.
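In the absence of an analytical solution, the functional can be evaluated and optimized numerically. The sketch below uses an illustrative two-parameter example; the Cholesky-factor parametrization of Q and the use of the Nelder-Mead method are choices made for this sketch, not the dissertation's procedure.

```python
# Evaluate N(Q) = r_s2(Q) = 1/smax(M_Q) and search over the set of symmetric
# positive-definite Q by parametrizing Q = L L^T + eps*I through a lower-
# triangular factor L.  Arbitrary illustrative system; starts from Q = 2I.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm
from scipy.optimize import minimize

A = np.array([[-2.0, 1.0], [0.0, -1.0]])
Es = [np.array([[1.0, 0.0], [1.0, 0.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0]])]
n = A.shape[0]
tril = np.tril_indices(n)

def r_s2(Q):
    """Radius (4.35) of the computed 2-norm stability ball for a given Q."""
    P = solve_continuous_lyapunov(A.T, -Q)           # A^T P + P A = -Q
    Qih = np.linalg.inv(sqrtm(Q).real)               # Q^{-1/2}
    M_Q = np.vstack([Qih @ (Ek.T @ P + P @ Ek) @ Qih for Ek in Es])
    return 1.0 / np.linalg.svd(M_Q, compute_uv=False)[0]

def J2(theta):
    L = np.zeros((n, n))
    L[tril] = theta
    Q = L @ L.T + 1e-6 * np.eye(n)    # always inside the feasible set Q
    return -r_s2(Q)                    # minimize -r_s2 = maximize the radius

theta0 = np.eye(n)[tril] * np.sqrt(2.0)   # corresponds to Q = 2I (plus eps)
res = minimize(J2, theta0, method="Nelder-Mead")

r0, r_opt = r_s2(2.0 * np.eye(n)), -res.fun
assert r_opt >= r0 - 1e-9                 # the search can only improve on Q = 2I
```

Parametrizing Q = L L^T + \epsilon I over a lower-triangular L keeps every iterate symmetric positive definite, so the constraint defining \mathcal{Q} is satisfied implicitly rather than imposed on the optimizer.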
Moreover, Q must be restricted to \mathcal{Q}, the set of n x n symmetric, positive-definite matrices, which means that the eigenvalues of Q are constrained to be strictly positive. A feasible alternative to the analytical solution is to treat the problem of selecting Q \in \mathcal{Q} as a constrained parameter optimization problem, where the real elements of Q are the parameters. In the following, the problem of the computation of 'non-conservative' stability domains in the system parameter space is recast as a constrained optimization problem over the set \mathcal{Q}. Although the discussion refers to the stability domains S_{d2}, S_{d\infty} and S_{d1w}, it applies, with the obvious changes, to the weighted domains S'_{d2} and S'_{d\infty}.

'Optimal' 2-norm stability domain. The objective to be optimized can be derived from any of the inequalities which give the admissible || p ||_2 as a function of Q. However, it is convenient to choose the least conservative condition, namely the one which yields the largest stability domain for a given Q. As shown in the previous section, the least conservative condition on the 2-norm is given by equation (4.35). Therefore, let us elect that equation as the basis of the optimization procedure. Let us define the objective functional

  J_2(Q) := \bar{\sigma}(M_Q)    (4.61)

Then the optimized stability domain can be obtained as:

  S*_{d2} = { p : || p ||_2 < r*_{s2} };  r*_{s2} = 1 / min_{Q \in \mathcal{Q}} J_2(Q)    (4.62)
Latchman always found time to discuss my work and give me his insightful orientation. I wish to thank the professors who served on my committee, Dr. Thomas E. Bullock, Dr. J. Hammer, Dr. A. Antonio Arroyo and Dr. Spyros A. Svoronos, for their willingness to discuss and advice my work, and for the high level of consideration I was always treated with. I wish to thank the help and advice of Dr. G. Basile, my first committee chairman. I am indebted to the EE Graduate Coordinator, Dr. Leon W Couch, and his staff, for all their assistance. Particularly, I have to thank Mrs. Greta Sbrocco, who always provided helpful orientation on administrative subjects. It was a privilege to work close to my ex-fellow student, Dr. Robert J. Norris, whose valuable incentive and help I now acknowledge. I also wish to thank Dr. Julio S. Dolce da Silva, of the Brazilian Army, for his help on my enrollment and adaptation to the University. I am grateful to the Execito Brasileiro (Brazilian Army) for conceding me the opportunity of coming to the University of Florida to further pursue my studies, and to the CNPq Conselho Nacional de Desen volvimento Cientifico e Tecnologico (Scientific and Technological National Development Agency Brazil) for the scholarship I was granted. 
111 PAGE 4 TABLE OF CONTENTS page ACKNOWLEDGMENTS iii ABSTRACT vi CHAPTERS 1 INTRODUCTION 1 1.1 Dissertation Objective 1 1.2 Brief Historical of Uncertainty Treatment 2 1.3 Structure of the Dissertation 9 1.4 Notation 11 2 NOMINAL MODELS AND UNCERTAINTY REPRESENTATION 16 2.1 Nominal Models and Definitions 16 2.2 Uncertainty Representation 20 2.3 Conclusions 38 3 STABILITY ANALYSIS OF LINEAR SYSTEMS 39 3.1 Introduction 39 3.2 Stability of State Space Systems 39 3.3 Stability of Transfer Matrix Models 45 3.4 FrequencyDomain Scaling Techniques 63 3.5 Conclusions 72 4 LYAPUNOV DIRECT METHOD IN THE PRESENCE OF STRUCTURED UNCERTAINTY 73 4.1 Introduction 73 4.2 Dependence of Conservatism on Perturbation Structure 76 4.3 Stability Under Structured Uncertainty 82 4.4 Maximization of Stability Domains 92 4.5 Application of Optimization Over O 109 4.6 Conclusions 113 IV PAGE 5 5 STABILITY UNDER DIAGONAL PARAMETRIC UNCERTAINTY 115 5.1 Introduction 115 5.2 Diagonal Representation of State Space Perturbations 116 5.3 Problem Formulation 122 5.4 Necessary and Sufficient Conditions for Robust Stability 127 5.5 Sufficient Conditions for Robust Stability 132 5.6 Numerical Application 136 5.7 Some Extensions of Previous Results 139 5.8 Conclusions 143 6 COMPARISON OF SUFFICIENT PARAMETER NORM BOUNDS 145 6.1 Introduction 145 6.2 Results for Problems with 2 and 3 Parameters 146 6.3 Results for Randomly Generated Matrices 154 6.4 Conclusions 161 7 ITERATIVE CONTROLLER ROBUSTIFICATION 163 7.1 Introduction 163 7.2 Robustification Associated to Lyapunov Analysis 169 7.3 Robustification Associated to Frequency-Domain Analysis 169 7.4 Application 187 7.5 Conclusion 195 8 NECESSARY STABILITY DOMAIN IN THE PARAMETER SPACE 197 8.1 Introduction 197 8.2 Characterization of a Necessary Stability Domain 199 8.3 Computation of the Necessary Stability Domain 202 8.4 Applications 209 8.5 Conclusions 214 9 CONCLUSION 216 9.1 Summary 216 9.2 Directions for Future Work 223 REFERENCES 230 BIOGRAPHICAL 
SKETCH 234 v PAGE 6 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy ROBUST STABILITY ANALYSIS OF SYSTEMS UNDER PARAMETRIC UNCERTAINTY By JOSE ALVARO LETRA May 1991 Chairman: Dr. Haniph A. Latchman Major Department: Electrical Engineering In the analysis of stability properties of control systems, the uncertainty in mathematical models must be taken into account. Main sources of uncertainty are high order dynamic phenomena of the physical system neglected in the model, and variations in system parameters. The subject of this work is the assessment of stability of linear control systems in the presence of parametric uncertainty. State space and frequency-domain models and uncertainty representation are reviewed, as well as general conditions for nominal and robust stability. Also reviewed are scaling techniques used for reducing the degree of conservatism of frequency-domain stability conditions, including optimal similarity scaling, optimal non-similarity scaling and Perron scaling. Particularly, the perturbed state space model x(t) = (A + E)x(t) is studied. The nominal matrix A is assumed asymptotically stable, and the perturbation E is of the form E = YHt=\ PkEk, where p is a mÂ— dimensional vector of system parameters, and Ek, k = are constant matrices. The application of the Lyapunov Direct Method vi PAGE 7 for obtaining conditions on the norm of p which are sufficient for robust stability is discussed in detail. A new stability condition on || p ||2 is given, which is potentially less conservative than available results. The problem of the choice of the Lyapunov matrix which yields less conservative stability conditions is formalized as a constrained numerical optimization problem. 
For the case of time-invariant uncertainty, an equivalent frequencydomain stability problem is formulated, where the perturbation is a real, diagonal matrix obtained directly from the state space perturbation. Sufficient stability conditions are derived from the equivalent formulation, and scaling techniques are used, in order to reduce conservatism. Comparison of numerical results obtained for several problems indicates that, for timeinvariant uncertainty, the frequency-domain approach, associated to Perron scaling, constitutes an alternative which has better performance than the Lyapunov Direct Method. The frequency-domain approach and corresponding stability conditions are also shown to be of advantage in iterative optimization of static feedback controllers of fixed order. Additionally, a procedure is suggested for obtaining a necessary stability domain in the space of plant parameters, starting from a known sufficient domain. Finally, the integration of the stability analysis techniques into robust controller design is discussed. vn PAGE 8 CHAPTER 1 INTRODUCTION 1.1 Dissertation Objective At least two common aspects are shared by the majority of the current literature on control systems analysis and design, although many different methods and techniques are nowadays employed. These aspects are as follows: Â• Focus is placed on multivariable systems; Â• Uncertainty in system models is explicitly taken into account. These aspects constitute a frame for the present dissertation. The specific subject is the assessment of robust stability properties of systems under parametric uncertainty, which finds motivation in the following considerations. Control systems are designed to meet some performance specifications. Although the formulation of performance specifications depends on the approach used, it always requires that some quantitative indices be satisfied by the system response, what of course implies in constraints to the dynamic behavior of the system. 
However, it only makes sense to discuss the quantitative behavior of a control system if its stability can be assured. Otherwise, the dynamic behavior can be expected to blow up under some admissible operating condition, thus rendering the system useless. Stability, therefore, emerges as a fundamental requirement. Control design relies on mathematical modeling of the controlled system. Unfortunately, there always exists a degree of uncertainty between the model and the modeled system, 1 PAGE 9 2 which must be taken into account. The existence of uncertainty gives rise to the requirement of robustness, namely the aptitude of a control system for retaining the desired behavior in spite of the uncertainty. Design methods definitely depend on analysis techniques in order to assess system properties, including robust stability. Techniques for robust stability analysis count on uncertainty representation, which is dictated by several factors, mainly by the causes of uncertainty and available information on uncertainty structure. Variations in system parameters are sources of an important category of perturbations, which is particularly suitable to representation in state space models. Motivated by these facts, this dissertation addresses the problem of robust stability analysis in the presence of parametric perturbations. The perturbation will be assumed to depend linearly on a vector of parameters, thus admitting the practically important case in which one parameter affects several entries of the system matrices in the state space representation. This model has been used in several recent works in stability analysis. The development of the subject is outlined in Section 1.3. Before this, a brief historical summary of the treatment of uncertainty in control theory is given. 1.2 Brief Historical of Uncertainty Treatment The need for control systems has been long felt in the process of technological development. 
Examples of the use of control systems date back to four thousand years [50]. Noteworthy is the fact that feedback principles are found even in those early examples. Among the several advantages that the feedback principle brings to control systems, appears the property of effectively coping with disturbances and system uncertainty [31]. PAGE 10 3 Important events in feedback history are registered by Sage [50]. Among them are the invention of the mechanical fly-ball governor by James Watt in 1788, which was developed from early windmill regulators, and the analysis of feedback control systems published in 1868 by Maxwell. In 1927, the concept of feedback was introduced by Black in the design of amplifiers for long distance telephone lines; his pioneering work is contained in the paper Â‘Stabilized Feedback AmplifiersÂ’, published in 1934. Although robust to uncertainties caused by nonlinearity and other factors, the feedback amplifier presented unwanted oscillations. The theoretical study of this phenomenon led to the development of the regeneration theory by Nyquist, whose work was published in 1932. The Nyquist criterion, which derives closedloop stability characteristics from open-loop information, would constitute a fundamental technique for frequency-domain stability analysis. Ensuing developments of frequency-domain concepts originated from the work of Bode, in network analysis and amplifier design (1945), which demonstrate the existence of constraints in the manipulation of the frequency response of linear time-invariant systems; from the Nichols transformation of the Nyquist diagram, and from the root locus technique of Evans. The set of those techniques constitute what became known as the classical approach to analysis and design of Single-Input, Single-Output (SISO) systems. In the classical approach, the issue of coping with uncertainty is indirectly addressed, by providing the system with enough gain and phase margins. 
These margins ensure that unwanted effects of uncertainty will not disrupt stability. In the late Â’50s, problems of more complex nature, mainly originated by the control and guidance of missiles and space vehicles, came into the consideration of control engineers PAGE 11 4 and theoreticians, and dominated the development in the field. The already well-known set of classical tools was not adequate to deal with the essentially multivariable nature of the incoming control problems. The number of degrees of freedom inherent to multivariable systems, and the complex relationship between open-loop and closed-loop properties in those systems, manly due to interaction, which has no counterpart in SISO systems, often preclude the use of the simple techniques developed for scalar systems [21]. In this context, and because the digital computer was already available, the decade of the Â’60s saw a marked tendency towards the use of optimization techniques in the solution of control problems. The design objectives in such techniques were mathematically treated and transformed into a cost function to be minimized. Thus, the approach to control problems shifted from the frequency-domain to state space. Indeed, the state space was well suited for describing multivariable systems, and powerful techniques were developed for handling optimal control problems. Feedback emerged as a convenient property of solutions to optimal problems [31]. Linear Quadratic State Feedback (LQSF) appeared as robust solution to control problems, relying however on exact measurements of the states; on the other hand, the possibility of very accurate models for the applications then sought caused the question of uncertainty to receive comparatively less attention than in the classical frequency domain approach. The state space formulation and the control techniques it brought about, however, did not achieve acceptance in all fields of applied control, particularly in industrial control. 
Different reasons have been presented for this fact: only approximate models are available for many industrial processes; plants have components which deteriorate due to continued use; long formed habits of dealing with classical techniques by industrial engineers are an obstacle to the adoption of the sophisticated mathematical treatment required by optimal PAGE 12 5 control. The Linear Quadratic Gaussian (LQG) theory, developed in the late Â’60s, can handle external disturbances modelled as Gaussian noise, and preserve the optimality of solutions, but the LQG controller is not robust against plant uncertainty, an important limitation in such industrial applications. The decade of the Â’70s witnessed a renewed effort in control theory. The first phase in the process involved efforts made towards the generalization of classical SISO frequency-domain techniques to multivariable systems. One example of the resulting analysis and design techniques is the Inverse Nyquist Array (IN A) method of Rosembrock (1974), which sought to eliminate the influence of interaction and then apply scalar techniques to the independent loops. Another is the Characteristic Locus Method of MacFarlane and Postlethwaite [37], which introduces a generalization of the Nyquist stability criterion based on the eigenloci of the transfer function matrix, and produces necessary and sufficient conditions for stability. The resulting generalized Nyquist plots are used in the multivariable design in the same fashion that the Nyquist plot is in the scalar case. The original formulation, however, applies to the case of exactly known models. Since the eigenloci are sensitive to perturbations in the transfer matrix, the original formulation had limitations in the context of robust stability. Later developments have extended the generalized Nyquist criterion to uncertain system, through the computation of inclusion bands for the perturbed eigenloci. 
Sufficient inclusion bands are obtained with the normal approximations method [8], and necessary and sufficient inclusion bands with the E-contours method [9]. Another side of that effort, which continued through the Â’80s, sought a deeper understanding of the structure and property of multivariable systems, with a renewed interest for robustness aspects. PAGE 13 6 Safonov [48, 46] proposed an explicit representation where perturbations in multi-loop systems assume the form of a diagonal perturbation matrix, therefore a structured representation. This representation was later used in the definition of a measure of stability margin for multivariable systems [47]. Doyle and Stein [14] developed the use of maximum-singular values to obtain bounds on the perturbations to multivariable systems, with perturbations modeled as norm-bounded but otherwise unconstrained, having therefore an unstructured representation. In 1976, a parametrization of all stabilizing controllers of a particular system was presented by Youla and coworkers. Zames [60], proposed a scalar design technique which minimizes the effects of external disturbances while ensuring closed-loop stability; performance was measured in terms of oo norm. This work is considered one of the fundamentals of what, associated to the Youla parametrization, has become known as Hoo control. Several multivariable problems, like sensitivity minimization and robustness to additive perturbations, can be expressed as Hoo control problems, that is, problems where the goal is the minimization, in the frequency-domain, of the norm of a transfer matrix. This approach permits the synthesis of a controller which minimizes an objective function, which in general is used to express some performance requirement, while ensuring the stability of the solution by restricting the controller to belong to the set off all stabilizing controllers. 
However, controllers derived through this approach tend to be of high order, requiring a posteriori order reduction. Although an unstructured uncertainty representation yields a more tractable mathematical problem, it may lead to conservative stability results. Often, some information about the structure of the perturbation is available, and should be used in order to produce tighter results. The work of Doyle [12] gave new dimension to the diagonal perturbation problem pioneered by Safonov, when he argued that model uncertainty can be very effectively posed in terms of block-diagonal norm-bounded perturbations. He developed a new analysis tool, namely the μ-function, which constitutes a necessary and sufficient mathematical condition for robust stability of transfer matrix models. The computation of this new robustness measure presents considerable difficulty for general structured uncertainties. An upper bound presented by Doyle involves the minimization, over the space of diagonal similarity scaling matrices, of the norm of the scaled system matrix; this upper bound actually equals μ when there are at most three complex blocks in the diagonal uncertainty representation. For the case of more blocks, or when the perturbation has real components, the upper bound is a conservative estimate of μ. For design purposes under structured uncertainty, Doyle has formulated what has become known as the 'μ-synthesis' method. In this approach, the cost function to be minimized is the ∞-norm of a similarity-scaled transfer matrix involving a controller chosen out of the set of all stabilizing controllers. The parameters are the controller itself and the scaling matrix. The formulations by Doyle, as well as previous work by Safonov, introduced the use of frequency-domain scaling in control problems, as a tool for the derivation of less conservative sufficient stability results, in connection with the block-diagonal uncertainty problem.
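Doyle's scaled singular-value upper bound can be sketched with a general-purpose optimizer: minimize σ̄(D M D^{-1}) over positive diagonal D, parametrized as D = diag(e^{d_i}) so the search is unconstrained. This is an illustration only (the matrix is an assumed example, and dedicated μ software uses specialized convex methods):

```python
import numpy as np
from scipy.optimize import minimize

def scaled_sv_bound(M):
    """min over positive diagonal D of sigma_max(D M D^-1), an upper bound on mu."""
    n = M.shape[0]
    def cost(d):
        D, Dinv = np.diag(np.exp(d)), np.diag(np.exp(-d))
        return np.linalg.svd(D @ M @ Dinv, compute_uv=False)[0]
    return minimize(cost, np.zeros(n), method="Nelder-Mead").fun

M = np.array([[0.0, 10.0],
              [0.1,  0.0]])
rho = max(abs(np.linalg.eigvals(M)))   # spectral radius = 1, a lower bound on any scaled norm
bound = scaled_sv_bound(M)             # unscaled sigma_max is 10; scaling recovers ~1
```

For this example the optimal scaling equalizes the two off-diagonal entries, so the bound collapses from the unscaled value 10 down to the spectral radius 1.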
Other models of uncertainty, as well as different forms of scaling, have been proposed. For instance, in Latchman's work [33], the highly structured element-by-element-bounded uncertainty model is explored, and new, less conservative stability conditions are obtained with the introduction of non-similarity scaling. For the case of element-by-element-bounded complex perturbations, it has been shown [33] that, if the maximum singular value of the optimally scaled system matrix remains distinct, μ is attained, regardless of the number of elements in the perturbation matrix. Relationships between similarity scaling and non-similarity scaling have been derived [40], and used as a tool for decreasing the cost of the computation of the μ-function for complex perturbations. The block-diagonal formulation of uncertainties admits complex as well as real perturbations. Real perturbations in frequency-domain models have been employed, for example, to represent uncertainty in gains [10, 38] and in poles [10] of a transfer function. In this dissertation, a perturbed state space system is given a frequency-domain representation having real diagonal uncertainty, which is derived directly from the state space real uncertainty. For problems involving real uncertainty, results derived with the μ-function approach are usually only sufficient. The derivation of tighter results for the case of real uncertainty is an active area of research [17, 18]; a new upper bound for μ, tighter than the singular-value bound, has recently been introduced [18]. Besides the cited developments in the analysis of perturbed transfer matrix models, the analysis of perturbed state space models received a great deal of consideration in the last decade. Two basic approaches can be recognized in the analysis of state space models: the Kharitonov approach and the Lyapunov approach.
The approach spurred by the work of Kharitonov [27] deals with robust stability of control systems through stability analysis of characteristic polynomials having perturbed coefficients. Although the original work considered the case of independent coefficient perturbations, new results [2] have later extended the approach to the case of polytopes of polynomials. Basically, this extension permits the assessment of stability of a whole polytopic family by analyzing stability properties of its exposed edge polynomials. The Lyapunov approach to robust stability analysis stemmed from the original work on stability by Lyapunov, published in Russian in 1892, which has a French translation dating from 1949. The Lyapunov Direct Method (LDM) yields a sufficient condition for stability; stability assessment, however, depends on the construction of a suitable Lyapunov function for the system under investigation. In the case of linear, time-invariant systems, a quadratic function of the state is used as the Lyapunov function. The condition for robust stability can then be posed in terms of the positive-definiteness of a certain matrix. Although only sufficient, the approach has been used in robust stability analysis in a great number of recent works [4, 16, 42, 51, 56, 59, 61]. In particular, this method has been used in connection with structured perturbations depending linearly on a vector of parameters [4, 51, 61]. This uncertainty representation, on the other hand, has also been used apart from the Lyapunov approach [18]. Additional stability analysis methods for state space systems are the stability radius method [24], and the methods of Qiu and Davison [44, 45]; tensor products are used in the latter.

1.3 Structure of the Dissertation

This dissertation is organized into 9 chapters, the first of which contains this Introduction. The next 2 chapters present a review of basic concepts, while the main part of the work is presented in Chapters 4 through 8.
Chapter 9 contains the Conclusion. Specifically, nominal and perturbed system models are reviewed in Chapter 2. Special attention is given to uncertainty representation in state space and transfer matrix models, with emphasis placed on the diagonal representation of uncertainty in interconnected frequency-domain models. The focus of Chapter 3 is on stability conditions. The review includes the Lyapunov Direct Method, the Generalized Nyquist Criterion, spectral radius conditions for stability, and spectral radius upper bounds given by the singular value and the structured singular value. Chapter 4 concentrates on the assessment of robust stability of state space systems in the presence of structured perturbations which depend linearly on a vector of parameters. The application of the Lyapunov direct method is thoroughly discussed, including a qualitative study of the reasons for conservatism under perturbation, a review of available results, the derivation of admissible parameter norms, and the use of parameter weighting for shaping the form of the computed stability domain. A new condition on the 2-norm of the vector of parameters, which is potentially less conservative than available conditions, is presented, and similarity scaling is explored in the reduction of conservatism of available results. Finally, the choice of an adequate Lyapunov matrix is cast as an optimization problem. An alternative approach to the assessment of robust stability of state space systems, under time-invariant perturbations linearly dependent on a vector of parameters, is proposed in Chapter 5. Working directly with the perturbed state equations, and exploiting diagonalization of uncertainty, an equivalent frequency-domain problem is formulated, from which sufficient stability conditions are derived. The formulation is such that the uncertainty matrix which appears in the equivalent frequency-domain problem is derived directly from the real perturbation to the state space model.
The derivation was independently undertaken, and has not been explicitly found in the literature. Conservatism of the stability conditions is reduced through the use of scaling techniques; besides the well-known optimal similarity scaling, conditions are obtained in terms of Perron scaling. Chapter 6 compares numerical results obtained with the LDM of Chapter 4 and the frequency-domain method proposed in Chapter 5. Results obtained from the frequency-domain method were in general less conservative than results from the LDM; they were always at least as good as the LDM results. In particular, it is shown that the stability condition that uses Perron scaling has low computational cost and produces results with the same level of conservatism as results obtained with optimal similarity scaling. In Chapter 7, the frequency-domain approach is explored in the analysis step of an iterative controller robustification technique, similar to that proposed by Bhattacharyya [4]. The alternative approach has computational advantages, mainly when Perron scaling is used, because it then permits the elimination of parameters in the resulting optimization problem. Both the methods discussed in Chapters 4 and 5 yield sufficient stability domains in the space of plant parameters. In Chapter 8, a technique is presented for the computation of a necessary domain, starting from an available sufficient domain. An extensive search in the parameter space, which would be unfeasible for a large number of parameters, is avoided on the basis of a conjecture, which has worked well in all problems considered. Finally, Chapter 9 presents a summary of results and suggestions for further work.

1.4 Notation

The following notational convention will be adopted in this document, unless otherwise explicitly stated. Additional symbols will be defined as required.
A_0 : Nominal dynamic matrix (open-loop)
A_c : Nominal dynamic matrix (closed-loop)
A_p : Perturbed dynamic matrix
D : Diagonal form of real perturbation matrix
D_c : Diagonal form of perturbation with complex scalars
E : Error matrix (parametric perturbation)
E_A : Parametric perturbation to the matrix A
E_k : Perturbation due to the k-th parameter
F_U(M, Δ) : Upper linear fractional transformation
F_L(M, K) : Lower linear fractional transformation
G_0(s) : Nominal plant transfer matrix
H(s) : Open-loop transfer matrix
I_n : Identity matrix of order n
J : Objective function in optimization problems
K : Controller
L : Left matrix in the decomposition E = LDR
M ∈ ℝ^{n×m} : Real n × m matrix
M ∈ ℂ^{n×m} : n × m matrix with complex elements
M_ij : Element at the i-th row and j-th column of M
M^H : Complex conjugate transpose of M
M^+ : Matrix of the complex magnitudes of the elements of M
P : Solution to the Lyapunov matrix equation
P_Δ : Matrix of upper bounds on the elements of Δ
Q : Lyapunov matrix
Q_0 : Nominal compensated transfer matrix
R : Right matrix in the decomposition E = LDR
Rλ(A_p) : Largest real part of λ_i(A_p), for fixed E
R̄λ(A_p) : Largest real part of λ_i(A_p), for E in a class
S : Similarity scaling matrix
S_π : Perron scaling matrix
S_O : Osborne scaling matrix
S_d : Stability domain
S_dp(Q) : Stability domain, as a function of Q, in the norm ||·||_p
S_dα(K) : Stability domain, as a function of K, based on the measure α
T(s) : Closed-loop transfer matrix
U : Unitary matrix
W : Matrix of right eigenvectors
dp : Change in the parameter p
jℝ : Imaginary axis of the complex plane
k_m : Multiloop stability margin
k̄_m : Conservative assessment of k_m
l_S∞(Q) : Stability bound on ||p||_∞
p : m-dimensional parameter vector
p_w : Worst-case parameter combination
r_S2(Q) : Stability bound on ||p||_2
s_k : Weight applied to the k-th parameter
s : Complex frequency
u : Input vector
x : State vector
y : Output vector
x̄_M, x̲_M : Major (minor) output principal direction of M
ȳ_M, y̲_M : Major (minor) input principal direction of M
ℂ : Field of complex numbers
ℂ^{m×m} : Space of complex m × m matrices
𝒟_U : Class of frequency-dependent, unstructured uncertainties
𝒟_S : Class of frequency-dependent, structured uncertainties
ℰ_U : Class of unstructured real uncertainties
ℰ_S : Class of structured real uncertainties
𝒬 : Set of symmetric, positive-definite Q ∈ ℝ^{n×n}
𝒮_K : Class of scaling matrices related to the block structure 𝒦
𝒦 : Class of block-diagonal structured uncertainties
ℝ : Field of real numbers
ℝ_+ : Set of non-negative numbers
ℝ^{n×n} : Space of n × n matrices with elements in ℝ
Δ(s) : Frequency-dependent perturbation
Δ_M(s) : Frequency-dependent perturbation to M
β_k : Bound on the range of the k-th parameter
α : Measure of stability margin
δ(s) : Upper bound on the norm of Δ(s)
ε : Small quantity in general
λ_i(M) : i-th eigenvalue of M
μ(M) : Structured singular value of M
π(M) : Perron radius of M
𝒫_w : Set of worst-case parameters
ρ(M) : Spectral radius of M
ρ_R(M) : Real spectral radius of M
σ_i(M) : i-th singular value of M
σ̄(M) : Maximum singular value of M
σ̲(M) : Minimum singular value of M
φ(s) : Characteristic polynomial
∂ : Partial derivative
|x| : Complex magnitude of x
det[M] : Determinant of square M
||x||_p : p-norm of the vector x
||M||_ip : Matrix norm induced by the p-norm
||M||_F : Frobenius norm of M
∀ : For all
■ : End of proof
○ : End of statement given without proof
□ : End of example
inf, sup : Infimum, supremum
max, min : Maximum, minimum
DU : Diagonal Uncertain
LDM : Lyapunov Direct Method
GNC : Generalized Nyquist Criterion
MIMO : Multi-Input, Multi-Output
OS : Osborne Scaling
OSS : Optimal Similarity Scaling
PR : Perron Radius
PS : Perron Scaling
SISO : Single-Input, Single-Output
SSV : Structured Singular Value

CHAPTER 2
NOMINAL MODELS AND UNCERTAINTY REPRESENTATION

2.1 Nominal Models and Definitions

This section introduces basic definitions and models of linear time-invariant systems. Let us consider the unity feedback system with cascade compensation, represented in Figure 2-1. The multi-input, multi-output block G_0 represents the physical system or process under investigation, which is generically designated as the plant.

Figure 2-1. Unity feedback system: a) Closed-loop system; b) Uncompensated nominal plant

The subscript o designates the nominal model of the plant, namely a mathematical representation where the relationships among the quantities involved are exactly known. Unless otherwise stated, nominal models will be regarded as linear and time-invariant.
The cascade connection of plant and compensator defines the open-loop compensated plant, denoted by Q_0 = G_0 K.

Many dynamic systems of engineering significance can be described by a linear differential equation relating the input r(t) and its derivatives to the output y(t) and its derivatives. However, this representation is not the most convenient to work with. Representations that have become standard in control systems theory are the state space model and the transfer matrix model.

State space model. A differential equation of order n with constant coefficients, involving m inputs, p outputs and their derivatives, can be put in the state variable form:

ẋ(t) = A x(t) + B u(t)    (2.1)
y(t) = C x(t) + D u(t)    (2.2)

where x(t) ∈ ℝ^n is the state vector and A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{p×n} and D ∈ ℝ^{p×m} are constant matrices. A generic state space model is often designated by the quadruple [A, B, C, D]. Unless otherwise stated, open-loop plants are assumed to be purely dynamic, thus having a representation of the form [A_G, B_G, C_G, 0]. A dynamic controller is represented by the quadruple [A_K, B_K, C_K, D_K], which reduces to D_K in the case of a purely algebraic controller. To the closed-loop system corresponds the quadruple [A_c, B_c, C_c, D_c], whose components are easily obtained from the state space descriptions of the plant and controller.

Transfer matrix model. The nominal transfer matrix may be obtained via the application of the Laplace transform to the state space equations, under the assumption of null initial conditions. The transfer matrix is then given by:

H(s) = C(sI − A)^{-1} B + D    (2.3)

where the term (sI − A)^{-1} is the resolvent of the matrix A. Let G_0(s) and K(s) be the transfer matrices of the plant and compensator, respectively.
The transfer matrix of the closed-loop unity feedback system, which can be obtained by algebraic manipulation of blocks, is:

T(s) = [(I_p + G_0 K)^{-1} G_0 K](s)    (2.4)

Note that, in view of the dimensions of the matrices in the state space model, G_0(s) ∈ ℂ^{p×m}. Consequently, K(s) ∈ ℂ^{m×p} and T(s) ∈ ℂ^{p×p}. Of course, T(s) can also be obtained by applying (2.3) to the quadruple [A_c, B_c, C_c, D_c].

Characteristic decomposition. A complex, square matrix M ∈ ℂ^{n×n} with distinct eigenvalues has the characteristic decomposition:

M = W Λ W^{-1}    (2.5)

where Λ = diag{λ_i}, i = 1, ..., n, contains the eigenvalues of M. The columns of W are linearly independent eigenvectors of M, arranged in correspondence with the eigenvalues. Matrices with repeated eigenvalues have analogous decompositions, where Λ may assume a non-diagonal Jordan form. The spectral radius and the real spectral radius of M are, respectively,

ρ(M) ≝ max_i |λ_i(M)|    (2.6)
ρ_R(M) ≝ max_i |λ_Ri(M)|    (2.7)

where λ_Ri(M) is a real eigenvalue of M. It is easy to show that

ρ_R(M) ≤ ρ(M)    (2.8)

The spectral radius has an important role in stability analysis, as will be seen in the following chapters.

Singular value decomposition. A complex matrix M ∈ ℂ^{n×n} has the singular value decomposition

M = X Σ Y^H    (2.9)

where Σ = diag{σ_i}, σ_i ∈ ℝ_+, i = 1, ..., n, arranged in decreasing order, and Y and X are unitary matrices that contain respectively the right and left singular vectors of M, arranged in corresponding order with the singular values. The right singular vectors y_M are called input principal directions, while the left singular vectors x_M are called output principal directions. The largest and the smallest singular values are of fundamental importance in stability and performance analysis. They are called respectively the maximum singular value and the minimum singular value, and denoted by σ̄(M) ≝ σ_1(M) and σ̲(M) ≝ σ_n(M).

Lemma 2.1.
Let M ∈ ℂ^{n×n}, and assume that the entries of M depend on a variable x. Then, provided the maximum singular value is distinct, the derivative of [σ̄(M)]² with respect to x is given by

d[σ̄(M)]²/dx = 2 σ̄(M) Re[ x_M^H (∂M/∂x) y_M ]

where x_M and y_M are the major output and input principal directions of M.

Two broad categories of modeling error sources can be identified, namely unmodeled dynamics and variations of plant parameters. The objective of this section is to discuss uncertainty representations, with particular attention to the case of parametric uncertainty. The modeling process is guided by the conflicting requirements of fidelity to the plant dynamics and tractability. As a result of the necessary compromises between these requirements, some secondary dynamic phenomena may be left unmodelled, or may receive a simplified representation. On the other hand, a model might adequately represent the plant dynamics under given conditions, yet might not be able to capture variations suffered by the plant during its life span, or even during an operation cycle. Changes in the properties of physical components, which affect the plant, are normally expected and in some cases cannot be eliminated. For example, due to a compromise between precision and production costs, almost all technical specifications of serially made industrial components allow variations of properties around the nominal value. Other factors also contribute to changes in properties; among them are aging of the components, hysteresis cycles and environmental conditions. An example of a plant with uncertainty due to both simplifications and neglected dynamics is the chemical batch reactor, discussed in [39]. In that case, a truly nonlinear process is linearized at an operating point, thus characterizing a simplification advised by tractability. The dynamics of the resulting equation is uncertain due to neglected nonlinear effects and due to unknown plus neglected high-frequency temperature-dependent effects.
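Lemma 2.1 (in the standard form for a distinct maximum singular value, d[σ̄(M)]²/dx = 2σ̄(M)·Re[x_M^H (∂M/∂x) y_M]) is easy to check against a central finite difference; a sketch with an assumed affine family M(x) = A + xB, so that ∂M/∂x = B:

```python
import numpy as np

def max_sv_and_dirs(M):
    """sigma_max of M with its output/input principal directions x_M, y_M."""
    U, S, Vh = np.linalg.svd(M)
    return S[0], U[:, 0], Vh[0].conj()

# Assumed affine parametric family, for illustration only.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.5, 0.0], [1.0, 0.3]])
M = lambda x: A + x * B            # dM/dx = B

x0, h = 0.7, 1e-6
s, xM, yM = max_sv_and_dirs(M(x0))
analytic = 2.0 * s * np.real(xM.conj() @ B @ yM)        # the lemma's derivative of sigma^2
numeric = (max_sv_and_dirs(M(x0 + h))[0] ** 2
           - max_sv_and_dirs(M(x0 - h))[0] ** 2) / (2.0 * h)
```

The joint sign ambiguity of the singular-vector pair cancels in the product x_M^H B y_M, so the analytic value is well defined.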
In order to improve the assessment of stability and performance characteristics of control systems, some sort of mathematical description of the uncertainty associated with a given nominal model is needed. This description is called the uncertainty representation.

In a fairly general sense, the true modeled object can be represented in terms of its nominal model S_0 and of the modeling error E, by the following relationship:

S_p = Π(S_0, E)    (2.13)

where S_p designates the object obtained when S_0 is perturbed by E, and Π(·) describes how the error relates to the nominal model. The object S_p may represent either the plant or an interconnected system which includes the plant as one component. If, for example, the modeled object is a plant, (2.13) becomes G_p = Π(G_0, E). An admissible error set is called a perturbation class. Given the perturbation class, the relationship Π(·) determines a family of objects around the nominal model; this family is a set that includes a member which is closer to the true modeled object than the nominal model. The relationship Π(·) is determined by the uncertainty description chosen. A mathematical description of uncertainty must satisfy the following requirements [19, 33]: (i) Simplicity: the description should be such that the model is tractable; (ii) Accuracy: the uncertainty class should allow only perturbations that really can occur; (iii) Adequacy: the uncertainty class must admit all possible perturbations. The quality of results obtained from the analysis of perturbed models depends, to some extent, on the uncertainty representation. The following are relevant factors in uncertainty representation: (i) Nature of the model. The uncertainty representation must follow the nature of the nominal model.
For example, if a linearized model is constructed for a system described by a nonlinear input-output relationship, the error can be adequately represented by the difference between the true output vector and the output vector of the model. When the system is represented by a MIMO transfer matrix model, the uncertainty is represented by a dimensionally compatible transfer matrix. If a state space model is used, the uncertainty is represented by dimensionally compatible real perturbations to the quadruple [A, B, C, D]. (ii) Type of the error. The uncertainty may assume either the form of an absolute error or the form of a relative error. In the former case, the uncertainty is represented as an additive term, while in the latter it appears in multiplicative form. (iii) Structure of the uncertainty. This is the most important characteristic of the uncertainty representation. It is related to the knowledge and assumptions made about the mechanisms that generate the uncertainty. If nothing is known about particular causes of uncertainty, or if it is not practical to consider sources of uncertainty individually, the unstructured representation is used. The effects of all, possibly several, sources are lumped together and represented as if caused by only one source. The error is characterized by a norm upper bound, say ||E|| ≤ ε, but is otherwise unconstrained. The norm upper bound completely characterizes a class of unstructured perturbations. When the mechanisms that give rise to uncertainty are known, it is useful, although not required, to adopt a structured representation. It is in general possible to identify at least some of the causes of uncertainty [14], whence it is in general possible to use at least a partially structured representation of uncertainty. An interconnected system whose components are uncertain presents multiple perturbation 'blocks', which can be of different dimensions.
Looking at the whole system, the uncertainty has a structure defined by the position of the blocks. An unstructured representation could be used to cover the various scattered 'blocks.' However, this approach would be conservative, because the norm-bounded but otherwise unconstrained class of unstructured uncertainties would admit perturbations which don't satisfy the known block structure. In the following, the general principles given above are applied to uncertainty representation in frequency-domain and state space models.

2.2.2 Representation of Uncertainty in Transfer Matrix Models

Unstructured plant uncertainty. Let us assume that the nominal and the perturbed plants are represented by transfer matrix models, respectively G_0(s) ∈ ℂ^{p×m} and G_p(s) ∈ ℂ^{p×m}, and let Δ(s) represent the uncertainty. The argument 's' may be dropped if the dependence on s is clear from the context. In the unstructured representation, the class of admissible perturbations is characterized by a frequency-dependent norm bound; usually the norm of choice is the induced 2-norm, which coincides with the maximum singular value. An unstructured class which admits all possible Δ in a ball of radius δ(s) in ℂ^{p×m} is defined as:

𝒟_U = {Δ(s) ∈ ℂ^{p×m} : σ̄[Δ(s)] ≤ δ(s) ∈ ℝ_+, ∀s}    (2.14)

Additive representation. If the unstructured uncertainty is meant to account for an absolute error in the nominal model, the representation assumes the following additive form, illustrated by Figure 2-2 (a):

G_p = G_0 + Δ_A, Δ_A ∈ 𝒟_U    (2.15)

Multiplicative representation. This representation accounts for relative errors in the model. It is well suited when the nominal plant has input or output uncertainty.
The perturbed model becomes, for each of these cases, respectively:

G_p = G_0(I_m + Δ_I), Δ_I ∈ 𝒟_U;  G_p = (I_p + Δ_O)G_0, Δ_O ∈ 𝒟_U    (2.16)

When both input and output uncertainty are present, as shown by Figure 2-2 (b), the above expressions combine to give G_p = (I_p + Δ_O)G_0(I_m + Δ_I).

Figure 2-2. Plant uncertainty representation: a) Additive representation; b) Multiplicative representation

Brief analysis. The unstructured representation does not discriminate among sources of uncertainty. Neglected dynamics, which usually contribute high-frequency error components, and parametric variations are considered together. This representation certainly satisfies the simplicity requirement. However, the maximum singular value, used to characterize the class of allowable perturbations, depends on the whole matrix and does not account for the magnitudes or phases of individual elements, or for submatrix structure. Consequently, the accuracy requirement may not be attained, because the class 𝒟_U admits perturbations which cannot physically occur. From the point of view of accuracy, it is preferable to use structured representations. Yet, even when some of the plant error components can be represented in structured form, there exist high-frequency components that require unstructured representation [14].

It is interesting to note that additive and multiplicative representations of plant uncertainty lead to different expressions for the perturbation of compensated plants.
Regarding Figure 2-1 (a), when the additive representation is used, the perturbed compensated plant is given by Q_p = (G_0 + Δ_A)K = G_0 K + Δ_A K ≝ Q_0 + Δ̃_A, while in the case of the output multiplicative uncertainty representation, the perturbed compensated plant is Q_p = (I_p + Δ_O)G_0 K = (I_p + Δ_O)Q_0. Therefore, with the multiplicative representation, the relative error in the compensated plant is the same as in the nominal plant, while the absolute error changes in the additive representation.

Structured plant uncertainty. Structured representations are adopted when it is possible to identify the causes of uncertainty, so that their effects can be linked to specific entries of the transfer matrix. Since individual sources of uncertainty are independently considered, the structured representation is more accurate.

Element-by-element-bounded perturbations. This highly structured representation can be used when frequency-dependent norm bounds for the uncertainty in each element of the nominal transfer matrix are available. The class is characterized by magnitude bounds and unconstrained element phases, and is defined as [33]:

𝒟_S = {Δ(s) ∈ ℂ^{p×m} : Δ⁺_ij ≤ P_ij ∈ ℝ_+, arg(Δ_ij) = θ_ij, 0 ≤ θ_ij < 2π, ∀s}    (2.17)

It has been shown [33] that the class 𝒟_S defined above is a proper subset of the class 𝒟_U given by (2.14). The perturbed plant under element-by-element-bounded additive uncertainty is:

G_p = G_0 + Δ_A, Δ_A ∈ 𝒟_S    (2.18)

This structured class admits all perturbations whose element (i, j) belongs to a ball of radius P_ij around the nominal element G_0(i, j), ∀i ≤ p, ∀j ≤ m. Cases where some elements of the nominal system are exactly known are covered by setting to zero the corresponding elements of P. Since the matrix of upper bounds, namely P, is a nonnegative matrix, this representation permits the use of results from Perron-Frobenius theory in robust stability analysis. Also useful in stability analysis is the result of the following lemma.
Lemma 2.2. For any Δ ∈ 𝒟_S and P ∈ ℝ^{p×m} such that Δ⁺_ij ≤ P_ij,

σ̄(Δ⁺) ≤ σ̄(P)

Proof. For any real matrix A ∈ ℝ^{p×m} and vector x ∈ ℝ^m,

σ̄(A) = ||A||_i2 = sup_{||x||=1} ||Ax||_2 = sup_{||x||=1} [ Σ_{i=1}^p ( Σ_{j=1}^m A_ij x_j )² ]^{1/2}

Therefore,

σ̄(Δ⁺) = sup_{||x||=1} [ Σ_i ( Σ_j Δ⁺_ij x_j )² ]^{1/2};  σ̄(P) = sup_{||x||=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2}

Since Δ⁺_ij ≥ 0, the supremum will occur for some x such that x_j ≥ 0, ∀j; let x̄ be the value of x which maximizes σ̄(Δ⁺). Now,

x̄ ≥ 0, Δ⁺_ij ≥ 0, P_ij ≥ 0  ⇒  P_ij x̄_j ≥ Δ⁺_ij x̄_j, ∀(i, j)

Therefore:

σ̄(Δ⁺) = [ Σ_i ( Σ_j Δ⁺_ij x̄_j )² ]^{1/2} ≤ [ Σ_i ( Σ_j P_ij x̄_j )² ]^{1/2} ≤ sup_{||x||=1} [ Σ_i ( Σ_j P_ij x_j )² ]^{1/2} = σ̄(P)  ■
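Lemma 2.2, together with the companion fact σ̄(Δ) ≤ σ̄(Δ⁺), is easy to exercise numerically on random members of the class 𝒟_S; a sketch with an assumed bound matrix P:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_max = lambda M: np.linalg.svd(M, compute_uv=False)[0]

P = np.array([[0.5, 0.1],
              [0.2, 0.4]])                      # assumed element-wise magnitude bounds P_ij
theta = rng.uniform(0.0, 2.0 * np.pi, P.shape)  # unconstrained element phases
mags = rng.uniform(0.0, 1.0, P.shape) * P       # magnitudes with |Delta_ij| <= P_ij
Delta = mags * np.exp(1j * theta)               # a random member of the class D_S
Delta_plus = np.abs(Delta)                      # Delta^+, the matrix of element magnitudes

chain = (sigma_max(Delta), sigma_max(Delta_plus), sigma_max(P))
```

The chain σ̄(Δ) ≤ σ̄(Δ⁺) ≤ σ̄(P) holds for every draw, which is what makes the single nonnegative matrix P usable as a bound for the whole structured class.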


This proof is an alternative to the original proof [33]. It is known that, ∀Δ ∈ 𝒟_S, σ̄(Δ) ≤ σ̄(Δ⁺).

Therefore, the uncertainty can be written as an additive perturbation to the open-loop transfer matrix. This approach, however, is inadequate for two reasons. The first reason is that, in order to render this formulation useful, it is necessary to compute or estimate a norm bound for the perturbation Δ_AQ. Although this possibly can be done for simple systems, it might become very cumbersome in the case of complex systems. The second and most important reason is that the additive unstructured representation does not carry information about the structure of the perturbation in the interconnected system.

Additive block-diagonal representation. An alternative approach, which takes into account the structure of the uncertainty, is the block-diagonal representation. It derives from the technique introduced by Safonov and Athans [48] for dealing with systems involving simultaneous perturbations in the context of the LQG regulator problem, therefore in time-domain analysis. The essence of the technique is to rearrange the system in such a way that the perturbations are isolated in a block-diagonal matrix. The technique was explored by Safonov [46] in the derivation of 'conic sector conditions' for stability of MIMO systems, and by Doyle [12] in the derivation of necessary and sufficient conditions for stability under structured perturbations. A diagonal representation of simultaneous perturbations can be obtained for any system, regardless of the dimensionality of each particular perturbation. Both parameter-dependent additive perturbations and actuator and/or measurement uncertainties, represented respectively as input and output perturbations, can be handled [39]. Let us consider its application to the system in Figure 2-3. The loops involving the perturbations Δ_I and Δ_O can be regarded as additional system loops, through which the nominal system and the perturbations exchange signals. The


nominal feedback loop provides a signal to the i-th perturbation through the output y_Δi, and receives a signal through the input u_Δi. The perturbations may be isolated in a block-diagonal structure through the following simple procedure:

Procedure 2.1. Diagonalization of uncertainty in frequency-domain systems:
1. Suppose the additional system loops are open, as in Figure 2-4 (a);
2. Compute the transfer function from each system input to each system output. Inputs and outputs now include the nominal input vector r and the nominal output vector y, as well as the perturbation outputs u_Δi and perturbation inputs y_Δi;
3. Arrange the transfer functions in matrix form. This step will generate the representation in Figure 2-4 (b), which is referred to as the 'M − Δ' form of the perturbed system.

Figure 2-4. Block-diagonal representation: a) Open perturbation loops; b) The M − Δ form

The perturbation in Figure 2-4 (b) is Δ = diag(Δ_I, Δ_O), therefore a block-diagonal structure; y_Δ and u_Δ are vectors containing the uncertainty inputs and outputs, respectively. The transfer matrix M(s) is called the nominal interconnection structure. The (1,1)-submatrix relates the collective output of the uncertainties to their collective inputs, while the (2,2)-submatrix

is the nominal transfer matrix from r to y. For the system in Figure 2-3, M_11 is:

    M_11 = [ (I + KG_0)^{-1}KG_0     (I + KG_0)^{-1}K
            −(I + G_0K)^{-1}G_0      (I + G_0K)^{-1}G_0K ]

so that [y_Δ1; y_Δ2] = M_11 [u_Δ1; u_Δ2], with Δ = diag(Δ_I, Δ_O). Note that the dimension of the square submatrix M_11 depends on the number of simultaneous perturbations. Therefore, even a SISO system subjected to simultaneous perturbations is characterized by a MIMO nominal interconnection structure. Partitioning the interconnection structure according to the dimensions of inputs and outputs, the system can be represented as:

    [y_Δ; y] = [ M_11  M_12; M_21  M_22 ] [u_Δ; r]     (2.20)

From the partition and Figure 2-4 (b), the following relations are obtained:

    u_Δ = Δ y_Δ;    y_Δ = M_11 u_Δ + M_12 r;    y = M_22 r + M_21 u_Δ

Manipulating these equations, one obtains:

    y = [M_22 + M_21 Δ (I − M_11 Δ)^{-1} M_12] r     (2.21)

Thus, the transfer matrix from r to y is given by an upper linear fractional transformation of the uncertainty, namely:

    F_U(M, Δ) := M_22 + M_21 Δ (I − M_11 Δ)^{-1} M_12     (2.22)

A block diagram representation of the LFT is shown in Figure 2-5 below. The expression Δ(I − M_11 Δ)^{-1} represents a feedback loop, with Δ in the direct path, and M_11 in the

feedback path. If Δ = 0, then F_U(M, Δ) simplifies to the nominal transfer matrix from r to y, namely M_22 = (I + G_0K)^{-1}G_0K.

Figure 2-5. Block diagram representation of F_U(M, Δ)

The general case of block-diagonal representation. The technique applied to the simple example above extends to systems having a larger set of localized perturbations. Uncertainties originating from unmodeled dynamics assume the form of norm-bounded, full complex blocks of different dimensions. On the other hand, uncertainty coming from parametric variations assumes the form of real perturbations, which can be repeated. Additionally, fictitious repeated complex scalar perturbations can be used to reformulate a robust performance problem as a robust stability problem [15]. Therefore, in the most general case, the final block-diagonal structure will show (possibly repeated) real scalars, (possibly repeated) complex scalars and full complex blocks of different dimensions. To account for the correct dimensionality of the blocks in the diagonal formulation, a block structure of indices is defined [17]. Assume that M ∈ C^{m×m}, consider the triple (m_r, m_c, m_C) of nonnegative integers such that m_r + m_c + m_C := m̄ ≤ m, and define

the block structure K associated with M by:

    K(m_r, m_c, m_C) = (k_1, ..., k_{m_r}, k_{m_r+1}, ..., k_{m_r+m_c}, k_{m_r+m_c+1}, ..., k_{m_r+m_c+m_C})     (2.23)

where, for compatibility of dimensions, Σ_{i=1}^{m̄} k_i = m. Given K, a family of associated m × m block-diagonal perturbations is defined by:

    X_K = {Δ = block diag(δ^r_1 I_{k_1}, ..., δ^r_{m_r} I_{k_{m_r}}, δ^c_1 I_{k_{m_r+1}}, ..., δ^c_{m_c} I_{k_{m_r+m_c}}, Δ^C_1, ..., Δ^C_{m_C})}     (2.24)

where δ^r_i ∈ R, δ^c_i ∈ C and Δ^C_i ∈ C^{k_{m_r+m_c+i} × k_{m_r+m_c+i}}. As required by the dimension of M, X_K ⊂ C^{m×m}. Each δ^r_i I_{k_i} represents a repeated real scalar, each δ^c_i I_{k_{m_r+i}} represents a repeated complex scalar, and each Δ^C_i represents a full complex block. The general form can be particularized through a convenient choice of indices. For example, if there is no parametric uncertainty, m_r = 0. In the case of purely real perturbations, the adequate setting is m_c = 0 and m_C = 0. A class of allowable perturbations, having block sizes determined by the block structure, is defined from (2.24) by specifying an upper bound on the norm:

    X_K(b) = {Δ : Δ ∈ X_K, σ̄(Δ) ≤ b, b ∈ R_+}     (2.25)

2.2.3 Representation of Uncertainty in State Space Models

Let us now assume that the nominal plant is described by a state space model. The dynamics of the physical process are captured by the matrix A. Since A has fixed dimension in the state space model, the dynamical order of the process is well determined. Thus, uncertainty caused by neglected high-order dynamics cannot be taken into account in the usual state space model. On the other hand, the state space model is well suited to the representation of parametric uncertainty. Variations in system parameters are represented as perturbations in the

elements of the real matrices that define the model. The perturbations can be collected in the error matrix E, so that the perturbed matrix is represented by

    M_p = M + E     (2.26)

where M can be any one of the real matrices in the state space representation. Particular forms of E are discussed below.

Unstructured uncertainty. As in frequency-domain models, the class of unstructured errors is characterized by a norm upper bound:

    E_U = {E : ||E|| ≤ ε}     (2.27)
The perturbed matrix takes the form of an interval matrix:

    M_p = M + E,    E ∈ E_SI     (2.30)

If only ε is known, this representation can be used with the error matrix elementwise bounded by the matrix P = ε U_n, where U_n(i,j) = 1, i,j = 1, ..., n [58]. If some of the entries of M are exactly known, the corresponding entries of U_n are set to zero, thus accommodating the extra information on the error structure.

Dependent variations of elements. This case differs from the previous one in that it admits correlated variations between entries of M. This assumption is actually required in practical cases. For example, consider the case of an open-loop state space model in which the output matrix has some uncertain entries, due to variations in a physical parameter that affects the output gain. If an output feedback controller is used, the dynamical matrix of the closed-loop system is likely to have several uncertain entries. However, the variations of these entries are not free, since they depend on the same physical parameter. A convenient representation for such cases is to obtain the error matrix in terms of the physical parameters. Suppose that an m-dimensional vector of parameters can be identified, and assume that the dependence of M on each parameter is linear. This assumption is not too restrictive, since it is possible to redefine nonlinear combinations of physical parameters such that the assumption is satisfied. The perturbation class can be characterized as:

    E_SD = {E : E = Σ_{k=1}^{m} p_k E_k,  |p_k| ≤ α_k,  k = 1, ..., m}     (2.31)

Each E_k is a constant matrix which expresses the structural dependence of M on the parameter p_k. Such a representation has been largely used in stability analysis [4, 51, 61].
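The affine description (2.31) lends itself to direct numerical experimentation. The sketch below (Python with numpy; the nominal matrix, structure matrices E_k and bounds α_k are illustrative assumptions, not data from the text) assembles M_p = M + Σ_k p_k E_k and samples the parameter hypercube — a grid check only, which can suggest but never prove robust stability.

```python
import numpy as np
from itertools import product

def perturbed_matrix(M, E_list, p):
    """M_p = M + sum_k p_k E_k: affine dependence on the parameters, as in (2.31)."""
    return M + sum(pk * Ek for pk, Ek in zip(p, E_list))

# Illustrative data assumed for this sketch.
M = np.array([[-2.0, -1.0], [1.0, -1.0]])
Es = [np.array([[-1.0, 0.0], [0.0, 0.0]]),   # structure matrix E_1
      np.array([[0.0, 0.0], [0.0, -1.0]])]   # structure matrix E_2
alphas = (0.5, 0.5)                           # |p_k| <= alpha_k

# Sample the hypercube on a grid and check eigenvalues at each point;
# this is a sampling test, not a robust stability certificate.
stable_on_grid = all(
    np.linalg.eigvals(perturbed_matrix(M, Es, p)).real.max() < 0.0
    for p in product(*(np.linspace(-a, a, 11) for a in alphas)))
```

Such a grid scan is useful as a quick falsification device: a single unstable sample disproves robust stability, while an all-stable grid merely fails to disprove it.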

The perturbed matrix is represented by:

    M_p = M + E,    E ∈ E_SD     (2.32)

Notice that M_p = M + Σ_k p_k E_k is (affinely) linear in the parameters. The following example illustrates the use of this representation of parametric uncertainty.

Example 2.1. Consider the circuit diagram represented in Figure 2-6.

Figure 2-6. Elementary electric circuit

Let the input be u(t) = v_i(t) and the output be y(t) = v_o(t). Then, with x = [x_1; x_2], one has:

    ẋ = [ −1/(R_1 C)   −1/C ;  1/L   −R_2/L ] x + [ 1/(R_1 C) ; 0 ] u,    v_o = [ 0   R_2 ] x

Assume that R_1, R_2 are uncertain, and that the components are rated at L = 1H, C = 1F, R_1o = 0.5Ω, R_2o = 1Ω. The nominal matrices are:

    A = [ −2  −1 ; 1  −1 ],    B = [ 2 ; 0 ],    C = [ 0  1 ]

Given that R_1, R_2 are uncertain, the terms they affect can be written as:

    1/R_1 = 1/(R_1o + δ(R_1)) = 2 + p_1;    R_2 = R_2o + δ(R_2) = 1 + p_2

where δ(·) represents the unknown variation. Therefore, the perturbed open-loop model is given by:

    ẋ = (A + p_1 E_1 + p_2 E_2) x + (B + p_1 E_B) u;    y = (C + p_2 E_C) x

with

    E_1 = [ −1  0 ; 0  0 ],    E_2 = [ 0  0 ; 0  −1 ],    E_B = [ 1 ; 0 ],    E_C = [ 0  1 ]

Thus, uncertainties in the physical parameters R_1, R_2 are reflected in the state space model as uncertain input and output gains, plus uncertainties in the dynamic matrix A. Assuming that an output feedback controller K = −1 is used, one has A_c = (A + BKC), where, using the perturbed B and C,

    BKC = [ 0   −(2 + p_1 + 2p_2 + p_1 p_2) ; 0   0 ]

Defining p_3 := p_1 p_2, the closed-loop perturbed matrix becomes:

    ẋ = ( [ −2  −3 ; 1  −1 ] + p_1 [ −1  −1 ; 0  0 ] + p_2 [ 0  −2 ; 0  −1 ] + p_3 [ 0  −1 ; 0  0 ] ) x

Now, let p := [p_1 p_2 p_3]^T. The objective of stability analysis is to find the largest ||p|| such that the perturbed system remains stable, and to characterize the

allowable intervals [−α_k, α_k]. Alternatively, assume that the parameter ranges are known. For example, assume that the variations in R_1, R_2 are within ±10% of the rated values. Then, the parameters lie in the ranges:

    p_1 ∈ [−0.202, 0.202];    p_2 ∈ [−0.100, 0.100];    p_3 ∈ [−0.020, 0.020]

In this case, the objective of stability analysis is to check whether or not the system remains stable for all possible combinations of parameters in the hypercube defined by these ranges.

2.3 Conclusions

This chapter puts together basic concepts concerning system models and uncertainty representation, which will be relevant for subsequent development. Since the objective of this dissertation is the study of robust stability under parametric uncertainty, the state space model will have an important role in the following chapters. Also very useful will be the uncertainty description given by (2.31), which accommodates practical cases of parametric uncertainty, as demonstrated by Example 2.1. In Chapter 5, the problem will be given a frequency-domain treatment, and the diagonalization of uncertainty will be employed. Although the diagonalization technique has been used for some years, no explicit derivation has been found. For this reason, indications found in the literature were put together in Procedure 2.1, and the steps leading to the linear fractional transformation (2.22) were completely worked out. The review of fundamental concepts will continue in the next chapter with a summary of stability conditions.

CHAPTER 3
STABILITY ANALYSIS OF LINEAR SYSTEMS

3.1 Introduction

Stability of control systems is a fundamental requirement, which must be ensured prior to any other. This chapter presents a review of stability conditions and stability analysis techniques applicable to linear systems. Both state space and transfer matrix models are considered; in each case, nominal stability and robust stability under additive perturbations are addressed.

3.2 Stability of State Space Systems

3.2.1 Nominal Stability Condition

Let us consider the linear, time-invariant system

    ẋ(t) = A x(t)     (3.1)

This model can be interpreted as the representation of either an unforced system or a system under a fixed, known input [52]. The following theorem gives a necessary and sufficient condition for asymptotic stability:

Theorem 3.1 [52]. The equilibrium point 0 of (3.1) is asymptotically stable if and only if all the characteristic values of A have strictly negative real parts, that is,

    lim_{t→∞} x(t) = 0  ⟺  Re[λ_i(A)] < 0, ∀i     (3.2)

◊
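Theorem 3.1 translates directly into a numerical test. A minimal sketch in numpy (the stable matrix is the nominal closed-loop matrix of Example 2.1; the unstable matrix is a hypothetical check case):

```python
import numpy as np

def is_hurwitz(A):
    """Theorem 3.1: asymptotic stability holds iff every eigenvalue of A
    has a strictly negative real part."""
    return bool(np.linalg.eigvals(A).real.max() < 0.0)

A_stable = np.array([[-2.0, -3.0], [1.0, -1.0]])   # nominal closed-loop matrix of Example 2.1
A_unstable = np.array([[1.0, 0.0], [0.0, -2.0]])   # hypothetical check case
```

Floating-point eigenvalues near the imaginary axis make this test fragile at the stability boundary, which is one motivation for the algebraic criteria discussed next.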

An asymptotically stable linear system is globally asymptotically stable, because ||x(t)|| → 0 independently of the initial state x(t_0). Equation (3.2) states that asymptotic stability depends on the eigenvalues of A. However, it is not necessary to compute the eigenvalues in order to check stability. The Routh-Hurwitz criterion gives a necessary and sufficient condition for stability based on the signs of the coefficients of the characteristic polynomial. Furthermore, the Lyapunov direct method permits sufficient conditions for stability to be derived from a matrix function involving A.

Nominal stability assessment through the Lyapunov direct method. The stability properties of the equilibrium point x(t) = 0 of the system ẋ(t) = Ax(t) can be determined through the Lyapunov direct method, which does not require the computation of the characteristic polynomial. According to Lyapunov theory, a sufficient condition for global asymptotic stability of the equilibrium point x(t) = 0 is the existence of a scalar positive definite function of x, say V(x), having a negative definite time derivative V̇(x) [52]. For LTI systems, the natural choice of a Lyapunov function candidate is the quadratic function

    V(x) = x^T P x     (3.3)

where P is a real symmetric matrix. As long as P is positive definite, the scalar function V(x) is positive definite. The time derivative of the quadratic function is given by:

    V̇(x) = ẋ^T P x + x^T P ẋ = x^T (A^T P + P A) x := −x^T Q x     (3.4)

from which the matrix Lyapunov equation, relating the matrices A, P and Q, is obtained:

    A^T P + P A = −Q     (3.5)

Global asymptotic stability of the equilibrium point x = 0 of ẋ = Ax(t) is ensured if, for a given A, it is possible to find symmetric positive definite matrices P and Q satisfying

equation (3.5). It is so because, if such P and Q exist, V(x) is a scalar positive definite function whose time derivative is negative definite. On the other hand, if there exists a positive definite Q such that the corresponding P is negative definite, the equilibrium point is unstable. The following theorem formalizes the relationship between the asymptotic stability of A and the matrix Lyapunov equation.

Theorem 3.2 [52]. The following statements are equivalent, ∀A ∈ R^{n×n}:
1. All eigenvalues of A have strictly negative real parts;
2. For every positive definite Q ∈ R^{n×n}, equation (3.5) has a unique, positive definite solution for P;
3. There exists some positive definite matrix Q ∈ R^{n×n} such that equation (3.5) has a unique, positive definite solution for P.

◊

This theorem provides a computational device for assessing stability without computing the eigenvalues of A. Choosing any positive definite Q and solving (3.5) for P, if the solution exists and is unique and positive definite, then A is asymptotically stable. If there is no solution, or if the solution is either not unique or not positive definite, then A is not asymptotically stable.

3.2.2 Assessment of Robust Stability

Robust stability assessment through the Lyapunov direct method. Let us consider the perturbed state equation

    ẋ(t) = A_p x(t) = (A + E) x(t)     (3.6)

where the nominal matrix A is asymptotically stable. Since A is stable, the matrix Lyapunov equation for the nominal system, namely A^T P + P A = −Q, has a unique, positive definite solution P for every positive definite matrix Q; let P_0 be the solution corresponding to some positive definite Q_0. Now, let V_p(x) = x^T P x, where P is symmetric and positive definite, be a Lyapunov function candidate for the perturbed system (3.6).
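Before carrying the perturbation analysis further, the computational device described after Theorem 3.2 can be sketched numerically. Solving (3.5) by Kronecker vectorization is an implementation choice made here to keep the sketch self-contained, not a method prescribed by the text:

```python
import numpy as np

def lyapunov_P(A, Q):
    """Solve the matrix Lyapunov equation A^T P + P A = -Q for P (3.5),
    using the row-major identity vec(A^T P + P A) = (A^T kron I + I kron A^T) vec(P).
    A singular coefficient matrix signals eigenvalues of A summing to zero,
    in which case A cannot be Hurwitz."""
    n = A.shape[0]
    L = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    return np.linalg.solve(L, -Q.flatten()).reshape(n, n)

def is_stable_by_lyapunov(A):
    """Theorem 3.2 as a test: with Q = I, A is Hurwitz iff the unique
    solution P of (3.5) is positive definite."""
    P = lyapunov_P(A, np.eye(A.shape[0]))
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    return bool(np.min(np.linalg.eigvalsh(P)) > 0)

A = np.array([[-2.0, -3.0], [1.0, -1.0]])  # stable closed-loop matrix of Example 2.1
P0 = lyapunov_P(A, np.eye(2))
```

Note that no eigenvalue of A is computed; stability is read off from the definiteness of P, exactly in the spirit of the theorem.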
The time derivative of V_p(x) is:

    V̇_p(x) = ẋ^T P x + x^T P ẋ = [(A + E)x]^T P x + x^T P [(A + E)x]
            = x^T [(A^T P + P A) + (E^T P + P E)] x

Let us choose P = P_0, the positive definite matrix defined above. Then, the last equation becomes

    V̇_p(x) = −x^T [Q_0 − (E^T P_0 + P_0 E)] x := −x^T Q_p x     (3.7)

According to Theorem 3.2, since P_0 is positive definite, A_p is asymptotically stable if Q_p is positive definite. Therefore, the robust stability analysis problem becomes that of finding conditions on E which ensure the positive definiteness of Q_p. Certainly, the conditions that can be derived depend on the description of the uncertainty E. Although stability conditions obtained from the Lyapunov direct method are only sufficient, a positive feature of the method is that it can be applied with virtually all uncertainty descriptions, including time-varying and nonlinear uncertainties. In Chapter 4, a detailed treatment of stability conditions according to the Lyapunov direct method will be given for the case of E belonging to the class E_SD defined by (2.31).

Other results

A Perron radius stability bound [44]. Sufficient conditions for (A + E) being asymptotically stable are that A be stable and that (A + E) have no eigenvalues on the imaginary axis of the complex plane, for all E in an admissible class.
It can be shown that (A + E) has no eigenvalue on the imaginary axis if there exists a nonsingular matrix R ∈ R^{n×n} such that

    || R E (jωI_n − A)^{-1} R^{-1} ||_p < 1,    ∀ω ≥ 0, ∀E     (3.8)

Assume that the uncertainty can be decomposed as E = S_1 Δ_E S_2, where S_1 ∈ R^{n×p} and S_2 ∈ R^{q×n} are known constant matrices which account for the structure, and the matrix Δ_E ∈ R^{p×q}, p ≤ n, q ≤ n, contains the perturbation factors. Using condition (3.8), with the further elementwise assumption that |(Δ_E)_ij| ≤ ε e_ij, where e_ij > 0 and ε > 0 is unknown, the following sufficient robust stability condition can be obtained [44]:

    ε < { sup_{ω≥0} π[ |S_2 (jωI − A)^{-1} S_1| U ] }^{-1}     (3.9)

where U = [e_ij] and π(·) is the Perron eigenvalue. The advantage of a condition based on the Perron eigenvalue is that it is easily computable; however, it can be too conservative. It will be shown in Chapter 5 that a less conservative robust stability condition can be obtained by explicitly using Perron scaling. The relevant concepts of Perron theory are reviewed in Section 3.4 ahead.

Stability radius condition [24]. The objective of the stability radius method is to compute the distance from the stable matrix A to the set of unstable matrices of the same dimensions. The distance is measured by the smallest norm of a destabilizing matrix, namely the smallest norm of E such that (A + E) has a purely imaginary eigenvalue.

Considering the decomposition

    ẋ(t) = (A + E) x(t) = (A + BDC) x(t)     (3.10)

where A ∈ R^{n×n} is stable, B ∈ R^{n×m} and C ∈ R^{p×n} are known constant matrices which define the uncertainty structure, and D ∈ R^{m×p} is a matrix of unknown factors, the stability radius of A is:

    r_R(A; B, C) = inf { ||D|| : (A + BDC) unstable }     (3.11)

An analytical expression for the real stability radius has been obtained [24], but the computation is too complex, even for unstructured perturbations.
In the case of structured perturbations of rank 1, namely when either only one row or only one column of A is perturbed by each factor, the computational burden of the analytical expression is considerably simplified. Letting G(s) = C(sI − A)^{-1}B, denoting by G_R(jω) and G_I(jω), respectively, the real and imaginary parts of G(jω), and by Ω and Ω̄, respectively, the set of frequency points for which G_I(jω) = 0 and its complement in R, the real stability radius for the case of rank-1 perturbations is given by:

    r_R(A; B, C) = min { [ max_{ω∈Ω} ||G(jω)|| ]^{-1},
                         [ sup_{ω∈Ω̄} ( ||G_R(jω)||² − (G_R(jω)^T G_I(jω))² / ||G_I(jω)||² )^{1/2} ]^{-1} }     (3.12)

Therefore, in the case of rank-one perturbations the computation of the real stability radius involves a one-dimensional optimization problem. If only one entry of A is under perturbation, then D and the associated G(s) become scalars; the second term on the right side of (3.12) becomes infinite, and the real stability radius is easily computable.
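For the single-entry case, the real stability radius can also be estimated by brute force, scanning the scalar factor d until A + d·bc loses stability. The data below are assumptions made for this sketch (the stable closed-loop matrix of Example 2.1, with only entry (1,1) perturbed), not an example worked in the text:

```python
import numpy as np

A = np.array([[-2.0, -3.0], [1.0, -1.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 0.0]])

def smallest_destabilizing(A, b, c, ds):
    """Return the smallest scanned d for which A + d*b*c is not Hurwitz;
    a scan, not the analytical formula (3.12)."""
    for d in ds:
        if np.linalg.eigvals(A + d * (b @ c)).real.max() > 1e-7:
            return d
    return np.inf

r_est = smallest_destabilizing(A, b, c, np.arange(0.01, 5.0, 0.01))
# For this example, det(sI - A - d*b*c) = s^2 + (3 - d)s + (5 - d), so the
# Routh condition 3 - d > 0 places the first loss of stability at d = 3.
```

The scan recovers the boundary value d ≈ 3 predicted by the Routh condition, at the cost of a one-dimensional sweep rather than a closed-form evaluation.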

3.3 Stability of Transfer Matrix Models

3.3.1 Nominal Stability Analysis

Input-output stability. A linear system is Bounded-Input, Bounded-Output (BIBO) stable if an input bounded in magnitude always produces a bounded output. Let H(s) be a matrix whose elements are proper rational functions of s. H(s) can be written as

    H(s) = N(s)/d(s),    d(s) = Π_i (s − p_i)^{d_i}     (3.13)

where d_i is the multiplicity of the pole p_i and d(s), the denominator polynomial, is given by the least common denominator of all (non-identically zero) minors of H(s) [39]. The transfer matrix, which was assumed proper, is stable if all poles p_i are in the open LHP. If p_j = 0 for some j, then stability requires that the multiplicity of p_j = 0 be 1. Under the assumption that each element is a proper rational function of s, the transfer matrix possesses a state space realization [A, B, C, D], related to the transfer matrix by H(s) = C(sI − A)^{-1}B + D. Although the transfer matrix representation of a system is unique, the state space realization is not. This transfer matrix can be rewritten as

    C(sI − A)^{-1}B + D = Z(s)/det(sI − A) = Z(s)/Π_{i=1}^{n}[s − λ_i(A)]     (3.14)

If there are no cancellations of terms of the form [s − λ_i(A)] between the denominator and all the elements of the numerator in (3.14), then (3.13) and (3.14) are equivalent; the pole polynomial d(s) of the transfer matrix and the characteristic polynomial det(sI − A) are the same. In this case, input-output stability is equivalent to the asymptotic stability of the dynamic matrix A.
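The role of cancellations in (3.14) can be seen on a small hypothetical realization (chosen for this sketch, not taken from the text) in which an unstable mode is disconnected from the input and therefore cancels in H(s):

```python
import numpy as np

# The mode at +1 never appears in the transfer function: the realization is
# not minimal, H(s) = C(sI - A)^{-1} B = 1/(s+1) is BIBO stable, yet A is
# not Hurwitz -- precisely the cancellation situation described for (3.14).
A = np.array([[-1.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

def H(s):
    """Evaluate C (sI - A)^{-1} B at a complex point s."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

unstable_mode = np.linalg.eigvals(A).real.max()   # positive: A is not Hurwitz
```

The minimality condition stated next rules out exactly this situation.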

A necessary and sufficient condition for no cancellations of system poles in (3.14) is that the state space realization [A, B, C, D] be a minimal realization of the dynamic system, that is, be state controllable and observable.

Internal stability of closed-loop systems. Asymptotic stability of closed-loop systems, like the feedback system shown in Figure 2-1 (a), is equivalent to the internal stability of the loop [53]. A closed-loop LTI system is internally stable if any two points of the loop are connected through an exponentially stable transfer matrix [38]. Let K(s) in Figure 2-1 (a) be a stabilizing compensator for G_0(s), and let r_d designate an external signal placed at the plant input. The vector [y, u]^T, formed by the outputs of plant and compensator, is related to the vector [r, r_d]^T of their inputs by:

    [y; u] = H(G_0, K) [r; r_d],    H(G_0, K) = [ (I + G_0K)^{-1}G_0K     (I + G_0K)^{-1}G_0
                                                  (I + KG_0)^{-1}K       −(I + KG_0)^{-1}KG_0 ]     (3.15)

Therefore, internal stability of the unity feedback system with cascade compensation is equivalent to the stability of the four transfer matrices in H(G_0, K). The characteristic polynomial of each of these matrices must be checked in order to assess the internal stability of the closed-loop system. Also, it can be shown that external stability and internal stability of the closed-loop system are equivalent if the state space representations of the plant and controller are stabilizable and detectable [53]. Note that, if the compensator K is already known to be stable, then the stability of (I + G_0K)^{-1}G_0 is necessary and sufficient for the stability of H(G_0, K).

Spectral radius condition for stability. The term (I + G_0K)^{-1}G_0 represents the transfer matrix of a feedback loop, with G_0 in the forward path and K in the feedback path. This loop can be represented in state space form by Γ_c = [A_c, B_c, C_c, D_c]. Stability of the feedback loop depends on the pole polynomial of its transfer matrix; therefore, it depends on the characteristic polynomial of A_c. The following result relates the characteristic polynomial of A_c to the characteristic polynomials of A_G and A_K. Assume that G_0(s) and K(s) are proper transfer functions having respectively minimal realizations [A_G, B_G, C_G, D_G] and [A_K, B_K, C_K, D_K], and define the return-difference operator as

    F(s) := [I + K(s)G_0(s)]     (3.16)

further assuming that det[F(∞)] = det[I + K(∞)G_0(∞)] = det[I + D_K D_G] ≠ 0. Let φ_c be the closed-loop characteristic polynomial. Then [26]:

    φ_c = det(sI − A_c) = det(sI − A_G) det(sI − A_K) det[F(s)]/det[F(∞)]
        = Π_{i}[s − λ_i(A_G)] Π_{i}[s − λ_i(A_K)] det[F(s)]/det[F(∞)]     (3.17)

The important fact revealed by this equation is that, when A_G and A_K are Hurwitz, the matrix A_c is Hurwitz if and only if all the zeros of det[I + K(s)G_0(s)] have negative real parts. It is important to notice [11] that, if cancellations of terms [s − λ_i(·)] occur between the left and the right sides of equation (3.17), the zeros of det[I + K(s)G_0(s)] are a proper subset of the closed-loop eigenvalues.
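Relation (3.17) is easy to verify numerically. The SISO example below is an assumption made for this sketch (G_0(s) = 1/(s+1), so D_G = 0 and det F(∞) = 1, and K(s) = (s+2)/(s+3)); it is not an example from the text:

```python
import numpy as np

A_G, B_G, C_G = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
A_K, B_K, C_K, D_K = np.array([[-3.0]]), np.array([[1.0]]), np.array([[-1.0]]), np.array([[1.0]])

# Closed loop under negative feedback with D_G = 0:
# x_g' = (A_G - B_G D_K C_G) x_g + B_G C_K x_k,  x_k' = -B_K C_G x_g + A_K x_k
A_c = np.block([[A_G - B_G @ D_K @ C_G, B_G @ C_K],
                [-B_K @ C_G,            A_K      ]])

def det_sI(M, s):
    return np.linalg.det(s * np.eye(M.shape[0]) - M)

def F(s):
    """Return difference I + K(s) G_0(s)."""
    K  = C_K @ np.linalg.solve(s * np.eye(1) - A_K, B_K) + D_K
    G0 = C_G @ np.linalg.solve(s * np.eye(1) - A_G, B_G)
    return np.eye(1) + K @ G0

s0 = 1.0 + 2.0j
lhs = det_sI(A_c, s0)                                        # det(sI - A_c)
rhs = det_sI(A_G, s0) * det_sI(A_K, s0) * np.linalg.det(F(s0))
```

Both sides evaluate to the same closed-loop characteristic polynomial, here s² + 5s + 5, at any test point s0 where the realizations are well defined.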

Assuming that G_0(s) and K(s) are stable, equation (3.17) shows that a necessary and sufficient condition for stability of a feedback loop is that, for all s such that Re(s) ≥ 0,

    det[I + KG_0(s)] ≠ 0  ⟺  λ_i[I + KG_0(s)] ≠ 0, ∀i
                          ⟺  λ_i[KG_0(s)] ≠ −1, ∀i
                          ⟸  ρ[KG_0(s)] < 1
                          ⟸  σ̄[KG_0(s)] < 1     (3.18)

Thus, a small loop gain is a sufficient condition for stability of a feedback loop. Internal stability of a feedback loop can alternatively be checked through the Nyquist criterion, which is reviewed next.

Nyquist stability criterion. The Nyquist stability test permits the assessment of closed-loop stability without requiring the solution of the closed-loop characteristic polynomial. Due to its graphical character, it is very appealing in computer-aided analysis and design environments. Let us initially discuss the case of a scalar system. Suppose that plant and controller in Figure 2-1 (a) are scalar transfer functions. Let q_0(s) = g_0(s)k(s) = n(s)/d(s), and let f(s) represent the return-difference transfer function. Then,

    f(s) = 1 + q_0(s) = [d(s) + n(s)]/d(s)     (3.19)

It can be easily verified that

    f(s) = φ_c(s)/φ_o(s)     (3.20)

where φ_o(s), φ_c(s) designate respectively the open-loop and the closed-loop characteristic polynomials; let p_o, p_c be their respective numbers of unstable roots. Closed-loop stability analysis requires the determination of the number p_c; for closed-loop stability, p_c must be zero. The Nyquist criterion obtains p_c from the knowledge of p_o and the application of the principle of the argument to equation (3.20). Let n_o be the number of clockwise encirclements of the origin by the map of the standard Nyquist contour under f(s). Equivalently, n_o corresponds to the number of clockwise encirclements of the critical point (−1, j0) by the map of the contour under q_0(s). Since n_o corresponds to the difference between the numbers of roots of the numerator and denominator of f(s) inside the contour, which are respectively p_c and p_o, the following relationship is satisfied:

    p_c = n_o + p_o     (3.21)

The closed-loop system is stable if and only if p_c = 0 or, equivalently, if and only if n_o = −p_o. That is, if and only if the map of the Nyquist contour by q_0(s) encircles the critical point, in the anticlockwise direction, a number of times equal to the number of unstable roots of φ_o.

Now, consider the case in which G_0(s), K(s) in Figure 2-1 (a) are MIMO transfer matrices. Let φ_o, φ_c be respectively the open-loop and the closed-loop characteristic polynomials, and consider the return-difference operator defined by equation (3.16). Defining γ = det[I + K(∞)G_0(∞)], equation (3.17) shows that

    det[F(s)] = γ φ_c(s)/φ_o(s)     (3.22)

Applying the principle of the argument to (3.22) relates the difference p_c − p_o and the number of encirclements of the origin by the characteristic loci of F(s), which is the same as the number of encirclements of the critical point by the characteristic loci of Q_0(s). The characteristic loci of F(s) are the maps of the Nyquist contour under the characteristic values of F(s). Let f_i(s) and q_i(s) be the characteristic values of F(s) and Q_0(s), respectively, and recall that Q_0 ∈ C^{p×p}. The characteristic values q_i(s) are the solutions of the characteristic equation

    V(q, s) := det[q(s)I − Q_0(s)] = 0

In general, the characteristic equation can be factored as a product of irreducible polynomials, V(q, s) = V_1(q, s) ⋯ V_l(q, s). Each polynomial V_i is a polynomial of order n_i in q, with coefficients a_ij(s), j = 1, ..., n_i, such as:

    V_i(q, s) = q^{n_i}(s) + a_{i1}(s) q^{n_i−1}(s) + ⋯ + a_{i n_i}(s)
The map of the standard Nyquist contour under the characteristic values of Q_0(s) generates a set of closed curves, which constitute the characteristic loci of Q_0(s). The number of encirclements of the critical point by the characteristic loci of Q_0(s) and the number of unstable roots of φ_o(s) are used to assess closed-loop stability. The generalized criterion is formally stated as follows:

Generalized Nyquist criterion. Let n_o be the number of encirclements of the critical point by the characteristic loci of the open-loop transfer matrix Q_0(s), and let p_c and p_o be the number of unstable roots of φ_c and φ_o, respectively.
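For the scalar criterion (3.21), the encirclement count n_o can be estimated numerically from the unwrapped phase of the return difference along the imaginary axis. The loop gains below are hypothetical examples chosen for this sketch, not from the text:

```python
import numpy as np

def clockwise_encirclements(q0, w=np.linspace(-50.0, 50.0, 400001)):
    """Estimate the clockwise encirclements of (-1, j0) by q_0(jw),
    i.e. of the origin by f(jw) = 1 + q_0(jw), via phase unwrapping.
    For strictly proper q_0 the large semicircle of the contour maps
    near +1 and contributes nothing."""
    f = 1.0 + q0(1j * w)               # return difference along the jw-axis
    theta = np.unwrap(np.angle(f))     # continuous phase along the contour
    # Net counterclockwise winding; the clockwise count n_o is its negative.
    return int(round(-(theta[-1] - theta[0]) / (2.0 * np.pi)))

n0_stable   = clockwise_encirclements(lambda s: 1.0 / ((s + 1) * (s + 2)))
n0_unstable = clockwise_encirclements(lambda s: 10.0 / (s + 1) ** 3)
# Both open loops are stable (p_o = 0), so (3.21) gives p_c = n_o: the first
# loop is closed-loop stable, the second has closed-loop roots in the RHP.
```

This mirrors the graphical test: counting how often the frequency response winds about the critical point replaces solving the closed-loop characteristic polynomial.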