
Citation
Permanent Link: http://ufdc.ufl.edu/AA00039334/00001

Material Information
 Title: Stability analysis and control design for uncertain and time-delay systems
 Creator: Al-Shamali, Saleh A.
 Publication Date: 2004
 Language: English
 Physical Description: x, 81 leaves : ill. ; 29 cm.

Subjects
 Subjects / Keywords: Control theory -- Mathematical models (lcsh); Linear systems (lcsh); Time delay systems (lcsh); Dissertations, Academic -- Electrical and Computer Engineering -- UF
 Genre: bibliography (marcgt); theses (marcgt); nonfiction (marcgt)

Notes
 Thesis: Thesis (Ph.D.)--University of Florida, 2004.
 Bibliography: Includes bibliographical references.
 General Note: Printout.
 General Note: Vita.
 Statement of Responsibility: by Saleh A. Al-Shamali.

Record Information
 Source Institution: University of Florida
 Holding Location: University of Florida
 Resource Identifier: 024492136 (ALEPH); 880637148 (OCLC)

STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS
By
SALEH A. AL-SHAMALI
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2004
Copyright 2004
by
Saleh A. Al-Shamali
I dedicate this work to my parents and my wife Muna.
ACKNOWLEDGMENTS
I wish to express my deep gratitude to my advisors, Dr. Haniph Latchman and Dr. Oscar Crisalle, for their support and guidance during my Ph.D. study. They gave me a lot of freedom and flexibility in choosing my research topic. I am thankful for their encouragement, which gives me confidence as I begin my career in academia as an assistant professor shortly after I graduate with my Ph.D.
I also wish to thank Dr. Tan Wong and Dr. Norman Fitz-Coy for serving on my committee and for providing constructive ideas to further develop my research.
I am very thankful to Dr. William Hager and Dr. Sergei Pilyugin for their help with some of the mathematical difficulties I ran into during my research.
I would also like to thank my LIST lab colleagues, in particular Dr. Baowei Ji, who offered his help and shared with me his extensive knowledge in the area of controls, and Mr. Minkyu Lee, for his recognized role in administering the LIST lab and for taking the time to solve the many technical problems I ran into while he worked on his own Ph.D. dissertation. I also wish to thank my other LIST lab colleagues, Mr. Yu-Ju Lin, Mr. Kartikeya Tripathi, and Mr. Suman Srinivasan, among others. I had a wonderful time and enjoyed being around them.
I am grateful to my parents, sisters, and brothers back in Kuwait for their support, and to my wife for standing behind me and for taking a long leave from her job to stay with me and take care of our son, Mohammed. My family has always been a source of inspiration and support for me throughout the course of my Ph.D. research.
TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
 1.1 Robustness Analysis
 1.2 Sliding Mode Control
 1.3 Bilinear Systems
 1.4 Thesis Structure

2 THE NYQUIST ROBUST SENSITIVITY MARGIN
 2.1 Introduction
 2.2 Background
 2.3 The Nyquist Robust Sensitivity Margin
 2.4 Application to Systems with Affine Uncertainty Structure
 2.5 Examples
  2.5.1 Example 1
  2.5.2 Example 2
  2.5.3 Example 3
 2.6 Conclusions
 2.7 Supplementary Calculation Algorithms
  2.7.1 Supporting Circle of an Arc
  2.7.2 Minimum Distance between a Line and a Point
  2.7.3 Identifying Points on the Arc

3 SLIDING MODE CONTROL FOR TIME-DELAY SYSTEMS
 3.1 Introduction
 3.2 Problem Formulation
 3.3 Switching Function and Control Law Design
 3.4 Existence of a Sliding Mode
 3.5 System Stability
 3.6 Example
 3.7 Conclusions

4 STABILIZATION OF TIME-DELAY BILINEAR SYSTEMS
 4.1 Introduction
 4.2 Problem Statement
 4.3 Preliminary Results
 4.4 Main Result
 4.5 Example
 4.6 Conclusions
 4.7 Further Analysis of the System in Lemma 1

5 FUTURE WORK AND DISCUSSIONS
 5.1 Problem 1: Sliding Mode Control for a Delayed Bilinear System
 5.2 Problem 2: Extending the NRSM

APPENDIX

A DERIVATION FOR THE REACHING TIME

B DERIVATION OF A BOUND USED IN CHAPTER 3

C PROOF OF LEMMA 1 OF CHAPTER 4

D PROOF OF CLAIM (i) OF LEMMA 1 OF CHAPTER 4

E THE MATRIX MEASURE - DEFINITION AND PROPERTIES

F THE COMPARISON THEOREM

REFERENCES

BIOGRAPHICAL SKETCH
LIST OF FIGURES

2.1 The uncertain system g(s) = g0(s) + δ(s) in a unity-feedback configuration.

2.2 Uncertainty value sets at a frequency ωi: (a) convex critical value set Vc(ωi), (b) nonconvex critical value set Vc(ωi). Both figures show the worst-sensitivity plant gs(jωi), located closest to the point -1 + j0.

2.3 Illustration of the inverse-sensitivity circle of radius η(ω) introduced in definition (2.11).

2.4 The center z0 and radius r of the supporting circle of the arc A(p1, p2, p3) are determined from the intersection of the auxiliary lines L1 and L2.

2.5 Frame for the value set of system (2.23) at ω = 9, and the corresponding inverse-sensitivity circle. The nominal plant g0(jω) is indicated by the '+' marker.

2.6 Value of kN,s(ω) and kN(ω) as a function of frequency for the first example.

2.7 Frame for the value set of system (2.25) at ω = 4.72 and α = 1.86. The nominal plant g0(jω) is indicated by the '+' marker.

2.8 Plot of the Nyquist robust sensitivity margin kN,s = max_ω kN,s(ω) as a function of the blow-up factor α. The parametric robust stability margin is 1.89, which corresponds to the value of the blow-up factor α that makes kN,s approximately equal to unity.

3.1 Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d).

3.2 Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function with approximation to the signum function (d).

4.1 Graphical interpretation of the differential equation of Lemma 2: (a) derivative graph, (b) solution curves.

4.2 Plot of the trajectories of the system states.

4.3 Plot of the trajectories of the system states.

4.4 Graphical interpretation of the differential equation (4.4) for the case where b > 0: (a) derivative graph, (b) solution curves.

4.5 Graphical interpretation of the differential equation (4.4) for the case where b < 0: (a) derivative graph, (b) solution curves.

5.1 The negative feedback loop of the uncertain system p(s) with a controller c(s).

5.2 The standard M-Δ loop for stability analysis.

5.3 A system with parametric uncertainty in the standard M-Δ loop.

A.1 Plot of the switching function for s(0) > 0 (a), and s(0) < 0 (b).
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS

By
Saleh A. AlShamali
December 2004
Chair: Haniph A. Latchman
Cochair: Oscar D. Crisalle
Major Department: Electrical and Computer Engineering
Uncertainty and time delay in real systems constitute two major challenges facing control engineers, since both can contribute to instability or poor performance. In this dissertation, three analysis and control-design problems are addressed. These problems involve linear systems with a parametric uncertainty structure, and linear and bilinear systems with time delay.
In the first problem, the Nyquist robust sensitivity margin is proposed as a scalar metric for robust stability and robust performance. The work was motivated by the critical direction theory (CDT), in which attention was given to plants that lie along the critical direction. The advantage of the new metric, however, is that it takes into account plants that are in close proximity to the critical point -1 + j0 but that do not lie along the critical direction. The introduced approach therefore has the advantage of capturing the worst-case sensitivity as well as providing a more meaningful indication of robust stability. The concept has been applied successfully to a class of linear systems with an affine uncertainty structure.
The second problem involves designing a sliding mode control (SMC) to stabilize a class of time-delay linear systems. The delay is assumed to exist in both the control variable and the state vector. The system is first rendered input-delay free through an appropriate transformation. Then an SMC is designed for the state-delay system. Sufficient conditions ensuring the asymptotic stability of the closed-loop system have been derived.
The third problem addresses the stabilization of a class of time-delay bilinear systems. A state-feedback control law is designed to ensure the asymptotic stability of the delayed bilinear system. The work builds on two simple scalar systems and utilizes the results to prove stability of a more complicated system. The analysis allowed us to obtain a bound on the maximum value of the delay that the system can tolerate. Furthermore, a region of attraction based on the initial condition of the system states is established.
CHAPTER 1
INTRODUCTION
1.1 Robustness Analysis
The robustness analysis problem investigates the behavior of a dynamical system under uncertainty, namely, how the system's stability and performance are influenced by the uncertainty. Many robust stability tools have been developed over the years, among which are the well-known scalar stability margins: the structured singular value μ(ω) introduced by Doyle [17] and the multivariable stability margin km(ω) given by Safonov [46].
The critical direction theory introduced by Latchman and Crisalle [35], and later generalized by Baab et al. [2], also provides an effective tool for analyzing the robust stability of uncertain systems, namely, the Nyquist robust stability margin kN(ω). The concept was applied successfully to a class of linear systems with affine and ellipsoidal uncertainty structures, and it works for both convex and nonconvex value sets.
Uncertainties are classified according to their source as nonparametric (unstructured) or parametric (structured) [7]. Nonparametric uncertainties do not have a well-defined structure and are represented by a disk that overbounds the actual uncertainty; this type of uncertainty description therefore usually introduces conservatism. Examples of uncertainties represented as unstructured include nonlinearities and unmodelled dynamics. Parametric uncertainties, on the other hand, have a structure that reflects the variation of the system parameters, and are thus less conservative. Examples of such uncertainties include interval and ellipsoidal uncertainty.
1.2 Sliding Mode Control
A variable structure system (VSS) is a dynamical system composed of distinct structures. A VSS switches among the different structures based on the values of its states and according to a switching logic that takes into account the desired properties of each structure. In fact, a variable structure system can have properties that are not present in any of its individual structures [51].
A sliding mode control (SMC) system is a special case of a VSS in which the system trajectories exhibit a sliding behavior. The design of an SMC consists of two stages. The first stage is the design of a switching surface such that, once the trajectories are confined to the surface, the system demonstrates the desired properties (e.g., tracking or regulation). The second stage involves the design of a control law that forces the trajectories onto the sliding manifold (discontinuous control), together with a linear feedback control that guarantees closed-loop stability (equivalent control). The latter is derived by setting the time derivative of the switching function equal to zero and solving for the control law; the former is proposed with appropriate gains to allow the system to overcome uncertainties.
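The two-stage design described above can be sketched with a small numerical example. The plant (a double integrator), the switching surface, and the gains below are assumed purely for illustration and are not taken from the dissertation; the equivalent control comes from setting the derivative of the switching function to zero, and the discontinuous term forces the trajectories onto the surface.

```python
import numpy as np

# Assumed example plant x' = A x + B u (a double integrator).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Stage 1: switching surface s(x) = C x = 0, chosen so that on the
# surface x2 = -2 x1, giving stable sliding dynamics x1' = -2 x1.
C = np.array([[2.0, 1.0]])

def u_equivalent(x):
    # Equivalent control: solve s' = C (A x + B u) = 0 for u.
    return -np.linalg.solve(C @ B, C @ A @ x)

def u_discontinuous(x, eta=1.0):
    # Discontinuous control: drives the trajectories toward s = 0.
    return -eta * np.sign(C @ x)

def simulate(x0, dt=1e-3, steps=20000):
    # Forward-Euler integration of the closed loop.
    x = np.array(x0, dtype=float).reshape(-1, 1)
    for _ in range(steps):
        u = u_equivalent(x) + u_discontinuous(x)
        x = x + dt * (A @ x + B @ u)
    return x

x_final = simulate([1.0, 0.0])   # state after the sliding phase
```

In this sketch the reaching law reduces to s' = -sign(s), so the surface is reached in finite time; afterwards the state decays along the sliding manifold, with the small chattering that a smooth approximation of the signum function is meant to alleviate.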
The system motion in SMC runs through two phases. The first phase (the reaching phase) is characterized by a fast motion. During this phase the system is robust against uncertainties (matched and unmatched) and external disturbances, mainly because the discontinuous control law acts as a high-gain feedback control that counteracts high-frequency signals. The second phase (the sliding phase) is characterized by a slow motion; the system is then robust only against matched uncertainty.
The theory of sliding mode control has been covered comprehensively in the literature. Utkin [51] presents a survey of the early contributions in SMC. The survey by Hung et al. [26] is a tutorial-like treatment of variable structure control (VSC) with sliding mode. An interesting tutorial paper by DeCarlo et al. [16] provides an introduction to variable structure control for multivariable nonlinear time-varying systems. Finally, a useful guide to SMC is given by Young et al. [56].
1.3 Bilinear Systems
Bilinear systems occupy an intermediate level between linear and nonlinear systems in terms of their complexity. The general form of a bilinear system is

x'(t) = Ax(t) + Bu(t) + Nx(t)u(t)   (1.1)

where it is clear that the control action enters the system linearly through the term Bu(t) and nonlinearly through the term Nx(t)u(t), hence the name bilinear system. A special form of the system (1.1) is the homogeneous bilinear system

x'(t) = Ax(t) + Nx(t)u(t)
where the linear part is omitted. A formal definition of a bilinear system is given in Elliott [19]. Many natural as well as manmade systems can be represented as bilinear models [19, 39, 40]. Examples of bilinear systems can be found in economics, industrial processes, and biochemistry, just to mention a few.
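A scalar special case of (1.1) shows how the bilinear term shapes state-feedback stabilization. The coefficients and gain below are assumed for illustration only; with u = -kx the closed loop becomes x' = (a - kb)x - knx^2, which is stable near the origin for k > a/b, but only locally.

```python
# Assumed scalar bilinear system x' = a*x + b*u + n*x*u with an
# unstable open loop (a > 0); all values are illustrative.
a, b, n = 1.0, 1.0, 1.0
k = 2.0   # state-feedback gain u = -k*x, with k > a/b

def f(x, u):
    # Control enters linearly (b*u) and bilinearly (n*x*u).
    return a * x + b * u + n * x * u

def simulate(x0, dt=1e-3, steps=10000):
    # Forward-Euler integration of the closed loop x' = -x - 2*x**2.
    x = x0
    for _ in range(steps):
        x = x + dt * f(x, -k * x)
    return x

x_final = simulate(0.2)   # initial condition inside the attraction region
```

Note that the closed loop x' = -x(1 + 2x) has a second equilibrium at x = -1/2, so initial conditions below -1/2 diverge; this is a scalar analogue of the initial-condition-dependent region of attraction mentioned in the thesis structure below.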
1.4 Thesis Structure
The thesis is organized as follows. In Chapter 2, a new metric for the robust stability of closed-loop systems with affine uncertainty structure is presented. The new concept is motivated by the fact that the critical direction theory considers only plants that lie along the critical direction, defined as the ray starting at the nominal plant and pointing towards the critical point -1 + j0. Hence, plants that are very close to the critical point but do not lie on the critical ray are ignored. The Nyquist robust sensitivity margin, kN,s, is therefore proposed to take such plants into account. Chapter 3 considers the stabilization of a class of time-delay linear systems
via sliding mode control. The delayed system is assumed to have a constant delay in both the input and the state. In Chapter 4, a state-feedback control design for a class of bilinear systems with state delay is presented. The stability conditions derived provide a bound on the system delay and define a region of attraction based on the initial condition. The future work proposed for consideration is presented in Chapter 5.
CHAPTER 2
THE NYQUIST ROBUST SENSITIVITY MARGIN
2.1 Introduction
The critical direction theory introduced by Latchman and Crisalle [35] and Latchman et al. [36], and later generalized by Baab et al. [2], is an effective approach for analyzing the robust stability of uncertain systems with convex and nonconvex uncertainty value sets. A key concept introduced by the theory is the Nyquist robust stability margin, kN(ω), which provides a measure of robustness. The approach has proven useful in characterizing the robust stability of single-input/single-output systems with real affine parametric uncertainty, among others, and has recently been applied to the design of robustly stabilizing H-infinity controllers by identifying an appropriate weighting function for the controller sensitivity function [29, 30].
This chapter proposes an alternative robust-stability analysis which has the benefit of also capturing the concept of robust sensitivity, hence directly incorporating the notion of performance robustness. The resulting Nyquist robust sensitivity margin kN,s(ω) is inspired by the critical direction theory framework, but is formulated to take into account, in an explicit fashion, the effect of the uncertain systems that have the worst-case sensitivity.
The earlier critical direction theory involving the margin kN(ω) considers only a subset of the uncertain systems in the robustness analysis, namely, those uncertain systems whose image on the Nyquist plane lies along a prespecified oriented line. Although the restricted critical set of systems considered leads to nonconservative conditions for robust stability, the approach ignores all perturbed systems that have a poor sensitivity (i.e., systems located close to the critical point -1 + j0 on the Nyquist plane) whenever these lie outside the oriented line. The new paradigm involving the margin kN,s(ω) seeks to quantify the effect of the systems located closest to the critical point through the introduction of a sensitivity perturbation radius that is calculated at each frequency by solving an optimization program.
To illustrate the approach, the proposed robust stability analysis is developed for uncertain systems described by rational transfer functions with real affine parametric perturbations. More specifically, the numerator and denominator polynomials depend affinely on a set of real parameters that are known to belong to a given uncertainty description. A systematic algorithm for the calculation of kN,s(ω) is developed by taking advantage of simple geometrical features adopted by the Nyquist-plane images of such systems [21]. The analysis is carried out in detail in Section 4.
The robust stability of the real affine uncertain systems considered in Section 4 can be analyzed using alternative approaches, for example, based on generalizations of Kharitonov's methodology [32]. In particular, one may adopt the approach in Barmish [3], which proposes a strict positivity condition that must be evaluated at a finite number of frequencies, the box theorem [8], or the worst-edge algorithm of Sideris [48]. Furthermore, for the robust stability of interval plants, Wang [53] has shown that it suffices to check two vertices. Some results concerning the robust stability of control systems under unstructured as well as parametric uncertainty have been addressed in Chapellat [9]. These alternative results successfully reveal whether the system is robustly stable; however, in contrast to the Nyquist robust sensitivity margin proposed here, they do not provide a scalar indicator of the closeness to instability. Hence, the scalar kN,s can be used to compare alternative closed-loop designs and determine a hierarchy of robust stability among the alternatives. A recent result by Wang [54] concerning interval plants shows that the maximum H-infinity norm of the sensitivity function is achieved at twelve (out of sixteen) Kharitonov vertices. That result, however, applies to interval polynomials, while our approach applies to transfer functions.
The chapter is organized as follows. In Section 2, the classical critical direction theory is briefly reviewed for contextual reference. Section 3 presents the definition of the new Nyquist robust sensitivity margin, discusses its properties and computational challenges, and compares and contrasts the new margin with its Nyquist robust stability margin predecessor. The application of the Nyquist robust sensitivity margin to systems with affine uncertainty structure is presented in Section 4, including the details of a systematic algorithm for the efficient calculation of the margin. Section 5 presents examples, including an illustrative case showing how to utilize the proposed method for calculating a parametric robust-stability margin that is interpreted as a blow-up factor.
2.2 Background
A general linear time-invariant (LTI) system can be represented in state-space form as follows:

x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)   (2.1)

Furthermore, the representation (2.1) can be expressed in the following transfer-function form:

G(s) = C(sI - A)^{-1}B + D = [C adj(sI - A) B] / det(sI - A) + D   (2.2)

provided there are no cancellations between the numerator and denominator polynomials. When the system matrices (A, B, C, D) are uncertain, the transfer-function form is given, for the MIMO case, by G(s) = G0(s) + Δ(s), where G0(s) is a known transfer matrix and Δ(s) is the transfer matrix representing the uncertainty. The
transformation of system (2.1) into (2.2) allows us to use frequency domain techniques to analyze the stability and performance of the closedloop system. Since the development of the Nyquist robust sensitivity margin in this chapter requires frequency domain techniques such as the Nyquist theorem, the transfer function form is the appropriate environment to use in assessing the robust stability of the system.
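As a numerical aside, the transfer-function form (2.2) can be evaluated at any frequency directly from the state-space matrices, without forming the polynomials explicitly. The system below is an assumed stable example, not one taken from the text.

```python
import numpy as np

# Assumed example system with eigenvalues -1 and -2, so that
# G(s) = 1 / (s**2 + 3*s + 2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def freq_response(w):
    # Evaluate G(jw) = C (jw I - A)^{-1} B + D via a linear solve
    # rather than an explicit matrix inverse.
    n = A.shape[0]
    return (C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D)[0, 0]

g_dc = freq_response(0.0)   # DC gain C (-A)^{-1} B + D = 0.5
```

Sweeping such an evaluation over a frequency grid yields the Nyquist curve on which the robustness analysis of this chapter operates.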
Consider the uncertain single-input/single-output transfer function

g(s) = g0(s) + δ(s)   (2.3)

shown in Figure 2.1, where g0(s) is a nominal system and δ(s) ∈ Δ is an unknown perturbation belonging to a known set of allowable perturbations Δ. The closed-loop system of Figure 2.1 is said to be robustly stable if stability is ensured for all δ(s) ∈ Δ. The problem under consideration is the analysis of the robust stability of the uncertain closed-loop system (2.3) under negative unity feedback. The developments assume the following standard premises that are commonly used in Nyquist-based robustness analysis: (A1) the nominal transfer function g0(jω) is stable under negative-unity feedback, and (A2) the uncertain system g(jω) and the nominal system g0(jω) have the same number of open-loop unstable poles.
Figure 2.1: The uncertain system g(s) = g0(s) + δ(s) in a unity-feedback configuration.
The key concepts and definitions pertaining to the critical direction theory are readily summarized utilizing Figure 2.2. First, the critical line is the oriented line (i.e., a ray) in the Nyquist plane originating at the nominal point g0(jω) and passing through the critical point -1 + j0. The critical direction

dc(jω) := (-1 - g0(jω)) / |1 + g0(jω)|   (2.4)

is a unit-length vector with origin at g0(jω) and pointing towards the critical point. Then, the critical ray is characterized by r(ω) = {g0(jω) + α dc(jω) : α ∈ R+}. The uncertainty value set
V(ω) = {g(jω) : g(jω) = g0(jω) + δ(jω), δ(s) ∈ Δ}   (2.5)

represents the Nyquist-plane mapping g(jω) = g0(jω) + δ(jω) of the uncertain system. The boundary of the uncertainty value set (2.5) is denoted ∂V(ω). Finally, the critical value set Vc(ω) := V(ω) ∩ r(ω) is the subset of V(ω) that lies on the critical line.
The critical value set Vc(ω) may be convex, i.e., a set described as a single point or as a straight-line segment (such as the straight-line segment joining the points g0(jωi) and gs(jωi) shown in Figure 2.2a), or nonconvex, i.e., a union of isolated points and straight-line segments (such as the union of the disjoint segments gs(jωi)g1(jωi) and g2(jωi)g3(jωi) in Figure 2.2b). Note that it is possible to encounter an uncertain system with a highly nonconvex value set V(ω) that nevertheless features a convex critical value set Vc(ω), as illustrated in Figure 2.2a.
For the general case of convex or nonconvex critical value sets, Baab et al. [2] define the critical perturbation radius

pc(ω) := |1 + g0(jω)| - ℓ(ω)   if -1 + j0 ∉ V(ω)
pc(ω) := |1 + g0(jω)| + ℓ(ω)   otherwise   (2.6)

where

ℓ(ω) = min_{z ∈ Bc(ω)} |1 + z|   (2.7)
represents the minimal distance from the critical point -1 + j0 to the set of critical boundary intersections Bc(ω) := {∂V(ω) ∩ r(jω)} \ g0(ω), where '\' is the set-exclusion operator. In the case where g0(ω) is the only element of ∂V(ω) ∩ r(jω), then Bc(ω) := {g0(ω)}.

Figure 2.2: Uncertainty value sets at a frequency ωi: (a) convex critical value set Vc(ωi), (b) nonconvex critical value set Vc(ωi). Both figures show the worst-sensitivity plant gs(jωi), located closest to the point -1 + j0.

As shown in Baab et al. [2], when Vc(ω) is a convex set, as illustrated in Figure 2.2a, the definition (2.6) reduces to
pc(ω) := max{α ∈ R+ : g(jω) = g0(jω) + α dc(jω) ∈ Vc(ω)}   (2.8)

a form first invoked in Latchman et al. [36], where pc(ω) is simply interpreted as the distance between the critical point and the point where the boundary ∂V(ω) intercepts the critical direction. Finally, the Nyquist robust stability margin is defined as
kN(ω) := pc(ω) / |1 + g0(jω)|   (2.9)
The main result of Baab et al. [2] is restated in the following theorem.

Theorem 1. Consider the uncertain system (2.3) with assumptions (A1) and (A2). Then, the closed-loop system is robustly stable under unity feedback if and only if kN(ω) < 1 for all ω.

Proof. See Baab et al. [2].

Note that the theorem is valid in general for convex as well as nonconvex critical value sets Vc(ω). Since control design is often carried out under sufficient-only conditions, for control synthesis purposes it may be acceptable to adopt the definition (2.8) instead of (2.6) when working with nonconvex critical value sets. Then the resulting condition kN(ω) < 1 for all ω, where kN is calculated through (2.9), is only sufficient for robust stability.
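As a numerical sketch, kN(ω) can be computed in closed form for the simplest possible uncertainty description, a disk-shaped value set of radius r centered at the nominal plant: the critical value set is then a segment of the critical ray, so pc(ω) = min{r, |1 + g0(jω)|}. The plant and radius below are assumed for illustration and do not come from the text.

```python
import numpy as np

def g0(w):
    # Assumed nominal plant g0(s) = 1 / (s + 1)**2 evaluated at s = jw.
    return 1.0 / (1j * w + 1.0) ** 2

def k_N(w, r):
    # Disk value set: the critical ray from g0(jw) toward -1 + j0
    # leaves the disk at distance r from the nominal point, so
    # p_c = r whenever the disk does not contain the critical point.
    dist = abs(1.0 + g0(w))          # |1 + g0(jw)|
    p_c = min(r, dist)
    return p_c / dist

# Robust stability (Theorem 1) requires k_N(w) < 1 at all frequencies.
ws = np.logspace(-2, 2, 400)
margin = max(k_N(w, r=0.3) for w in ws)
```

Here the peak of kN(ω) occurs where the nominal Nyquist curve passes closest to -1 + j0, and since the peak stays below unity this assumed loop would be certified robustly stable.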
2.3 The Nyquist Robust Sensitivity Margin
The main drawback of definition (2.6) is that the resulting Nyquist robust stability margin value kN(ω) obtained through (2.6) and (2.9) may convey no information about the worst-case sensitivity in the value set. Figure 2.2b shows that at the frequency ω = ωi the plant gs(jωi) is the element of V(ωi) that is closest to the point -1 + j0. Hence its sensitivity magnitude 1/|1 + gs(jωi)| is the largest among all the plants in the value set. Note that since gs(jωi) ∉ Vc(ωi), this plant is ignored in the classical critical-direction analysis presented in Section 2, which focuses only on plants that lie along the critical direction.
In this section an alternative approach is presented to include sensitivity effects in the robustness margin. To this end we define the sensitivity perturbation radius

ps(ω) := |1 + g0(jω)| - η(ω)   if -1 + j0 ∉ V(ω)
ps(ω) := |1 + g0(jω)| + η(ω)   otherwise   (2.10)

where

η(ω) = min_{z ∈ ∂V(ω)} |1 + z|   (2.11)
represents the minimum distance between the critical point -1 + j0 and the boundary set ∂V(ω). Then, in a fashion analogous to (2.9), the Nyquist robust sensitivity margin is defined as

kN,s(ω) := ps(ω) / |1 + g0(jω)|   (2.12)
Figure 2.3: Illustration of the inverse-sensitivity circle of radius η(ω) introduced in definition (2.11).
Figure 2.3 gives an interpretation of η(ω) defined in (2.11) as the radius of the inverse-sensitivity circle, namely, the smallest circle with center at -1 + j0 that contains a point belonging to the boundary ∂V(ω). Furthermore, the definition (2.11) and Figure 2.3 can be used to conclude that η(ω) = |1 + gs(jω)|, where gs(jω) is the perturbation in V(ω) that has the worst sensitivity.
It is also of interest to note that kN,s(ω) = 1 corresponds to the case where -1 + j0 ∈ ∂V(ω). This follows from the fact that kN,s(ω) = 1 if and only if η(ω) = 0, and the latter equality is realized from the optimization problem (2.11) only when -1 + j0 ∈ ∂V(ω). Finally, it is of utility for the sequel to note that at all frequencies ω

η(ω) ≤ ℓ(ω)   (2.13)
This inequality is derived as follows. Since the value-set boundary ∂V(ω) contains as a subset the set of critical boundary intersections Bc(ω), the optimization problem (2.11) is carried out over a domain that is a superset of the domain used in the optimization problem (2.7). Consequently, the solutions to the respective optimization problems must satisfy the relationship (2.13).

Theorem 2. Consider the uncertain system (2.3) with assumptions (A1) and (A2). Then, the closed-loop system is robustly stable under unity feedback if and only if kN,s(ω) < 1 for all ω.
Proof. From the zero-exclusion principle [4] it can be claimed that the uncertain system (2.3) under assumptions (A1) and (A2) is robustly stable if and only if -1 + j0 ∉ V(ω). Therefore, it must be shown that, under the definitions (2.10)-(2.12) for ps(ω), the condition kN,s(ω) < 1 is equivalent to the set-membership condition -1 + j0 ∉ V(ω).
First, to prove sufficiency one must show that kN,s(ω) < 1 for all ω implies that -1 + j0 ∉ V(ω). The proof proceeds by contradiction. Assume that kN,s(ω) < 1 and that there exists a frequency ω such that -1 + j0 ∈ V(ω). Invoking the sensitivity-perturbation radius expression (2.10) for the case where -1 + j0 ∈ V(ω) and the definition (2.12), it follows that

kN,s(ω) = ps(ω) / |1 + g0(jω)| = (|1 + g0(jω)| + η(ω)) / |1 + g0(jω)| = 1 + η(ω) / |1 + g0(jω)|

Since by definition η(ω) ≥ 0, the equation above implies that kN,s(ω) ≥ 1, which is a contradiction. This proves sufficiency.
Second, to prove necessity one must show that at any frequency ω the condition -1 + j0 ∉ V(ω) implies that kN,s(ω) < 1. Assume that -1 + j0 ∉ V(ω). Invoking the sensitivity-perturbation radius expression (2.10), now for the case where -1 + j0 ∉ V(ω), and the definition (2.12), it follows that

kN,s(ω) = ps(ω) / |1 + g0(jω)| = (|1 + g0(jω)| - η(ω)) / |1 + g0(jω)| = 1 - η(ω) / |1 + g0(jω)|   (2.14)

Since in this case -1 + j0 ∉ V(ω), it follows that -1 + j0 ∉ ∂V(ω), and hence from (2.11) it is concluded that η(ω) > 0. Furthermore, from (2.11) it is clear that η(ω) ≤ |1 + g0(jω)|. Hence, it follows that 0 < η(ω) / |1 + g0(jω)| ≤ 1, which can be used in (2.14) to conclude that kN,s(ω) < 1.
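The quantities η(ω), ps(ω), and kN,s(ω) in (2.10)-(2.12) can be approximated by sampling the boundary of the value set. The sketch below does this for an assumed two-parameter affine uncertainty at a single frequency, whose value set is a parallelogram; all numerical values are illustrative only.

```python
import numpy as np

# Assumed frequency-domain data: nominal plant g0(jw) and affine
# uncertainty g(jw) = g0 + q1*e1 + q2*e2 with |q1| <= 1, |q2| <= 1.
g0 = 0.2 - 0.6j
e1 = 0.15 + 0.05j
e2 = 0.02 - 0.10j

# Sample the four edges of the parallelogram-shaped value-set boundary.
t = np.linspace(-1.0, 1.0, 2001)
edges = np.concatenate([
    g0 + t * e1 + e2,   # q2 = +1 edge
    g0 + t * e1 - e2,   # q2 = -1 edge
    g0 + e1 + t * e2,   # q1 = +1 edge
    g0 - e1 + t * e2,   # q1 = -1 edge
])

eta = np.min(np.abs(1.0 + edges))   # minimum distance to -1 + j0, eq. (2.11)
dist0 = abs(1.0 + g0)               # |1 + g0(jw)|

# Here -1 + j0 lies outside the value set, so (2.10) takes its first
# branch and (2.12) gives the margin at this frequency.
k_Ns = (dist0 - eta) / dist0
```

Since kN,s comes out below unity at this frequency, property (P1) guarantees that it upper-bounds kN(ω) here; the same boundary-sampling step yields η(ω) unchanged when the boundary is nonconvex, which is precisely where the two margins differ.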
Figure 2.2a illustrates a special situation where kN,s(w) = kN(w). This follows from a simple argument using the elements shown in the figure, where it is clear that in this case η(w) = ξ(w) = |1 + go(jw)|. Hence ps(w) = pc(w) from (2.10) and (2.6), and therefore it follows from (2.12) and (2.9) that kN,s(w) = kN(w). It is also straightforward to verify that the robustness margins satisfy the following two properties: (P1) if kN,s(w) < 1, then kN,s(w) ≥ kN(w), and (P2) if kN,s(w) > 1, then kN,s(w) ≤ kN(w). These two properties follow in a straightforward fashion after using the inequality (2.13), the perturbation radius definitions (2.10) and (2.6), and the robustness margin definitions (2.12) and (2.9). Although in general kN,s(w) ≠ kN(w), as suggested in Figure 2.2b, both margins are nevertheless equivalent as indicated in the following theorem.
Theorem 3 The Nyquist robust sensitivity margin kN,s(w) and the Nyquist robust stability margin kN(w) are equivalent in the sense that (i) kN,s(w) < 1 ⟺ kN(w) < 1, (ii) kN,s(w) = 1 ⟺ kN(w) = 1, and (iii) kN,s(w) > 1 ⟺ kN(w) > 1.
Proof: The proof of sufficiency is developed below for cases (i)-(iii). The proof of necessity for the three cases in question follows an analogous argument, and is therefore omitted here for brevity. For case (i), assume kN,s(w) < 1 and utilize (2.12) to conclude that

    ps(w) < |1 + go(jw)|    (2.15)

Also, from Theorem 2 and from the zero-exclusion principle [4], the condition kN,s(w) < 1 implies that −1 + j0 ∉ V(w); hence from equations (2.6) and (2.10) the appropriate expressions for the respective perturbation radii are pc(w) = |1 + go(jw)| − ξ(w) and ps(w) = |1 + go(jw)| − η(w). From the latter two equations and inequality (2.13) it follows that

    pc(w) ≤ ps(w)    (2.16)

Inequalities (2.15) and (2.16) imply that pc(w) < |1 + go(jw)|, which yields the result kN(w) < 1 after invoking (2.9). For case (ii), assume kN,s(w) = 1 and utilize (2.12) to conclude that ps(w) = |1 + go(jw)|, which in turn from equation (2.10) implies that η(w) = 0. Since η(w) = 0 solves the optimization problem (2.11), it follows that −1 + j0 ∈ ∂V(w). Now, the fact that −1 + j0 ∈ ∂V(w) implies that −1 + j0 ∈ Bc(w), where Bc(w) is the optimization domain in (2.7). Since −1 + j0 ∈ Bc(w), it follows that the solution to the optimization problem (2.7) is ξ(w) = 0, which can be used to conclude from (2.6) that pc(w) = |1 + go(jw)|. Substituting the latter equality into (2.9) yields kN(w) = 1. For case (iii), assume kN,s(w) > 1 and utilize (2.12) to conclude that

    ps(w) > |1 + go(jw)|    (2.17)

Also, from Theorem 2 and from the zero-exclusion principle [4], the condition kN,s(w) > 1 implies that −1 + j0 ∈ V(w); hence from equations (2.6) and (2.10) the appropriate expressions for the respective perturbation radii are pc(w) = |1 + go(jw)| + ξ(w) and ps(w) = |1 + go(jw)| + η(w). From the latter two equations and inequality (2.13) it follows that

    pc(w) ≥ ps(w)    (2.18)

Inequalities (2.17) and (2.18) imply that pc(w) > |1 + go(jw)|, which yields the result kN(w) > 1 after invoking (2.9). ∎
The Nyquist robust sensitivity margin kN,s(w) serves a role analogous to that of the structured singular value μ(w) [17] or the multivariable stability margin km(w) [46], as a scalar indicator of robust stability. Given that the optimization problem (2.11) must be solved, the deployment of an analysis approach based on kN,s(w) requires knowledge of the value-set boundary ∂V(w). Fortunately this information is available in a number of problems of interest, such as the case of systems with real affine uncertainty structure discussed in the following section.
2.4 Application to Systems with Affine Uncertainty Structure
The robust stability analysis approach proposed is applied to a class of uncertain systems with real affine uncertainty structure of the form

    g(s, q) = (n0(s) + Σ_{i=1}^p qi ni(s)) / (d0(s) + Σ_{i=1}^p qi di(s))    (2.19)

where ni(s) are numerator polynomials of known order ℓ and known real coefficients nik, k = 0, 1, ..., ℓ, i = 0, 1, ..., p, and where di(s) are denominator polynomials of known order m and known real coefficients dik, k = 0, 1, ..., m, i = 0, 1, ..., p. The element q ∈ Q is a vector of real perturbation parameters, where the real uncertainty domain

    Q := {q ∈ R^p : qi− ≤ qi ≤ qi+, i = 1, ..., p}    (2.20)

is a bounded rectangular polytope. In this case the uncertainty value set V(w) is simply the map g(jw, Q) : R × Q → C.
The objective is to calculate the value of kN,s(w) as a function of frequency using the expression (2.12). This in turn requires the calculation of the sensitivity-perturbation radius ps(w) through its defining equation (2.10). Note that in order to apply (2.10) two problems must be addressed: namely, the optimization program (2.11) must be solved to find the inverse-sensitivity radius η(w) (Problem I), and the set-membership clause −1 + j0 ∉ V(w) must be assessed as true or false (Problem II) so that the appropriate branch of equation (2.10) can be identified.
It is shown in [21] that the mapping g(jw, E(Q)) (denoted in the sequel as the value-set frame at the frequency w), where E(Q) represents the set of edges of Q, spans the boundary set ∂V(w). Furthermore, the frame g(jw, E(Q)) is a set comprised of arcs of circles and straight-line segments [21]. More precisely, let Ei(Q) and its corresponding extreme points qi− and qi+ represent the i-th edge of the rectangular polytope Q. Then the frame g(jw, E(Q)) is composed of a set of frame elements g(jw, Ei(Q)), and each frame element is either a straight-line segment or an arc of a circle. These simple geometric properties of the frame allow the development of a precise solution of Problem I. In fact, the minimization problem (2.11), which is equivalent to finding the minimum distance between the point −1 + j0 and the boundary of the value set, reduces to a simple geometric problem: finding the shortest distance between the point −1 + j0 and an arc of a circle or a straight-line segment. Problem (2.11) can then be posed for each frame element, and the smallest solution found after considering all the edges of Q yields the value of η(w) sought.
For completeness it is convenient to briefly summarize relevant geometrical concepts regarding lines and arcs of circles. The interested reader is referred to [15] for further details. A line passing through points p1, p2 ∈ C is defined by

    L(p1, p2) := {z ∈ C : z = p1 + u(p2 − p1), u ∈ R}

and a circle with radius r and center z0 is given by

    C(r, z0) := {z ∈ C : |z − z0|² = r²}

The arc A(p1, p2, p3) of a supporting circle C(r, z0) is described by three points.
One important issue to resolve is whether the map g(jw, Ei(Q)) of a given edge Ei(Q) is a straight-line segment or an arc of a circle. This can be resolved by taking advantage of the cross product

    p1 × p2 = | Re(p1)  Re(p2) ; Im(p1)  Im(p2) | = Re(p1) Im(p2) − Re(p2) Im(p1)

where |·| represents the determinant operator. Selecting three distinct points p1, p2, and p3 of the map g(jw, Ei(Q)), it follows that if (p3 − p1) × (p2 − p1) < 0 (> 0) it can be concluded that the segment is an arc of a circle turning to the right (left). If the cross product equals zero, then the three points are collinear and the segment is a straight line. Two of the points in question should be p1 = g(jw, qi−) and p3 = g(jw, qi+).
Figure 2.4: The center z0 and radius r of the supporting circle of the arc A(p1, p2, p3) are determined from the intersection of the auxiliary lines L1 and L2.

The third point can be taken as the image of a distinct point on the edge Ei(Q) that can be selected arbitrarily, say, for example, p2 = g(jw, (qi− + qi+)/2).
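The cross-product classification described above can be sketched as a small routine. This is a minimal illustration, not the dissertation's own implementation: p1 and p3 are taken as the images of the edge's extreme points and p2 as the image of an interior point, with the determinant test deciding between a line segment and an arc turning right or left.

```python
def classify_frame_element(p1: complex, p2: complex, p3: complex,
                           tol: float = 1e-12) -> str:
    """Classify the image of an edge as a straight-line segment or an arc
    using the cross-product test of Section 2.4."""
    def cross(a: complex, b: complex) -> float:
        # 2x2 determinant | Re(a) Re(b) ; Im(a) Im(b) |
        return a.real * b.imag - b.real * a.imag

    c = cross(p3 - p1, p2 - p1)
    if abs(c) <= tol:
        return "line"           # three collinear points
    return "arc-right" if c < 0 else "arc-left"

# three collinear points map to a line segment:
collinear = classify_frame_element(0j, 1 + 1j, 2 + 2j)
```

A midpoint displaced off the chord, e.g. `classify_frame_element(0j, 1 + 2j, 2 + 2j)`, is classified as an arc instead.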
For the case where the map g(jw, Ei(Q)) of an edge Ei(Q) is an arc, the minimum distance from −1 + j0 to the arc is given by either (i) the distance to one of the end points of the arc, or (ii) the distance to an internal point of the arc. Clearly, if the ray originating at the center of the supporting circle of the arc and passing through −1 + j0 does not intersect the arc, then the minimum distance is determined from one of the two end points of the arc. On the other hand, if the ray intersects the arc, then the distance between −1 + j0 and the point of intersection defines the minimum distance sought.
Finally, the procedure described requires finding the supporting circle of an arc. From Figure 2.4, the center z0 of the supporting circle of an arc that passes through three distinct points p1, p2, and p3 on the complex plane can be determined from the intersection of the lines L1 := (p1 + p2)/2 + j u1 (p2 − p1) and L2 := (p2 + p3)/2 + j u2 (p3 − p2), where u1, u2 ∈ R. The radius r is then found in an obvious fashion, say for example as r = |p2 − z0|.
For the case where the map g(jw, Ei(Q)) of an edge Ei(Q) is a straight-line segment, the minimum distance to the point −1 + j0 is found using a procedure formally similar to the case of the arcs. First a supporting line is found. Then, one finds the point of intersection between the supporting line and a normal line that passes through −1 + j0. The intersection point gives the minimal distance to −1 + j0 if the intersection point is also an element of the straight-line segment. Otherwise, the minimum distance is defined as the distance between −1 + j0 and one of the two endpoints of the straight-line segment.
The procedure described above solves Problem I, yielding a numerical value for the inverse-sensitivity radius η(w) at each frequency. Problem II can be addressed efficiently through the assistance of the following theorem, which is a restatement of an equivalent theorem derived in Baab et al. [2]. A detailed proof is given in the original reference.
Theorem 4 Consider the real-affine uncertain system (2.19)-(2.20) configured in the unity-feedback form given in Figure 2.3 under the assumptions (A1) and (A2). Then −1 + j0 ∉ V(w) if and only if at frequency w the following linear equality/inequality problem is infeasible:

    Aq = b    (2.21)

subject to

    Bq ≤ b+    (2.22)

where
    A := [ s_{n,R}^T N_{p,R} + s_{d,R}^T D_{p,R} ; s_{n,I}^T N_{p,I} + s_{d,I}^T D_{p,I} ] ∈ R^{2×p}

    b := −[ s_{n,R}^T n_{0,R} + s_{d,R}^T d_{0,R} ; s_{n,I}^T n_{0,I} + s_{d,I}^T d_{0,I} ] ∈ R²

    B := [ I_p ; −I_p ] ∈ R^{2p×p},    b+ := [ q+ ; −q− ] ∈ R^{2p}

and where the frequency-selection vectors and coefficient arrays are

    s_{n,R} := [1, −w², w⁴, −w⁶, ...]^T ∈ R^{⌊ℓ/2⌋+1},    s_{n,I} := [w, −w³, w⁵, ...]^T ∈ R^{⌊(ℓ+1)/2⌋}

    s_{d,R} := [1, −w², w⁴, ...]^T ∈ R^{⌊m/2⌋+1},    s_{d,I} := [w, −w³, w⁵, ...]^T ∈ R^{⌊(m+1)/2⌋}

    n_{0,R} := [n00, n02, ...]^T ∈ R^{⌊ℓ/2⌋+1},    n_{0,I} := [n01, n03, ...]^T ∈ R^{⌊(ℓ+1)/2⌋}

    N_{p,R} := [ n10 n20 ... np0 ; n12 n22 ... np2 ; ... ] ∈ R^{(⌊ℓ/2⌋+1)×p},    N_{p,I} := [ n11 n21 ... np1 ; n13 n23 ... np3 ; ... ] ∈ R^{⌊(ℓ+1)/2⌋×p}

    d_{0,R} := [d00, d02, ...]^T ∈ R^{⌊m/2⌋+1},    d_{0,I} := [d01, d03, ...]^T ∈ R^{⌊(m+1)/2⌋}

    D_{p,R} := [ d10 d20 ... dp0 ; d12 d22 ... dp2 ; ... ] ∈ R^{(⌊m/2⌋+1)×p},    D_{p,I} := [ d11 d21 ... dp1 ; d13 d23 ... dp3 ; ... ] ∈ R^{⌊(m+1)/2⌋×p}

where ⌊·⌋ represents the greatest-integer (floor) function.
Proof: See Baab et al. [2]. ∎
In summary, for the system (2.19) with parametric uncertainty (2.20) it is possible to solve Problem I and calculate with very high numerical precision the sensitivity radius η(w), because the solution to (2.11) is given by a set of simple algebraic equations. In addition, it is possible to solve Problem II in a numerically efficient fashion because the condition −1 + j0 ∈ V(w) can be determined via a simple feasibility problem involving linear equalities and inequalities. Hence, the sensitivity-perturbation radius ps(w) in (2.10) and the Nyquist robust sensitivity margin kN,s(w) in (2.12) can be computed precisely and efficiently.
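The feasibility question of Theorem 4 can be posed to any linear-programming solver by minimizing a zero objective. The sketch below, which assumes SciPy's `linprog` and uses small illustrative matrices (not the A and b assembled from any particular plant), checks whether Aq = b admits a solution inside the box q− ≤ q ≤ q+.

```python
import numpy as np
from scipy.optimize import linprog

def critical_point_in_value_set(A_eq, b_eq, q_lo, q_hi):
    """Check feasibility of A_eq q = b_eq subject to q_lo <= q <= q_hi.

    Feasible corresponds to -1 + j0 lying inside the value set V(w) at
    this frequency (Theorem 4 states the contrapositive: infeasibility
    is equivalent to exclusion of the critical point).
    """
    p = A_eq.shape[1]
    res = linprog(c=np.zeros(p), A_eq=A_eq, b_eq=b_eq,
                  bounds=list(zip(q_lo, q_hi)), method="highs")
    return res.status == 0  # 0: feasible optimum found, 2: infeasible

# toy instance (hypothetical numbers): q1 + q2 = 1, q1 - q2 = 0, q in [0,2]^2
A_eq = np.array([[1.0, 1.0], [1.0, -1.0]])
feasible = critical_point_in_value_set(A_eq, np.array([1.0, 0.0]),
                                       q_lo=[0.0, 0.0], q_hi=[2.0, 2.0])
# q = (0.5, 0.5) satisfies the equalities and the bounds, so feasible is True
```

Moving the right-hand side out of reach of the box (e.g. b = (10, 0), which forces q = (5, 5)) makes the same call return False.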
2.5 Examples
Three examples are presented. The first one calculates the margins kN,s(w) and kN(w) to compare and contrast their values and to shed light on their interpretation. The second example illustrates in a dramatic fashion the fact that kN,s(w) provides a more meaningful indication of the degree of robust sensitivity of the closed loop. Finally, the last example is designed to illustrate how the concepts proposed here can be utilized to formulate and calculate an alternative robustness measure, namely a parametric stability margin.
2.5.1 Example 1
Consider the affine system of the form (2.19) with the structure [29]

    g(s, q) = c(s) (5s + q1)/(s² + q2 s + q3)    (2.23)

where

    c(s) = (3603.7935 s + 18018.9673)/(s² + 1434.5016 s − 2312.4499)

is a feedback controller. Let the real perturbation parameters belong to the uncertainty domain

    Q = {(q1, q2, q3) ∈ R³ : 0 ≤ q1 ≤ 8, 2 ≤ q2 ≤ 6, −19 ≤ q3 ≤ −11}    (2.24)

Figure 2.5 shows the uncertainty value set for (2.23) at the frequency w = 9, including the corresponding sensitivity circle centered at −1 + j0. Note that Figure 2.5 also shows the frame of the value set of (2.23), namely, the straight-line segments and arcs of circles that result from the mapping of all the edges of Q. The problem is to analyze the robust stability of the feedback loop involving the uncertain system (2.23) subject to the uncertainty description (2.24).
The margin kN,s(w), calculated following the algorithm given in Section 2.4, and the margin kN(w), calculated using the technique described in Baab et al. [2], are plotted in Figure 2.6 for frequencies w ∈ [10⁻³, 10]. Given that kN,s(w) < 1 for all w, it can be concluded from Theorem 2 that the closed-loop system is robustly stable. Since the two margins are equivalent, the values of kN,s(w) < 1 reported in the figure correspond to values kN(w) < 1 at the same frequency, consistent with Theorem 3. Note that Figure 2.6 also shows that in this particular case kN,s(w) is an upper bound for kN(w); this observation is consistent with property (P1), which implies that kN,s(w) is an upper bound for kN(w) when the system is robustly stable.
2.5.2 Example 2
Consider the system [2]

    g(s, q) = c(s) n(s)/d(s)    (2.25)

where n(s) = s² + (4 + 0.4q1 + 0.2q2)s + (20 + q1 − q3), d(s) = s⁴ + (9.5 + 0.5q1 − 0.5q2 + 0.5q3)s³ + (27 + 2q1 + q2)s² + (22.5 − q1 + q3)s + 0.1, c(s) = 0.3s + 1, and where
Figure 2.5: Frame for the value set of system (2.23) at w = 9, and the corresponding inverse-sensitivity circle. The nominal plant go(jw) is indicated by the '+' marker.
Figure 2.6: Values of kN,s(w) and kN(w) as a function of frequency for the first example.
the real perturbation parameters belong to the polytope

    Q1 = {(q1, q2, q3) ∈ R³ : −3 ≤ qi ≤ 3, i = 1, 2, 3}    (2.26)

Introducing a parametric blow-up factor α = 1.86 to define a larger uncertainty domain

    Q = α × Q1    (2.27)

such that

    Q = {(q1, q2, q3) ∈ R³ : −5.58 ≤ qi ≤ 5.58, i = 1, 2, 3}

and using w = 4.72 yields the frame g(jw, E(Q)) depicted in Figure 2.7. The results obtained are as follows: ps(w) = 0.8466, pc(w) = 0.0206, kN,s(w) = 0.8577, and kN(w) = 0.0209. Both kN,s(w) and kN(w) are less than unity at the frequency considered. Further analysis shows that the result holds at all frequencies; hence, the closed-loop system is robustly stable according to Theorem 2. Certainly, kN,s(w) and kN(w) are equivalent in terms of assessing the robust stability of the system, as proved in Theorem 3. However, in terms of robust performance, kN,s(w) is a more meaningful metric than kN(w) because the larger value of kN,s(w) better reflects the fact that the value set in Figure 2.7 is in close proximity to the critical point −1 + j0. In fact, at the frequency in question kN,s(w) = 0.8577 is a much larger value than kN(w) = 0.0209. The significantly smaller value of kN(w) does not yield comparable insight into the proximity of the worst-sensitivity perturbation to the point −1 + j0 because the calculation of kN(w) deliberately ignores all uncertainties lying outside the critical direction.
2.5.3 Example 3
A useful application of the Nyquist robust sensitivity margin proposed is the calculation of a parametric stability margin. Consider the system (2.25) used in Example 2 and the polytope (2.26)-(2.27) featuring a variable blow-up factor α > 0.
Figure 2.7: Frame for the value set of system (2.25) at w = 4.72 and α = 1.86. The nominal plant go(jw) is indicated by the '+' marker.
Figure 2.8 shows the Nyquist robust sensitivity margin k̄N,s := max_w kN,s(w) that results when magnifying the original perturbation polytope Q1 by different blow-up factor values. The numerical study shows that when the blow-up factor has the value α = 1.89 the Nyquist robust sensitivity margin k̄N,s is approximately equal to unity, hence reaching the limit of robust stability. The limiting value ᾱ = 1.89 is the parametric robust stability margin for the uncertain closed loop. In other words, the controller c(s) introduced in Example 2 can robustly stabilize the closed-loop system subject to any parametric blow-up factor of the parametric uncertainty domain (2.27) less than ᾱ. Note that the blow-up factor used in Example 2 is α = 1.86 < ᾱ; hence, the uncertain closed loop of Example 2 is robustly stable.
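The parametric-margin search of this example can be sketched as a bisection on the blow-up factor for the crossing k̄N,s(α) = 1. The function `kns_max` below is a hypothetical stand-in for max_w kN,s(w) evaluated over the scaled polytope α × Q1; a monotone toy surrogate that crosses unity at α = 1.89 is used so the sketch is self-contained.

```python
def kns_max(alpha: float) -> float:
    # toy monotone surrogate for max_w kN,s(w) over alpha*Q1;
    # crosses unity exactly at alpha = 1.89 (illustrative assumption)
    return alpha / 1.89

def parametric_margin(lo: float = 1.0, hi: float = 3.0,
                      tol: float = 1e-6) -> float:
    """Bisect for the blow-up factor at which kns_max equals unity,
    i.e., the limit of robust stability."""
    assert kns_max(lo) < 1.0 < kns_max(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kns_max(mid) < 1.0:
            lo = mid        # still robustly stable: grow the polytope
        else:
            hi = mid        # margin exceeded: shrink the polytope
    return 0.5 * (lo + hi)

alpha_bar = parametric_margin()
```

Bisection is appropriate here because k̄N,s grows monotonically with the blow-up factor for nested polytopes.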
Figure 2.8: Plot of the Nyquist robust sensitivity margin k̄N,s = max_w kN,s(w) as a function of the blow-up factor α. The parametric robust stability margin is ᾱ = 1.89, which corresponds to the value of the blow-up factor α that makes k̄N,s approximately equal to unity.
2.6 Conclusions
The new concept of a Nyquist robust sensitivity margin can be used to quantify the robust stability of uncertain closed-loop systems while at the same time producing a meaningful indication of the worst-case sensitivity that is realized. Hence, in this sense the approach is more attractive than the classical Nyquist robust stability margin framework, which ignores all systems that do not lie along the critical direction, and that may therefore exclude from the analysis perturbed systems that have the worst sensitivity.
On the other hand, in general the calculation of the Nyquist robust sensitivity margin may involve more numerically intensive optimization work, since the optimization domain of program (2.11) is a superset of that of program (2.7). In other words, the calculation of kN,s(w) requires knowledge of the entire value-set boundary ∂V(w), whereas the calculation of kN(w) requires knowledge of only those points of ∂V(w) that lie along the critical direction.
The examples presented illustrate the ability of the Nyquist robust sensitivity margin methodology to produce meaningful quantitative measures of robustness for uncertain systems, even in the case where the uncertainty value set originates from a real parametric uncertainty description. The examples also show how the proposed paradigm can be used to characterize alternative robustness measures, such as a parametric blow-up factor for a real uncertainty description comprised of a rectangular polytope.
The numerical algorithm used to solve the problem of Section 2.4 has modest computational requirements because the calculation of the sensitivity radius can be carried out in a straightforward fashion. It is anticipated, however, that the computational cost associated with other particular real parametric uncertainty descriptions may become significantly higher, given that the general parametric uncertainty analysis problem is known to be NP-hard [44].
2.7 Supplementary Calculation Algorithms
Further detail on the computational techniques discussed in Section 2.4 is presented in this section. First, an alternative algorithm for finding the supporting circle of an arc is introduced. Second, a simple algorithm to find the minimum distance between the critical point and a line is discussed. Finally, a technique is discussed for determining whether the intersection point of the line through the critical point and the supporting circle of the arc segment actually lies on the arc.
2.7.1 Supporting Circle of an Arc
An alternative method for finding the supporting circle of an arc defined by three points, A(x1, x2, x3), is to utilize the equation of the circle. A circle that passes through the point x and is centered at the point C is given by

    (xr − Cr)² + (xi − Ci)² = r²    (2.28)

where the subscript r refers to the real part of the complex point x, and the subscript i refers to its imaginary part. Now, substituting the three points that define the arc, namely x1, x2, and x3, into equation (2.28), the following three equations are obtained:

    xr1² − 2 xr1 Cr + Cr² + xi1² − 2 xi1 Ci + Ci² = r²
    xr2² − 2 xr2 Cr + Cr² + xi2² − 2 xi2 Ci + Ci² = r²
    xr3² − 2 xr3 Cr + Cr² + xi3² − 2 xi3 Ci + Ci² = r²    (2.29)

The three equations (2.29) have three unknowns, namely Cr, Ci, and r, which precisely identify the supporting circle of the arc defined by x1, x2, and x3.
2.7.2 Minimum Distance between a Line and a Point
Given that uncertain systems of the form (2.19) produce frame elements that are either line segments or arcs, it is important to be able to find the minimum distance dmin between the critical point −1 + j0 and a line segment. The projection technique can be utilized to do just that. The following steps describe the procedure. First, given the critical point cp and the two endpoints ps and pe of the line segment, three vectors are defined as follows:

    vcp = [Re(cp), Im(cp)]^T,    vps = [Re(ps), Im(ps)]^T,    vpe = [Re(pe), Im(pe)]^T

Next, define the direction vector v = vpe − vps. Finally, the normalized projection is computed as

    P = ((vcp − vps) · v) / (v · v)

where '·' refers to the dot-product operator. Now, the minimum distance is calculated based on the value of the projection as follows:

- if P < 0 then dmin = |cp − ps|
- if P > 1 then dmin = |cp − pe|
- if 0 ≤ P ≤ 1 then dmin = |cp − (ps + P(pe − ps))|
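The projection procedure above translates directly into a short routine; this is an illustrative sketch using complex arithmetic for the points:

```python
def segment_min_distance(cp: complex, ps: complex, pe: complex) -> float:
    """Minimum distance from point cp to the segment [ps, pe], following
    the normalized-projection procedure of Section 2.7.2."""
    v = pe - ps
    # normalized projection of (cp - ps) onto the segment direction
    P = ((cp - ps).real * v.real + (cp - ps).imag * v.imag) / abs(v) ** 2
    if P < 0:
        return abs(cp - ps)        # closest to the start endpoint
    if P > 1:
        return abs(cp - pe)        # closest to the end endpoint
    return abs(cp - (ps + P * v))  # closest to an interior point

# distance from the critical point -1 + j0 to the segment from 0 to 2j
d = segment_min_distance(-1 + 0j, 0j, 2j)
```

For the segment from 1 to 2 on the real axis the projection is negative, so the start endpoint is the closest point.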
2.7.3 Identifying Points on the Arc
One decision that has to be made in Section 2.4 is whether the intersection point of the ray originating from the center of the supporting circle of the arc and passing through the critical point −1 + j0 lies on the arc. The key step is to identify the valid arc phase range, Iarc, such that if the phase of a point p belongs to Iarc then p lies on the arc. The algorithm can be summarized as follows:

* Shift the supporting circle and the intersection point p to the origin.
* Find the phases of the start and end points of the arc, and convert them so that each phase lies in the range [0, 2π].
* Denote the smallest phase φmin and the largest φmax.
* Find the phase of a midpoint on the arc and denote it φm.
  - If φmin ≤ φm ≤ φmax, then Iarc = [φmin, φmax]. Furthermore, if p ∈ Iarc then p lies on the arc.
  - Otherwise, the arc wraps through zero phase and Iarc = [0, φmin] ∪ [φmax, 2π]. Thus, if p ∈ Iarc then p lies on the arc.
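The phase-range test above can be sketched as follows. Rather than physically shifting the circle to the origin, the sketch measures phases relative to the circle center, which is equivalent; the function names are illustrative, not from the dissertation:

```python
import cmath
import math

def on_arc(p: complex, center: complex, a_start: complex, a_mid: complex,
           a_end: complex, tol: float = 1e-9) -> bool:
    """Test whether a point p on the supporting circle lies on the arc
    through a_start, a_mid, a_end (phase-range test of Section 2.7.3)."""
    def phase(z: complex) -> float:
        # phase relative to the circle center, wrapped into [0, 2*pi)
        return cmath.phase(z - center) % (2 * math.pi)

    ph_lo, ph_hi = sorted((phase(a_start), phase(a_end)))
    ph_mid, ph_p = phase(a_mid), phase(p)
    if ph_lo <= ph_mid <= ph_hi:
        return ph_lo - tol <= ph_p <= ph_hi + tol
    # midpoint falls outside [ph_lo, ph_hi]: the arc wraps through zero phase
    return ph_p <= ph_lo + tol or ph_p >= ph_hi - tol

# upper half of the unit circle, from 1 through e^{j*pi/4} to -1:
inside = on_arc(1j, 0j, 1 + 0j, cmath.exp(1j * math.pi / 4), -1 + 0j)
outside = on_arc(-1j, 0j, 1 + 0j, cmath.exp(1j * math.pi / 4), -1 + 0j)
```

The point j lies on the upper half-circle while −j does not, exercising the non-wrapping branch of the test.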
CHAPTER 3
SLIDING MODE CONTROL FOR TIMEDELAY SYSTEMS
3.1 Introduction
Delay is inherent in some control systems, such as processes involving heat or mass transport. The presence of delay in a dynamic system can have a destabilizing effect or can cause poor performance. Furthermore, delay can pose a significant challenge to ensuring closed-loop stability [27]. Throughout the literature, a variety of linear and nonlinear controllers have been used to stabilize time-delay systems, where the delay may appear in the state, the input, or both. Local and global stability conditions have also been derived to guarantee the asymptotic stability of the closed-loop system.
The emphasis of this work is on sliding mode control (SMC), a technique known for its robustness with respect to perturbations and system uncertainties, which has been used to stabilize systems with time delays; however, most of the literature focuses on systems with either state delay [6, 33, 43, 28] or input delay [25, 45]. Some work has been done regarding systems with simultaneous state and input delays [22, 20, 55]. In Gouaisbaut et al. [22] a sliding mode controller is designed to stabilize a linear system with input and state delay. The technique is based on transforming the system into a regular form [31], and then a memory control law that depends on previous values of the input is designed to ensure reaching the manifold as well as the asymptotic stability of the closed-loop system. Lyapunov-Krasovskii methods are employed to derive the stability conditions. The method presented in Feiqi et al. [20] incorporates a dynamic compensator into the switching function (manifold) in order to simplify the equivalent control law. Then, a control law that is a function of the switching manifold and the system state is utilized to stabilize a system that features a constant delay in both the state and the input. The work by Xia et al. [55] considers the derivation of delay-independent as well as delay-dependent stability conditions for a class of linear systems with simultaneous delay in the state and the input. An integral switching function with a compensator is utilized to obtain a simple equivalent control law. The stability conditions are given in terms of LMIs.
This chapter introduces a new approach to address the problem of stabilizing a linear system featuring both state and input delay. First, a state transformation is used to map the original system into an input-delay-free form where only state delays are present. Then, a new state, defined as the difference between the original state and the transformed state, is incorporated into the transformed system equation. Introducing an integral switching function in terms of the transformed states allows the derivation of a simple state-feedback equivalent control law. This control action, along with a proposed discontinuous control law, is shown to drive the states to the switching manifold in finite time. Finally, using a bound on the new state and utilizing Lyapunov techniques, the development delivers sufficient conditions that ensure the asymptotic stability of the closed-loop system in terms of a constant LMI that depends on the delay of the system.
The chapter is structured as follows. In Section 3.2 the problem is formulated along with the transformation that eliminates the input delay. The design of the control law, which consists of an equivalent control action and a discontinuous control action, is discussed in Section 3.3. In Section 3.4 the control law is shown to drive the system states to the sliding surface in finite time. Derivation of sufficient conditions for the asymptotic stability of the transformed and original systems is given in Section 3.5. A bound on the time delay tolerable by the system is also given. The chapter concludes with an illustrative example that verifies the results in Section 3.6, and presents a summary in Section 3.7.
3.2 Problem Formulation
The time-delay system considered is of the form

    ẋ(t) = Ax(t) + Ad x(t − h) + Bu(t) + Bd u(t − h)
    x(τ) = φ(τ), τ ∈ [−h, 0]
    u(τ) = ψ(τ), τ ∈ [−h, 0]    (3.1)

where x(t) ∈ R^n is the state, u(t) ∈ R^m is the control input, and A, Ad, B, and Bd are matrices of appropriate dimensions. The system delay h is considered to be constant, φ(τ) is an initial-state function, and ψ(τ) is an initial-input function. The notation |·| is used to indicate, depending on the scalar or vector nature of the argument, the absolute value of a scalar quantity or a vector norm, and ||·|| is used to indicate an induced matrix norm.
System Transformation. The following state transformation is introduced as suggested in [1] to map (3.1) into an input-delay-free system:

    z(t) = x(t) + ∫_{t−h}^{t} e^{A(t−h−τ)} Bd u(τ) dτ    (3.2)

Differentiating equation (3.2) gives

    ż(t) = ẋ(t) + A ∫_{t−h}^{t} e^{A(t−h−τ)} Bd u(τ) dτ + e^{−Ah} Bd u(t) − Bd u(t − h)

and then substituting for ẋ(t) from (3.1) gives

    ż(t) = Az(t) + Ad x(t − h) + B̄ u(t)

where B̄ = B + e^{−Ah} Bd. In this work it is assumed that the pair (A, B̄) is controllable. Let us define the new state

    v(t) := ∫_{t−h}^{t} e^{A(t−h−τ)} Bd u(τ) dτ    (3.3)

Then, the transformed system becomes

    ż(t) = Az(t) + Ad z(t − h) + B̄ u(t) + Ād v(t − h)    (3.4)

where Ād = −Ad. Note that from (3.2), v(t) = z(t) − x(t) is interpreted as the difference between the original system state x(t) and the transformed system state z(t). A feedback matrix F is introduced such that Ā = A − B̄F is Hurwitz [47]. Treating the last term in (3.4) as an internal disturbance and defining f(t, v(t − h)) := Ād v(t − h), the system equation (3.4) can be rewritten as

    ż(t) = (Ā + B̄F)z(t) + Ad z(t − h) + B̄u(t) + f(t, v(t − h))    (3.5)

which is free from input delay.
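The only matrix computation the transformation requires is the effective input matrix B̄ = B + e^{−Ah} Bd. A numerical sketch, using SciPy's matrix exponential and arbitrary illustrative system data (not values from this chapter), is:

```python
import numpy as np
from scipy.linalg import expm

# illustrative second-order system with delayed input (assumed data)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Bd = np.array([[0.0],
               [0.5]])
h = 0.1  # constant delay

# effective input matrix of the input-delay-free system (3.4)
B_bar = B + expm(-A * h) @ Bd
```

For h = 0 the expression collapses to B + Bd, as expected from (3.2); for h > 0 the delayed input channel is reshaped by e^{−Ah} before being folded into B̄.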
3.3 Switching Function and Control Law Design
The first step in the design of a sliding mode controller is to define a switching function (manifold) along which the system possesses desired properties, such as stability. Various structures of switching functions have been used in the SMC literature. The most common designs, however, are the basic form s(t) = Cx(t), the integral form [47], and the dynamic or compensated form [55]. The basic form is best suited for systems having the general structure ẋ(t) = Ax(t) + Bu(t). The integral form, adopted in this work, has the advantage of cancelling the delay terms, which allows obtaining a simple state-feedback equivalent control law, as is shown later. The dynamic form is preferred when known disturbances and/or delays exist in the system, as it helps to cancel these terms, hence yielding a simple equivalent control law.
The sliding surface is defined by a scalar switching function s(t) ∈ R of the integral form

    s(t) = Cz(t) − ∫_0^t [CĀz(τ) + CAd z(τ − h)] dτ    (3.6)

where C is a design matrix chosen such that CB̄ is nonsingular. The structure of the control law is given by

    u(t) = ue(t) + ud(t)    (3.7)

where ue(t) is the equivalent part and ud(t) is the discontinuous part of the control law. The equivalent control is obtained by setting to zero the derivative of equation (3.6) with respect to time, and then solving for u(t). Thus

    ṡ(t) = Cż(t) − CĀz(t) − CAd z(t − h) = 0

Following the standard approach in SMC, the state derivative ż(t) in the above equation is taken from (3.5) after ignoring the disturbance term f(t, v(t − h)). This gives the identity

    C(Ā + B̄F)z(t) + CAd z(t − h) + CB̄u(t) − CĀz(t) − CAd z(t − h) = 0

which reduces to

    CB̄Fz(t) + CB̄u(t) = 0

The solution of the above identity is u(t) = ue(t); hence, after recognizing that CB̄ is invertible, it is possible to conclude that the equivalent control law sought is

    ue(t) = −Fz(t)    (3.8)
The discontinuous control law proposed is

    ud(t) = −(CB̄)^{−1}[k s(t) + ρ(t) sgn(s(t))]    (3.9)

where

    ρ(t) = ||C|| ||Ād|| |v(t − h)| + ε    (3.10)

and where k > 0 and ε > 0 are design parameters, and v(t − h) = z(t − h) − x(t − h). It must be noted that the discontinuous part is what enforces sliding; however, the linear term k s(t) in equation (3.9) helps smooth out the trajectories. Various structures of the discontinuous control law can be found in Hung et al. [26].
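A minimal sketch of evaluating the composite control law (3.7)-(3.10) at one time instant is given below. The matrices C, B̄, F, Ād and the gains k, ε are illustrative assumptions, and the sliding variable is taken to be scalar as in (3.6):

```python
import numpy as np

def control(z, z_h, x_h, s, C, B_bar, F, Ad_bar, k=1.0, eps=0.1):
    """u(t) = u_e(t) + u_d(t) with u_e = -F z (3.8) and
    u_d = -(C B_bar)^{-1} [k s + rho sgn(s)] (3.9), rho from (3.10)."""
    v_h = z_h - x_h                      # v(t-h) = z(t-h) - x(t-h)
    rho = (np.linalg.norm(C) * np.linalg.norm(Ad_bar)
           * np.linalg.norm(v_h) + eps)  # switching gain (3.10)
    u_e = -F @ z
    u_d = -np.linalg.inv(C @ B_bar) @ (k * s + rho * np.sign(s))
    return u_e + u_d

# one evaluation with assumed data: C B_bar = [[1]] is nonsingular
C = np.array([[0.0, 1.0]])
B_bar = np.array([[0.0], [1.0]])
F = np.array([[1.0, 1.0]])
Ad_bar = np.eye(2)
u = control(np.array([1.0, 1.0]), np.zeros(2), np.zeros(2),
            np.array([0.5]), C, B_bar, F, Ad_bar)
```

With v(t − h) = 0 the switching gain reduces to ε, so here u = −Fz − (0.5 + 0.1) = −2.6.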
3.4 Existence of a Sliding Mode
By "existence of a sliding mode" we mean that the system trajectories must be forced to reach the sliding surface in finite time and stay there thereafter. Defining a Lyapunov function V(t) = (1/2)s^T(t)s(t), in order to ensure reaching the manifold in finite time it suffices to show that V̇(t) < 0. The following theorem provides the proof.
Theorem 1 The time-delay system (3.5) with control law (3.7)-(3.10) reaches the sliding manifold within a finite time ts, where

    ts = (1/k) ln(1 + (k/ε)|s(0)|)    (3.11)
Proof: Select V(t) = (1/2)|s(t)|² as a candidate scalar Lyapunov function. Then,

    V̇(t) = s(t)ṡ(t) = s(t)(Cż(t) − CĀz(t) − CAd z(t − h))
         = s(t){C(Ā + B̄F)z(t) + CAd z(t − h) + CB̄[−Fz(t) − (CB̄)^{−1}(k s(t) + ρ(t) sgn(s))] + Cf(t, v(t − h)) − CĀz(t) − CAd z(t − h)}
         = s(t)(−k s(t) − ρ(t) sgn(s)) + Cf(t, v(t − h)) s(t)
         = −k|s(t)|² − ρ(t)|s(t)| + Cf(t, v(t − h)) s(t)

Now, since |Cf(t, v(t − h))| ≤ ||C|| ||Ād|| |v(t − h)|, after invoking (3.10) it follows that

    V̇(t) ≤ −k|s(t)|² − ε|s(t)|    (3.12)

Therefore, V̇(t) < 0 for all k > 0 and ε > 0, and it can be concluded that the system trajectories attain sliding mode in finite time. An estimate for the upper bound of the reaching time ts can be obtained by integrating the differential equation V̇(t) = −k|s(t)|² − ε|s(t)|, where |s(t)| = √(2V(t)), under the initial condition V(0) = (1/2)|s(0)|². The result (3.11) is obtained after a simple change of variables and simple algebraic manipulations. The derivation of the reaching time for the cases k = 0 and k ≠ 0 is presented in Appendix A.
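The reaching-time bound (3.11) can be checked numerically by integrating the worst-case dynamics d|s|/dt = −k|s| − ε with forward Euler steps and recording the first time |s| reaches zero; the gains and initial condition below are illustrative:

```python
import math

# illustrative design parameters and initial sliding variable
k, eps, s0 = 2.0, 0.5, 3.0

# analytical reaching-time bound (3.11)
t_s = (1.0 / k) * math.log(1.0 + (k / eps) * s0)

# Euler integration of d|s|/dt = -k|s| - eps until |s| hits zero
dt, t, s = 1e-5, 0.0, s0
while s > 0.0:
    s += dt * (-k * s - eps)
    t += dt
```

The exact solution |s(t)| = (|s(0)| + ε/k)e^{−kt} − ε/k vanishes precisely at ts, so the numerically recorded hitting time agrees with (3.11) up to the step size.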
3.5 System Stability
After demonstrating that the sliding manifold is reached in finite time, it remains to show that once the system trajectories are in the sliding phase the system is asymptotically stable. In sliding mode the control law (3.7) reduces to u(t) = u_e(t). Then, from (3.8) it follows that the dynamic system (3.5) is given by the expression

ż(t) = Āz(t) + A_d z(t − h) + f(t, v(t − h))    (3.13)

where Ā = A + BF. The developments in the sequel make use of the inequality

|v(t)| ≤ η(h)|z(t)|    (3.14)

where

η(h) = h max_{θ∈[−h,0]} ‖e^{Āθ}‖ ‖B_d‖ ‖F‖ a    (3.15)

and where a ≥ 1 is a constant derived using a Razumikhin-like argument. The purpose of this constant is to describe the evolution of |z(t)|, i.e., |z(θ)| ≤ a|z(t)|, θ ∈ [t − h, t]. The bound (3.14) follows from applying successive bounding operations to the right-hand side of (3.3) and introducing the Razumikhin parameter. The complete derivation of the constant bound in (3.15) is given in Appendix B.
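The bound (3.15) is easy to evaluate numerically by gridding θ. The sketch below is illustrative only: the matrices and scalars are hypothetical placeholders (not the thesis data), and the matrix exponential is formed by eigendecomposition, which assumes Ā is diagonalizable:

```python
import numpy as np

def expm_eig(M):
    # Matrix exponential via eigendecomposition (assumes M is diagonalizable).
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def eta(A_bar, Bd, F, h, a, n_grid=200):
    """Numerical evaluation of the bound (3.15):
    eta(h) = h * max_{theta in [-h, 0]} ||e^(A_bar*theta)|| * ||Bd|| * ||F|| * a."""
    thetas = np.linspace(-h, 0.0, n_grid)
    max_norm = max(np.linalg.norm(expm_eig(A_bar * th), 2) for th in thetas)
    return h * max_norm * np.linalg.norm(Bd, 2) * np.linalg.norm(F, 2) * a

# Illustrative (hypothetical) data -- not the matrices of the thesis example:
A_bar = np.array([[-0.4, 0.0], [0.0, -0.45]])
Bd = 0.1 * np.eye(2)
F = np.array([[0.5, 0.2]])
print(eta(A_bar, Bd, F, h=0.8, a=2.0))
```

Note that for a Hurwitz Ā the maximum of ‖e^{Āθ}‖ over θ ∈ [−h, 0] is attained away from θ = 0, so the grid must cover the whole interval.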
We are now ready to provide the sufficient conditions for the asymptotic stability of the system (3.13), which are introduced by the following theorem.

Theorem 2 The time-delay system (3.5) with control law (3.7)-(3.10) is asymptotically stable in sliding mode if there exist positive-definite matrices P ∈ ℝⁿˣⁿ, R ∈ ℝⁿˣⁿ, and Q ∈ ℝⁿˣⁿ such that

λ_min(R) > λ_max(Q)    (3.16)

and

λ_min(Q)(λ_min(R) − λ_max(Q)) > (1 + η(h))² ‖PA_d‖²    (3.17)

where P and R are solutions to the Lyapunov equation

PĀ + ĀᵀP = −R    (3.18)
Proof: Consider a Lyapunov functional of the form
V(t) = zᵀ(t)Pz(t) + ∫_{t−h}^{t} zᵀ(τ)Qz(τ)dτ    (3.19)

The derivative of V(t) with respect to time is given by

V̇(t) = 2zᵀ(t)Pż(t) + zᵀ(t)Qz(t) − zᵀ(t − h)Qz(t − h)
Substituting the expression for ż(t) given in (3.13) yields
V̇(t) = 2zᵀ(t)PĀz(t) + 2zᵀ(t)PA_d z(t − h) + 2zᵀ(t)Pf(t, v(t − h)) + zᵀ(t)Qz(t) − zᵀ(t − h)Qz(t − h)    (3.20)

and then using (3.18) and bounding the right-hand side of (3.20) yields

V̇(t) ≤ −λ_min(R)|z(t)|² + λ_max(Q)|z(t)|² + 2‖PA_d‖ |z(t)| |z(t − h)| + 2‖PA_d‖ |z(t)| |v(t − h)| − λ_min(Q)|z(t − h)|²    (3.21)
Invoking the bound (3.14) and rearranging terms, inequality (3.21) can be written in the form

V̇(t) ≤ [ |z(t)|  |z(t − h)| ] [ a  b ; c  d ] [ |z(t)| ; |z(t − h)| ]    (3.22)

where

[ a  b ; c  d ] = [ λ_max(Q) − λ_min(R)   (1 + η(h))‖PA_d‖ ; (1 + η(h))‖PA_d‖   −λ_min(Q) ]    (3.23)
To prove asymptotic stability, the Lyapunov functional must satisfy V̇(t) < 0, which requires the matrix (3.23) to be negative definite. This holds if and only if conditions (3.16) and (3.17) are satisfied. ∎
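The equivalence invoked in the proof can be checked numerically. The sketch below assumes only the scalar spectral data λ_min(R), λ_min(Q), λ_max(Q), ‖PA_d‖, and η(h) are given, and verifies that conditions (3.16)-(3.17) agree with negative definiteness of the 2×2 matrix (3.23):

```python
import numpy as np

def theorem2_holds(lmin_R, lmin_Q, lmax_Q, norm_PAd, eta_h):
    """Check conditions (3.16) and (3.17) of Theorem 2, and verify that they
    coincide with negative definiteness of the 2x2 matrix (3.23)."""
    cond_316 = lmin_R > lmax_Q
    cond_317 = lmin_Q * (lmin_R - lmax_Q) > (1.0 + eta_h) ** 2 * norm_PAd ** 2
    M = np.array([[lmax_Q - lmin_R, (1.0 + eta_h) * norm_PAd],
                  [(1.0 + eta_h) * norm_PAd, -lmin_Q]])
    neg_def = bool(np.all(np.linalg.eigvalsh(M) < 0.0))
    assert (cond_316 and cond_317) == neg_def  # the two tests agree
    return neg_def

# Numbers from the example of Section 3.6 (||P A_d|| back-computed from
# (1 + eta)^2 ||P A_d||^2 = 5.4770 with eta(h) = 1.1858):
print(theorem2_holds(6.0, 3.0, 3.0, 1.0707, 1.1858))  # True
```

The internal assertion mirrors the "if and only if" statement: for a symmetric 2×2 matrix, negative definiteness is equivalent to a negative (1,1) entry and a positive determinant, which are exactly (3.16) and (3.17).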
Theorem 2 can be reformulated to show explicitly the constraint on the size of the delay parameter imposed by design choices, such as the adopted Lyapunov matrices R and Q. This is given in the following corollary.
Corollary 1 The time-delay system (3.5) with control law (3.7)-(3.10) is asymptotically stable in sliding mode for time-delay values satisfying

h max_{θ∈[−h,0]} ‖e^{Āθ}‖ < ( λ_min(R)/(2‖PA_d‖) − 1 ) · 1/(‖B_d‖ ‖F‖ a)    (3.24)

and

λ_min(R) > max(λ_max(Q), 2‖PA_d‖)    (3.25)
Proof: The proof consists of deriving conditions that ensure the existence of a feasible solution to (3.16) and (3.17). The proof also uses the fact that λ_min(Q) ≤ λ_max(Q). Using the latter inequality along with the constraint imposed on λ_min(Q) by (3.17), it follows that

(1 + η(h))² ‖PA_d‖² / (λ_min(R) − λ_max(Q)) < λ_min(Q) ≤ λ_max(Q)    (3.26)

A solution λ_min(Q) to (3.26) exists only if

(1 + η(h))² ‖PA_d‖² / (λ_min(R) − λ_max(Q)) < λ_max(Q)    (3.27)

which, using the fact that (3.16) requires λ_min(R) − λ_max(Q) > 0, is equivalent to

λ_max(Q)² − λ_min(R)λ_max(Q) + (1 + η(h))² ‖PA_d‖² < 0    (3.28)

The analysis of the above inequality reduces to investigating the boundary defined by the equality
λ_max(Q)² − λ_min(R)λ_max(Q) + (1 + η(h))² ‖PA_d‖² = 0    (3.29)

which can be readily solved to yield

λ_max(Q) = [ λ_min(R) ± sqrt( λ_min(R)² − 4(1 + η(h))² ‖PA_d‖² ) ] / 2

Given that only real solutions are meaningful, the discriminant must be nonnegative, i.e.,

λ_min(R) ≥ 2(1 + η(h)) ‖PA_d‖    (3.30)

The presence of the factor (1 + η(h)) > 1 implies that a feasible solution to (3.30) exists only if

λ_min(R) > 2‖PA_d‖    (3.31)

Furthermore, from (3.30) it follows that the set of feasible solutions is given by the equivalent inequality

η(h) ≤ λ_min(R)/(2‖PA_d‖) − 1

which establishes condition (3.24) of the corollary after using (3.15) and suitably rearranging the factors in the inequality. Moreover, since λ_min(R) must simultaneously satisfy condition (3.16) and constraint (3.31), it must satisfy condition (3.25) of the corollary. ∎
It remains to show the asymptotic stability of system (3.1), as addressed in the following theorem.
Theorem 3 The timedelay system (3.1) with state x(t) is asymptotically stable if the transformed system (3.5) reaches the sliding manifold and is asymptotically stable on the manifold.
Proof: If z(t) reaches the sliding surface, then the control law reduces to u(t) = Fz(t), and (3.2) can be rearranged in the form

x(t) = z(t) + ∫_{t−h}^{t} e^{Ā(t−h−τ)} B_d F z(τ)dτ    (3.32)

Now, when (3.5) is asymptotically stable it follows that z(t) → 0, and hence x(t) → 0 in (3.32), which completes the proof. ∎
The ensuing discussion presents an example that illustrates the results. From Figure 3.1, the chattering phenomenon in the control action is obvious. Chattering refers to high-frequency, finite-amplitude signals; it is mainly due to the discontinuous control law. Chattering is undesirable because it can excite neglected high-frequency dynamics and lead to premature wear of the actuators.
Several approaches have been proposed in the literature to alleviate or eliminate the effect of chattering. Slotine [49] [50] proposed the use of a boundary layer such that standard SMC is used outside the boundary, while an approximated version of it takes effect inside the boundary. The work of Bartolini [5] introduces a different scheme for chattering reduction: the system order is increased, an estimator based on the augmented plant is defined, and a suitable manifold is defined such that the derivative of the control law is discontinuous on this manifold. This control law is then fed through an integrator placed in the plant to yield a continuous control law. In [34], the actuator dynamics are treated as unmodelled dynamics, and thus are not part of the control law; instead, the low-pass filter characteristics of the actuators are utilized to smooth out the chattering introduced by the discontinuous control action.
3.6 Example
Consider the time-delay system (3.1) with h = 0.8, an initial-input function Ψ(τ) = 0 for τ ∈ [−h, 0), an initial-state function Φ(τ) = [1 2]ᵀ for τ ∈ [−h, 0] so that the initial-state vector is x(0) = [1 2]ᵀ, and the following system parameters:
F1 0 [001 0042
A A 0 , B=[] Bd=
0.2 0.3 0.02 0 1 0
The control design considered is based on the following matrices associated with the Lyapunov equation (3.18) and the Lyapunov functional (3.19):
R = [6 0; 0 6],  Q = [3 0; 0 3],  P = [ ·  5.7537 ; 5.7537  48.4578 ]

Using the feedback matrix F = [0.0166 0.2827], the eigenvalues of Ā are placed at {−0.4, −0.45}. The controller parameters are k = 30 and ζ = 3. The switching function's initial value is s(0) = 3, and its design matrix is chosen as C = [0.9 0.85]. Selecting a = 2.3391, calculating the norms ‖B_d‖ = 1 and ‖F‖ = 0.2832, and evaluating max_{θ∈[−h,0]} ‖e^{Āθ}‖ = 2.2381, equation (3.15) gives η(h) = 1.1858. It is now straightforward to verify that conditions (3.16) and (3.17) of Theorem 2 are satisfied. First, condition (3.16) is met given that λ_min(R) = 6 is greater than λ_max(Q) = 3. Also, condition (3.17) is met since λ_min(Q)(λ_min(R) − λ_max(Q)) = 9 is greater than (1 + η(h))² ‖PA_d‖² = 5.4770. It follows that Theorem 2 ensures the asymptotic stability of the closed-loop system. The conditions of Corollary 1 are also satisfied, since this corollary is equivalent to Theorem 2. In fact, h max_{θ∈[−h,0]} ‖e^{Āθ}‖ = 1.7905 is less than (λ_min(R)/(2‖PA_d‖) − 1)/(‖B_d‖ ‖F‖ a) = 2.7208, and λ_min(R) = 6 is greater than max(λ_max(Q), 2‖PA_d‖) = 3.
Figure 3.1 shows the results of a simulation study. Figure 3.1(a) depicts the asymptotic stability of the transformed system (3.5) with state variable z(t). The state trajectories for the original system (3.1) with state variable x(t) are shown in Figure 3.1(b). Equation (3.11) yields t_s = 0.1145, a value that is consistent with the time at which s(t) becomes identically zero in Figure 3.1(d), given that at that instant z(t) has reached the sliding manifold. It is apparent that the states x(t) develop asymptotic behavior after a time t ≈ t_s + h = 0.9145, which is a consequence of the fact that the original system has an input delay whereas the transformed system is free of input delay. Figure 3.1(c) shows the control action u(t) rising quickly from its initial value and reaching the value of zero at approximately the same time that z(t) reaches the sliding manifold. Figure 3.1(c) also shows that the control scheme suffers from a chattering effect, as is to be expected from the presence of the signum function in the discontinuous control law (3.9).
The chattering of the signal can be alleviated by introducing the approximation

sgn(s) ≈ s/(|s| + ε)

Figure 3.2 shows that a value of ε = 0.001 effectively makes the chattering disappear (see Figure 3.2(c)), while the state trajectories z(t), x(t), and the switching function s(t) remain virtually unchanged.
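A minimal sketch of this smoothing is shown below; the function names are illustrative, and the behavior near and away from the switching surface can be inspected directly:

```python
def sgn_smooth(s, eps=0.001):
    """Continuous approximation sgn(s) ~ s / (|s| + eps) used to suppress chattering."""
    return s / (abs(s) + eps)

# Away from the switching surface the approximation is close to +/-1;
# near s = 0 it passes continuously through zero instead of jumping.
for s in (-1.0, -0.01, 0.0, 0.01, 1.0):
    print(s, round(sgn_smooth(s), 4))
```

The smaller ε is, the closer the approximation is to the true signum function, at the cost of a steeper (but still continuous) transition through s = 0.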
Remark 1 The negative definiteness of the constant matrix (3.23) can be checked directly through the Linear Matrix Inequality (LMI) toolbox of Matlab.
3.7 Conclusions
A sliding mode controller has been designed to stabilize a linear system with state and input delay. A key step is the use of a transformation that renders the system free of input delay. This transformation is also used to define a new state, in which the effect of the original input delay appears as
a disturbance in the transformed system. A SMC is then designed to stabilize the statedelay system. The controller is shown to successfully drive the system states to the sliding surface in finite time. Sufficient stability conditions are derived using Lyapunov techniques.
Figure 3.1: Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d).
Figure 3.2: Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d), with approximation to the signum function.
CHAPTER 4
STATE FEEDBACK CONTROL OF TIMEDELAY BILINEAR SYSTEMS
4.1 Introduction
This chapter considers the stabilization of a class of state-delayed bilinear systems with constant delay. Much work has been done to derive sufficient conditions for the asymptotic stability of closed-loop bilinear systems via a variety of controllers, including state feedback [24], quadratic feedback [11], and nonlinear control [10, 12]. Work has also been done to derive stability conditions for time-delay bilinear systems. The combination, however, of time delay and nonlinearity makes the design of stabilizing controllers, as well as the analysis, much more challenging. Stability conditions can be either delay-independent or delay-dependent. Delay-independent conditions do not give any information regarding the size of the delay tolerable by the system, and are therefore generally more conservative. Delay-dependent conditions, on the other hand, provide information about the bound on the delay, which leads to less conservatism [18]. Some results can be found in [13, 23, 24, 42]. In Chiang [13], the stability analysis of a class of input-delay bilinear systems is considered; the derivations utilize the Razumikhin parameter in conjunction with matrix measure techniques [52]. The stabilization of a class of state-delay bilinear systems with saturating actuators is investigated in Niculescu et al. [42]. The work by Ho et al. [24] utilizes a memory state-feedback control law to yield global stability conditions for a class of time-delay bilinear systems. In Guojun [23], the stabilization of a class of time-varying bilinear systems with output feedback is studied. Delay-dependent conditions are given in Liu [37], where a memoryless state-feedback control law is
used to derive stability conditions for a timedelay bilinear system with saturating actuators.
In this chapter, the stability analysis of a class of state-delay bilinear systems via state feedback control is investigated. Lemmas 2 and 3 are developed to facilitate the proof of the main theorem. The analysis utilizes the matrix measure [52] and a technique that allows expressing the stability conditions in terms of a bound on the system delay and an initial-condition region of attraction. As a result, delay-dependent stability conditions are derived.
The chapter is organized as follows. Section 4.2 presents the problem along with controllability assumptions. Section 4.3 introduces preliminary results which will be utilized in the proof of the main result. The main result is presented in Section 4.4, followed by an example and conclusions in Sections 4.5 and
4.6, respectively.
4.2 Problem Statement
Consider the system
ẋ(t) = Ax(t) + A_d x(t − h) + Bu(t) + Nx(t)u(t)

x(θ) = Ψ(θ), θ ∈ [−h, 0]    (4.1)

where t ∈ ℝ is the time variable, x(t) ∈ ℝⁿ is the state, u(t) ∈ ℝ is a scalar input, A, A_d, N, and B are matrices of appropriate dimensions, and Ψ(θ) is an initial-state function. The nonnegative system delay h is considered to be constant. In our development, the following conventions are used: the notation |·| is used to denote a vector p-norm, and the notation ‖·‖ is used to denote the induced matrix p-norm. Finally, μ(·) is used to denote the matrix-measure function (see Appendix E for its definition and useful properties) based on the induced matrix p-norm. Also, the following two assumptions are adopted:
(A1) the pair (A, B) is controllable.
(A2) the pair (A + A_d, B) is controllable.

Our objective is to find a linear state feedback control of the form

u(t) = Fx(t)    (4.2)

that renders the closed-loop system asymptotically stable. We also aim at deriving delay-dependent stability conditions which guarantee the asymptotic stability of the system for initial conditions lying in a specified region, the region of attraction. Under the feedback control (4.2) the closed-loop system becomes

ẋ(t) = Āx(t) + A_d x(t − h) + Nx(t)Fx(t)    (4.3)

where Ā = A + BF is Hurwitz stable.
4.3 Preliminary Results
The main stability result is given in Theorem 1, the proof of which makes use of the following three lemmas.
Lemma 1 Consider the scalar differential equation

ẏ(t) = a y(t)² + b y(t)    (4.4)

where a > 0 and b ≠ 0. Then the analytic solution is given by [13]

y(t) = b e^{bt} y(0) / ( b + a y(0)(1 − e^{bt}) )    (4.5)

where y(0) is the initial condition.
Proof: The derivation of the analytic solution (4.5) is given in Appendix C. Furthermore, a thorough discussion of the solution behavior is provided in Section 4.7, along with a graphical interpretation of the solution. The finite escape time t_f, for which y(t_f) → ∞, can be found by setting the denominator of (4.5) to zero. In our development, the focus is on nonnegative initial conditions y(0) ≥ 0 that are not equilibrium points of (4.4) (i.e., y(0) ≠ 0 and y(0) ≠ −b/a). If 0 < y(0) < −b/a and b < 0, then there is no finite escape time, and from (4.5) the following observations are readily verified:

(i) y(t) > 0 ∀ t < ∞

(ii) y(t + T) < y(t) ∀ 0 < T < ∞

(iii) lim_{t→∞} y(t) = 0 ∎
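The closed form (4.5) and claims (i)-(iii) can be spot-checked numerically. The sketch below verifies, by central differences, that (4.5) satisfies the ODE (4.4) in the stable case b < 0 with 0 < y(0) < −b/a; the parameter values are illustrative:

```python
import math

def y_analytic(t, a, b, y0):
    """Closed-form solution (4.5) of dy/dt = a*y^2 + b*y."""
    ebt = math.exp(b * t)
    return b * ebt * y0 / (b + a * y0 * (1.0 - ebt))

a, b, y0 = 1.0, -2.0, 0.5   # stable case: 0 < y0 < -b/a = 2
for t in (0.5, 1.0, 2.0):
    y = y_analytic(t, a, b, y0)
    # central difference of y(t) should match the right-hand side a*y^2 + b*y
    dy = (y_analytic(t + 1e-6, a, b, y0) - y_analytic(t - 1e-6, a, b, y0)) / 2e-6
    assert abs(dy - (a * y * y + b * y)) < 1e-4
print(y_analytic(10.0, a, b, y0))  # decays toward zero, consistent with claim (iii)
```

The solution is positive and strictly decreasing on this trajectory, in agreement with claims (i) and (ii).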
Remark 1 It should be noted that Claims (i) and (iii) can be verified from the analytical solution (4.5). The proof of Claim (i) is given in Appendix D. Claim (ii) can be proved by verifying that (4.5) implies the inequality y(t + T) < y(t) ∀ 0 < T < ∞. The complete proof is given in Appendix D.

Lemma 2 Consider the scalar differential equation

ż(t) = a z(t)² + b z(t) + c₀    (4.6)

where a > 0, b < 0, and c₀ > 0. Let k₁ and k₂, with k₁ < k₂, be the roots of a z² + b z + c₀ = 0. If b² > 4ac₀ and k₁ < z(0) < k₂, then
(i) z(t) > 0 ∀ t < ∞

(ii) z(t + T) < z(t) ∀ 0 < T < ∞

(iii) z(t) → k₁ as t → ∞
Proof: The condition b² > 4ac₀ implies that k₁ and k₂ are real distinct roots. Introducing the state transformation

y(t) = z(t) + r    (4.7)
where r is a real constant to be determined, and combining the time derivative of (4.7) with equation (4.6) yields

ẏ(t) = a y(t)² + (b − 2ar)y(t) + ar² − br + c₀    (4.8)

The quadratic form defined by the last three terms on the right-hand side of equation (4.8) is set to zero by the values

r₁,₂ = ( b ∓ sqrt(b² − 4ac₀) ) / (2a)    (4.9)

where r₁ < r₂. One can easily verify that k₁ = −r₂ > 0 and k₂ = −r₁ > 0. The objective is to select a value of r that renders the coefficient b − 2ar negative in equation (4.8). This is realized if and only if r > b/(2a), which as a consequence of equation (4.9) is satisfied by selecting r = r₂, the larger root. Therefore, substituting r = r₂ into equation (4.8) yields

ẏ(t) = a y(t)² + (b − 2ar₂)y(t)    (4.10)

Now, equation (4.10) has the same form as equation (4.4); hence, Lemma 1 can be applied to conclude that y(t) → 0 as t → ∞, provided that 0 < y(0) < −(b − 2ar₂)/a. From the transformation equation (4.7), it follows that z(t) → −r₂ = k₁ as t → ∞, provided that the initial condition satisfies z(0) < −r₁ = k₂. This proves claim (iii) of the lemma. Claims (i) and (ii) can be readily verified from Lemma 1 and the transformation (4.7). First, since y(t) > 0, then z(t) > −r₂ = k₁ > 0. Second, since y(t) is strictly monotonically decreasing, it follows that its shifted version z(t) is also strictly monotonically decreasing. ∎

Remark 2 The arrows on the z-axis of Figure 4.1a show that the solution converges to the smaller equilibrium point k₁ whenever the initial condition satisfies z(0) < k₂. Figure 4.1b shows the conceptual state trajectories for three different initial conditions.
Figure 4.1: Graphical interpretation of the differential equation of Lemma 2: (a) derivative graph, (b) solution curves

Lemma 3 Consider the scalar system
ż(t) = a z(t)² + b z(t) + h c₁ sup_{t−2h≤θ≤t} z(θ) + h c₂ sup_{t−2h≤θ≤t} z(θ)²    (4.11)

where h > 0, a > 0, c₁ > 0, c₂ > 0, b < 0, and where the initial-state function

φ(θ) = z(θ) ≥ 0, θ ∈ [−2h, 0]    (4.12)

satisfies sup_{−2h≤θ≤0} z(θ) = z(0). If

( −b − sqrt(b² − 4ac₀) ) / (2a) < z(0) < min{ −(b + hc₁)/(a + hc₂), ( −b + sqrt(b² − 4ac₀) ) / (2a) }    (4.13)

and

h < min{ b²/(4a c̄₀), −b/c₁ }    (4.14)

where c₀ = h(c₁ z(0) + c₂ z(0)²) and c̄₀ = c₁ z(0) + c₂ z(0)², then

(i) z(t) > 0 ∀ t < ∞

(ii) z(t + T) < z(t) ∀ 0 < T < ∞

(iii) lim_{t→∞} z(t) = 0
Proof: Let T = 2h. Now, consider an initial condition that satisfies (4.13), hence z(0) > 0, and assume that at some time t₂ < ∞ the state satisfies z(t₂) < 0, and that the state changes sign for the first time at an instant t₁ < t₂ such that t₂ − t₁ < T. This scenario can hold only if ż(t) < 0 at some time t ∈ (t₁, t₂]. The proof consists of showing that this is a contradiction. From equation (4.11) it follows that for all t ∈ (t₁, t₂] the state derivative ż(t) is strictly positive, because all the terms on the right-hand side are positive. This contradicts the hypothesis and hence proves claim (i). For claims (ii) and (iii), the proof is conducted in four steps. First, the system is shown to be strictly monotonically decreasing in the interval [0, T). Next, the system is shown to be strictly monotonically decreasing in the interval [T, 2T). In a third step, it is shown that the strict monotonic decrease is preserved in all subsequent intervals of length T. Finally, in step four it is shown that as t → ∞ both z(t) → 0 and ż(t) → 0.
As a preliminary observation, note that at t = 0 equation (4.11) can be written as

ż(0) = a z(0)² + b z(0) + hc₁ z(0) + hc₂ z(0)² = (a + hc₂)z(0)² + (b + hc₁)z(0)    (4.15)

The states that set ż(0) = 0 can be found from the factorization

0 = (a + hc₂)z(0)[ (b + hc₁)/(a + hc₂) + z(0) ]    (4.16)

Hence, the initial states z(0) = z₁ and z(0) = z₂, where z₁ = 0 and z₂ = −(b + hc₁)/(a + hc₂), produce zero derivatives at the initial time t = 0. The focus now turns to determining conditions that ensure that z(t) is a decreasing function of time. This requires that the condition ż(t) < 0 hold at all finite time.
The first step of the proof considers the interval t ∈ [0, T). Depending on the parameter b, there are two scenarios of relevance. If b + hc₁ > 0, then since z(0) > 0 it follows from (4.15) that ż(0) > 0 and the solution is initially increasing. Obviously, this case is not desired. However, if

b + hc₁ < 0    (4.17)

then from (4.15) it can be concluded that if

z(0) < −(b + hc₁)/(a + hc₂)    (4.18)

then ż(0) < 0, and it follows that the solution is initially decreasing. Therefore, in the interval [0, T) equation (4.11) can be written in a form similar to that given in Lemma 2, namely,

ż₀(t) = a z₀(t)² + b z₀(t) + c₀    (4.19)

where c₀ = hc₁ z(0) + hc₂ z(0)² and where z₀(t) is the solution of (4.11) for t ∈ [0, T). In order to ensure that the equilibrium points of (4.19) are real and distinct, the discriminant must satisfy b² − 4ac₀ > 0, which implies that the system delay must satisfy h < h₁ := b²/(4a c̄₀), where c̄₀ = c₁ z(0) + c₂ z(0)². Furthermore, since b + hc₁ < 0 is the desired condition, it follows that h must also satisfy h < h₂ := −b/c₁. This suggests that the system delay must satisfy h < min{h₁, h₂}, which yields inequality (4.14) of the lemma. Let k₁ = ( −b − sqrt(b² − 4ac₀) )/(2a) and k₂ = ( −b + sqrt(b² − 4ac₀) )/(2a) respectively denote the smallest and largest roots of the right-hand side of (4.19). Since b < 0, Lemma 2 can be applied to (4.19) to conclude that the solution z₀(t) is strictly monotonically decreasing, and that z₀(t) → k₁ as t → ∞, provided that z(0) belongs to the region of attraction k₁ < z(0) < k₂. In order to also satisfy the constraint (4.18), the region of convergence is redefined as

k₁ < z(0) < min{ −(b + hc₁)/(a + hc₂), k₂ }    (4.20)

which is equivalent to inequality (4.13) in the lemma.
The second step aims to show that z(t) decreases in [T, 2T). Since it has been established that z(t) decreases in the interval [0, T), it follows that sup_{t−T≤θ≤t} z(θ) = z(t − T), so that for t ∈ [T, 2T) system (4.11) can be written as

ż_τ(t) = a z_τ(t)² + b z_τ(t) + c(t)    (4.21)

where

c(t) = hc₁ z(t − T) + hc₂ z(t − T)²    (4.22)

and where z_τ(t) is the solution of (4.11) when t ∈ [T, 2T). Note that, as a consequence of the results of the first step of the proof, c(t) is strictly decreasing in [T, 2T), which implies that the roots of the right-hand side of (4.21), namely,

K₁(t) = ( −b − sqrt(b² − 4ac(t)) ) / (2a)    (4.23)

and

K₂(t) = ( −b + sqrt(b² − 4ac(t)) ) / (2a)

are such that the smaller root K₁(t) is decreasing and the larger root K₂(t) is increasing. This, in turn, implies that z(t) is strictly decreasing.
The third step involves extending the results of the second step to the subsequent intervals [2T, 3T), [3T, 4T), and so on. Let z_n(t) represent the solution of system (4.11) in the interval [nT, (n + 1)T), where n ≥ 2 is an integer. When t ∈ [nT, (n + 1)T), system (4.11) can be written in the equivalent form

ż_n(t) = a z_n(t)² + b z_n(t) + c(t)

where c(t) is given by (4.22). Note that when n = 2, the function c(t) is strictly monotonically decreasing in t ∈ [T, 2T). Repeating the argument invoked in the second part of the proof, namely that the monotonicity of c(t) in the interval ensures that K₁(t) given in (4.23) is strictly monotonically decreasing in that interval, leads to the conclusion that z_n(t) is strictly monotonically decreasing in [nT, (n + 1)T) when n = 2. The proof is completed by induction for n = 3, 4, and so on. Hence, z_n(t) is strictly decreasing in any interval of length T. This implies that z(t) is strictly decreasing, which proves claim (ii).
Step four of the proof is based on recognizing that, from claim (i), z(t) is bounded from below; using the fact that z(t) is strictly monotonically decreasing, it follows that z(t) → L as t → ∞, where L < ∞ is a limit, and that ż(t) → 0. Taking the limit as t → ∞ on each side of equation (4.21) yields

0 = (a + hc₂)L² + (b + hc₁)L    (4.24)

Solving for the two limits of (4.24) yields L₁ = 0 and L₂ = −(b + hc₁)/(a + hc₂). Now, given that the initial condition (4.13) implies that z(0) < −(b + hc₁)/(a + hc₂) = L₂, then as t → ∞ the decreasing state z(t) must reach the lower limit L₁ = 0. Thus, lim_{t→∞} z(t) = 0. This proves claim (iii). ∎
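Lemma 3 can be illustrated by a forward-Euler integration of (4.11), taking the supremum over a sliding window of length 2h. The parameters below are illustrative assumptions chosen to satisfy b + hc₁ < 0, b² > 4ac₀, and the initial-condition constraint (4.13):

```python
def simulate_comparison(a, b, c1, c2, h, z0, dt=2e-3, t_end=10.0):
    """Forward-Euler integration of (4.11) with a constant initial function."""
    n_hist = int(round(2.0 * h / dt))       # samples covering [t - 2h, t]
    hist = [z0] * (n_hist + 1)
    z = z0
    for _ in range(int(t_end / dt)):
        sup_z = max(hist)                   # sup over the sliding window
        z = z + dt * (a * z * z + b * z + h * c1 * sup_z + h * c2 * sup_z ** 2)
        hist.pop(0)
        hist.append(z)
    return z

# Illustrative parameters: b + h*c1 = -0.9 < 0 and b^2 > 4*a*c0 both hold.
a, b, c1, c2, h, z0 = 0.1, -1.0, 0.2, 0.05, 0.5, 0.5
print(simulate_comparison(a, b, c1, c2, h, z0))  # decays toward zero (claim (iii))
```

The trajectory stays positive and decreases monotonically toward zero, matching claims (i)-(iii); the delayed supremum merely slows the decay relative to the delay-free case.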
4.4 Main Result
Now, utilizing the developments in Lemma 3, we are ready to present the main result, which concerns the asymptotic stability of the original system (4.1). The approach is to bound the norm of the solution of (4.1) (i.e., |x(t)|) by a scalar function z(t) that is asymptotically stable. Thus, when z(t) → 0 then |x(t)| → 0. An argument based on the comparison theorem [41] (see Appendix F) is utilized.

Theorem 1 The time-delay bilinear system (4.1) under assumptions (A1) and (A2) and the state feedback control law (4.2) is asymptotically stable if

0 < |x(0)| < min{ −(b + hc₁)/(a + hc₂), ( −b + sqrt(b² − 4ac₀) )/(2a) }    (4.25)
and

h < min{ b²/(4a c̄₀), −b/c₁ }    (4.26)

where a = ‖N‖ ‖F‖, b = μ(Ã), c₁ = ‖A_d Ā‖ + ‖A_d A_d‖, c₂ = ‖A_d N‖ ‖F‖, c₀ = h(c₁|x(0)| + c₂|x(0)|²), c̄₀ = c₁|x(0)| + c₂|x(0)|², and Ã = Ā + A_d.
Proof: Since the solution of equation (4.3) is continuously differentiable,

x(t) − x(t − h) = ∫_{−h}^{0} ẋ(t + θ)dθ    (4.27)

Substituting for ẋ(t) from equation (4.3) and rearranging terms yields the expression

x(t − h) = x(t) − ∫_{−h}^{0} { Āx(t + θ) + A_d x(t + θ − h) + Nx(t + θ)Fx(t + θ) }dθ

which can be substituted in equation (4.3) to get

ẋ(t) = Āx(t) + A_d [ x(t) − ∫_{−h}^{0} { Āx(t + θ) + A_d x(t + θ − h) + Nx(t + θ)Fx(t + θ) }dθ ] + Nx(t)Fx(t)

or

ẋ(t) = Ãx(t) + (−A_d) ∫_{−h}^{0} { Āx(t + θ) + A_d x(t + θ − h) + Nx(t + θ)Fx(t + θ) }dθ + Nx(t)Fx(t)    (4.28)

where Ã = Ā + A_d. The solution to equation (4.28) has the form

x(t) = e^{Ãt}x₀ + ∫₀ᵗ e^{Ã(t−s)} [ ∫_{−h}^{0} { (−A_d)Āx(s + θ) + (−A_d)A_d x(s + θ − h) + (−A_d)Nx(s + θ)Fx(s + θ) }dθ + Nx(s)Fx(s) ] ds    (4.29)

where x₀ = x(0) is the initial condition obtained by setting x(0) = Ψ(0) in equation (4.1). Utilizing the matrix measure property ‖e^{Ãt}‖_p ≤ e^{μ_p(Ã)t} [14], and taking the
norm of both sides of equation (4.29) gives

|x(t)| ≤ e^{μ(Ã)t}|x₀| + ∫₀ᵗ e^{μ(Ã)(t−s)} [ ∫_{−h}^{0} { ‖A_d Ā‖ |x(s + θ)| + ‖A_d A_d‖ |x(s + θ − h)| + ‖A_d N‖ ‖F‖ |x(s + θ)|² }dθ + ‖N‖ ‖F‖ |x(s)|² ] ds

Now, the inner integral can be bounded using the supremum of its arguments and the length h of the integration interval, to yield

|x(t)| ≤ e^{μ(Ã)t}|x₀| + ∫₀ᵗ e^{μ(Ã)(t−s)} [ h ‖A_d Ā‖ sup_{s−h≤θ≤s} |x(θ)| + h ‖A_d A_d‖ sup_{s−2h≤θ≤s−h} |x(θ)| + h ‖A_d N‖ ‖F‖ sup_{s−h≤θ≤s} |x(θ)|² + ‖N‖ ‖F‖ |x(s)|² ] ds

Letting a = ‖N‖ ‖F‖, c₁ = ‖A_d Ā‖ + ‖A_d A_d‖, c₂ = ‖A_d N‖ ‖F‖, and adopting the largest interval s − 2h ≤ θ ≤ s gives

|x(t)| ≤ e^{μ(Ã)t}|x₀| + ∫₀ᵗ e^{μ(Ã)(t−s)} { hc₁ sup_{s−2h≤θ≤s} |x(θ)| + hc₂ sup_{s−2h≤θ≤s} |x(θ)|² + a|x(s)|² } ds    (4.30)

Now, let a scalar function z(t) with initial condition z₀ = |x₀| satisfy

z(t) = e^{μ(Ã)t}z₀ + ∫₀ᵗ e^{μ(Ã)(t−s)} { hc₁ sup_{s−2h≤θ≤s} z(θ) + hc₂ sup_{s−2h≤θ≤s} z(θ)² + a z(s)² } ds

whose differential form is

ż(t) = a z(t)² + μ(Ã)z(t) + hc₁ sup_{t−2h≤θ≤t} z(θ) + hc₂ sup_{t−2h≤θ≤t} z(θ)²    (4.31)
The system equation (4.31) is of the same form as equation (4.11), with b = μ(Ã). Two cases are considered, depending on the norm of the initial condition.
Case 1: x₀ = x₀′, where k₁ < |x₀| < min{ −(b + hc₁)/(a + hc₂), k₂ }.

Case 2: x₀ = x₀″, where 0 < |x₀| ≤ k₁.

where k₁ = ( −b − sqrt(b² − 4ac₀) )/(2a) and k₂ = ( −b + sqrt(b² − 4ac₀) )/(2a). For Case 1, let the solution to (4.3) be denoted as x′(t), such that inequality (4.30) applies with x(t) = x′(t). Invoking Lemma 3 shows that z(t) → 0 as t → ∞. Hence, since |x′(t)| ≤ z(t), it follows that x′(t) → 0, which implies that x(t) is asymptotically stable. For Case 2, let the solution to (4.3) be denoted as x″(t), such that inequality (4.30) applies with x(t) = x″(t). Since |x₀″| ≤ |x₀′|, then from (4.30) it follows that |x″(t)| ≤ |x′(t)|. Finally, since x′(t) → 0, then x″(t) → 0, which implies that the system is asymptotically stable. This completes the proof. ∎
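The quantities of Theorem 1 are straightforward to evaluate numerically. The sketch below uses the 2-norm and its matrix measure μ₂(A) = λ_max((A + Aᵀ)/2); the matrices are hypothetical placeholders (not the data of the example in Section 4.5), so the printed verdict applies only to this illustrative system:

```python
import numpy as np

def mu2(A):
    """Matrix measure induced by the 2-norm: mu_2(A) = lambda_max((A + A^T)/2)."""
    return float(np.max(np.linalg.eigvalsh((A + A.T) / 2.0)))

def theorem1_conditions(A, Ad, B, N, F, h, x0):
    """Evaluate a, b, c1, c2, c0 of Theorem 1 (2-norm version) and test the
    region-of-attraction condition (4.25) and the delay bound (4.26)."""
    A_bar = A + B @ F                      # closed-loop matrix (assumed Hurwitz)
    A_tld = A_bar + Ad
    a = np.linalg.norm(N, 2) * np.linalg.norm(F, 2)
    b = mu2(A_tld)
    c1 = np.linalg.norm(Ad @ A_bar, 2) + np.linalg.norm(Ad @ Ad, 2)
    c2 = np.linalg.norm(Ad @ N, 2) * np.linalg.norm(F, 2)
    x0n = np.linalg.norm(x0, 2)
    c0 = h * (c1 * x0n + c2 * x0n ** 2)
    c0_bar = c1 * x0n + c2 * x0n ** 2
    disc = b * b - 4.0 * a * c0
    cond_425 = (b + h * c1 < 0 and disc > 0 and
                0 < x0n < min(-(b + h * c1) / (a + h * c2),
                              (-b + disc ** 0.5) / (2.0 * a)))
    cond_426 = b < 0 and h < min(b * b / (4.0 * a * c0_bar), -b / c1)
    return bool(cond_425 and cond_426)

# Hypothetical data for illustration only:
A = np.array([[-2.0, 0.1], [0.1, -2.0]])
Ad = 0.1 * np.eye(2)
B = np.array([[1.0], [0.0]])
N = 0.05 * np.eye(2)
F = np.array([[0.1, 0.0]])
x0 = np.array([1.0, 1.0])
print(theorem1_conditions(A, Ad, B, N, F, h=0.5, x0=x0))
```

For a sufficiently large delay the same data fails the bound (4.26), illustrating the delay-dependent character of the conditions.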
4.5 Example
Consider the time-delay bilinear system (4.1) with h = 0.5, an initial-state function Ψ(θ) = [1.5 1.5]ᵀ for θ ∈ [−2h, 0] so that the initial-state vector is x(0) = [1.5 1.5]ᵀ, and the following parameters:
0.5 0.2 [0.2 0 1 0.02 0.061 A ,Ad= ,B= ,N=
0.8 2.1 0.5 0.1 0 0.01 0.03
Choosing the eigenvalues of Ā to be placed at {−1.6, −2.1} yields the feedback matrix F = [2.1 0.2]. Let us investigate the alternative p-norms to verify which norm satisfies the inequality conditions (4.25) and (4.26).

For the 1-norm, neither condition is satisfied. First, the discriminant b² − 4ac₀ = −1.8, and therefore the roots are complex. Second, h = 0.5 exceeds min{0.36, 0.06}. For the 2-norm, both conditions are satisfied. Hence, adopting the 2-norm gives the following values for the parameters in Theorem 1: a = 0.1492, μ₂(Ã) = b = −1.32, and c₀ = 1.4130. The attraction region (4.25) is given by 0 < |x(0)| = 2.12 < min{3.8, 7.6}. Also, the bound on the delay is given by h = 0.5 < min{1.18, 1.03}. Finally, for the ∞-norm, both conditions are satisfied. The values of the parameters in Theorem 1 are given by a = 0.168, μ∞(Ã) = b = −1.8, and c₀ = 1.0415. The delay-bound condition (4.26) is satisfied since h = 0.5 < min{1.44, 2.3148}. Furthermore, the domain of attraction (4.25) is given by 0 < |x(0)| = 1.5 < min{5.4855, 10.1}.
Figures 4.2 and 4.3 illustrate the results of a simulation study. Figure 4.2 depicts the time evolution of the state trajectories, which assume an asymptotic behavior. Figure 4.3 shows the norm of x(t) converging to the origin.

Remark 3 The sufficient conditions given in (4.25) and (4.26) can vary depending on the norm and matrix measure chosen. While stability can be concluded for a certain norm, it may not be so for other norms. The stability conditions can, however, be tightened, thereby reducing conservatism, by selecting other norms and matrix measures. One choice could be the weighted ∞-norm

‖A‖ = max_i Σ_j (w_j/w_i) |a_ij|

and the corresponding matrix measure

μ_w(A) = max_i { a_ii + Σ_{j≠i} (w_j/w_i) |a_ij| }
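A minimal sketch of this weighted measure is given below, assuming the weighted ∞-norm form above. It illustrates the point of Remark 3: for a Hurwitz matrix whose unweighted measure is positive (hence inconclusive), a suitable diagonal weighting can make the measure negative:

```python
import numpy as np

def mu_inf(A, w=None):
    """Matrix measure for the weighted infinity-norm:
    mu_w(A) = max_i { a_ii + sum_{j != i} (w_j / w_i) * |a_ij| }.
    With w = (1, ..., 1) this reduces to the standard mu_inf."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    rows = [A[i, i] + sum(w[j] / w[i] * abs(A[i, j]) for j in range(n) if j != i)
            for i in range(n)]
    return max(rows)

A = np.array([[-1.0, 2.0], [0.1, -1.0]])    # Hurwitz: eigenvalues -1 +/- sqrt(0.2)
print(mu_inf(A))                 # unweighted measure is positive: inconclusive
print(mu_inf(A, w=[1.0, 0.25]))  # a suitable weighting makes it negative
```

The weighting amounts to measuring the state in the rescaled coordinates W x with W = diag(w), which is why it can sharpen the stability test without changing the system.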
4.6 Conclusions
Delay-dependent stability conditions are derived for a class of time-delay bilinear systems utilizing the comparison theorem and matrix measure techniques. The sufficient stability conditions provide a bound on the tolerable system delay as well as the domain of attraction for which asymptotic stability is guaranteed. These results are, however, conservative, mainly due to applying a supremization over a large time interval, twice the size of the delay.
Figure 4.2: Plot of the trajectories of the system states.

Figure 4.3: Plot of the norm of the state vector.
4.7 Further Analysis of the System in Lemma 1
Further analysis of the system equation (4.4), and a discussion of the behavior of its solution (4.5) is presented in this section.
Theorem 2 Given the system (4.4) and the analytic solution (4.5), the following observations are readily verified:

Case 1: b > 0

lim_{t→∞} y(t) = ∞ (finite escape time) if y(0) > 0    (4.32a)
lim_{t→∞} y(t) = 0 if y(0) = 0    (4.32b)
lim_{t→∞} y(t) = −b/a otherwise    (4.32c)

Case 2: b < 0

lim_{t→∞} y(t) = 0 if y(0) < −b/a    (4.33a)
lim_{t→∞} y(t) = −b/a if y(0) = −b/a    (4.33b)
lim_{t→∞} y(t) = ∞ otherwise    (4.33c)

Proof: For Case 1, it is noted from (4.5) that the solution becomes unbounded in finite time when the denominator is zero. The finite escape time (FET) can be calculated as a function of the parameters a, b, and y(0) by setting the denominator to zero and solving for t. The result is

t_FET = (1/b) ln( 1 + b/(a y(0)) )    (4.34)
Now, to prove the first branch of Case 1, namely (4.32a), the denominator of (4.5) is set to zero, i.e.,

b + a y(0)(1 − e^{bt}) = 0

or

1 + (a/b) y(0)(1 − e^{bt}) = 0    (4.35)

Now, since as t → ∞ the quantity 1 − e^{bt} takes values between 0 and −∞, and since y(0) > 0, the equality (4.35) is satisfied for some t > 0. Hence, the denominator becomes zero and the solution blows up in finite time. The second branch of Case 1, (4.32b), is readily verified by substituting y(0) = 0 in (4.5). Finally, substituting y(0) = −b/a into (4.5) gives y(t) = −b/a, which proves the branch (4.32c). Graphs (a) and (b) of Figure 4.4 show the derivative graph and the solution curves of (4.4), respectively, for the case where b > 0.
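The finite-escape-time formula (4.34) can be checked directly against the closed-form solution (4.5); the sketch below evaluates both for the illustrative unstable case a = b = y(0) = 1:

```python
import math

def t_fet(a, b, y0):
    """Finite escape time (4.34) for b > 0 and y(0) > 0:
    t_FET = (1/b) * ln(1 + b / (a * y0))."""
    return math.log(1.0 + b / (a * y0)) / b

def y_of_t(t, a, b, y0):
    """Closed-form solution (4.5)."""
    ebt = math.exp(b * t)
    return b * ebt * y0 / (b + a * y0 * (1.0 - ebt))

a, b, y0 = 1.0, 1.0, 1.0
T = t_fet(a, b, y0)
print(round(T, 4))                    # ln(2) ~ 0.6931
print(y_of_t(0.999 * T, a, b, y0))    # grows without bound as t approaches T
```

Evaluating (4.5) just before t = T shows the denominator approaching zero, which is exactly the blow-up mechanism used in the proof.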
Figure 4.4: Graphical interpretation of the differential equation(4.4) for the case where b > 0 : (a) derivative graph, (b) solution curves
For Case 2, the first branch (4.33a) indicates that the system is asymptotically stable when the initial condition satisfies y(0) < −b/a. Therefore, in this case the finite escape time is avoided. To verify this claim, consider the denominator expression (4.35). For t > 0 the quantity 1 − e^{bt} takes values in the interval (0, 1). The goal is to show that for y(0) < −b/a the equality (4.35) is never satisfied. First, it is trivial to verify this claim for y(0) = 0. Second, for 0 < y(0) < −b/a it follows that, since −1 < (a/b)y(0) < 0, the second term in (4.35) is never equal to −1. Finally, for y(0) < 0, equality (4.35) is again never satisfied since its second term is always positive. This proves the first branch of Case 2. The second branch (4.33b) is readily verified by substituting y(0) = −b/a in (4.4). Finally, branch (4.33c) indicates that for y(0) > −b/a there is a finite escape time, which means that (4.35) is satisfied for some finite value of t. First, note that (a/b)y(0) < −1. Next, since the quantity 1 − e^{bt} takes values in the interval (0, 1), for some time t > 0 the second term in (4.35) equals −1, so that (4.35) is satisfied. This completes the proof of Case 2. Figure 4.5 depicts the solution behavior for the case where b < 0. It is clear that when y(0) < −b/a the solution converges asymptotically to the origin, and for y(0) > −b/a the system becomes unstable.
Figure 4.5: Graphical interpretation of the differential equation (4.4) for the case where b < 0: (a) derivative graph, (b) solution curves
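The three branches of (4.33) can likewise be exercised numerically from the closed-form solution. The sketch below (illustrative values a = 1, b = −1, so that −b/a = 1) checks decay to the origin below the threshold, the equilibrium at y(0) = −b/a, and finite escape above it.

```python
import math

def y_exact(t, a, b, y0):
    # Solution of y' = a*y**2 + b*y, written in the form (D.2) for b != 0
    return math.exp(b * t) * y0 / (1.0 + (a / b) * y0 * (1.0 - math.exp(b * t)))

a, b = 1.0, -1.0          # illustrative Case 2 values (b < 0)
threshold = -b / a        # equilibrium at y = -b/a = 1

# (4.33a): below the threshold the solution decays to the origin.
assert y_exact(20.0, a, b, 0.5) < 1e-6
# (4.33b): starting exactly at -b/a the solution stays there.
assert abs(y_exact(5.0, a, b, threshold) - threshold) < 1e-9
# (4.33c): above the threshold the denominator crosses zero, giving the
# finite escape time (4.34): t_FET = (1/b)*ln(1 + b/(a*y0)).
y0 = 2.0
t_fet = (1.0 / b) * math.log(1.0 + b / (a * y0))
assert y_exact(0.999 * t_fet, a, b, y0) > 1e2
```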
CHAPTER 5
FUTURE WORK AND DISCUSSIONS
Future work may focus on three problems. First, designing a sliding mode controller for a class of time-delay bilinear systems. Second, extending the concept of the Nyquist robust sensitivity margin (NRSM) to a class of uncertain systems with a multilinear uncertainty structure. Finally, the concept of the NRSM can be utilized to obtain a weighting function for an H∞ design similar to the work of Ji [29, 30].
5.1 Problem 1 : Sliding Mode Control for a Delayed Bilinear System
The system under consideration is of the form
ẋ(t) = A x(t) + A_d x(t − h) + B u(t) + N x(t) u(t)
x(τ) = φ(τ),  τ ∈ [−h, 0]      (5.1)

where the system delay h is assumed constant.

The objective is to design a sliding mode control (SMC) that renders the system asymptotically stable. However, when nonlinearity is combined with time delay, the problem becomes more challenging from the control design viewpoint as well as from the perspective of the stability analysis. Therefore, problems may arise when designing a controller to force the trajectories into the sliding manifold and to ensure they stay there for all subsequent time.
5.2 Problem 2 : Extending the NRSM
Multilinear Uncertainty. An interesting, yet challenging, problem is to extend the concept of the Nyquist robust sensitivity margin to include linear systems with multilinear uncertainty structure, i.e., systems of the form

g(s) = n(s, q) / d(s, r)      (5.2)

where the coefficients of n(s, q) and d(s, r) depend multilinearly on the uncertain parameter vectors q and r. In the polynomial case, since the multilinear uncertainty lacks edge results, the mapping theorem [7] introduces overbounding polynomials, which of course can be conservative. For the transfer function (5.2), there are two situations. First, when the vectors r and q are independent, stability can be analyzed using the mapping theorem by considering a polytopic family ḡ(s) that overbounds g(s), such that any worst-case margin calculated for ḡ(s) is considered a guaranteed margin for g(s) [7]. However, when the vectors r and q depend on each other, there are no comparable results to analyze the robust stability of the system.
H∞ Design. Another future work project is to design an H∞ controller based on a weight function derived from k_{N,S}. This project can follow the work of Ji [29, 30], which is summarized as follows. Given the system in Figure 5.1, use is made of the M − Δ structure given in Figure 5.2, where it is known from the small gain theorem [57] that the system in Figure 5.2 is internally stable for any Δ(s) satisfying

‖Δ(s)‖∞ < 1 / ‖M(s)‖∞
Figure 5.1: The negative feedback loop of the uncertain system p(s) with a controller c(s).
The transformation of the system in Figure 5.1 into the M − Δ formulation is given in Figure 5.3, from which it follows that the system is stable for any δ(s) satisfying

|δ(s)| < 1 / |R(s)|      (5.3)

where R(s) = c(s)/(1 + p0(s) c(s)). Next, the system in Figure 5.3 is put into the mixed-sensitivity framework, where the stability condition can be expressed by the inequality

‖W2(s) R(s)‖∞ < 1      (5.4)

The problem now is to choose W2(s) to represent the effective part of δ(s). Finally, a weighting scheme, namely the effective critical perturbation radius (ECPR), is designed based on the critical direction theory.
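As a sketch of how condition (5.4) might be checked in practice, the fragment below sweeps a frequency grid for an assumed additive-uncertainty configuration; the nominal plant p0(s) = 1/(s + 1), the proportional controller c(s) = 2, and the static weight W2 = 0.4 are illustrative placeholders, not data from the cited work.

```python
import math

def R_mag(w):
    # |R(jw)| for R = c/(1 + p0*c), with illustrative p0(s) = 1/(s+1), c(s) = 2
    s = complex(0.0, w)
    p0 = 1.0 / (s + 1.0)
    c = 2.0
    return abs(c / (1.0 + p0 * c))

# Logarithmic frequency grid from 1e-3 to 1e3 rad/s
ws = [10 ** (k / 100.0) for k in range(-300, 301)]
hinf_R = max(R_mag(w) for w in ws)       # approximates sup_w |R(jw)|

W2 = 0.4                                 # illustrative static weight
assert W2 * hinf_R < 1.0                 # weighted condition (5.4) holds here
# Equivalently, any additive perturbation with |delta(jw)| < 1/|R(jw)|
# preserves closed-loop stability by the small gain theorem.
```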
Figure 5.2: The standard M − Δ loop for stability analysis.
Figure 5.3: A system with parametric uncertainty in the standard M − Δ loop.
APPENDIX A
DERIVATION FOR THE REACHING TIME
General case
Consider the Lyapunov function V(t) = (1/2) s(t)². The standard condition that ensures reaching the sliding manifold in finite time is given by

V̇(t) = s(t) ṡ(t) ≤ −η |s(t)|      (A.1)

where η > 0 [49].

Theorem 1. Under the inequality given in (A.1), the time at which the sliding manifold is reached is given by

t_s = |s(0)| / η      (A.2)
Proof: The proof utilizes Figure A.1. From the figure it is noted that the initial value of the sliding function is s(0), and that s(t_s) = 0. The equality limit of (A.1) (i.e., s(t) ṡ(t) = −η |s(t)|) can be written as

dt = −(1/η) (s(t)/|s(t)|) ds      (A.3)

where s(t) ≠ 0. The reaching time t_s is obtained by integrating (A.3) as follows:

∫₀^{t_s} dt = −(1/η) ∫_{s(0)}^{s(t_s)} (s(t)/|s(t)|) ds

which yields the result

t_s = −(1/η)(s(t_s) − s(0)),  s(t) > 0, case (a)      (A.4)
t_s = (1/η)(s(t_s) − s(0)),   s(t) < 0, case (b)

Since s(t_s) = 0, equation (A.4) can be written as t_s = s(0)/η in case (a) and t_s = −s(0)/η in case (b), that is,

t_s = |s(0)| / η
Figure A.1: Plot of the switching function for s(0) > 0 (a), and s(0) < 0 (b).
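The reaching-time estimate (A.2) can be verified by integrating the equality-limit dynamics ṡ = −η sgn(s) directly; the values s(0) = 2 and η = 4 below are illustrative.

```python
import math

def reach_time(s0, eta, dt=1e-5):
    # Integrate s' = -eta*sign(s), the equality limit of condition (A.1),
    # until s crosses zero (i.e., reaches the sliding manifold).
    s, t = s0, 0.0
    while s * s0 > 0:
        s -= eta * math.copysign(1.0, s) * dt
        t += dt
    return t

s0, eta = 2.0, 4.0          # illustrative values
t_pred = abs(s0) / eta       # formula (A.2)
assert abs(reach_time(s0, eta) - t_pred) < 1e-3
assert abs(reach_time(-s0, eta) - t_pred) < 1e-3   # case (b), s(0) < 0
```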
New condition

Consider the Lyapunov condition

V̇(t) = −p |s|² − d |s|      (A.5)

where p > 0 and d > 0. Using the fact that |s(t)| = √(2 V(t)), equation (A.5) can be rewritten as

V̇(t) = −2p V(t) − √2 d V(t)^{1/2}      (A.6)

Let y(t)² = V(t). Then equation (A.6) becomes

V̇(t) = 2 y(t) ẏ(t) = −2p y(t)² − √2 d y(t)      (A.7)

so that

ẏ(t) + p y(t) = −d/√2
Multiplying through by e^{pt} and rearranging terms yields

d/dt [y(t) e^{pt}] = −(d/√2) e^{pt}      (A.8)

Integrating both sides of (A.8) gives

y(t) e^{pt} = −(d/(√2 p)) e^{pt} + c      (A.9)

The constant c can be found by evaluating (A.9) at t = 0, which gives

c = y(0) + d/(√2 p)

Substituting for the constant c, equation (A.9) can be written as

y(t) = e^{−pt} [y(0) + d/(√2 p)] − d/(√2 p)      (A.10)

Now, using y(t) = V(t)^{1/2}, equation (A.10) can be written as

V(t)^{1/2} = e^{−pt} [V(0)^{1/2} + d/(√2 p)] − d/(√2 p)

Since at sliding mode V(t_s) = V̇(t_s) = 0 because s(t_s) = ṡ(t_s) = 0, it then follows that

0 = −p t_s + ln[V(0)^{1/2} + d/(√2 p)] − ln[d/(√2 p)]

or

t_s = (1/p) ln[1 + (√2 p / d) V(0)^{1/2}]      (A.11)

Now, since V(t) = (1/2) s(t)ᵀ s(t), it follows that

t_s = (1/p) ln[1 + (p/d) |s(0)|]
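The scalar dynamics ṡ = −p s − d sgn(s) satisfy condition (A.5) with equality for V = (1/2)s², so the reaching-time formula can be checked by direct simulation; the values below are illustrative.

```python
import math

def reach_time(s0, p, d, dt=1e-5):
    # Integrate s' = -p*s - d*sign(s), whose Lyapunov function V = 0.5*s**2
    # satisfies condition (A.5) with equality, until s crosses zero.
    s, t = s0, 0.0
    while s * s0 > 0:
        s += (-p * s - d * math.copysign(1.0, s)) * dt
        t += dt
    return t

s0, p, d = 3.0, 2.0, 1.0       # illustrative values
t_pred = (1.0 / p) * math.log(1.0 + p * abs(s0) / d)   # reaching time (A.11)
assert abs(reach_time(s0, p, d) - t_pred) < 1e-3
```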
APPENDIX B
DERIVATION OF THE BOUND ON v(t) OF CHAPTER 3
This appendix presents a derivation of the bound (3.14),

|v(t)| ≤ η(h) |z(t)|

Consider the equation

v(t) = ∫_{t−h}^{t} e^{A(t−h−τ)} B_d u(τ) dτ      (B.1)

Substituting the control law u(t) = F z(t), valid at sliding mode, and taking the norm of both sides of (B.1) yields

|v(t)| ≤ ∫_{t−h}^{t} ‖e^{A(t−h−τ)}‖ ‖B_d‖ ‖F‖ |z(τ)| dτ      (B.2)

Let θ = t − h − τ. Then for τ = t, θ = −h, and for τ = t − h, θ = 0. The inequality (B.2) then implies

|v(t)| ≤ h max_{−h≤θ≤0} ‖e^{Aθ}‖ ‖B_d‖ ‖F‖ ẑ(t, t−h)      (B.3)

where ẑ(t, t−h) = max_{t−h≤τ≤t} |z(τ)|. Utilizing the Razumikhin concept [38], it follows that

ẑ(t, t−h) ≤ α |z(t)|

where α is a Razumikhin parameter. Therefore, inequality (B.3) can be written as

|v(t)| ≤ h max_{−h≤θ≤0} ‖e^{Aθ}‖ ‖B_d‖ ‖F‖ α |z(t)|      (B.4)

which yields the bound |v(t)| ≤ η(h)|z(t)|, where η(h) represents the coefficient of |z(t)| on the right-hand side of inequality (B.4).
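Given specific system data, the coefficient η(h) in (B.4) is straightforward to evaluate. The sketch below uses the induced infinity norm and a truncated series for the matrix exponential; the matrices A, B_d, F, the delay h, and the Razumikhin parameter α are illustrative placeholders, not data from Chapter 3.

```python
import math

def mat_exp2(A, t, terms=30):
    # Truncated series for e^{A t}, adequate for the small |t| used here
    M = [[1.0, 0.0], [0.0, 1.0]]     # running term A^k t^k / k!
    E = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        M = [[sum(M[i][r] * A[r][j] * t / k for r in range(2)) for j in range(2)]
             for i in range(2)]
        E = [[E[i][j] + M[i][j] for j in range(2)] for i in range(2)]
    return E

def norm_inf(M):
    # Induced infinity norm = maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in M)

# Illustrative data (assumed, not taken from the dissertation's example)
A  = [[0.0, 1.0], [-2.0, -3.0]]
Bd = [[0.1, 0.0], [0.0, 0.1]]
F  = [[-1.0, -1.0], [0.0, -1.0]]
h, alpha = 0.2, 1.1                  # delay and Razumikhin parameter

# eta(h) = h * alpha * ||Bd|| * ||F|| * max_{-h <= theta <= 0} ||e^{A theta}||
grid = [-h * k / 100.0 for k in range(101)]
max_exp = max(norm_inf(mat_exp2(A, th)) for th in grid)
eta = h * alpha * norm_inf(Bd) * norm_inf(F) * max_exp
assert eta > 0.0
```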
APPENDIX C
SOLUTION FOR A DIFFERENTIAL EQUATION
Here, the derivation of the solution (4.5) of the system equation (4.4) given in Lemma 1 of Chapter 4 is presented.

Theorem 1. Consider the scalar differential equation

ẋ(t) = a x(t)² + b x(t)      (C.1)

Then the analytic solution is given by

x(t) = b e^{bt} x(0) / (b + a x(0)(1 − e^{bt}))      (C.2)

Proof: Equation (C.1) can be written as

dx / (b x + a x²) = dt      (C.3)

Integrating both sides gives

∫ dx / [x(a x + b)] = t + c₁

where c₁ is the integration constant. Working out the integral by partial fractions yields

(1/b) ln| x/(a x + b) | = t + c₁

which, after multiplying through by b and taking the exponential of both sides, gives

x(t)/(a x(t) + b) = e^{bt} e^{b c₁} = e^{bt} c₂      (C.4)

where c₂ = e^{b c₁}. Evaluating (C.4) at t = 0 gives c₂ = x(0)/(a x(0) + b). Substituting back into (C.4) and rearranging terms yields

x(t) [a x(0) + b] = e^{bt} x(0) [a x(t) + b]

Finally, solving for x(t) gives the solution

x(t) = b e^{bt} x(0) / (b + a x(0)(1 − e^{bt}))
APPENDIX D
PROOF OF CLAIMS (i) & (ii) OF LEMMA 1 OF CHAPTER 4
For convenience we rewrite the system equation of Lemma 1. Given the scalar differential equation

ẏ(t) = a y(t)² + b y(t)      (D.1)

where a > 0 and b < 0, its analytic solution is given in Appendix C and can be written as

y(t) = e^{bt} y(0) / (1 + (a/b) y(0)(1 − e^{bt}))      (D.2)

Now, for the initial condition bound 0 < y(0) < −b/a, we need to prove the following two claims:

Claim (i): y(t) > 0 for all t < ∞.

Claim (ii): y(t + T) < y(t) for all t and all 0 < T < ∞.

Claim (i) can be checked by verifying that the numerator and the denominator of (D.2) are both positive or both negative. Consider the numerator first. Since the exponential function is always positive (i.e., e^{bt} > 0) and y(0) > 0, the numerator is positive. Now, for the denominator, since 0 < y(0) < −b/a it can be readily verified that

−1 < (a/b) y(0) < 0      (D.3)

Furthermore, using the fact that 0 < 1 − e^{bt} < 1 for all t > 0, it follows that

0 > (a/b) y(0)(1 − e^{bt}) > −1      (D.4)

Therefore, the denominator of (D.2) is easily seen to be positive. Hence, y(t) > 0 for all t < ∞. This completes the proof of claim (i).
Claim (ii) implies that the solution is strictly monotonically decreasing. It can be checked by verifying that the ratio

R = y(t + T) / y(t)      (D.5)

satisfies R < 1 for all T > 0. Using (D.2), the ratio (D.5) can be written as

R = (e^{b(t+T)} / e^{bt}) · [1 + (a/b) y(0)(1 − e^{bt})] / [1 + (a/b) y(0)(1 − e^{b(t+T)})]

which after simple manipulations reduces to

R = e^{bT} [1 + (a/b) y(0)(1 − e^{bt})] / [1 + (a/b) y(0)(1 − e^{bt} e^{bT})]      (D.6)

Expanding the numerator and the denominator of (D.6) gives

R = [e^{bT} + (a/b) y(0) e^{bT} − (a/b) y(0) e^{bt} e^{bT}] / [1 + (a/b) y(0) − (a/b) y(0) e^{bT} e^{bt}]      (D.7)

The last terms in the numerator and the denominator of (D.7) are identical (and the denominator is positive by claim (i)), so R < 1 holds if and only if the truncated ratio R′ obtained by ignoring them satisfies R′ < 1. Indeed,

R′ = e^{bT} [1 + (a/b) y(0)] / [1 + (a/b) y(0)] = e^{bT} < 1      (D.8)

for all T > 0, since b < 0. This shows that R < 1, and hence proves claim (ii) of Lemma 1.
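Both claims can be spot-checked numerically from (D.2); the values a = 2, b = −1, y(0) = 0.4 below are illustrative and satisfy 0 < y(0) < −b/a.

```python
import math

def y_exact(t, a, b, y0):
    # Solution (D.2) of y' = a*y**2 + b*y
    return math.exp(b * t) * y0 / (1.0 + (a / b) * y0 * (1.0 - math.exp(b * t)))

a, b = 2.0, -1.0               # a > 0, b < 0 as in Lemma 1
y0 = 0.4                       # inside the basin: 0 < y0 < -b/a = 0.5

ts = [k * 0.05 for k in range(400)]          # t in [0, 20)
ys = [y_exact(t, a, b, y0) for t in ts]
assert all(y > 0 for y in ys)                         # claim (i): positivity
assert all(y1 > y2 for y1, y2 in zip(ys, ys[1:]))     # claim (ii): strictly decreasing
assert ys[-1] < 1e-6                                  # and convergence to the origin
```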
APPENDIX E
MATRIX MEASURE

The following definition of the matrix measure and its properties are found in Vidyasagar [52].

Definition 1. The matrix measure, also known as the logarithmic derivative, of an induced matrix norm ‖·‖_p on C^{n×n} is a function μ_p : C^{n×n} → R defined by

μ_p(A) = lim_{ε→0⁺} (‖I + εA‖_p − 1) / ε      (E.1)

The matrix measures of A ∈ C^{n×n} corresponding to the 1, 2, and ∞ norms are given, respectively, by

μ₁(A) = max_j { Re(a_jj) + Σ_{i≠j} |a_ij| }
μ₂(A) = λ_max[(A* + A)/2]
μ_∞(A) = max_i { Re(a_ii) + Σ_{j≠i} |a_ij| }

Some useful properties of the matrix measure include the following:

• μ_p(A + B) ≤ μ_p(A) + μ_p(B).
• −μ_p(−A) ≤ Re λ ≤ μ_p(A), where λ is an eigenvalue of A.
• μ_p(·) is a convex function on C^{n×n}.
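The 1- and ∞-norm measures and the properties above are easy to verify for small matrices; the 2 × 2 real examples below are illustrative.

```python
import math

def mu_1(A):
    # mu_1(A) = max over columns j of a_jj + sum_{i != j} |a_ij|
    n = len(A)
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(n) if i != j)
               for j in range(n))

def mu_inf(A):
    # mu_inf(A) = max over rows i of a_ii + sum_{j != i} |a_ij|
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def max_re_eig2(A):
    # Largest real part of the eigenvalues of a real 2x2 matrix
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0            # complex pair: real part is tr/2

A = [[-3.0, 1.0], [0.5, -2.0]]
B = [[-1.0, 0.2], [0.3, -4.0]]
AB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# Property: Re(lambda) <= mu_p(A) for every eigenvalue lambda of A
assert max_re_eig2(A) <= mu_1(A) + 1e-12
assert max_re_eig2(A) <= mu_inf(A) + 1e-12
# Property: subadditivity, mu_p(A + B) <= mu_p(A) + mu_p(B)
assert mu_1(AB) <= mu_1(A) + mu_1(B) + 1e-12
assert mu_inf(AB) <= mu_inf(A) + mu_inf(B) + 1e-12
```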
APPENDIX F
THE COMPARISON THEOREM

Let the function v(t, s, z) : J × J × R^m → R^m, with J = [t₀, ∞), have the following monotonicity property: for any fixed t and s,

z₁ ≤ z₂  ⟹  v(t, s, z₁) ≤ v(t, s, z₂)

Let z(t) be a solution of the integral inequality

z(t) ≤ z(t₀) + ∫_{t₀}^{t} v(t, s, z(s − h)) ds

Then the maximal solution r(t) of

w(t) = w(t₀) + ∫_{t₀}^{t} v(t, s, w(s − h)) ds,  w(t₀) = z(t₀)

satisfies z(t) ≤ r(t) for t ≥ t₀.
REFERENCES

[1] Z. Artstein. Linear systems with delayed controls: A reduction. IEEE Transactions on Automatic Control, AC-27:869-879, 1982.

[2] C. T. Baab, J. C. Cockburn, H. A. Latchman, and O. D. Crisalle. Generalization of the Nyquist robust stability margin and its applications to systems with real affine parametric uncertainties. International Journal of Robust and Nonlinear Control, 11:1415-1434, 2001.

[3] B. R. Barmish. A generalization of Kharitonov's four-polynomial concept for robust stability problems with linearly dependent coefficient perturbations. IEEE Transactions on Automatic Control, 34:157-165, 1989.

[4] B. R. Barmish. New Tools for Robustness of Linear Systems. Macmillan, New York, 1994.

[5] G. Bartolini. Chattering phenomena in discontinuous control systems. International Journal of Systems Science, 30(12):2471-2481, 1989.

[6] V. R. Basker, K. Hrissagis, and O. D. Crisalle. Variable structure control design for reduced chatter in uncertain time delay systems. In Proc. 36th IEEE Conference on Decision and Control, volume 4, pages 3234-3236, 1997.

[7] S. P. Bhattacharyya. Robust Control: The Parametric Approach. Prentice-Hall, New Jersey, 1995.

[8] H. Chapellat and S. Bhattacharyya. A generalization of Kharitonov's theorem: Robust stability of interval plants. IEEE Transactions on Automatic Control, 34:306-311, 1989.

[9] H. Chapellat, M. Dahleh, and S. Bhattacharyya. On robust nonlinear stability of interval control systems. IEEE Transactions on Automatic Control, 36:59-67, 1991.

[10] M. S. Chen. Exponential stabilization of a constrained bilinear system. Automatica, 34:989-992, 1998.

[11] M. S. Chen and Y. Z. Chen. Normalised quadratic controls for a class of bilinear systems. In IEE Proceedings on Control Theory Application, volume 149, pages 520-524, 2002.

[12] M. S. Chen and S. T. Tsao. Exponential stabilization of a class of unstable bilinear systems. IEEE Transactions on Automatic Control, 45:989-992, 2000.

[13] C. Chiang and F. Kung. Stability analysis of continuous bilinear systems. Journal of the Chinese Institute of Engineers, 17:569-576, 1994.

[14] W. A. Coppel. Stability and Asymptotic Behavior of Differential Equations. Heath, Boston, 1965.

[15] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. McGraw-Hill, New York, 1990.

[16] R. A. DeCarlo, S. H. Zak, and G. P. Matthews. Variable structure control of nonlinear multivariable systems: A tutorial. In Proceedings of the IEEE, volume 76, pages 212-232, 1988.

[17] J. Doyle. Analysis of feedback systems with structured uncertainties. In IEE Proceedings Part D, volume 129, pages 242-250, 1982.

[18] L. Dugard and E. I. Verriest. Stability and Control of Time-Delay Systems. Springer-Verlag, London, 1998.

[19] D. L. Elliott. Bilinear systems. Wiley Encyclopedia of Electrical Engineering, 2:308-323, 1999.

[20] D. Feiqi, L. Youngqing, and F. Zhaoshu. Variable structure control of time-delay systems with retarded state and retarded control. In IEEE International Conference on Systems, Man and Cybernetics, volume 1, pages 102-106, 1996.

[21] M. Fu. Computing the frequency response of linear systems with parametric perturbations. Systems & Control Letters, 15:45-52, 1990.

[22] F. Gouaisbaut, W. Perruquetti, and J. P. Richard. A sliding mode control for linear systems with input and state delays. In Proceedings of the 38th IEEE Conference on Decision and Control, volume 4, pages 4234-4239, 1999.

[23] J. Guojun and S. Wenzhong. Stability of bilinear time-delay systems. IMA Journal of Mathematical Control and Information, 18:53-60, 2001.

[24] D. W. Ho, G. Lu, and Y. Zheng. Global stabilisation for bilinear systems with time delay. Volume 149, pages 89-94, 2002.

[25] K. J. Hu, V. R. Basker, and O. D. Crisalle. Sliding mode control of uncertain input-delay systems. In Proc. of the American Control Conference, volume 1, pages 564-568, 1998.

[26] J. Y. Hung, W. Gao, and J. C. Hung. Variable structure control: A survey. IEEE Transactions on Industrial Electronics, 40(1):2-22, 1993.

[27] S. R. Inamdar, V. R. Kumar, and N. D. Kulkarni. Dynamics of reacting systems in the presence of time-delay. Chemical Engineering Science, 46(3):901-908, 1991.

[28] E. M. Jafarov. Design of sliding mode control for multi-input systems with multiple state delays. In Proc. of the American Control Conference, volume 2, pages 1139-1143, 2000.

[29] B. Ji, H. A. Latchman, and O. D. Crisalle. Interpretation of static-weight H-infinity design approaches for interval plants. In Proc. of the 41st IEEE Conference on Decision and Control, volume 2, pages 1434-1439, 2002.

[30] B. Ji, H. A. Latchman, and O. D. Crisalle. Robust H-infinity stabilization for interval plants. In IEEE Conference Control Applications/Computer Aided Control System Design, volume 2, pages 1112-1117, 2002.

[31] H. Khalil. Nonlinear Systems. Prentice-Hall, Inc., New Jersey, 1996.

[32] V. L. Kharitonov. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differential Equations, 14:1483-1485, 1979.

[33] A. J. Koshkouei and A. S. Zinober. Sliding mode time-delay systems. In IEEE International Workshop on Variable Structure Control, pages 97-101, 1996.

[34] D. Krupp and Y. B. Shtessel. Chattering-free sliding mode control with unmodeled dynamics. In Proceedings of the American Control Conference, volume 1, pages 530-534, 1999.

[35] H. A. Latchman and O. D. Crisalle. Exact robustness analysis for highly structured frequency domain uncertainties. In Proc. of American Control Conference, volume 6, pages 3982-3987, 1995.

[36] H. A. Latchman, O. D. Crisalle, and V. R. Basker. The Nyquist robust stability margin - a new metric for the stability of uncertain systems. International Journal of Robust and Nonlinear Control, 7:211-226, 1997.

[37] P. Liu and H. Hung. Stability for bilinear time-delay systems with saturating actuators. In IEEE Proc. of International Symposium on Industrial Electronics, volume 3, pages 1082-1086, 1999.

[38] M. S. Mahmoud. Robust Control and Filtering for Time-Delay Systems. Marcel Dekker, Inc., New York, 1996.

[39] R. R. Mohler. Bilinear Control Processes. McGraw-Hill, New York, 1973.

[40] R. R. Mohler. Nonlinear Systems: Application to Bilinear Control. Prentice-Hall, New Jersey, 1991.

[41] T. Mori, N. Fukuma, and M. Kuwahara. Simple stability criteria for single and composite linear systems with time delays. International Journal of Control, 34:1175-1184, 1981.

[42] S. Niculescu, S. Tarbouriech, J. Dion, and L. Dugard. Stability criteria for bilinear systems with delayed state and saturating actuators. In Proceedings of the 34th IEEE Conference on Decision and Control, volume 2, pages 2064-2069, 1995.

[43] S. Oucheriah. Dynamic compensation of uncertain time-delay systems using variable structure approach. IEEE Transactions on Circuits and Systems: Fundamental Theory and Applications, 42(8):466-469, 1995.

[44] S. Poljak and J. Rohn. Checking robust nonsingularity is NP-hard. Mathematics of Control, Signals, and Systems, 6:1-9, 1993.

[45] Y. Roh and J. Oh. Sliding mode control with uncertainty adaptation for uncertain input-delay systems. In Proc. of the American Control Conference, volume 1, pages 636-640, 2000.

[46] M. G. Safonov. Stability margins of diagonally perturbed multivariable feedback systems. In IEE Proceedings Part D, volume 129, pages 251-256, 1982.

[47] K. Shyu and J. Yan. Robust stability of uncertain time-delay systems and its stabilization by variable structure control. International Journal of Control, 57(1):237-246, 1993.

[48] A. Sideris. An efficient algorithm for checking the robust stability of a polytope of polynomials. Mathematics of Control, Signals, and Systems, 4:315-337, 1991.

[49] J. E. Slotine. Sliding controller design for nonlinear systems. International Journal of Control, 40(2):421-434, 1984.

[50] J. E. Slotine and W. Li. Applied Nonlinear Control. Prentice-Hall, Inc., New Jersey, 1991.

[51] V. Utkin. Variable structure systems with sliding modes. IEEE Transactions on Automatic Control, AC-22(2):212-222, 1977.

[52] M. Vidyasagar. Nonlinear Systems Analysis. Prentice-Hall, New Jersey, 1978.

[53] L. Wang. Robust strong stabilizability of interval plants: It suffices to check two vertices. Systems & Control Letters, 26:133-136, 1995.

[54] L. Wang. Kharitonov-like theorems for robust performance of interval systems. Journal of Mathematical Analysis and Applications, 279:430-441, 2003.

[55] Y. Xia, J. Han, and Y. Jia. A sliding mode control for linear systems with input and state delays. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 3, pages 3332-3337, 2002.

[56] K. D. Young, V. Utkin, and U. Ozguner. A control engineer's guide to sliding mode control. IEEE Transactions on Control Systems Technology, 7(3):328-342, 1999.

[57] K. Zhou, J. Doyle, and K. Glover. Robust and Optimal Control. Prentice-Hall, New Jersey, 1996.
BIOGRAPHICAL SKETCH
Saleh Al-Shamali was born in Kuwait City in 1973. He obtained his bachelor's degree in electrical and computer engineering at the University of Missouri-Columbia in December 1996. He worked for a year at Kuwait Oil Company (KOC) as a technical engineer. In 1998, he decided to pursue master's and Ph.D. degrees in the controls and systems area. He joined the Electrical and Computer Engineering Department at the University of Florida in Fall 1998. He is aiming to graduate in December 2004.
STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS
Saleh A. Al-Shamali
(352) 3922584
Department of Electrical and Computer Engineering Chair: Haniph A. Latchman Cochair: Oscar D. Crisalle Degree: Doctor of Philosophy Graduation Date: December 2004
This dissertation develops methodologies that enable analysis and control design for real linear and bilinear systems subject to uncertainty and time delay. An indicator for the robust stability of uncertain systems is proposed, namely, the Nyquist robust sensitivity margin, a tool that indicates how large a parameter perturbation can be before causing instability. Moreover, new control designs to stabilize linear and bilinear systems under the influence of time delay are proposed. A sliding mode control law is designed to stabilize a linear plant affected by time delay. Also, a state feedback control is proposed to stabilize a time-delay bilinear system. The results obtained by the two designs provide quantitative information regarding the largest delay the plant can handle.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Haniph A. Latchman, Chair
Professor of Electrical and Computer Engineering
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Oscar D. Crisalle, Cochair
Professor of Chemical Engineering
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Tan F. Wong
Assistant Professor of Electrical and Computer Engineering
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Norman Fitz-Coy
Associate Professor of Mechanical and Aerospace Engineering
This dissertation was submitted to the Graduate Faculty of the College of Engineering and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.
December 2004
Dean, College of Engineering
Kenneth J. Gerhardt
Interim Dean, Graduate School
STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS

By

SALEH A. AL-SHAMALI

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
2004
Copyright 2004 by Saleh A. Al-Shamali
I dedicate this work to my parents and my wife Muna.
ACKNOWLEDGMENTS

I wish to express my deep gratitude to my advisors, Dr. Haniph Latchman and Dr. Oscar Crisalle, for their support and guidance during my Ph.D. study. They gave me a lot of freedom and flexibility to choose my research topic. I am thankful for their encouragement, which gives me confidence as I begin my career in academia as an assistant professor shortly after I graduate with my Ph.D. I also wish to thank Dr. Tan Wong and Dr. Norman Fitz-Coy for serving on my committee, and for providing constructive ideas to further develop my research. I am very thankful to Dr. William Hager and Dr. Sergei Pilyugin for their help on some of the mathematical difficulties I ran into during my research. I would also like to thank my LIST lab colleagues, in particular Dr. Baowei Ji, who offered his help and shared his extensive knowledge in the area of controls with me, and Mr. Minkyu Lee for his recognized role in administrating the LIST lab, and for taking the time to solve the many technical problems I ran into while he worked on his Ph.D. dissertation. I also wish to thank my LIST lab colleagues Mr. Yu-Ju Lin, Mr. Kartikeya Tripathi, Mr. Suman Srinivasan, and the rest. I had a wonderful time and enjoyed being around them. I am grateful to my parents, sisters, and brothers back in Kuwait for their support, and to my wife for standing behind me and for taking a long leave from her job to stay with me and take care of our son, Mohammed. My family has always been a source of inspiration and support for me throughout the course of my Ph.D. research.
TABLE OF CONTENTS

ACKNOWLEDGMENTS iv
LIST OF FIGURES vii
ABSTRACT ix

CHAPTER

1 INTRODUCTION 1
1.1 Robustness Analysis 1
1.2 Sliding Mode Control 2
1.3 Bilinear Systems 3
1.4 Thesis Structure 3

2 THE NYQUIST ROBUST SENSITIVITY MARGIN 5
2.1 Introduction 5
2.2 Background 7
2.3 The Nyquist Robust Sensitivity Margin 11
2.4 Application to Systems with Affine Uncertainty Structure 16
2.5 Examples 21
2.5.1 Example 1 21
2.5.2 Example 2 22
2.5.3 Example 3 24
2.6 Conclusions 26
2.7 Supplementary Calculation Algorithms 27
2.7.1 Supporting Circle of an Arc 27
2.7.2 Minimum Distance between a Line and a Point 28
2.7.3 Identifying Points on the Arc 29

3 SLIDING MODE CONTROL FOR TIME-DELAY SYSTEMS 30
3.1 Introduction 30
3.2 Problem Formulation 32
3.3 Switching Function and Control Law Design 33
3.4 Existence of a Sliding Mode 35
3.5 System Stability 36
3.6 Example 41
3.7 Conclusions 42
4 STABILIZATION OF TIME-DELAY BILINEAR SYSTEMS 45
4.1 Introduction 45
4.2 Problem Statement 46
4.3 Preliminary Results 47
4.4 Main Result 54
4.5 Example 57
4.6 Conclusions 58
4.7 Further Analysis of the System in Lemma 1 60

5 FUTURE WORK AND DISCUSSIONS 63
5.1 Problem 1 : Sliding Mode Control for a Delayed Bilinear System 63
5.2 Problem 2 : Extending the NRSM 63

APPENDIX

A DERIVATION FOR THE REACHING TIME 66
B DERIVATION OF A BOUND USED IN CHAPTER 3 69
C PROOF OF LEMMA 1 OF CHAPTER 4 70
D PROOF OF CLAIM (i) OF LEMMA 1 OF CHAPTER 4 72
E THE MATRIX MEASURE DEFINITION AND PROPERTIES 74
F THE COMPARISON THEOREM 75

REFERENCES 76
BIOGRAPHICAL SKETCH 81
LIST OF FIGURES

2.1 The uncertain system g(s) = g0(s) + δ(s) in a unity-feedback configuration 8
2.2 Uncertainty value sets at a frequency ωi: (a) convex critical value set Vc(ωi), (b) nonconvex critical value set V(ωi). Both figures show the worst-sensitivity plant gs(jωi), located closest to the point −1 + j0 10
2.3 Illustration of the inverse-sensitivity circle of radius η(ω) introduced in definition (2.11) 12
2.4 The center z0 and radius r of the supporting circle of the arc A(p1, p2, p3) are determined from the intersection of the auxiliary lines L1 and L2 18
2.5 Frame for the value set of system (2.23) at ω = 9, and the corresponding inverse-sensitivity circle. The nominal plant g0(jω) is indicated by the '+' marker 23
2.6 Values of kN(ω) and kN,S(ω) as a function of frequency for the first example 23
2.7 Frame for the value set of system (2.25) at ω = 4.72 and ω = 1.86. The nominal plant g0(jω) is indicated by the '+' marker 25
2.8 Plot of the Nyquist robust sensitivity margin kN,S = max over ω of kN,S(ω) as a function of the blowup factor α. The parametric robust stability margin is α = 1.89, which corresponds to the value of the blowup factor α that makes kN,S approximately equal to unity 26
3.1 Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d) 43
3.2 Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function with approximation to the signum function (d) 44
4.1 Graphical interpretation of the differential equation of Lemma 2: (a) derivative graph, (b) solution curves 50
4.2 Plot of the trajectories of the system states 59
4.3 Plot of the trajectories of the system states 59
4.4 Graphical interpretation of the differential equation (4.4) for the case where b > 0: (a) derivative graph, (b) solution curves 61
4.5 Graphical interpretation of the differential equation (4.4) for the case where b < 0: (a) derivative graph, (b) solution curves 62
5.1 The negative feedback loop of the uncertain system p(s) with a controller c(s) 64
5.2 The standard M − Δ loop for stability analysis 65
5.3 A system with parametric uncertainty in the standard M − Δ loop 65
A.1 Plot of the switching function for s(0) > 0 (a), and s(0) < 0 (b) 67
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS

By Saleh A. Al-Shamali

December 2004

Chair: Haniph A. Latchman
Cochair: Oscar D. Crisalle
Major Department: Electrical and Computer Engineering

Uncertainty and time delay in real systems constitute two major challenges that face control engineers, since both can contribute to instability or poor performance. In this dissertation three analysis and control design problems are addressed. These problems involve linear systems with a parametric uncertainty structure, and linear and bilinear systems with time delay. In the first problem, the Nyquist robust sensitivity margin is proposed as a scalar metric for robust stability and robust performance. The work was motivated by the critical direction theory (CDT), in which attention was given to plants that lie along the critical direction. The advantage of the new metric, however, is that it takes into account plants that are in close proximity to the critical point −1 + j0 but that do not lie along the critical direction. The approach introduced therefore has the advantage of capturing the worst-case sensitivity as well as providing a more meaningful indication of robust stability. The concept has been applied successfully to a class of linear systems with affine uncertainty structure.
The second problem involves designing a sliding mode control (SMC) to stabilize a class of time-delay linear systems. The delay is assumed to exist in the control variable as well as in the state vector. The system is first rendered input-delay free through an appropriate transformation. Then an SMC is designed for the state-delay system. Sufficient stability conditions ensuring the asymptotic stability of the closed-loop system have been derived. The third problem addresses the stabilization of a class of time-delay bilinear systems. A state-feedback control law is designed to ensure the asymptotic stability of the delayed bilinear system. The work builds on two simple scalar systems and utilizes the results to treat a more complicated system. The analysis allowed us to obtain a bound on the maximum value of the delay that the system can tolerate. Furthermore, a region of attraction based on the initial conditions of the system states is established.
CHAPTER 1
INTRODUCTION

1.1 Robustness Analysis

The robustness analysis problem investigates the behavior of a dynamical system under uncertainty, namely, how the system stability and performance are influenced by the uncertainty. Many robust stability tools have been developed over the years, among which are the well-known scalar stability margins: the structured singular value introduced by Doyle [17] and the multivariable stability margin km(ω) given by Safonov [46]. The critical direction theory introduced by Latchman and Crisalle [35] and later generalized by Baab et al. [2] also provides an effective tool for analyzing the robust stability of uncertain systems, namely, the Nyquist robust stability margin kN(ω). The concept was applied successfully to a class of linear systems with affine and ellipsoidal uncertainty structure, and it works for the case of convex and nonconvex value sets.

Uncertainties are classified depending on their source as nonparametric (unstructured) and parametric (structured) [7]. The nonparametric uncertainties do not have a well-defined structure and are represented by a disk which overbounds the actual uncertainty. Therefore, this type of uncertainty description usually introduces conservatism. Examples of uncertainties that are represented as unstructured include nonlinearities and unmodelled dynamics. Parametric uncertainties, on the other hand, have a structure that reflects the variation of the system parameters. Thus, they are less conservative. Examples of such uncertainties include interval and ellipsoidal uncertainty.
1.2 Sliding Mode Control

A variable structure system (VSS) is a dynamical system composed of distinct structures. A VSS switches between the different structures based on the value of its states and according to a switching logic that takes into account the desired properties of each structure. In fact, a variable structure system can have properties that are not present in any of its individual structures [51]. A sliding mode control (SMC) system is a specific case of a VSS in which the system trajectories exhibit a sliding behavior. The design of an SMC consists of two stages. The first stage is the design of a switching surface such that, once the trajectories are confined to the surface, the system demonstrates the desired properties (i.e., tracking, regulation, etc.). The second stage involves the design of a control law that forces the trajectories onto the sliding manifold (discontinuous control), together with a linear feedback control that guarantees closed-loop stability (equivalent control). The latter is derived by setting the time derivative of the switching function equal to zero and solving for the control law. The former is proposed with appropriate gains to allow the system to overcome uncertainties. The system motion in SMC runs through two phases. The first phase (reaching phase) is characterized by a fast motion; during this phase the system is robust against uncertainties (matched and unmatched) and external disturbances, mainly because the discontinuous control law acts as a high-gain feedback control that counteracts high-frequency signals. The second phase (sliding phase) is characterized by a slow motion; here the system is robust only against matched uncertainty. The theory of sliding mode control has been covered comprehensively in the literature. Utkin [51] presents a survey of the early contributions in SMC. The survey by Hung et al. [26] presents a tutorial-like paper for variable structure control (VSC) with sliding mode.
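As a minimal illustration of the two-stage design just described, the sketch below simulates a double integrator under an SMC law with switching surface s = c x1 + x2; the plant, gains, and disturbance are invented for illustration and are not systems treated in this thesis.

```python
# Two-stage SMC on a double integrator x1' = x2, x2' = u + d(t):
# equivalent control obtained from s' = 0, plus a discontinuous switching term.
import math

def simulate(c=1.0, k=2.0, dt=1e-3, T=10.0):
    x1, x2 = 1.0, 0.0                      # initial state (arbitrary)
    for i in range(int(T / dt)):
        d = 0.5 * math.sin(i * dt)         # matched disturbance, |d| <= 0.5 < k
        s = c * x1 + x2                    # switching function
        u_eq = -c * x2                     # equivalent control: solve s' = 0 with d = 0
        u_sw = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)   # discontinuous part
        x1 += dt * x2                      # forward-Euler integration
        x2 += dt * (u_eq + u_sw + d)
    return x1, x2

x1, x2 = simulate()
print(x1, x2)        # both states are driven to a small neighborhood of the origin
```

Since k exceeds the disturbance bound, s reaches (a small band around) zero in finite time; on the surface the reduced dynamics x1' = -c x1 decay exponentially.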
An interesting tutorial paper by DeCarlo et al. [16]
provides an introduction to variable structure control for multivariable nonlinear time-varying systems. Finally, a useful guide to SMC is also given by Young et al. [56].

1.3 Bilinear Systems

Bilinear systems occupy an intermediate level between linear and nonlinear systems in terms of their complexity. The general form of a bilinear system is given as

ẋ(t) = Ax(t) + Bu(t) + Nx(t)u(t)   (1.1)

where it is clear that the control action enters the system linearly through the term Bu(t) and nonlinearly through the term Nx(t)u(t), hence the name bilinear system. A special form of the system (1.1) is the homogeneous bilinear system ẋ(t) = Ax(t) + Nx(t)u(t), in which the linear input term Bu(t) is omitted. A formal definition of a bilinear system is given in Elliott [19]. Many natural as well as man-made systems can be represented as bilinear models [19, 39, 40]. Examples of bilinear systems can be found in economics, industrial processes, and biochemistry, to mention a few.

1.4 Thesis Structure

The thesis is organized as follows. In Chapter 2, a new metric for the robust stability of closed-loop systems with affine uncertainty structure is presented. The new concept is motivated by the fact that the critical direction theory considers only plants that lie along the critical direction, defined as the ray starting at the nominal plant and pointing towards the critical point −1 + j0. Hence, plants that are very close to the critical point but that do not lie on the critical ray are ignored. Therefore, the Nyquist robust sensitivity margin, k_{N,s}, is proposed to take such plants into account. Chapter 3 considers the stabilization of a class of time-delay linear systems
via sliding mode control. The delayed system is assumed to have a constant delay in both the input and the state. In Chapter 4, a state-feedback control design for a class of bilinear systems with state delay is presented. The stability conditions derived provide a bound on the system delay and define an attraction region based on the initial condition. The future work proposed for consideration is presented in Chapter 5.
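As a quick illustration of the bilinear form (1.1) introduced in Section 1.3, the following sketch integrates a scalar bilinear system under a constant input; the values A = -1, B = 1, N = 0.5 and the input u = 0.2 are invented for illustration, not taken from the thesis.

```python
# Scalar instance of (1.1): x' = A x + B u + N x u, with the control entering
# both additively (B*u) and multiplicatively (N*x*u); all constants are made up.
def step(x, u, dt, A=-1.0, B=1.0, N=0.5):
    return x + dt * (A * x + B * u + N * x * u)   # one forward-Euler step

x = 1.0
for _ in range(1000):        # constant input u = 0.2 applied for 1 second
    x = step(x, 0.2, 1e-3)
print(x)                     # decays toward the equilibrium B*u / (-A - N*u)
```

For a fixed input u the closed-form dynamics are linear, x' = (A + N u) x + B u, which is exactly why constant-input analysis of bilinear systems reduces to a family of linear systems parameterized by u.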
CHAPTER 2
THE NYQUIST ROBUST SENSITIVITY MARGIN

2.1 Introduction

The critical direction theory introduced by Latchman and Crisalle [35] and Latchman et al. [36], and later generalized by Baab et al. [2], is an effective approach for analyzing the robust stability of uncertain systems with convex and nonconvex uncertainty value sets. A key concept introduced by the theory is the Nyquist robust stability margin, which provides a measure of robustness. The approach has proven useful in characterizing the robust stability of single-input/single-output systems with real affine parametric uncertainty, among others, and has recently been applied to the design of robustly stabilizing H∞ controllers by identifying an appropriate weighting function for the controller sensitivity function [29, 30]. This chapter proposes an alternative robust-stability analysis which has the benefit of also capturing the concept of robust sensitivity, hence directly incorporating the notion of performance robustness. The resulting Nyquist robust sensitivity margin k_{N,s}(ω) is inspired by the critical direction theory framework, but is formulated to take into account in an explicit fashion the effect of the uncertain systems that have the worst-case sensitivity. The earlier critical direction theory involving the margin k_N(ω) considers only a subset of the uncertain systems in the robustness analysis, namely, those uncertain systems whose image on the Nyquist plane lies along a prespecified oriented line. Although the restricted critical set of systems considered leads to nonconservative conditions for robust stability, the approach ignores all perturbed systems that have a poor sensitivity (i.e., systems located close to the critical point −1 + j0 on the Nyquist
plane) whenever these lie outside the oriented line. The new paradigm involving the margin k_{N,s}(ω) seeks to quantify the effect of the systems located closest to the critical point through the introduction of a sensitivity perturbation radius that is calculated at each frequency by solving an optimization program. To illustrate the approach, the proposed robust stability analysis is developed for uncertain systems described by rational transfer functions with real affine parametric perturbations. More specifically, the numerator and denominator polynomials depend affinely on a set of real parameters that are known to belong to a given uncertainty description. A systematic algorithm for the calculation of k_{N,s}(ω) is developed by taking advantage of simple geometrical features adopted by the Nyquist-plane images of such systems [21]. The analysis is carried out in detail in Section 4. The robust stability of the real affine uncertain systems considered in Section 4 can be analyzed using alternative approaches, for example, based on generalizations of Kharitonov's methodology [32]. In particular, one may adopt the approach in Barmish [3], which proposes a strict positivity condition that must be evaluated at a finite number of frequencies, or the box theorem [8], or the worst-edge algorithm of Sideris [48]. Furthermore, for the robust stability of interval plants, Wang [53] has shown that it suffices to check two vertices. Some results concerning the robust stability of control systems under unstructured as well as parametric uncertainty have been addressed in Chapellat [9]. These alternative results successfully reveal whether the system is robustly stable; however, in contrast to the Nyquist robust sensitivity margin proposed here, they do not provide a scalar indicator of the closeness to instability. Hence, the scalar k_{N,s} can be used to compare alternative closed-loop designs and determine a hierarchy of robust stability among the alternatives.
A recent result by Wang [54] concerning interval plants shows that the maximum H∞ norm of the sensitivity function is achieved at twelve (out of sixteen) Kharitonov vertices. The result, however, applies to interval polynomials, while our approach
applies to transfer functions. A systematic algorithm for the calculation of k_{N,s}(ω) is developed by taking advantage of the simple geometrical features, documented in Fu [21], adopted by the Nyquist-plane images of such systems. The chapter is organized as follows. In Section 2, the classical critical direction theory is briefly reviewed for contextual reference. Section 3 presents the definition of the new Nyquist robust sensitivity margin, discusses its properties and computational challenges, and compares and contrasts the new margin with its Nyquist robust stability margin predecessor. The application of the Nyquist robust sensitivity margin to systems with affine uncertainty structure is presented in Section 4, including the details of a systematic algorithm for the efficient calculation of the margin. Section 5 presents examples, including an illustrative case showing how to utilize the proposed method for calculating a parametric robust-stability margin that is interpreted as a blow-up factor.

2.2 Background

A general linear time-invariant (LTI) system can be represented in state-space form as follows:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)   (2.1)

Furthermore, the representation (2.1) can be expressed in the following transfer-function form:

G(s) = C(sI − A)⁻¹B + D   (2.2)

provided there are no cancellations between the numerator and denominator polynomials. When the system matrices (A, B, C, D) are uncertain, the transfer-function form is given, for the MIMO case, by G(s) = G₀(s) + Δ(s), where G₀(s) is a known transfer matrix and Δ(s) is the transfer matrix representing the uncertainty. The
transformation of system (2.1) into (2.2) allows the use of frequency-domain techniques to analyze the stability and performance of the closed-loop system. Since the development of the Nyquist robust sensitivity margin in this chapter requires frequency-domain techniques such as the Nyquist theorem, the transfer-function form is the appropriate setting for assessing the robust stability of the system. Consider the uncertain single-input/single-output transfer function

g(s) = g₀(s) + δ(s)   (2.3)

shown in Figure 2.1, where g₀(s) is a nominal system and δ(s) ∈ Δ is an unknown perturbation belonging to a known set of allowable perturbations Δ. The closed-loop system of Figure 2.1 is said to be robustly stable if stability is ensured for all δ(s) ∈ Δ. The problem under consideration is the analysis of the robust stability of the uncertain closed-loop system (2.3) under negative unity feedback. The developments assume the following standard premises that are commonly used in Nyquist-based robustness analysis: (A1) the nominal transfer function g₀(jω) is stable under negative-unity feedback, and (A2) the uncertain system g(jω) and the nominal system g₀(jω) have the same number of open-loop unstable poles.

Figure 2.1: The uncertain system g(s) = g₀(s) + δ(s) in a unity-feedback configuration.

The key concepts and definitions pertaining to the critical direction theory are readily summarized utilizing Figure 2.2. First, the critical line is the oriented line (i.e., a ray) in the Nyquist plane originating at the nominal point g₀(jω) and passing through the critical point −1 + j0. The critical direction
d_c(jω) = (−1 − g₀(jω)) / |1 + g₀(jω)|   (2.4)

is a unit-length vector with origin at g₀(jω) and pointing towards the critical point. Then, the critical ray is characterized by r(ω) = g₀(jω) + α d_c(jω) for α ≥ 0. The uncertainty value set

V(ω) = {g(jω) : g(jω) = g₀(jω) + δ(jω), δ(s) ∈ Δ}   (2.5)

represents the Nyquist-plane mapping of the uncertain system, and its boundary is denoted as ∂V(ω). Finally, the critical value set V_c(ω) = V(ω) ∩ r(ω) is the subset of V(ω) that lies on the critical line. The critical value set V_c(ω) may be convex, i.e., a set described as a single point or as a straight-line segment (such as the straight-line segment joining the points g₀(jω₁) and g_s(jω₁) shown in Figure 2.2a), or nonconvex, i.e., a union of isolated points and straight-line segments (such as the union of the disjoint segments g₀(jω₁)g₁(jω₁) and g₂(jω₁)g₃(jω₁) in Figure 2.2b). Note that it is possible to encounter an uncertain system with a highly nonconvex value set V(ω) that nevertheless features a convex critical value set V_c(ω), as illustrated in Figure 2.2a. For the general case of convex or nonconvex critical value sets, Baab et al. [2] define the critical perturbation radius

ρ_c(ω) = |1 + g₀(jω)| − ξ(ω)  if −1 + j0 ∉ V(ω)
ρ_c(ω) = |1 + g₀(jω)| + ξ(ω)  otherwise   (2.6)

where

ξ(ω) = min over z ∈ B_c(ω) of |1 + z|   (2.7)
and B_c(ω) denotes the set of points where the value-set boundary ∂V(ω) intercepts the critical direction. Finally, the Nyquist robust stability margin is defined as

k_N(ω) = ρ_c(ω) / |1 + g₀(jω)|   (2.9)

The main result of Baab et al. [2] is restated in the following theorem.

Theorem 1 Consider the uncertain system (2.3) with assumptions (A1) and (A2). Then, the closed-loop system is robustly stable under unity feedback if and only if k_N(ω) < 1 ∀ω.
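The quantities appearing in Theorem 1 are straightforward to evaluate numerically. The sketch below does so at one frequency for a disk-shaped value set; the nominal point and radius are invented, and a disk is used only because its intersection with the critical ray is immediate (the systems of this chapter have polytopic value sets).

```python
# Quantities of Theorem 1 at one frequency, for a disk value set
# V(w) = { g0 + delta : |delta| <= r }; g0 and r are invented numbers.
g0 = -0.2 + 0.4j                 # nominal point g0(jw)
r = 0.3                          # disk radius, assumed r < |1 + g0|
dc = (-1 - g0) / abs(1 + g0)     # critical direction (2.4), unit length
z_near = g0 + r * dc             # nearest critical boundary intersection in B_c(w)
xi = abs(1 + z_near)             # xi(w) of (2.7)
rho_c = abs(1 + g0) - xi         # critical radius (2.6), case -1+j0 outside V(w)
kN = rho_c / abs(1 + g0)         # Nyquist robust stability margin (2.9)
print(kN)                        # kN < 1: robustly stable at this frequency
```

For a disk the margin collapses to r / |1 + g₀(jω)|, i.e., the ratio of the uncertainty radius to the nominal distance from the critical point, which matches the intuition behind (2.9).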
Proof: See Baab et al. [2]. ∎

Note that the theorem is valid in general for convex as well as nonconvex critical value sets V_c(ω). Since control design is often carried out under sufficient-only conditions, for control synthesis purposes it may be acceptable to adopt the definition (2.8) instead of (2.6) when working with nonconvex critical value sets. Then the resulting condition k_N(ω) < 1 ∀ω, where k_N(ω) is calculated through (2.9), is only sufficient for robust stability.

2.3 The Nyquist Robust Sensitivity Margin

The main drawback of definition (2.6) is that the resulting Nyquist robust stability margin value k_N(ω) obtained through (2.6) and (2.9) may convey no information about the worst-case sensitivity in the value set. Figure 2.2b shows that at the frequency ω = ω₁ the plant g_s(jω₁) is the element of V(ω₁) that is closest to the point −1 + j0; hence its sensitivity is the largest among all the plants in the value set. Note that since g_s(jω₁) ∉ V_c(ω₁), this plant is ignored in the classical critical direction analysis presented in Section 2, which focuses only on plants that lie along the critical direction. In this section an alternative approach is presented to include sensitivity effects in the robustness margin. To this end we define the sensitivity perturbation radius

ρ_s(ω) = |1 + g₀(jω)| − η(ω)  if −1 + j0 ∉ V(ω)
ρ_s(ω) = |1 + g₀(jω)| + η(ω)  otherwise   (2.10)

where

η(ω) = min over z ∈ ∂V(ω) of |1 + z|   (2.11)

represents the minimum distance between the critical point −1 + j0 and the boundary set ∂V(ω). Then, in a fashion analogous to (2.9), the Nyquist robust sensitivity margin
is defined as

k_{N,s}(ω) = ρ_s(ω) / |1 + g₀(jω)|   (2.12)

Figure 2.3: Illustration of the inverse-sensitivity circle of radius η(ω) introduced in definition (2.11).

Figure 2.3 gives an interpretation of η(ω) defined in (2.11) as the radius of the inverse-sensitivity circle, namely, the smallest circle with center at −1 + j0 that contains a point belonging to the boundary ∂V(ω). Furthermore, the definition (2.11) and Figure 2.3 can be used to conclude that η(ω) = |1 + g_s(jω)|, where g_s(jω) is the perturbation in V(ω) that has the worst sensitivity. It is also of interest to note that k_{N,s}(ω) = 1 corresponds to the case where −1 + j0 ∈ ∂V(ω). This follows from the fact that k_{N,s}(ω) = 1 if and only if η(ω) = 0, and the latter equality is realized from the optimization problem (2.11) only when −1 + j0 ∈ ∂V(ω). Finally, it is of utility for the sequel to note that at all frequencies ω

η(ω) ≤ ξ(ω)   (2.13)

This inequality is derived as follows. Since the value-set boundary ∂V(ω) contains as a subset the set of critical boundary intersections B_c(ω), the optimization problem (2.11) is carried out over an optimization domain that is a superset of the optimization
domain used in the optimization problem (2.7). Consequently, the solutions to the respective optimization problems must satisfy the relationship (2.13).

Theorem 2 Consider the uncertain system (2.3) with assumptions (A1) and (A2). Then, the closed-loop system is robustly stable under unity feedback if and only if k_{N,s}(ω) < 1 ∀ω.

Proof: From the zero-exclusion principle [4] it can be claimed that the uncertain system (2.3) under assumptions (A1) and (A2) is robustly stable if and only if −1 + j0 ∉ V(ω) ∀ω. Therefore, it must be shown that under the definitions (2.10)-(2.12) for ρ_s(ω), the condition k_{N,s}(ω) < 1 ∀ω is equivalent to the set-membership condition −1 + j0 ∉ V(ω) ∀ω. First, to prove sufficiency one must show that k_{N,s}(ω) < 1 ∀ω implies that −1 + j0 ∉ V(ω) ∀ω. The proof proceeds by contradiction. Assume that k_{N,s}(ω) < 1 ∀ω and that there exists a frequency ω̄ such that −1 + j0 ∈ V(ω̄). Invoking the sensitivity-perturbation radius expression (2.10) for the case where −1 + j0 ∈ V(ω̄) and the definition (2.12), it follows that

k_{N,s}(ω̄) = ρ_s(ω̄)/|1 + g₀(jω̄)| = (|1 + g₀(jω̄)| + η(ω̄))/|1 + g₀(jω̄)| = 1 + η(ω̄)/|1 + g₀(jω̄)|

Since by definition η(ω̄) ≥ 0, the equation above implies that k_{N,s}(ω̄) ≥ 1, which is a contradiction. This proves sufficiency. Second, to prove necessity one must show that at any frequency ω the condition −1 + j0 ∉ V(ω) implies that k_{N,s}(ω) < 1. Assume that −1 + j0 ∉ V(ω). Invoking the sensitivity-perturbation radius expression (2.10), now for the case where −1 + j0 ∉ V(ω), and the definition (2.12), it follows that

k_{N,s}(ω) = ρ_s(ω)/|1 + g₀(jω)| = (|1 + g₀(jω)| − η(ω))/|1 + g₀(jω)| = 1 − η(ω)/|1 + g₀(jω)|   (2.14)

Since in this case −1 + j0 ∉ V(ω), it follows that −1 + j0 ∉ ∂V(ω), and hence from (2.11) it is concluded that η(ω) > 0. Furthermore, from (2.11) it is obvious that
η(ω) ≤ |1 + g₀(jω)|. Hence, it follows that 0 ≤ 1 − η(ω)/|1 + g₀(jω)| < 1, which can be used in (2.14) to conclude that k_{N,s}(ω) < 1. ∎

Figure 2.2a illustrates a special situation where k_{N,s}(ω) = k_N(ω). This follows from a simple argument using the elements shown in the figure, where it is clear that in this case η(ω) = ξ(ω) = |1 + g_s(jω)|. Hence ρ_s(ω) = ρ_c(ω) from (2.10) and (2.6), and therefore it follows from (2.12) and (2.9) that k_{N,s}(ω) = k_N(ω). It is also straightforward to verify that the robustness margins satisfy the following two properties: (P1) if k_{N,s}(ω) < 1, then k_{N,s}(ω) ≥ k_N(ω), and (P2) if k_{N,s}(ω) > 1, then k_{N,s}(ω) ≤ k_N(ω). These two properties follow in a straightforward fashion after using the inequality (2.13), the perturbation radius definitions (2.10) and (2.6), and the robustness margin definitions (2.12) and (2.9). Although in general k_{N,s}(ω) ≠ k_N(ω), as suggested in Figure 2.2b, both margins are nevertheless equivalent as indicated in the following theorem.

Theorem 3 The Nyquist robust sensitivity margin k_{N,s}(ω) and the Nyquist robust stability margin k_N(ω) are equivalent in the sense that (i) k_{N,s}(ω) < 1 ⇔ k_N(ω) < 1, (ii) k_{N,s}(ω) = 1 ⇔ k_N(ω) = 1, and (iii) k_{N,s}(ω) > 1 ⇔ k_N(ω) > 1.

Proof: The proof of sufficiency is developed below for cases (i)-(iii). The proof of necessity for the three cases in question follows an analogous argument, and is therefore omitted here for brevity. For case (i), assume k_{N,s}(ω) < 1 and utilize (2.12) to conclude that

ρ_s(ω) < |1 + g₀(jω)|   (2.15)

Also, from Theorem 2 and from the zero-exclusion principle [4], the condition k_{N,s}(ω) < 1 implies that −1 + j0 ∉ V(ω); hence from equations (2.6) and (2.10) the appropriate expressions for the respective perturbation radii are ρ_c(ω) = |1 + g₀(jω)| − ξ(ω) and ρ_s(ω) = |1 + g₀(jω)| − η(ω). From the latter two equations and inequality (2.13) it follows that

ρ_c(ω) ≤ ρ_s(ω)   (2.16)
Inequalities (2.15) and (2.16) imply that ρ_c(ω) < |1 + g₀(jω)|, which yields the result k_N(ω) < 1 after invoking (2.9). For case (ii), assume k_{N,s}(ω) = 1 and utilize (2.12) to conclude that ρ_s(ω) = |1 + g₀(jω)|, which in turn from equation (2.10) implies that η(ω) = 0. Since η(ω) = 0 solves the optimization problem (2.11), it follows that −1 + j0 ∈ ∂V(ω). Now, the fact that −1 + j0 ∈ ∂V(ω) implies that −1 + j0 ∈ B_c(ω), where B_c(ω) is the optimization domain in (2.7). Since −1 + j0 ∈ B_c(ω), it follows that the solution to the optimization problem (2.7) is ξ(ω) = 0, which can be used to conclude from (2.6) that ρ_c(ω) = |1 + g₀(jω)|. Substituting the latter equality into (2.9) yields k_N(ω) = 1. For case (iii), assume k_{N,s}(ω) > 1 and utilize (2.12) to conclude that

ρ_s(ω) > |1 + g₀(jω)|   (2.17)

Also, from Theorem 2 and from the zero-exclusion principle [4], the condition k_{N,s}(ω) > 1 implies that −1 + j0 ∈ V(ω); hence from equations (2.6) and (2.10) the appropriate expressions for the respective perturbation radii are ρ_c(ω) = |1 + g₀(jω)| + ξ(ω) and ρ_s(ω) = |1 + g₀(jω)| + η(ω). From the latter two equations and inequality (2.13) it follows that

ρ_c(ω) ≥ ρ_s(ω)   (2.18)

Inequalities (2.17) and (2.18) imply that ρ_c(ω) > |1 + g₀(jω)|, which yields the result k_N(ω) > 1 after invoking (2.9). ∎

The Nyquist robust sensitivity margin serves a role analogous to that of the structured singular value μ(ω) [17] or the multivariable stability margin k_m(ω) [46], as a scalar indicator of robust stability. Given that the optimization problem (2.11) must be solved, the deployment of an analysis approach based on k_{N,s}(ω) requires knowledge of the value-set boundary ∂V(ω). Fortunately this information is available in a number of problems of interest, such as the case of systems with real affine uncertainty structure discussed in the following section.
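Before specializing to affine uncertainty, the definitions (2.10)-(2.12) are easy to exercise numerically once a value-set boundary is available. The sketch below samples a disk boundary as a stand-in for ∂V(ω); the nominal point and radius are invented for illustration.

```python
# eta(w), rho_s(w), and k_{N,s}(w) of (2.10)-(2.12) at one frequency, with a
# sampled disk standing in for the boundary dV(w) (illustrative values only).
import cmath
import math

g0 = -0.3 + 0.5j                    # nominal point g0(jw), made up
r = 0.25                            # disk radius, made up
N = 3600
boundary = [g0 + r * cmath.exp(2j * math.pi * k / N) for k in range(N)]
eta = min(abs(1 + z) for z in boundary)          # (2.11): distance to -1+j0
inside = abs(1 + g0) <= r                        # -1+j0 in V(w), disk membership
rho_s = abs(1 + g0) + eta if inside else abs(1 + g0) - eta   # (2.10)
kNs = rho_s / abs(1 + g0)                        # (2.12)
print(eta, kNs)
```

For a polytopic value set the same recipe applies with the sampled circle replaced by the frame of edge images described in Section 2.4, where the minimization can in fact be done exactly.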
2.4 Application to Systems with Affine Uncertainty Structure

The proposed robust stability analysis approach is applied to a class of uncertain systems with real affine uncertainty structure of the form

g(s, q) = (n₀(s) + q₁n₁(s) + ⋯ + qₚnₚ(s)) / (d₀(s) + q₁d₁(s) + ⋯ + qₚdₚ(s))   (2.19)

where nᵢ(s) are numerator polynomials of known order and known real coefficients nᵢₖ, and dᵢ(s) are denominator polynomials of known order and known real coefficients dᵢₖ, i = 0, 1, ..., p. The element q ∈ Q is a vector of real perturbation parameters, where the real uncertainty domain

Q = {q ∈ ℝᵖ : qᵢ⁻ ≤ qᵢ ≤ qᵢ⁺, i = 1, 2, ..., p}   (2.20)

is a bounded rectangular polytope. In this case the uncertainty value set V(ω) is simply the map g(jω, Q). The objective is to calculate the value of k_{N,s}(ω) as a function of frequency using the expression (2.12). This in turn requires the calculation of the sensitivity perturbation radius ρ_s(ω) through its defining equation (2.10). Note that in order to apply (2.10) two problems must be addressed, namely, the optimization program (2.11) must be solved to find the inverse-sensitivity radius η(ω) (Problem I), and the set-membership clause −1 + j0 ∉ V(ω) must be assessed as true or false (Problem II) so that the appropriate branch of equation (2.10) can be identified. It is shown in [21] that the mapping g(jω, E(Q)) (denoted in the sequel as the value-set frame at the frequency ω), where E(Q) represents the set of edges of Q, spans the boundary set ∂V(ω). Furthermore, the frame g(jω, E(Q)) is a set comprised of arcs of circles and straight-line segments [21]. More precisely, let Eᵢ(Q), with corresponding extreme points qᵢ⁻ and qᵢ⁺, represent the i-th edge of the rectangular polytope Q. Then the frame g(jω, E(Q)) is composed of a set of frame elements g(jω, Eᵢ(Q)), and each frame element is either a straight-line segment or an arc
of a circle. These simple geometric properties of the frame allow the development of a precise solution of Problem I. In fact, the minimization problem (2.11), which is equivalent to finding the minimum distance between the point −1 + j0 and the boundary of the value set, reduces to a simple geometric problem: finding the shortest distance between the point −1 + j0 and an arc of a circle or a straight-line segment. Problem (2.11) can then be posed for each frame element, and the smallest solution found after considering all the edges of Q yields the value of η(ω) sought. For completeness it is convenient to briefly summarize relevant geometrical concepts regarding lines and arcs of circles. The interested reader is referred to [15] for further details. A line passing through points p₁, p₂ ∈ ℂ is defined by

L(p₁, p₂) := {z ∈ ℂ : z = p₁ + u(p₂ − p₁), u ∈ ℝ}

and a circle with radius r and center z₀ is given by

C(r, z₀) := {z ∈ ℂ : |z − z₀| = r}

The arc A(p₁, p₂, p₃) of a supporting circle C(r, z₀) is described by three points. One important issue to resolve is whether the map g(jω, Eᵢ(Q)) of a given edge Eᵢ(Q) is a straight line or an arc of a circle. This can be resolved by taking advantage of the cross product

p₁ × p₂ = Re(p₁)Im(p₂) − Im(p₁)Re(p₂)

i.e., the determinant of the 2×2 array whose columns are (Re(p₁), Im(p₁)) and (Re(p₂), Im(p₂)). Selecting three distinct points p₁, p₂, and p₃ of the map g(jω, Eᵢ(Q)), it follows that if (p₃ − p₁) × (p₂ − p₁) < 0 (> 0) it can be concluded that the segment is an arc of a circle turning to the right (left). If the cross product equals zero, then the three points are collinear and the segment is a straight line. Two of the points in question should be p₁ = g(jω, qᵢ⁻) and p₃ = g(jω, qᵢ⁺). The
third point can be taken as the image of a distinct point on the edge Eᵢ(Q) that can be selected arbitrarily, say, for example, p₂ = g(jω, (qᵢ⁻ + qᵢ⁺)/2).

Figure 2.4: The center z₀ and radius r of the supporting circle of the arc A(p₁, p₂, p₃) are determined from the intersection of the auxiliary lines L₁ and L₂.

For the case where the map g(jω, Eᵢ(Q)) of an edge Eᵢ(Q) is an arc, the minimum distance from −1 + j0 to the arc is given by either (i) the distance to one of the end points of the arc, or (ii) the distance to an internal point of the arc. Clearly, if the ray originating at the center of the supporting circle of the arc and passing through −1 + j0 does not intersect the arc, then the minimum distance is determined from one of the two end points of the arc. On the other hand, if the ray intersects the arc, then the distance between −1 + j0 and the point of intersection defines the minimum distance sought. Finally, the procedure described requires finding the supporting circle of an arc. From Figure 2.4, the center z₀ of the supporting circle of an arc that passes through three distinct points p₁, p₂, and p₃ on the complex plane can be determined from the intersection of the lines L₁ := (p₁ + p₂)/2 + ju₁(p₂ − p₁) and L₂ := (p₃ + p₂)/2 + ju₂(p₃ − p₂), where u₁, u₂ ∈ ℝ. The radius r is then found in an obvious fashion, say for example as r = |p₂ − z₀|. For the case where the map g(jω, Eᵢ(Q)) of an edge Eᵢ(Q) is a straight-line segment, the minimum distance to the point −1 + j0 is found using a procedure formally similar to the case of the arcs. First a supporting line is found. Then, one finds the point of intersection between the supporting line and a normal line that
passes through −1 + j0. The intersection point gives the minimal distance to −1 + j0 if the intersection point is also an element of the straight-line segment. Otherwise, the minimum distance is defined as the distance between −1 + j0 and one of the two endpoints of the straight-line segment. The procedure described above solves Problem I, yielding a numerical value for the inverse-sensitivity radius η(ω) at each frequency. Problem II can be addressed efficiently through the assistance of the following theorem, which is a restatement of an equivalent theorem derived in Baab et al. [2]. A detailed proof is given in the original reference.

Theorem 4 Consider the real-affine uncertain system (2.19)-(2.20) configured in the unity-feedback form given in Figure 2.1 under the assumptions (A1) and (A2). Then −1 + j0 ∉ V(ω) if and only if at frequency ω the following linear equality/inequality problem is infeasible:

Aq = b   (2.21)

subject to

Bq ≤ b⁺   (2.22)

where the matrix A and the vector b are assembled from the real and imaginary parts of the coefficients of the numerator and denominator polynomials of (2.19) evaluated at s = jω, and the matrix B and the vector b⁺ encode the parameter bounds (2.20); the explicit constructions, which involve the greatest-integer function ⌊·⌋, are given in Baab et al. [2].

Proof: See Baab et al. [2]. ∎

In summary, for the system (2.19) with parametric uncertainty (2.20) it is possible to solve Problem I and calculate the sensitivity radius η(ω) with very high numerical precision, because the solution to (2.11) is given by a set of simple algebraic equations. In addition, it is possible to solve Problem II in a numerically efficient fashion because the condition −1 + j0 ∈ V(ω) can be determined via a simple feasibility problem involving linear equalities and inequalities. Hence, the sensitivity perturbation radius ρ_s(ω) in (2.10) and the Nyquist robust sensitivity margin k_{N,s}(ω) in (2.12) can be computed precisely and efficiently.

2.5 Examples

Three examples are presented. The first one calculates the margins k_{N,s}(ω) and k_N(ω) to compare and contrast their values and to shed light on their interpretation. The second example illustrates in a dramatic fashion the fact that k_{N,s}(ω) provides a more meaningful indication of the degree of robust sensitivity of the closed loop. Finally, the last example is designed to illustrate how the concepts proposed here can be utilized to formulate and calculate an alternative robustness measure, namely a parametric stability margin.

2.5.1 Example 1

Consider the affine system of the form (2.19) with the structure [29]

g(s, q) = c(s) (s + q₁) / (s² + q₂s + q₃)   (2.23)
where

c(s) = (3603.7935s + 18018.9673) / (s² + 1434.5016s − 2312.4499)

is a feedback controller. Let the real perturbation parameters belong to the uncertainty domain

Q = {(q₁, q₂, q₃) ∈ ℝ³ : 0 ≤ q₁ ≤ 8, 2 ≤ q₂ ≤ 6, −19 ≤ q₃ ≤ −11}   (2.24)

Figure 2.5 shows the uncertainty value set for (2.23) at the frequency ω = 9, including the corresponding sensitivity circle centered at −1 + j0. Note that Figure 2.5 also shows the frame of the value set of (2.23), namely, the straight-line segments and arcs of circles that result from the mapping of all the edges of Q. The problem is to analyze the robust stability of the feedback loop involving the uncertain system (2.23) subject to the uncertainty description (2.24). The margin k_{N,s}(ω), calculated following the algorithm given in Section 4, and the margin k_N(ω), calculated using the technique described in Baab et al. [2], are plotted in Figure 2.6 for frequencies up to ω = 10. Given that k_{N,s}(ω) < 1 ∀ω, it can be concluded from Theorem 2 that the closed-loop system is robustly stable. Since the two margins are equivalent, the values of k_{N,s}(ω) < 1 reported in the figure correspond to values k_N(ω) < 1 at the same frequencies, consistent with Theorem 3. Note that Figure 2.6 shows that in this particular case k_{N,s}(ω) ≥ k_N(ω); this observation is consistent with property (P1), which implies that k_{N,s}(ω) is an upper bound for k_N(ω) when the system is robustly stable.

2.5.2 Example 2

Consider the system [2]

g(s, q) = c(s) n(s)/d(s)   (2.25)

where n(s) = (4 + 0.4q₁ + 0.2q₂)s + (20 + q₁ − q₃), d(s) = (9.5 + 0.5q₁ − 0.5q₂ + 0.5q₃)s³ + (27 + 2q₁ + q₂)s² + (22.5 − q₁ + q₃)s + 0.1, c(s) = 0.3s + 1, and where
Figure 2.5: Frame for the value set of system (2.23) at ω = 9, and the corresponding inverse-sensitivity circle. The nominal plant is indicated by the '+' marker.

Figure 2.6: Value of k_{N,s}(ω) and k_N(ω) as a function of frequency for the first example.
the real perturbation parameters belong to the polytope Q₁ = {(q₁, q₂, q₃) ∈ ℝ³ : …}.
Figure 2.7: Frame for the value set of system (2.25) at ω = 4.72 and α = 1.86. The nominal plant g₀(jω) is indicated by the '+' marker.

Figure 2.8 shows the Nyquist robust sensitivity margin k̄_{N,s} := max over ω of k_{N,s}(ω) that results when magnifying the original perturbation polytope Q by different blow-up factor values. The numerical study shows that when the blow-up factor has the value ᾱ = 1.89, the Nyquist robust sensitivity margin k̄_{N,s} is approximately equal to unity, hence reaching the limit of robust stability. The limiting value ᾱ = 1.89 is the parametric robust stability margin for the uncertain closed loop. In other words, the controller c(s) introduced in Example 2 can robustly stabilize the closed-loop system subject to any parametric blow-up of the uncertainty domain (2.27) by a factor less than ᾱ. Note that the blow-up factor used in Example 2 is α = 1.86 < ᾱ; hence, the uncertain closed loop of Example 2 is robustly stable.
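For reference, the elementary geometric operations used in Section 2.4 and revisited in Section 2.7 (classifying a frame element with the cross product, recovering the supporting circle of an arc from two perpendicular bisectors, and computing point-to-segment distances) can be sketched compactly; the sample points below are arbitrary illustrations rather than frame elements of the examples.

```python
def cross(p1, p2):
    # p1 x p2 = Re(p1) Im(p2) - Im(p1) Re(p2), the 2x2 determinant
    return p1.real * p2.imag - p1.imag * p2.real

def classify(p1, p2, p3):
    # sign convention of Section 2.4: < 0 arc turning right, > 0 left, = 0 line
    c = cross(p3 - p1, p2 - p1)
    return "line" if c == 0 else ("arc-right" if c < 0 else "arc-left")

def supporting_circle(p1, p2, p3):
    # center z0 = intersection of the bisectors L1, L2 of Figure 2.4 (Cramer's rule)
    a = 1j * (p2 - p1)                        # direction of L1
    b = -1j * (p3 - p2)                       # minus the direction of L2
    rhs = (p3 - p1) / 2                       # (p3 + p2)/2 - (p1 + p2)/2
    det = a.real * b.imag - a.imag * b.real   # nonzero unless points are collinear
    u1 = (rhs.real * b.imag - rhs.imag * b.real) / det
    z0 = (p1 + p2) / 2 + u1 * a
    return z0, abs(p2 - z0)

def dist_to_segment(z, p1, p2):
    d = p2 - p1
    t = ((z - p1) * d.conjugate()).real / abs(d) ** 2   # foot of the normal
    t = max(0.0, min(1.0, t))       # foot outside the segment -> nearer endpoint
    return abs(z - (p1 + t * d))

zc = -1 + 0j                                  # the critical point
print(classify(0j, 1 + 1j, 2 + 2j))           # collinear samples
z0, r = supporting_circle(1 + 7j, 6 + 2j, 1 - 3j)
print(z0, r)                                  # circle through the three points
print(dist_to_segment(zc, -2 - 1j, 0 - 1j))   # foot lands inside the segment
```

Running the minimum-distance routine over every frame element and taking the smallest value is precisely the computation of η(ω) described for Problem I.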
Figure 2.8: Plot of the Nyquist robust sensitivity margin k_N,s = max_ω k_N,s(ω) as a function of the blow-up factor α. The parametric robust stability margin is ᾱ = 1.89, which corresponds to the value of the blow-up factor α that makes k_N,s approximately equal to unity.

2.6 Conclusions

The new concept of a Nyquist robust sensitivity margin can be used to quantify the robust stability of uncertain closed-loop systems while at the same time producing a meaningful indication of the worst-case sensitivity that is realized. Hence, in this sense the approach is more attractive than the classical Nyquist robust stability margin framework, which ignores all systems that do not lie along the critical direction, and may therefore exclude from the analysis perturbed systems that have the worst sensitivity. On the other hand, in general the calculation of the Nyquist robust sensitivity margin may involve more numerically intensive optimization work, since the program (2.11) is a superset of the program (2.7). In other words, the calculation of k_N,s(ω) requires knowledge of the entire value-set boundary ∂V(ω), whereas the calculation
of k_N(ω) requires knowledge of only those points of ∂V(ω) that lie along the critical direction. The examples presented illustrate the ability of the Nyquist robust sensitivity margin methodology to produce meaningful quantitative measures of robustness for uncertain systems, even in the case where the uncertainty value set originates from a real parametric uncertainty description. The examples also show how the proposed paradigm can be used to characterize alternative robustness measures, such as a parametric blow-up factor for a real uncertainty description comprised of a rectangular polytope. The numerical algorithm used to solve the problem of Section 2.4 calls for modest computational requirements because the calculation of the sensitivity radius can be carried out in a straightforward fashion. It is anticipated, however, that the computational cost associated with other particular real parametric uncertainty descriptions may become significantly higher, given that the general parametric uncertainty analysis problem is known to be NP-hard [44].

2.7 Supplementary Calculation Algorithms

Further detail on the computational techniques discussed in Section 2.4 is presented in this section. First, an alternative algorithm for finding the supporting circle of an arc is introduced. Second, a simple algorithm to find the minimum distance between the critical point and a line segment is discussed. Finally, a technique is discussed for determining whether the intersection point of the line through the critical point and the supporting circle of the arc segment actually lies on the arc.

2.7.1 Supporting Circle of an Arc

An alternative method for finding the supporting circle of an arc defined by three points, A(x₁, x₂, x₃), is to utilize the equation of the circle. A circle that passes
through the point x and centered at the point C is given by
(x_r − C_r)² + (x_i − C_i)² = r²   (2.28)
where the subscript r refers to the real part of the complex point x, and the subscript i refers to its imaginary part. Now, substituting the three points that define the arc, namely x₁, x₂, and x₃, into equation (2.28), the following three equations are obtained:
x_r1² − 2 x_r1 C_r + C_r² + x_i1² − 2 x_i1 C_i + C_i² = r²
x_r2² − 2 x_r2 C_r + C_r² + x_i2² − 2 x_i2 C_i + C_i² = r²
x_r3² − 2 x_r3 C_r + C_r² + x_i3² − 2 x_i3 C_i + C_i² = r²   (2.29)
The three equations (2.29) have three unknowns, namely C_r, C_i, and r, which precisely identify the supporting circle of the arc defined by x₁, x₂, and x₃.

2.7.2 Minimum Distance between a Line and a Point

Given that uncertain systems of the form (2.19) produce either a line segment or an arc, it is important to be able to find the minimum distance between the critical point −1 + j0 and a line segment. The projection technique can be utilized to do just that. The following steps describe the procedure. First, given the critical point cp, and the two endpoints ps and pe of the line segment, three vectors are defined as follows:
v_cp = [Re(cp), Im(cp)]ᵀ,  v_ps = [Re(ps), Im(ps)]ᵀ,  v_pe = [Re(pe), Im(pe)]ᵀ
Next, define the direction vector v_d = v_pe − v_ps. Finally, the normalized projection is computed as
P = ((v_cp − v_ps) · v_d) / (v_d · v_d)
where the "·" refers to the dot product operator. Now, the minimum distance is calculated based on the value of the projection as follows:
if P < 0 then d_min = |cp − ps|
if P > 1 then d_min = |cp − pe|
if 0 ≤ P ≤ 1 then d_min = |cp − (ps + P (pe − ps))|

2.7.3 Identifying Points on the Arc

One decision that has to be made in Section 2.4 is whether the intersection point of the ray originating from the center of the supporting circle of the arc and passing through the critical point −1 + j0 lies on the arc. The key step is to identify the valid arc phase range, I_arc, such that if the phase of a point p lies in I_arc then p lies on the arc. The algorithm can be summarized as follows:
• Shift the supporting circle and the intersection point p to the origin.
• Find the phases of the start and end points of the arc, and convert them so that each phase lies in the range [0, 2π].
• Denote the smallest phase φ_min and the largest φ_max.
• Find the phase of a midpoint on the arc and denote it φ_m. If φ_min ≤ φ_m ≤ φ_max, then I_arc = [φ_min, φ_max]; in that case, if p ∈ I_arc then p lies on the arc. Otherwise, the arc crosses the zero-phase axis and I_arc = [φ_max, 2π] ∪ [0, φ_min]; again, if p ∈ I_arc then p lies on the arc.
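The three procedures of Section 2.7 are plain computational geometry, so they can be sketched compactly. The helper names below are mine, not the dissertation's; the circle fit exploits the fact that subtracting the first equation of (2.29) from the other two eliminates the quadratic terms and leaves a 2×2 linear system for the center.

```python
import math
import numpy as np

def supporting_circle(x1, x2, x3):
    """Supporting circle through three complex points (Section 2.7.1).

    Subtracting the first equation of (2.29) from the other two removes
    C_r^2 + C_i^2 - r^2 and yields a linear system for (C_r, C_i).
    """
    a, b, c = complex(x1), complex(x2), complex(x3)
    M = 2.0 * np.array([[b.real - a.real, b.imag - a.imag],
                        [c.real - a.real, c.imag - a.imag]])
    rhs = np.array([abs(b) ** 2 - abs(a) ** 2,
                    abs(c) ** 2 - abs(a) ** 2])
    cr, ci = np.linalg.solve(M, rhs)
    center = complex(cr, ci)
    return center, abs(a - center)

def min_dist_point_segment(cp, ps, pe):
    """Projection test of Section 2.7.2 for complex points."""
    d = pe - ps
    P = ((cp - ps).real * d.real + (cp - ps).imag * d.imag) / abs(d) ** 2
    if P < 0:
        return abs(cp - ps)          # closest to the start endpoint
    if P > 1:
        return abs(cp - pe)          # closest to the end endpoint
    return abs(cp - (ps + P * d))    # closest to the interior projection

def lies_on_arc(p, center, start, end, mid):
    """Phase-range test of Section 2.7.3: shift to the circle center,
    wrap phases into [0, 2*pi), and compare against the arc range."""
    ph = lambda z: math.atan2((z - center).imag, (z - center).real) % (2 * math.pi)
    lo, hi = sorted((ph(start), ph(end)))
    if lo <= ph(mid) <= hi:
        return lo <= ph(p) <= hi
    return ph(p) >= hi or ph(p) <= lo
```

For example, the circle through 2+2j, 1+3j, and 0+2j is recovered as center 1+2j with radius 1, and the distance from −1+j0 to the vertical segment joining 1−1j and 1+1j is 2.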
CHAPTER 3
SLIDING MODE CONTROL FOR TIME-DELAY SYSTEMS

3.1 Introduction

Delay is inherent in some control systems, such as processes involving heat or mass transport. The presence of delay in a dynamic system can have a destabilizing effect or can cause poor performance. Furthermore, delay can pose a significant challenge to ensuring closed-loop stability [27]. Throughout the literature, a variety of linear and nonlinear controllers have been used to stabilize time-delay systems, where the delay may appear in the state, the input, or both. Local and global stability conditions have also been derived to guarantee the asymptotic stability of the closed-loop system. The emphasis of this work is on sliding mode control (SMC), a technique known for its robustness with respect to perturbations and system uncertainties, which has been used to stabilize systems with time delays; however, most of the literature focuses on systems with either state delay [6, 33, 43, 28] or input delay [25, 45]. Some work has been done regarding systems with simultaneous state and input delays [22, 20, 55]. In Gouaisbaut et al. [22] a sliding mode controller is designed to stabilize a linear system with input and state delay. The technique is based on transforming the system into a regular form [31]; a memory control law that depends on previous values of the input is then designed to ensure reaching the manifold as well as the asymptotic stability of the closed-loop system. Lyapunov-Krasovskii methods are employed to derive the stability conditions. The method presented in Feiqi et al. [20] incorporates a dynamic compensator into the switching function (manifold) in order to simplify the equivalent control law. Then, a control law that is a function of the switching manifold and the system state is utilized to stabilize a system that
features a constant delay in both the state and the input. The work by Xia et al. [55] considers the derivation of delay-independent as well as delay-dependent stability conditions for a class of linear systems with simultaneous delay in the state and the input. An integral switching function with a compensator is utilized to obtain a simple equivalent control law. The stability conditions are given in terms of LMIs. This chapter introduces a new approach to the problem of stabilizing a linear system featuring both state and input delay. First, a state transformation is used to map the original system into an input-delay-free form where only state delays are present. Then, a new state, defined as the difference between the original state and the transformed state, is incorporated into the transformed system equation. Introducing an integral switching function in terms of the transformed states allows the derivation of a simple state-feedback equivalent control law. This control action, along with a proposed discontinuous control law, is shown to drive the states to the switching manifold in finite time. Finally, using a bound on the new state and utilizing Lyapunov techniques, the development delivers sufficient conditions that ensure the asymptotic stability of the closed-loop system in terms of a constant LMI that depends on the delay of the system. The chapter is structured as follows. In Section 3.2 the problem is formulated along with the transformation that eliminates the input delay. The design of the control law, which consists of an equivalent control action and a discontinuous control action, is discussed in Section 3.3. In Section 3.4 the control law is shown to drive the system states to the sliding surface in finite time. Derivation of sufficient conditions for the asymptotic stability of the transformed and original systems is given in Section 3.5. A bound on the time delay tolerable by the system is also given.
The chapter concludes with an illustrative example that verifies the results in Section 3.6, and a summary in Section 3.7.
3.2 Problem Formulation

The time-delay system considered is of the form
ẋ(t) = A x(t) + A_d x(t−h) + B u(t) + B_d u(t−h)
x(τ) = Φ(τ), τ ∈ [−h, 0]
u(τ) = Ψ(τ), τ ∈ [−h, 0]   (3.1)
where x(t) ∈ Rⁿ is the state, u(t) ∈ Rᵐ is the control input, and A, A_d, B, and B_d are matrices of appropriate dimensions. The system delay h is considered to be constant, Φ(τ) is an initial-state function, and Ψ(τ) is an initial-input function. The notation |·| is used to indicate, dependent on the scalar or vector nature of the argument, the absolute value of a scalar quantity or a vector norm, and ‖·‖ is used to indicate an induced matrix norm.

System Transformation. The following state transformation, suggested in [1], is introduced to map (3.1) into an input-delay-free system:
z(t) = x(t) + ∫_{t−h}^{t} e^{A(t−h−τ)} B_d u(τ) dτ   (3.2)
Differentiating equation (3.2) gives
ż(t) = ẋ(t) + A ∫_{t−h}^{t} e^{A(t−h−τ)} B_d u(τ) dτ + e^{−Ah} B_d u(t) − B_d u(t−h)
and then substituting for ẋ(t) from (3.1) gives
ż(t) = A z(t) + A_d x(t−h) + B̄ u(t)
where B̄ = B + e^{−Ah} B_d. In this work it is assumed that the pair (A, B̄) is controllable. Let us define the new state
v(t) := ∫_{t−h}^{t} e^{A(t−h−τ)} B_d u(τ) dτ   (3.3)
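The input-delay elimination hinges on the composite input matrix B̄ = B + e^{−Ah} B_d. A small numerical sketch using SciPy's matrix exponential; the matrices below are illustrative placeholders, not the data of the Section 3.6 example:

```python
import numpy as np
from scipy.linalg import expm

def delay_free_input_matrix(A, B, Bd, h):
    """B̄ = B + e^{-Ah} Bd, the input matrix of the reduced system."""
    return B + expm(-A * h) @ Bd

# Sanity check with illustrative data: for A = 0 the matrix
# exponential is the identity, so B̄ must equal B + Bd.
A = np.zeros((2, 2))
B = np.array([[2.0], [1.0]])
Bd = np.array([[1.0], [0.0]])
print(delay_free_input_matrix(A, B, Bd, 0.8))
```

The controllability assumption of the chapter is placed on the pair (A, B̄), so in practice this matrix would be formed first and its controllability checked before designing F.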
Then, the transformed system becomes
ż(t) = A z(t) + A_d z(t−h) + B̄ u(t) + Ā_d v(t−h)   (3.4)
where Ā_d = −A_d, since A_d x(t−h) = A_d z(t−h) − A_d v(t−h). Note that from (3.2), v(t) = z(t) − x(t) is interpreted as the difference between the original system state x(t) and the transformed system state z(t). A feedback matrix F is introduced such that Ā = A + B̄F is Hurwitz [47]. Treating the last term in (3.4) as an internal disturbance and defining
f(t, v(t−h)) := Ā_d v(t−h),
the system equation (3.4) can be rewritten as
ż(t) = (A + B̄F) z(t) + A_d z(t−h) + B̄ (u(t) − F z(t)) + f(t, v(t−h))   (3.5)
which is free from input delay.

3.3 Switching Function and Control Law Design

The first step in the design of a sliding mode controller is to define a switching function (manifold) along which the system possesses desired properties, such as stability. Various structures of switching functions have been used in the SMC literature. The most common designs, however, are the basic form s(t) = Cx(t), the integral form [47], and the dynamic or compensated form [55]. The basic form is best suited for systems having the general structure ẋ(t) = Ax(t) + Bu(t). The integral form, adopted in our work, has the advantage of cancelling the delay terms, which allows obtaining a simple state-feedback equivalent control law, as is shown later. The dynamic form is preferred when known disturbances and/or delays exist in the system, as it helps to cancel these terms, hence yielding a simple equivalent control law. The sliding surface is defined by a scalar switching function s(t) ∈ R of the integral form
s(t) = C z(t) − ∫₀ᵗ [C Ā z(τ) + C A_d z(τ−h)] dτ   (3.6)
where C is a design matrix chosen such that C B̄ is nonsingular. The structure of the control law is given by
u(t) = u_e(t) + u_d(t)   (3.7)
where u_e(t) is the equivalent part and u_d(t) is the discontinuous part of the control law. The equivalent control is obtained by setting to zero the derivative of equation (3.6) with respect to time, and then solving for u(t):
ṡ(t) = C ż(t) − C Ā z(t) − C A_d z(t−h) = 0
Following the standard approach in SMC, the state derivative ż(t) in the above equation is taken from (3.5) after ignoring the disturbance term f(t, v(t−h)). This gives the identity
C(A + B̄F) z(t) + C A_d z(t−h) + C B̄ (u(t) − F z(t)) − C Ā z(t) − C A_d z(t−h) = 0
which reduces to
−C B̄ F z(t) + C B̄ u(t) = 0
The solution of the above identity is u(t) = u_e(t); hence, after recognizing that C B̄ is invertible, it is possible to conclude that the equivalent control law sought is
u_e(t) = F z(t)   (3.8)
The discontinuous control law proposed is
u_d(t) = −(C B̄)⁻¹ [k s(t) + ρ(t) sgn(s(t))]   (3.9)
where
ρ(t) = ‖C‖ ‖Ā_d‖ |v(t−h)| + ζ   (3.10)
and where k > 0 and ζ > 0 are design parameters, and v(t−h) = z(t−h) − x(t−h). It must be noted that the discontinuous part is what enforces the sliding mode; the linear term k s(t) in equation (3.9) helps smooth out the trajectories. Various structures of the discontinuous control law can be found in Hung et al. [26].
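Equations (3.7)-(3.10) assemble into a single state-feedback law. The sketch below is a minimal illustration for a scalar switching function; all numerical design data (F, C, k, ζ) are placeholders chosen by me, not the dissertation's example values.

```python
import numpy as np

def smc_control(z, z_h, x_h, s, F, C, Bbar, Ad, k, zeta):
    """u = u_e + u_d from (3.7)-(3.10), scalar switching function.

    z, z_h, x_h : current state z(t), delayed states z(t-h) and x(t-h)
    s           : current value of the switching function s(t)
    """
    v_h = z_h - x_h                                # v(t-h) = z(t-h) - x(t-h)
    rho = (np.linalg.norm(C) * np.linalg.norm(Ad)  # gain rho(t) of (3.10)
           * np.linalg.norm(v_h) + zeta)
    u_e = float(F @ z)                             # equivalent control (3.8)
    CB = float(C @ Bbar)                           # C B̄, nonsingular (scalar here)
    u_d = -(k * s + rho * np.sign(s)) / CB         # discontinuous control (3.9)
    return u_e + u_d
```

With v(t−h) = 0 the gain ρ(t) collapses to ζ, so the discontinuous part reduces to the familiar −(CB̄)⁻¹(ks + ζ sgn s).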
3.4 Existence of a Sliding Mode

By "existence of a sliding mode" we mean that the system trajectories must be forced to reach the sliding surface in finite time and stay there thereafter. Defining a Lyapunov function V(t) = (1/2) s(t)², in order to assure reaching the manifold in finite time it suffices to show that V̇(t) < 0. The following theorem provides the proof.

Theorem 1  The time-delay system (3.5) with control law (3.7)-(3.10) reaches the sliding manifold within a finite time t_s, where
t_s ≤ (1/k) ln( k |s(0)| / ζ + 1 )   (3.11)

Proof: Select V(t) as a candidate scalar Lyapunov function. Then,
V̇(t) = s(t) ṡ(t)
= s(t) ( C ż(t) − C Ā z(t) − C A_d z(t−h) )
= s(t) { C(A + B̄F) z(t) + C A_d z(t−h) + C B̄ [ F z(t) − (C B̄)⁻¹(k s(t) + ρ(t) sgn(s)) − F z(t) ] + C f(t, v(t−h)) − C Ā z(t) − C A_d z(t−h) }
= s(t) ( −k s(t) − ρ(t) sgn(s) ) + C f(t, v(t−h)) s(t)
= −k |s(t)|² − ρ(t) |s(t)| + C f(t, v(t−h)) s(t)
Now, since |C f(t, v(t−h))| ≤ ‖C‖ ‖Ā_d‖ |v(t−h)|, after invoking (3.10) it follows that
V̇(t) ≤ −k |s(t)|² − ζ |s(t)|   (3.12)
Therefore, V̇(t) < 0 for all k > 0 and ζ > 0, and it can be concluded that the system trajectories attain sliding mode in finite time. An estimate of the upper bound of the reaching time t_s can be obtained by integrating the differential equation V̇(t) = −k |s(t)|² − ζ |s(t)|, where |s(t)| = √(2V(t)), under the initial condition V(0) = (1/2) s(0)². The result (3.11) is obtained after a simple transformation of variables and simple algebraic manipulation. The derivation of
the reaching time for the cases k = 0 and k ≠ 0 is presented in Appendix A.

3.5 System Stability

Having demonstrated that the sliding manifold is reached in finite time, it remains to show that once the system trajectories are in the sliding phase the system is asymptotically stable. In sliding mode the control law (3.7) reduces to u(t) = u_e(t) = F z(t). Then, from (3.8) it follows that the dynamic system (3.5) is given by the expression
ż(t) = Ā z(t) + A_d z(t−h) + f(t, v(t−h))   (3.13)
The developments in the sequel make use of the inequality
|v(t)| ≤ η(h) |z(t)|   (3.14)
where
η(h) = h ( max_{0≤θ≤h} ‖e^{−Aθ}‖ ) ‖B_d‖ ‖F‖ α   (3.15)
is a constant derived using a Razumikhin-like argument. The purpose of the Razumikhin parameter α > 1 is to describe the evolution of |z(t)|, i.e., |z(θ)| ≤ α |z(t)|, θ ∈ [t−h, t]. The bound (3.14) follows from applying successive bounding operations to the right-hand side of (3.3) and introducing the Razumikhin parameter. The complete derivation of the constant bound in (3.15) is given in Appendix B. We are now ready to provide the sufficient conditions for the asymptotic stability of the system (3.13), which are introduced by the following theorem.

Theorem 2  The time-delay system (3.5) with control law (3.7)-(3.10) is asymptotically stable in sliding mode if there exist positive-definite matrices P ∈ R^{n×n}, R ∈ R^{n×n}, and Q ∈ R^{n×n} such that
λ_min(R) > λ_max(Q)   (3.16)
and
λ_min(Q) ( λ_min(R) − λ_max(Q) ) > (1 + η(h))² ‖P A_d‖²   (3.17)
where P and R are solutions to the Lyapunov equation
P Ā + Āᵀ P = −R   (3.18)

Proof: Consider a Lyapunov functional of the form
V(t) = zᵀ(t) P z(t) + ∫_{t−h}^{t} zᵀ(τ) Q z(τ) dτ   (3.19)
The derivative of V(t) with respect to time is given by
V̇(t) = 2 zᵀ(t) P ż(t) + zᵀ(t) Q z(t) − zᵀ(t−h) Q z(t−h)
Substituting the expression for ż(t) given in (3.13) yields
V̇(t) = 2 zᵀ(t) P Ā z(t) + 2 zᵀ(t) P A_d z(t−h) + 2 zᵀ(t) P f(t, v(t−h)) + zᵀ(t) Q z(t) − zᵀ(t−h) Q z(t−h)   (3.20)
and then using (3.18) and bounding the right-hand side of (3.20) yields
V̇(t) ≤ −λ_min(R) |z(t)|² + λ_max(Q) |z(t)|² + 2 ‖P A_d‖ |z(t)| |z(t−h)| + 2 ‖P A_d‖ |z(t)| |v(t−h)| − λ_min(Q) |z(t−h)|²   (3.21)
Invoking the bound (3.14) and rearranging terms, inequality (3.21) can be written in the form
V̇(t) ≤ [ |z(t)|  |z(t−h)| ] [ a  b ; c  d ] [ |z(t)| ; |z(t−h)| ]   (3.22)
where
a = λ_max(Q) − λ_min(R),  b = c = (1 + η(h)) ‖P A_d‖,  d = −λ_min(Q)   (3.23)
To prove asymptotic stability, the Lyapunov functional must satisfy V̇(t) < 0, which implies that the matrix in (3.23) must be negative definite. This is ensured if and only if conditions (3.16) and (3.17) are satisfied. ∎

Theorem 2 can be reformulated to show explicitly the constraint on the size of the delay parameter imposed by design choices, such as the adopted Lyapunov matrices R and Q. This is given in the following corollary.

Corollary 1  The time-delay system (3.5) with control law (3.7)-(3.10) is asymptotically stable in sliding mode for time-delay values satisfying
h max_{0≤θ≤h} ‖e^{−Aθ}‖ ≤ ( λ_min(R) / (2 ‖P A_d‖) − 1 ) / ( α ‖B_d‖ ‖F‖ )   (3.24)
provided that
λ_min(R) > max( λ_max(Q), 2 ‖P A_d‖ )   (3.25)

Proof: The proof consists of deriving conditions that ensure the existence of a feasible solution to (3.16) and (3.17). The proof also recognizes that λ_max(Q) is the maximum eigenvalue admissible in the Lyapunov functional (3.19); hence λ_min(Q) ≤ λ_max(Q). Using the latter inequality along with the constraint imposed on λ_min(Q) by (3.17), it follows that
λ_max(Q) ≥ λ_min(Q) > (1 + η(h))² ‖P A_d‖² / ( λ_min(R) − λ_max(Q) )   (3.26)
A solution λ_min(Q) to (3.26) exists only if
(1 + η(h))² ‖P A_d‖² / ( λ_min(R) − λ_max(Q) ) < λ_max(Q)   (3.27)
which, using the fact that (3.16) requires λ_min(R) − λ_max(Q) > 0, is equivalent to
λ_max(Q)² − λ_min(R) λ_max(Q) + (1 + η(h))² ‖P A_d‖² < 0   (3.28)
The analysis of the above inequality reduces to investigating the boundary defined by the equality
λ_max(Q)² − λ_min(R) λ_max(Q) + (1 + η(h))² ‖P A_d‖² = 0   (3.29)
which can be readily solved to yield
λ_max(Q) = [ λ_min(R) ± √( λ_min(R)² − 4 (1 + η(h))² ‖P A_d‖² ) ] / 2
Given that only real solutions are meaningful, it follows that the discriminant must be nonnegative, i.e.,
λ_min(R) ≥ 2 (1 + η(h)) ‖P A_d‖   (3.30)
The presence of the factor (1 + η(h)) > 1 implies that a feasible solution to (3.30) exists only if
λ_min(R) > 2 ‖P A_d‖   (3.31)
Furthermore, from (3.30) it follows that the set of feasible solutions is given by the equivalent inequality
η(h) ≤ λ_min(R) / (2 ‖P A_d‖) − 1
which establishes condition (3.24) of the corollary after using (3.15) and suitably rearranging the factors in the inequality. Moreover, since λ_min(R) must simultaneously satisfy condition (3.16) and constraint (3.31), it follows that it must satisfy condition (3.25) of the corollary. ∎

It remains to show the asymptotic stability of the original system (3.1), as addressed in the following theorem.
Theorem 3  The time-delay system (3.1) with state x(t) is asymptotically stable if the transformed system (3.5) reaches the sliding manifold and is asymptotically stable on the manifold.

Proof: If z(t) reaches the sliding surface, then the control law reduces to u(t) = F z(t), and (3.2) can be rearranged in the form
x(t) = z(t) − ∫_{t−h}^{t} e^{A(t−h−τ)} B_d F z(τ) dτ   (3.32)
Now, when (3.5) is asymptotically stable it follows that z(t) → 0; then x(t) → 0 in (3.32), hence completing the proof. ∎

The ensuing discussion presents an example that illustrates the results. From Figure 3.1, the chattering phenomenon in the control action is obvious. Chattering refers to high-frequency finite-amplitude signals. It is mainly due to the discontinuous control law. Chattering is undesirable because it can excite neglected high-frequency components and lead to premature wear of the actuators. Several approaches have been proposed in the literature to alleviate or eliminate the effect of chattering. Slotine [49][50] proposed the use of a boundary layer such that standard SMC is used outside the boundary, and an approximated version of it takes effect inside the boundary. The work of Bartolini [5] introduces a different scheme for chattering reduction: the system order is increased and an estimator based on the augmented plant is defined. A suitable manifold is defined such that the derivative of the control law is discontinuous on this manifold. Finally, this control law is fed through an integrator placed in the plant to yield a continuous control law. In [34], the actuator dynamics are treated as unmodelled dynamics, and thus are not part of the control law. Instead, the low-pass filter characteristics of the actuators are utilized to smooth out the chattering introduced by the discontinuous control action.
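Slotine's boundary-layer idea replaces sgn(s) with a continuous function inside a thin layer around the manifold. One common smooth choice, s/(|s| + ε), is sketched below; the layer width ε is an illustrative design parameter.

```python
def smoothed_sign(s, eps=1e-3):
    """Continuous approximation of sgn(s): close to ±1 for |s| >> eps,
    and linear in s near the origin, which suppresses chattering."""
    return s / (abs(s) + eps)

print(smoothed_sign(0.5))    # ≈ 0.998 for eps = 1e-3
print(smoothed_sign(-0.5))   # ≈ -0.998
```

Substituting smoothed_sign for the signum in a discontinuous control law trades exact sliding for a small residual boundary layer, at the benefit of a continuous control signal.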
3.6 Example

Consider the time-delay system (3.1) with h = 0.8, with an initial-input function Ψ(τ) = 0 for τ ∈ [−h, 0), and an initial-state function Φ(τ) = [−1  2]ᵀ for τ ∈ [−h, 0], so that the initial-state vector is x(0) = [−1  2]ᵀ, and the following system parameters:
A = [ −1  0 ; 0.2  −0.3 ],  A_d = [ 0.01  0.04 ; 0.02  0 ],  B = [ 2 ; 1 ],  B_d = [ 1 ; 0 ]
The control design considered is based on the following matrices associated with the Lyapunov equation (3.18) and the Lyapunov functional (3.19):
R = [ 6  0 ; 0  6 ],  Q = [ 3  0 ; 0  3 ],  P = [ 4.5435  5.7537 ; 5.7537  48.4578 ]
Using the feedback matrix F = [0.0166  0.2827], the eigenvalues of Ā are placed at {−0.4, −0.45}. The controller parameters are k = 30 and ζ = 3. The switching function's initial value is s(0) = 3, and its design matrix is chosen as C = [0.9  0.85]. Selecting α = 2.3391, calculating the norms ‖B_d‖ = 1 and ‖F‖ = 0.2832, and evaluating max_{0≤θ≤h} ‖e^{−Aθ}‖ = 2.2381, equation (3.15) gives η(h) = 1.1858.
Figure 3.1 shows the results of a simulation study. Figure 3.1(a) depicts the asymptotic stability of the transformed system (3.5) with state variable z(t). The state trajectories for the original system (3.1) with state variable x(t) are shown in Figure 3.1(b). Equation (3.11) yields t_s = 0.1145, a value that is consistent with the time at which s(t) becomes identically zero in Figure 3.1(d), given that at that instant z(t) has reached the sliding manifold. It is apparent that the states x(t) develop asymptotic behavior after a time t ≈ t_s + h = 0.9145, which is a consequence of the fact that the original system has an input delay whereas the transformed system is free of input delay. Figure 3.1(c) shows the control action u(t) rising quickly from its initial value, and reaching the value of zero at approximately the same time that z(t) reaches the sliding manifold. Figure 3.1(c) also shows that the control scheme suffers from a chattering effect, as is to be expected from the presence of the signum function in the discontinuous control law (3.9). The chattering of the signal can be alleviated by introducing the approximation
sgn(s(t)) ≈ s(t) / ( |s(t)| + ε )
Figure 3.2 shows that a value of ε = 0.001 effectively makes the chattering disappear (see Figure 3.2(c)), while the state trajectories z(t), x(t), and the switching function s(t) remain virtually unchanged.

Remark 1  The negative definiteness of the constant matrix (3.23) can be checked directly through the Linear Matrix Inequality (LMI) toolbox of Matlab.

3.7 Conclusions

A sliding mode controller has been designed to stabilize a linear system with state and input delay. A key step is the use of a transformation to map the system into an input-delay-free form. This transformation is also used to define a new state that appears as
a disturbance in the transformed system. An SMC is then designed to stabilize the state-delay system. The controller is shown to successfully drive the system states to the sliding surface in finite time. Sufficient stability conditions are derived using Lyapunov techniques.

Figure 3.1: Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d).
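The constant (3.15) used in the example of Section 3.6 can be reproduced directly from the reported norms. The helper below simply multiplies the factors of (3.15); the small discrepancy against the reported 1.1858 comes from rounding of the intermediate norms.

```python
def razumikhin_bound(h, max_exp_norm, norm_Bd, norm_F, alpha):
    """η(h) = h · max_{0≤θ≤h} ‖e^{-Aθ}‖ · ‖Bd‖ · ‖F‖ · α, equation (3.15)."""
    return h * max_exp_norm * norm_Bd * norm_F * alpha

# Section 3.6 reports: h = 0.8, max‖e^{-Aθ}‖ = 2.2381,
# ‖Bd‖ = 1, ‖F‖ = 0.2832, α = 2.3391, giving η(h) ≈ 1.186.
print(round(razumikhin_bound(0.8, 2.2381, 1.0, 0.2832, 2.3391), 3))  # → 1.186
```

The same helper can be used to evaluate the left-hand side of the delay bound (3.24) for candidate delays h.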
Figure 3.2: Trajectories for the states of the transformed system (a), the states of the original system (b), the control law (c), and the switching function (d), with the approximation to the signum function.
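Remark 1 notes that the negative definiteness of the constant matrix (3.23) can be checked with Matlab's LMI toolbox; an equivalent check is a one-liner in Python. The sketch below takes the scalar data of Theorem 2 as inputs; the numbers in the usage line are illustrative, not the dissertation's example.

```python
import numpy as np

def sliding_mode_stable(lmin_R, lmax_Q, lmin_Q, norm_PAd, eta):
    """Test conditions (3.16)-(3.17) by checking negative definiteness
    of the 2x2 matrix (3.23): [[λmax(Q)-λmin(R), (1+η)‖PAd‖],
                               [(1+η)‖PAd‖,     -λmin(Q)]]."""
    a = lmax_Q - lmin_R
    b = (1.0 + eta) * norm_PAd
    M = np.array([[a, b], [b, -lmin_Q]])
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# Illustrative data: λmin(R)=6, λmax(Q)=λmin(Q)=3, ‖PAd‖=1, η=1.1858
print(sliding_mode_stable(6.0, 3.0, 3.0, 1.0, 1.1858))  # → True
```

Because the matrix is symmetric 2×2, this eigenvalue test is equivalent to checking (3.16) together with the determinant condition (3.17).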
CHAPTER 4
STATE FEEDBACK CONTROL OF TIME-DELAY BILINEAR SYSTEMS

4.1 Introduction

This chapter considers the stabilization of a class of state-delayed bilinear systems with constant delay. Much work has been done to derive sufficient conditions for the asymptotic stability of closed-loop bilinear systems via a variety of controllers, including state feedback [24], quadratic feedback [11], and nonlinear control [10, 12]. Work has also been done to derive stability conditions for time-delay bilinear systems. The combination, however, of the time delay and the nonlinearity makes the design of stabilizing controllers, as well as the analysis, much more challenging. Stability conditions can be either delay-independent or delay-dependent. Delay-independent conditions do not give any information regarding the size of the delay tolerable by the system and therefore they are generally more conservative. On the other hand, delay-dependent conditions provide information about the bound of the delay, which leads to less conservatism [18]. Some results can be found in [13, 23, 24, 42]. In Chiang [13], the stability analysis of a class of input-delay bilinear systems is considered. The derivations utilize the Razumikhin parameter in conjunction with matrix measure techniques [52]. The stabilization of a class of state-delay bilinear systems with saturating actuators is investigated in Niculescu et al. [42]. The work by Ho et al. [24] utilizes a memory state-feedback control law to yield global stability conditions for a class of time-delay bilinear systems. In Guojun [23], the stabilization of a class of time-varying bilinear systems with output feedback is studied. Delay-dependent conditions are given in Liu [37], where a memoryless state-feedback control law is
used to derive stability conditions for a time-delay bilinear system with saturating actuators. In this chapter, the stability analysis of a class of state-delay bilinear systems via state feedback control is investigated. Lemmas 2 and 3 are developed to facilitate the proof of the main theorem. The analysis utilizes the matrix measure [52] and a technique that allows expressing the stability conditions in terms of a bound on the system delay and an initial-condition region of attraction. As a result, delay-dependent stability conditions are derived. The chapter is organized as follows. Section 4.2 presents the problem along with controllability assumptions. Section 4.3 introduces preliminary results which are utilized in the proof of the main result. The main result is presented in Section 4.4, followed by an example and conclusions in Sections 4.5 and 4.6, respectively.

4.2 Problem Statement

Consider the system
ẋ(t) = A x(t) + A_d x(t−h) + B u(t) + N x(t) u(t)
x(φ) = Φ(φ), φ ∈ [−h, 0]   (4.1)
where t ∈ R is the time variable, x(t) ∈ Rⁿ is the state, u(t) ∈ R is a scalar input, A, A_d, N, and B are matrices of appropriate dimensions, and Φ(φ) is an initial-state function. The nonnegative system delay h is considered to be constant. In our development, the following conventions are used: the notation |·| is used to denote a vector p-norm, and the notation ‖·‖ is used to denote the induced matrix p-norm. Finally, μ(·) is used to denote the matrix-measure function (see Appendix E for its definition and useful properties) based on the induced matrix p-norm. Also, the following two assumptions are adopted:
(A1) the pair (A, B) is controllable.
(A2) the pair (A + A_d, B) is controllable.
Our objective is to find a linear state-feedback control of the form
u(t) = F x(t)   (4.2)
that renders the closed-loop system asymptotically stable. We also aim at deriving delay-dependent stability conditions which guarantee the asymptotic stability of the system for initial conditions lying in a specified region, the region of attraction. Under the feedback control (4.2) the closed-loop system becomes
ẋ(t) = Ā x(t) + A_d x(t−h) + N x(t) F x(t)   (4.3)
where Ā = A + BF is Hurwitz stable.

4.3 Preliminary Results

The main stability results are given in Theorem 1, in whose proof use is made of the following three lemmas.

Lemma 1  Consider the scalar differential equation
ẏ(t) = a y(t)² + b y(t)   (4.4)
where a > 0 and b ≠ 0. Then the analytic solution is given by [13]
y(t) = b e^{bt} y(0) / ( b + a y(0) (1 − e^{bt}) )   (4.5)
where y(0) is the initial condition.

Proof: The derivation of the analytic solution (4.5) is given in Appendix C. Furthermore, a thorough discussion of the solution behavior is provided in Section 4.7
along with a graphical interpretation of the solution. Now, the finite escape time t_f for which y(t_f) → ∞ can be found by setting the denominator of (4.5) to zero. In our development, the focus is on nonnegative initial conditions y(0) ≥ 0 that are not equilibrium points of (4.4) (i.e., y(0) ≠ 0 and y(0) ≠ −b/a). If 0 < y(0) < −b/a and b < 0, then there is no finite escape time, and from (4.5) the following observations are readily verified:
(i) y(t) > 0 ∀ t < ∞
(ii) y(t + T) < y(t) ∀ t < T < ∞
(iii) lim_{t→∞} y(t) = 0

Remark 1  It should be noted that claims (i) and (iii) can be verified from the analytical solution (4.5). The proof of claim (i) is given in Appendix D. Claim (ii) can be proved by verifying that (4.5) implies the inequality y(t + T) < y(t) ∀ t < T < ∞. The complete proof is given in Appendix D.

Lemma 2  Consider the scalar differential equation
ż(t) = a z(t)² + b z(t) + c₀   (4.6)
where a > 0, b < 0, and c₀ > 0. Let k₁ and k₂, with k₁ < k₂, be the roots of a z² + b z + c₀ = 0. If b² > 4 a c₀ and k₁ < z(0) < k₂, then
(i) z(t) > 0 ∀ t < ∞
(ii) z(t + T) < z(t) ∀ t < T < ∞
(iii) z(t) → k₁ as t → ∞

Proof: The condition b² > 4 a c₀ implies that k₁ and k₂ are real distinct roots. Introducing the state transformation
y(t) = z(t) + r   (4.7)
where r is a real constant to be determined, and combining the time derivative of (4.7) with equation (4.6) yields
ẏ(t) = a y(t)² + (b − 2 a r) y(t) + a r² − b r + c₀   (4.8)
The quadratic form defined by the last three terms on the right-hand side of equation (4.8) is set to zero by the values
r₁, r₂ = ( b ∓ √( b² − 4 a c₀ ) ) / (2 a)   (4.9)
where r₁ < r₂. One can easily verify that k₁ = −r₂ > 0 and k₂ = −r₁ > 0. The objective is to select a value of r that renders the coefficient b − 2 a r negative in equation (4.8). This is realized if and only if b/(2a) < r, which as a consequence of equation (4.9) is satisfied by selecting r = r₂, the largest root. Therefore, substituting r = r₂ into equation (4.8) yields
ẏ(t) = a y(t)² + (b − 2 a r₂) y(t)   (4.10)
Now, equation (4.10) has the same form as equation (4.4); hence, Lemma 1 can be applied to conclude that y(t) → 0 as t → ∞, provided that 0 < y(0) < −(b − 2 a r₂)/a. From the transformation equation (4.7), it follows that z(t) → −r₂ = k₁ as t → ∞, provided that the initial condition satisfies z(0) < −r₁ = k₂. This proves claim (iii) of the lemma. Claims (i) and (ii) can be readily verified from Lemma 1 and the transformation (4.7). First, since y(t) > 0 it follows that z(t) > −r₂ = k₁ > 0. Second, since y(t) is strictly monotonically decreasing, it follows that its shifted version z(t) is also strictly monotonically decreasing.

Remark 2  The arrows on the z-axis of Figure 4.1a show that the solution converges to the smaller equilibrium point k₁ whenever the initial condition satisfies z(0) < k₂. Figure 4.1b shows the conceptual state trajectories for three different initial conditions.
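The closed form (4.5) can be checked against direct numerical integration of the scalar equation. The sketch below uses forward-Euler stepping and illustrative parameters a = 1, b = −2 (so −b/a = 2, and y(0) = 1 lies inside the no-escape region of Lemma 1):

```python
import math

def y_analytic(t, y0, a, b):
    """Analytic solution (4.5) of the scalar equation ẏ = a y² + b y."""
    e = math.exp(b * t)
    return b * e * y0 / (b + a * y0 * (1.0 - e))

def y_euler(t, y0, a, b, n=200000):
    """Forward-Euler integration of the same equation, for comparison."""
    y, dt = y0, t / n
    for _ in range(n):
        y += dt * (a * y * y + b * y)
    return y

# With 0 < y(0) < -b/a and b < 0, the solution decays monotonically to 0.
a, b, y0 = 1.0, -2.0, 1.0
print(abs(y_analytic(1.0, y0, a, b) - y_euler(1.0, y0, a, b)) < 1e-3)  # → True
```

The same comparison, applied after the shift y(t) = z(t) + r₂ of (4.7), numerically illustrates the convergence of Lemma 2's solution to the smaller root k₁.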
PAGE 60
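As a numerical sanity check of Lemma 2, a minimal sketch: the constants $a = 1$, $b = -3$, $c_0 = 1$ are arbitrary choices satisfying the hypotheses ($a > 0$, $b < 0$, $c_0 > 0$, $b^2 > 4ac_0$), and a forward-Euler integration of (4.6) should decrease strictly monotonically toward the smaller root $k_1$:

```python
import math

# Constants chosen to satisfy Lemma 2: a > 0, b < 0, c0 > 0, b^2 > 4*a*c0
a, b, c0 = 1.0, -3.0, 1.0
disc = math.sqrt(b * b - 4 * a * c0)
k1 = (-b - disc) / (2 * a)          # smaller root of a z^2 + b z + c0
k2 = (-b + disc) / (2 * a)          # larger root

z = 1.5                              # initial condition inside (k1, k2)
dt, history = 1e-3, [z]
for _ in range(int(5 / dt)):         # integrate z' = a z^2 + b z + c0
    z += dt * (a * z * z + b * z + c0)
    history.append(z)

monotone = all(h2 < h1 for h1, h2 in zip(history, history[1:]))
print(k1, z, monotone)               # z approaches k1 from above
```

The run reproduces claims (ii) and (iii) of the lemma: the trajectory is strictly decreasing and settles at the smaller equilibrium $k_1 \approx 0.382$ rather than $k_2$.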
Figure 4.1: Graphical interpretation of the differential equation of Lemma 2: (a) derivative graph, (b) solution curves.

Lemma 3 Consider the scalar system

$$\dot{z}(t) = a z(t)^2 + b z(t) + h c_1 \sup_{t-2h \le \theta \le t} z(\theta) + h c_2 \sup_{t-2h \le \theta \le t} z(\theta)^2 \qquad (4.11)$$

where $h > 0$, $a > 0$, $c_1 > 0$, $c_2 > 0$, $b < 0$, and where the initial-state function

$$z(\phi) = \Phi(\phi) \ge 0, \quad \phi \in [-2h, 0] \qquad (4.12)$$

satisfies $\sup_{-2h \le \theta \le 0} z(\theta) = z(0)$. If

$$0 < z(0) < \min\Big\{ -\frac{b + h c_1}{a + h c_2},\; k_2 \Big\} \qquad (4.13)$$

and

$$h < \min\{h_1, h_2\} \qquad (4.14)$$

where $k_2$ is the largest root of $a z^2 + b z + c_0$ with $c_0 := h c_1 z(0) + h c_2 z(0)^2$, and where $h_1$ and $h_2$ are the delay bounds defined in the proof, then

(i) $z(t) > 0 \;\; \forall\, t < \infty$

(ii) $z(t+T) < z(t) \;\; \forall\, t < T < \infty$

(iii) $\lim_{t\to\infty} z(t) = 0$

Proof: Let $\tau = 2h$. Now, consider an initial condition that satisfies (4.13), hence $z(0) > 0$, and assume that at some time $t_2 < \infty$ the state function satisfies $z(t_2) < 0$,
and that the state changes sign for the first time at an instant $t_1 < t_2$ such that $t_2 - t_1 < \tau$. This scenario can hold only if $\dot{z}(t) < 0$ at some time $t \in (t_1, t_2]$. The proof consists of showing that this is a contradiction. From equation (4.11) it follows that for all $t \in (t_1, t_2]$ the state derivative $\dot{z}(t)$ is strictly positive, because all the terms on the right-hand side are positive. This contradicts the hypothesis and hence proves claim (i).

For claims (ii) and (iii), the proof is conducted in four steps. First, the system is shown to be strictly monotonically decreasing in the interval $[0, \tau)$. Next, the system is shown to be strictly monotonically decreasing in the interval $[\tau, 2\tau)$. In a third step, it is shown that the strict monotonic decrease is preserved in all subsequent intervals of length $\tau$. Finally, in step four it is shown that as $t \to \infty$ both $\dot{z}(t) \to 0$ and $z(t) \to 0$.

As a preliminary observation, note that at $t = 0$ equation (4.11) can be written as

$$\dot{z}(0) = a z(0)^2 + b z(0) + h c_1 z(0) + h c_2 z(0)^2 = (a + h c_2)\, z(0)^2 + (b + h c_1)\, z(0) \qquad (4.15)$$

The states that set $\dot{z}(0) = 0$ can be found by finding the roots of the expression

$$0 = (a + h c_2)\, z(0) \Big[ z(0) + \frac{b + h c_1}{a + h c_2} \Big] \qquad (4.16)$$

Hence, the initial states $z(0) = Z_1$ and $z(0) = Z_2$, where $Z_1 = 0$ and $Z_2 = -\frac{b + h c_1}{a + h c_2}$, produce zero derivatives at the initial time $t = 0$. The focus now turns to determining conditions that ensure that $z(t)$ is a decreasing function of time. This requires that the condition $\dot{z}(t) < 0$ hold at all finite time.

The first step of the proof considers the interval $t \in [0, \tau)$. Depending on the parameter $b$, there are two scenarios of relevance. If $b + h c_1 > 0$, then since $z(0) > 0$, it
follows from (4.15) that $\dot{z}(0) > 0$ and the solution is initially increasing. Obviously, this case is not desired. However, if $b + h c_1 < 0$, then $\dot{z}(0) < 0$ and the solution is initially decreasing. Moreover, since $\sup_{-2h \le \theta \le 0} z(\theta) = z(0)$, for $t \in [0, \tau)$ the delayed terms in (4.11) satisfy

$$h c_1 \sup_{t-2h \le \theta \le t} z(\theta) + h c_2 \sup_{t-2h \le \theta \le t} z(\theta)^2 \le h c_1 z(0) + h c_2 z(0)^2 =: c_0 \qquad (4.17)$$

so that, under the constraint

$$z(0) < -\frac{b + h c_1}{a + h c_2} \qquad (4.18)$$

the solution of (4.11) in $[0, \tau)$ is bounded by the solution $z_0(t)$ of the comparison system

$$\dot{z}_0(t) = a z_0(t)^2 + b z_0(t) + c_0 \qquad (4.19)$$

For Lemma 2 to apply to (4.19), the discriminant condition $b^2 > 4 a c_0$ must hold, which implies that the system delay must satisfy $h < h_1 := \frac{b^2}{4a\,(c_1 z(0) + c_2 z(0)^2)}$. Furthermore, since $b + h c_1 < 0$ is the desired condition, it follows that $h$ must also satisfy $h < h_2 := -\frac{b}{c_1}$. This suggests that the system delay must satisfy $h < \min\{h_1, h_2\}$, which yields inequality (4.14) of the lemma.

Let $k_1 = \frac{-b - \sqrt{b^2 - 4ac_0}}{2a}$ and $k_2 = \frac{-b + \sqrt{b^2 - 4ac_0}}{2a}$ respectively denote the smallest and largest roots of the right-hand side of (4.19). Since $b < 0$, Lemma 2 can be applied to (4.19) to conclude that the solution $z_0(t)$ is strictly monotonically decreasing, and that $z_0(t) \to k_1$ as $t \to \infty$, provided that $z(0)$ belongs to the region of attraction $k_1 < z(0) < k_2$. In order to also satisfy the constraint (4.18), the region of convergence is redefined as

$$k_1 < z(0) < \min\Big\{ -\frac{b + h c_1}{a + h c_2},\; k_2 \Big\} \qquad (4.20)$$

which is equivalent to inequality (4.13) in the lemma.
The second step aims to show that $z(t)$ decreases in $[\tau, 2\tau)$. Since it has been established that $z(t)$ decreases in the period $[0, \tau)$, it follows that $\sup_{t-2h \le \theta \le t} z(\theta) = z(t - \tau)$ and $\sup_{t-2h \le \theta \le t} z(\theta)^2 = z(t - \tau)^2$ in the period $[\tau, 2\tau)$. Hence, for $t \in [\tau, 2\tau)$ equation (4.11) can be written as

$$\dot{z}_\tau(t) = a z_\tau(t)^2 + b z_\tau(t) + c(t) \qquad (4.21)$$

where

$$c(t) = h c_1 z(t - \tau) + h c_2 z(t - \tau)^2 \qquad (4.22)$$

and where $z_\tau(t)$ is the solution of (4.11) when $t \in [\tau, 2\tau)$. Note that as a consequence of the results of the first step of the proof, $c(t)$ is strictly decreasing in $[\tau, 2\tau)$, which implies that the roots of the right-hand side of (4.21), namely,

$$K_1(t) = \frac{-b - \sqrt{b^2 - 4 a c(t)}}{2a} \qquad (4.23)$$

and

$$K_2(t) = \frac{-b + \sqrt{b^2 - 4 a c(t)}}{2a}$$

are such that the smaller root $K_1(t)$ is decreasing and the larger root $K_2(t)$ is increasing. This, in turn, implies that $z(t)$ is strictly decreasing.

The third step involves extending the results of the second step to the subsequent intervals $[2\tau, 3\tau), [3\tau, 4\tau), \ldots$, etc. Let $z_{n\tau}(t)$ represent the solution to system (4.11) in the interval $[n\tau, (n+1)\tau)$, where $n \ge 2$ is an integer. When $t \in [n\tau, (n+1)\tau)$, system (4.11) can be written in the equivalent form

$$\dot{z}_{n\tau}(t) = a z_{n\tau}(t)^2 + b z_{n\tau}(t) + c(t)$$

where $c(t)$ is given by (4.22). Note that when $n = 2$, the function $c(t)$ is strictly monotonically decreasing in $t \in [\tau, 2\tau)$. Repeating the argument invoked in the second part of the proof, namely that the monotonicity of $c(t)$ in the interval ensures
that $K_1(t)$ given in (4.23) is strictly monotonically decreasing in that interval, leads to the conclusion that $z_{n\tau}(t)$ is strictly monotonically decreasing in $[n\tau, (n+1)\tau)$ when $n = 2$. The proof is completed by induction for $n = 3, 4, \ldots$, etc. Hence, $z_{n\tau}(t)$ is strictly decreasing in any interval of length $\tau$. This implies that $z(t)$ is strictly decreasing, which proves claim (ii).

Step four of the proof is based on recognizing that from claim (i) $z(t)$ is bounded from below, and using the fact that $z(t)$ is strictly monotonically decreasing, it follows that $z(t) \to L$ as $t \to \infty$, where $L < \infty$ is a limit, and that $\dot{z}(t) \to 0$. Taking the limit as $t \to \infty$ on each side of equation (4.21) yields

$$0 = (a + h c_2)\, L^2 + (b + h c_1)\, L \qquad (4.24)$$

Solving for the two limits of (4.24) yields $L_1 = 0$ and $L_2 = -\frac{b + h c_1}{a + h c_2}$. Now, given that the initial condition (4.13) implies that $z(0) < -\frac{b + h c_1}{a + h c_2} = L_2$, the decreasing state $z(t)$ must reach the lower limit $L_1 = 0$. Thus, $\lim_{t\to\infty} z(t) = 0$. This proves claim (iii).

4.4 Main Result

Now, utilizing the developments in Lemma 3, we are ready to present the main result, which concerns the asymptotic stability of the original system (4.1). The approach is to bound the norm of the solution of (4.1) (i.e., $x(t)$) by a scalar function $z(t)$ that is asymptotically stable. Thus, when $z(t) \to 0$ then $x(t) \to 0$. An argument based on the comparison theorem [41] (see Appendix F) is utilized.

Theorem 1 The time-delay bilinear system (4.1) under assumptions (A1) and (A2) and the state feedback control law (4.2) is asymptotically stable if

$$0 < \|x(0)\| < \min\Big\{ -\frac{b + h c_1}{a + h c_2},\; \frac{-b + \sqrt{b^2 - 4 a c_0}}{2a} \Big\} \qquad (4.25)$$
and

$$h < \min\Big\{ \frac{b^2}{4a\,(c_1 \|x(0)\| + c_2 \|x(0)\|^2)},\; -\frac{b}{c_1} \Big\} \qquad (4.26)$$

where $a = \|N\| \|F\|$, $b = \mu(\bar{A})$, $c_1 = \|A_d A\| + \|A_d^2\|$, $c_2 = \|A_d N\| \|F\|$, $c_0 = h c_1 \|x(0)\| + h c_2 \|x(0)\|^2$, and $\bar{A} = A + A_d$.

Proof: Since equation (4.3) is continuously differentiable, then

$$x(t) - x(t-h) = \int_{-h}^{0} \dot{x}(t+\theta)\, d\theta \qquad (4.27)$$

Substituting for $\dot{x}(t)$ from equation (4.3) and rearranging terms yields the expression

$$x(t-h) = x(t) - \int_{-h}^{0} \big\{ A x(t+\theta) + A_d x(t+\theta-h) + N x(t+\theta) F x(t+\theta) \big\}\, d\theta$$

which can be substituted in equation (4.3) to get

$$\dot{x}(t) = \bar{A} x(t) + \int_{-h}^{0} (-A_d) \big\{ A x(t+\theta) + A_d x(t+\theta-h) + N x(t+\theta) F x(t+\theta) \big\}\, d\theta + N x(t) F x(t) \qquad (4.28)$$

where $\bar{A} = A + A_d$. The solution to equation (4.28) has the form

$$x(t) = e^{\bar{A}t} x_0 + \int_0^t e^{\bar{A}(t-s)} \Big[ \int_{-h}^{0} \big\{ -A_d A\, x(s+\theta) - A_d^2\, x(s+\theta-h) - A_d N\, x(s+\theta) F x(s+\theta) \big\}\, d\theta + N x(s) F x(s) \Big]\, ds \qquad (4.29)$$

where $x_0 = x(0)$ is the initial condition obtained by setting $x(0) = \Phi(0)$ in equation (4.1). Utilizing the matrix measure property $\|e^{\bar{A}t}\| \le e^{\mu(\bar{A}) t}$ [14], and taking the
norm of both sides of equation (4.29) gives

$$\|x(t)\| \le e^{\mu(\bar{A})t} \|x_0\| + \int_0^t e^{\mu(\bar{A})(t-s)} \Big[ \int_{-h}^{0} \big\{ \|A_d A\| \|x(s+\theta)\| + \|A_d^2\| \|x(s+\theta-h)\| + \|A_d N\| \|F\| \|x(s+\theta)\|^2 \big\}\, d\theta + \|N\| \|F\| \|x(s)\|^2 \Big]\, ds$$

Now, the inner integral can be bounded using the supremum of its arguments and the length $h$ of the integration interval, to yield

$$\|x(t)\| \le e^{\mu(\bar{A})t} \|x_0\| + \int_0^t e^{\mu(\bar{A})(t-s)} \Big[ h \|A_d A\| \sup_{s-h \le \theta \le s} \|x(\theta)\| + h \|A_d^2\| \sup_{s-2h \le \theta \le s} \|x(\theta)\| + h \|A_d N\| \|F\| \sup_{s-h \le \theta \le s} \|x(\theta)\|^2 + \|N\| \|F\| \|x(s)\|^2 \Big]\, ds \qquad (4.30)$$

Inequality (4.30) has the structure of the comparison system (4.11) with the scalar parameters $a$, $b$, $c_1$, and $c_2$ defined in the theorem statement. Two cases are considered, depending on whether the supremum of the initial-state function over $[-2h, 0]$ equals $\|x(0)\|$. For case 1, let the initial-state function satisfy $\sup_{-2h \le \theta \le 0} \|x(\theta)\| = \|x(0)\|$, and denote the corresponding solution of (4.3) by $x'(t)$. Then conditions (4.25) and (4.26) together with
Lemma 3 show that $z(t) \to 0$ as $t \to \infty$. Hence, since $\|x'(t)\| \le z(t)$, it follows that $x'(t) \to 0$, which implies that $x(t)$ is asymptotically stable. For case 2, let the solution to (4.3) be denoted as $x''(t)$, such that inequality (4.30) applies with $x(t) = x''(t)$. Since $x''_0 \le x'_0$, then from (4.30) it follows that $\|x''(t)\| \le \|x'(t)\|$. Finally, since $x'(t) \to 0$, then $x''(t) \to 0$, which implies that the system is asymptotically stable. This completes the proof.

4.5 Example

Consider the time-delay bilinear system (4.1) with $h = 0.5$, an initial-state function $\Phi(\phi) = [-1.5 \;\; 1.5]^T$ for $\phi \in [-2h, 0]$, so that the initial-state vector is $x(0) = [-1.5 \;\; 1.5]^T$, and the following parameters:

$$A = \begin{bmatrix} 0.5 & 0.2 \\ 0.8 & -2.1 \end{bmatrix}, \quad A_d = \begin{bmatrix} -0.2 & 0 \\ 0.5 & -0.1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad N = \begin{bmatrix} 0.02 & 0.06 \\ 0.01 & 0.03 \end{bmatrix}$$

Choosing the eigenvalues of $A + BF$ to be placed at $\{-1.6, -2.1\}$ yields the feedback matrix $F = [-2.1 \;\; -0.2]$. Let us investigate the alternative $p$-norms to verify which norm satisfies the inequality conditions (4.25) and (4.26). For the 1-norm, neither condition is satisfied. First, the discriminant $D = b^2 - 4ac_0 = -1.8$, and therefore the roots are complex. Second, $h = 0.5 > \min\{0.36, 0.06\}$. For the 2-norm, both conditions are satisfied. Hence, adopting the 2-norm gives the following values for the parameters in Theorem 1: $a = 0.1492$, $\mu_2(\bar{A}) = b = -1.32$, and $c_0 = 1.4130$. The attraction region (4.25) is given by $0 < \|x(0)\| = 2.12 < \min\{3.8, 7.6\}$. Also, the bound on the delay is given by $h = 0.5 < \min\{1.18, 1.03\}$. Finally, for the $\infty$-norm, both conditions are satisfied. The values of the parameters in Theorem 1 are given by $a = 0.168$, $\mu_\infty(\bar{A}) = b = -1.8$, and $c_0 = 1.0415$. The delay-bound condition (4.26) is satisfied since $h = 0.5 < \min\{1.44, 2.3148\}$. Furthermore, the domain of attraction (4.25) is given by $0 < \|x(0)\| = 1.5 < \min\{5.4855, 10.1\}$.
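The attraction-region bounds reported above can be reproduced from the quoted scalar constants alone, without the matrices (a minimal sketch using the values stated for the 2-norm and the $\infty$-norm):

```python
import math

def largest_root(a, b, c0):
    """Largest root k2 = (-b + sqrt(b^2 - 4 a c0)) / (2 a) of a z^2 + b z + c0."""
    disc = b * b - 4 * a * c0
    assert disc > 0, "Lemma 2 requires b^2 > 4 a c0 (real distinct roots)"
    return (-b + math.sqrt(disc)) / (2 * a)

# Scalar values quoted in the example for the 2-norm and the infinity-norm
k2_two = largest_root(0.1492, -1.32, 1.4130)
k2_inf = largest_root(0.168, -1.8, 1.0415)
print(round(k2_two, 1), round(k2_inf, 1))   # → 7.6 10.1
```

The computed values match the second entries of the reported attraction regions, $\min\{3.8, 7.6\}$ and $\min\{5.4855, 10.1\}$.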
Figures 4.2 and 4.3 illustrate the results of a simulation study. Figure 4.2 depicts the time evolution of the state trajectories, which exhibit asymptotic behavior. Figure 4.3 shows the norm of $x(t)$ converging to the origin.

Remark 3 The sufficient conditions given in (4.25) and (4.26) can vary depending on the norm and matrix measure chosen. While stability can be concluded for a certain norm, it may not be so for other norms. The stability conditions can, however, be tightened, thereby reducing conservatism, by selecting other norms and matrix measures. One choice could be the following weighted norm and its associated measure:

$$\|A\|_w = \max_j \sum_i \frac{w_i}{w_j}\, |a_{ij}|$$

and

$$\mu_w(A) = \max_j \Big\{ a_{jj} + \sum_{i \ne j} \frac{w_i}{w_j}\, |a_{ij}| \Big\}$$

4.6 Conclusions

Delay-dependent stability conditions are derived for a class of time-delay bilinear systems utilizing the comparison theorem and matrix measure techniques. The sufficient stability conditions provide a bound for the tolerable system delay as well as the domain of attraction for which asymptotic stability is guaranteed. These results are, however, conservative, mainly due to applying a supremization over a large time interval, which is twice the size of the delay.
Figure 4.2: Plot of the trajectories of the system states.

Figure 4.3: Plot of the norm of the state vector, $\|x(t)\|$.
4.7 Further Analysis of the System in Lemma 1

Further analysis of the system equation (4.4), and a discussion of the behavior of its solution (4.5), is presented in this section.

Theorem 2 Given the system (4.4) and the analytic solution (4.5), the following observations are readily verified:

Case 1: $b > 0$

$$\lim_{t\to\infty} y(t) = \begin{cases} \infty & \text{if } y(0) > 0 & (4.32a) \\ 0 & \text{if } y(0) = 0 & (4.32b) \\ -\frac{b}{a} & \text{otherwise} & (4.32c) \end{cases}$$

Case 2: $b < 0$

$$\lim_{t\to\infty} y(t) = \begin{cases} 0 & \text{if } y(0) < -\frac{b}{a} & (4.33a) \\ -\frac{b}{a} & \text{if } y(0) = -\frac{b}{a} & (4.33b) \\ \infty & \text{otherwise} & (4.33c) \end{cases}$$

Proof: For Case 1, it is noted from (4.5) that the solution becomes unbounded at finite time when the denominator is zero. The finite escape time (FET) can be calculated as a function of the parameters $a$, $b$, and $y(0)$, by setting the denominator to zero and solving for $t$. The result is

$$t_{FET} = \frac{1}{b} \ln\Big( \frac{a\, y(0) + b}{a\, y(0)} \Big) \qquad (4.34)$$

Now, to prove the first branch of Case 1, namely (4.32a), the denominator of (4.5) is set to zero, i.e.,

$$b + a\, y(0)\,(1 - e^{bt}) = 0$$
or

$$1 + \frac{a}{b}\, y(0)\,(1 - e^{bt}) = 0 \qquad (4.35)$$

Now, since as $t \to \infty$ the quantity $1 - e^{bt}$ takes values between zero and $-\infty$, and since $\frac{a}{b}\, y(0) > 0$, the equality (4.35) is satisfied for some $t > 0$. Hence, the denominator becomes zero and the solution blows up in finite time. The second branch of Case 1, (4.32b), is readily verified by substituting $y(0) = 0$ in (4.5). Finally, substituting $y(0) = -\frac{b}{a}$ into (4.5) gives $y(t) = -\frac{b}{a}$, which proves the branch (4.32c). Graphs (a) and (b) of Figure 4.4 show the derivative graph and the solution curves of (4.4), respectively, for the case where $b > 0$.

Figure 4.4: Graphical interpretation of the differential equation (4.4) for the case where $b > 0$: (a) derivative graph, (b) solution curves.

For Case 2, the first branch (4.33a) indicates that the system is asymptotically stable when the initial condition satisfies $y(0) < -\frac{b}{a}$. Therefore, in this case the finite escape time is avoided. To verify this claim, consider the denominator expression (4.35). For $t > 0$ the quantity $1 - e^{bt}$ takes values in the interval $(0, 1)$. The goal is to show that for $y(0) < -\frac{b}{a}$ the equality (4.35) is never satisfied. First, it is trivial to verify this claim for $y(0) = 0$. Second, for $0 < y(0) < -\frac{b}{a}$, it follows that, since $-1 < \frac{a}{b}\, y(0) < 0$, the second term in (4.35) is never equal to $-1$. Finally, for $y(0) < 0$, equality (4.35) is again never satisfied, since its second term is always positive. This proves the first branch of Case 2. The second branch (4.33b) is readily verified by substituting $y(0) = -\frac{b}{a}$ in (4.4). Finally, branch (4.33c) indicates that
for $y(0) > -\frac{b}{a}$ there is a finite escape time, which means that (4.35) is satisfied for some finite value of $t$. First, note that $\frac{a}{b}\, y(0) < -1$. Next, since the quantity $1 - e^{bt}$ takes values in the interval $(0, 1)$, for some time $t > 0$ the second term in (4.35) can equal $-1$, so that (4.35) is satisfied. This completes the proof of Case 2.

Figure 4.5 depicts the solution behavior for the case where $b < 0$. It is clear that when $y(0) < -\frac{b}{a}$ the solution converges asymptotically to the origin, and for $y(0) > -\frac{b}{a}$ the system becomes unstable.

Figure 4.5: Graphical interpretation of the differential equation (4.4) for the case where $b < 0$: (a) derivative graph, (b) solution curves.
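The escape-time formula (4.34) can be checked directly against the closed-form solution (4.5). A minimal sketch, with the arbitrary choice $a = 1$, $b = 1$, $y(0) = 1$, so that Case 1 with $y(0) > 0$ applies:

```python
import math

a, b, y0 = 1.0, 1.0, 1.0                       # Case 1: b > 0, y(0) > 0

# Finite escape time predicted by (4.34)
t_fet = (1.0 / b) * math.log((a * y0 + b) / (a * y0))

def denom(t):
    """Denominator of the closed-form solution (4.5)."""
    return b + a * y0 * (1.0 - math.exp(b * t))

def y(t):
    return b * math.exp(b * t) * y0 / denom(t)

print(t_fet, denom(t_fet))                     # denominator vanishes at t_fet
print(y(0.99 * t_fet))                         # solution grows without bound near t_fet
```

With these values $t_{FET} = \ln 2$; the denominator evaluates to zero there, and the solution has already exceeded 100 one percent of the way before the escape time.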
CHAPTER 5
FUTURE WORK AND DISCUSSIONS

Future work may focus on three problems: first, designing a sliding mode controller for a class of time-delay bilinear systems; second, extending the concept of the Nyquist robust sensitivity margin (NRSM) to a class of uncertain systems with a multilinear uncertainty structure; and finally, utilizing the concept of the NRSM to obtain a weighting function for an $H_\infty$ design similar to the work by Ji [29, 30].

5.1 Problem 1: Sliding Mode Control for a Delayed Bilinear System

The system under consideration is of the form

$$\dot{x}(t) = A x(t) + A_d x(t-h) + B u(t) + N x(t) u(t)$$
$$x(t) = \Phi(t), \quad t \in [-h, 0] \qquad (5.1)$$

where the system delay $h$ is assumed constant. The objective is to design a sliding mode control (SMC) that renders the system asymptotically stable. However, when nonlinearity is combined with time delay, the problem becomes more challenging from the control design viewpoint as well as from the perspective of the stability analysis. Therefore, problems may arise when designing a controller to force the trajectories onto the sliding manifold and to ensure they stay there for all subsequent time.

5.2 Problem 2: Extending the NRSM

Multilinear Uncertainty. An interesting, yet challenging, problem is to extend the concept of the Nyquist robust sensitivity margin to include linear systems
with a multilinear uncertainty structure, i.e., systems of the form

$$g(s) = \frac{n(s, q)}{d(s, r)} \qquad (5.2)$$

where the coefficient vectors $r$ and $q$ depend multilinearly on the uncertain parameters. In the polynomial case, since the multilinear uncertainty lacks edge results, the mapping theorem [7] introduces overbounding polynomials, which of course can be conservative. For the transfer function (5.2), there are two situations. First, when the vectors $r$ and $q$ are independent, stability can be analyzed using the mapping theorem by considering a polytopic family $\bar{g}(s)$ such that any worst-case margin calculated for $\bar{g}(s)$ is considered a guaranteed margin for $g(s)$ [7]. However, when the vectors $r$ and $q$ depend on each other, there are no comparable results to analyze the robust stability of the system.

$H_\infty$ Design. Another future work project is to design an $H_\infty$ controller based on a weight function derived from the NRSM. This project can follow the work of Ji [29, 30], which is summarized as follows. Given the system in Figure 5.1, use is made of the $M$-$\Delta$ structure given in Figure 5.2, where it is known from the small gain theorem [57] that the system in Figure 5.2 is internally stable for any admissible perturbation $\Delta(s)$ of norm less than one.

Figure 5.1: The negative feedback loop of the uncertain system $p(s)$ with a controller $c(s)$.
The transformation of the system in Figure 5.1 into the $M$-$\Delta$ formulation is given in Figure 5.3, where it follows that the system is stable for any $\delta(s)$ satisfying

$$\|\delta(s)\|_\infty < 1 \qquad (5.3)$$

where $R(s)$ is the transfer function seen by $\delta(s)$ in Figure 5.3. Next, the system in Figure 5.3 is put into the mixed-sensitivity framework, where the stability conditions can be expressed by the inequality

$$\|W_2(s)\, R(s)\|_\infty < 1 \qquad (5.4)$$

The problem now is to choose $W_2(s)$ to represent the effective part of $S(s)$. Finally, a weighting scheme, namely the effective critical perturbation radius (ECPR), is designed based on the critical direction theory.

Figure 5.3: A system with parametric uncertainty in the standard $M$-$\Delta$ loop.
APPENDIX A
DERIVATION FOR THE REACHING TIME

General case. Consider a Lyapunov function $V(t) = \frac{1}{2} s(t)^2$. The standard condition that ensures reaching the sliding manifold in finite time is given by

$$\dot{V}(t) = s(t)\,\dot{s}(t) \le -\eta\, |s(t)| \qquad (A.1)$$

where $\eta > 0$ [49].

Theorem 1 Under the inequality given in (A.1), the time at which the sliding manifold is reached satisfies

$$t_s \le \frac{|s(0)|}{\eta} \qquad (A.2)$$

Proof: The proof utilizes Figure A.1. From the figure it is noted that the initial value of the sliding function is $s(0)$, and that $s(t_s) = 0$. The equality limit of (A.1) (i.e., $s(t)\dot{s}(t) = -\eta\, |s(t)|$) can be written as

$$dt = -\frac{\operatorname{sgn}(s(t))}{\eta}\, ds \qquad (A.3)$$

where $s(t) \ne 0$. The reaching time $t_s$ is obtained by integrating (A.3), which yields the result

$$t_s = \begin{cases} -\frac{1}{\eta}\,\big( s(t_s) - s(0) \big) & : s(t) > 0, \text{ case (a)} \\ \;\;\frac{1}{\eta}\,\big( s(t_s) - s(0) \big) & : s(t) < 0, \text{ case (b)} \end{cases} \qquad (A.4)$$
Since $s(t_s) = 0$, equation (A.4) can be written as $t_s = \frac{s(0)}{\eta}$ or $t_s = -\frac{s(0)}{\eta}$, that is, $t_s = \frac{|s(0)|}{\eta}$.

Figure A.1: Plot of the switching function: (a) for $s(0) > 0$, and (b) for $s(0) < 0$.

New condition. Consider the Lyapunov condition

$$\dot{V}(t) = -\rho\, s(t)^2 - d\, |s(t)| \qquad (A.5)$$

where $\rho > 0$ and $d > 0$. Using the fact that $|s(t)| = \sqrt{2 V(t)}$, equation (A.5) can be rewritten as

$$\dot{V}(t) = -2\rho\, V(t) - \sqrt{2}\, d\, V(t)^{1/2} \qquad (A.6)$$

Let $y(t)^2 = V(t)$. Then equation (A.6) becomes

$$\dot{V}(t) = 2\, y(t)\, \dot{y}(t) = -2\rho\, y(t)^2 - \sqrt{2}\, d\, y(t) \qquad (A.7)$$

or

$$\dot{y}(t) + \rho\, y(t) = -\frac{d}{\sqrt{2}}$$
Multiplying through by $e^{\rho t}$ and rearranging terms yields

$$\frac{d}{dt}\big( y(t)\, e^{\rho t} \big) = -\frac{d}{\sqrt{2}}\, e^{\rho t} \qquad (A.8)$$

Integrating both sides of (A.8) gives

$$y(t)\, e^{\rho t} = -\frac{d}{\sqrt{2}\rho}\, e^{\rho t} + c \qquad (A.9)$$

The constant $c$ can be found by evaluating (A.9) at $t = 0$, which gives

$$c = y(0) + \frac{d}{\sqrt{2}\rho}$$

Substituting for the constant $c$, equation (A.9) can be written as

$$y(t) = e^{-\rho t} \Big[ y(0) + \frac{d}{\sqrt{2}\rho} \Big] - \frac{d}{\sqrt{2}\rho} \qquad (A.10)$$

Now, using (A.6), equation (A.10) can be written as

$$\sqrt{V(t)} = e^{-\rho t} \Big[ \sqrt{V(0)} + \frac{d}{\sqrt{2}\rho} \Big] - \frac{d}{\sqrt{2}\rho}$$

Since at sliding mode $V(t) = \sqrt{V(t)} = 0$, because $s(t) = \dot{s}(t) = 0$, it then follows that

$$0 = -\rho\, t_s + \ln\Big[ \sqrt{V(0)} + \frac{d}{\sqrt{2}\rho} \Big] - \ln\frac{d}{\sqrt{2}\rho}$$

or

$$t_s = \frac{1}{\rho} \ln\Big[ 1 + \frac{\sqrt{2}\,\rho \sqrt{V(0)}}{d} \Big] \qquad (A.11)$$

Now, since $V(t) = \frac{1}{2} s(t)^2$, it follows that

$$t_s = \frac{1}{\rho} \ln\Big[ 1 + \frac{\rho\, |s(0)|}{d} \Big]$$
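The reaching-time estimate (A.11) can be verified by integrating the sliding dynamics $\dot{s} = -\rho s - d\,\operatorname{sgn}(s)$, whose Lyapunov function $V = \frac{1}{2}s^2$ satisfies (A.5) with equality. A minimal sketch; the values of $\rho$, $d$, and $s(0)$ are arbitrary positive choices:

```python
import math

rho, d, s0 = 1.0, 0.5, 2.0
t_s = (1.0 / rho) * math.log(1.0 + rho * abs(s0) / d)   # prediction from (A.11)

s, t, dt = s0, 0.0, 1e-5
while s > 0.0:                       # integrate s' = -rho*s - d (valid while s > 0)
    s += dt * (-rho * s - d)
    t += dt
print(t_s, t)                        # simulated crossing time matches the prediction
```

Here the predicted reaching time is $\ln 5 \approx 1.609$, and the Euler simulation crosses zero within a millisecond of it.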
APPENDIX B
DERIVATION OF THE BOUND ON v(t) OF CHAPTER 3

This appendix presents a derivation of the bound (3.14),

$$\|v(t)\| \le \eta(h)\, \|z(t)\|$$

Consider the equation

$$v(t) = \int_{t-h}^{t} e^{A(t-h-\tau)} B_d\, u(\tau)\, d\tau \qquad (B.1)$$

Substituting the control law $u(t) = F z(t)$, valid at sliding mode, and taking the norm of both sides of (B.1) yields

$$\|v(t)\| \le \int_{t-h}^{t} \big\| e^{A(t-h-\tau)} \big\|\, \|B_d\|\, \|F\|\, \|z(\tau)\|\, d\tau \qquad (B.2)$$

Let $\theta = t - h - \tau$. Then for $\tau = t$, $\theta = -h$, and for $\tau = t - h$, $\theta = 0$. The inequality (B.2) then implies

$$\|v(t)\| \le \int_{-h}^{0} \big\| e^{A\theta} \big\|\, \|B_d\|\, \|F\|\, \bar{z}(t, t-h)\, d\theta \le h \max_{-h \le \theta \le 0} \big\| e^{A\theta} \big\|\, \|B_d\|\, \|F\|\, \bar{z}(t, t-h) \qquad (B.3)$$

where $\bar{z}(t, t-h) = \sup_{t-h \le \psi \le t} \|z(\psi)\|$. Utilizing the Razumikhin concept [38], it follows that $\bar{z}(t, t-h) \le \alpha\, \|z(t)\|$, where $\alpha$ is a Razumikhin parameter. Therefore, inequality (B.3) can be written as

$$\|v(t)\| \le h \max_{-h \le \theta \le 0} \big\| e^{A\theta} \big\|\, \|B_d\|\, \|F\|\, \alpha\, \|z(t)\| \qquad (B.4)$$

which yields the bound $\|v(t)\| \le \eta(h)\, \|z(t)\|$, where $\eta(h)$ represents the coefficient of $\|z(t)\|$ on the right-hand side of inequality (B.4).
APPENDIX C
SOLUTION FOR A DIFFERENTIAL EQUATION

Here, the derivation of the solution (4.5) of the system equation (4.4) given in Lemma 1 of Chapter 4 is presented.

Theorem 1 Consider the scalar differential equation

$$\dot{x}(t) = a x(t)^2 + b x(t) \qquad (C.1)$$

Then the analytic solution is given by

$$x(t) = \frac{b\, e^{bt}\, x(0)}{b + a\, x(0)\,(1 - e^{bt})} \qquad (C.2)$$

Proof: Equation (C.1) can be written as

$$\frac{dx}{b x(t) + a x(t)^2} = dt$$

Integrating both sides gives

$$\int \frac{dx}{x(t)\,\big( a x(t) + b \big)} = t + c_1 \qquad (C.3)$$

where $c_1$ is the integration constant. Working out the integral yields

$$\frac{1}{b} \ln\Big[ \frac{x(t)}{a x(t) + b} \Big] = t + c_1$$

which, after multiplying through by $b$ and taking the exponential of both sides, gives

$$\frac{x(t)}{a x(t) + b} = e^{bt}\, c_2 \qquad (C.4)$$
where $c_2 = e^{b c_1}$. At $t = 0$, the constant is $c_2 = \frac{x(0)}{a x(0) + b}$. Substituting back into (C.4) and rearranging terms yields

$$x(t)\,\big[ a x(0) + b \big] = e^{bt}\, x(0)\,\big[ a x(t) + b \big]$$

Finally, solving for $x(t)$ gives the solution

$$x(t) = \frac{b\, e^{bt}\, x(0)}{b + a\, x(0)\,(1 - e^{bt})}$$
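The closed form (C.2) can be cross-checked against a direct numerical integration of (C.1). A minimal sketch with the arbitrary choice $a = 1$, $b = -2$, $x(0) = 1$, which lies inside the region $0 < x(0) < -b/a$ where no finite escape occurs:

```python
import math

a, b, x0 = 1.0, -2.0, 1.0

def x_exact(t):
    """Closed-form solution (C.2)."""
    ebt = math.exp(b * t)
    return b * ebt * x0 / (b + a * x0 * (1.0 - ebt))

# Forward-Euler integration of x' = a x^2 + b x up to t = 1
x, dt = x0, 1e-5
for _ in range(int(1.0 / dt)):
    x += dt * (a * x * x + b * x)

print(x_exact(1.0), x)               # the two values agree closely at t = 1
```

With these parameters the closed form reduces to $x(t) = 2e^{-2t}/(1 + e^{-2t})$, and the Euler solution matches it to within the step-size error.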
APPENDIX D
PROOF OF CLAIMS (i) & (ii) OF LEMMA 1 OF CHAPTER 4

For convenience, we rewrite the system equation of Lemma 1. Given the scalar differential equation

$$\dot{y}(t) = a y(t)^2 + b y(t) \qquad (D.1)$$

where $a > 0$ and $b < 0$, its analytic solution is given in Appendix C and can be written as

$$y(t) = \frac{e^{bt}\, y(0)}{1 + \frac{a}{b}\, y(0)\,(1 - e^{bt})} \qquad (D.2)$$

Now, for the initial condition bound $0 < y(0) < -\frac{b}{a}$, we need to prove the following two claims:

Claim (i): $y(t) > 0 \;\; \forall\, t < \infty$.

Claim (ii): $y(t+T) < y(t) \;\; \forall\, t < T < \infty$.

Claim (i) can be checked by verifying that the numerator and the denominator of (D.2) are either both positive or both negative. Let us consider the numerator first. Since the exponential function is always positive (i.e., $e^{bt} > 0$) and $y(0) > 0$, the numerator is positive. Now, for the denominator, since $0 < y(0) < -\frac{b}{a}$, it can be readily verified that

$$-1 < \frac{a}{b}\, y(0) < 0 \qquad (D.3)$$

Furthermore, using the fact that $0 < 1 - e^{bt} < 1$ for $b < 0$ and $t > 0$, it follows that

$$0 > \frac{a}{b}\, y(0)\,(1 - e^{bt}) > -1 \qquad (D.4)$$

Therefore, the denominator of (D.2) is easily seen to be positive. Hence, $y(t) > 0 \;\forall\, t < \infty$. This completes the proof of claim (i).
Claim (ii) implies that the solution is strictly monotonically decreasing. It can be checked by verifying that the ratio

$$R = \frac{y(t+T)}{y(t)} < 1 \qquad (D.5)$$

is satisfied for all $t < T < \infty$. Using (D.2), the ratio (D.5) can be written as

$$R = \frac{e^{b(t+T)}\,\big[ 1 + \frac{a}{b}\, y(0)\,(1 - e^{bt}) \big]}{e^{bt}\,\big[ 1 + \frac{a}{b}\, y(0)\,(1 - e^{b(t+T)}) \big]} \qquad (D.6)$$

Multiplying out the numerator and the denominator of (D.6) gives

$$R = \frac{e^{b(t+T)} + \frac{a}{b}\, y(0)\, e^{b(t+T)} - \frac{a}{b}\, y(0)\, e^{bt}\, e^{b(t+T)}}{e^{bt} + \frac{a}{b}\, y(0)\, e^{bt} - \frac{a}{b}\, y(0)\, e^{bt}\, e^{b(t+T)}} \qquad (D.7)$$

Ignoring the last terms in the numerator and denominator of (D.7), since they are the same, the truncated ratio $R'$ is given by

$$R' = \frac{e^{b(t+T)}\,\big[ 1 + \frac{a}{b}\, y(0) \big]}{e^{bt}\,\big[ 1 + \frac{a}{b}\, y(0) \big]} = e^{bT} < 1 \qquad (D.8)$$

for all $T > 0$. This shows that $R < 1$, and hence proves claim (ii) of Lemma 1.
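Claims (i) and (ii) can also be spot-checked numerically from (D.2) on a time grid. A minimal sketch, with $a = 1$, $b = -2$, and $y(0) = 1.5$ inside the bound $0 < y(0) < -b/a = 2$:

```python
import math

a, b, y0 = 1.0, -2.0, 1.5            # satisfies 0 < y(0) < -b/a = 2

def y(t):
    """Analytic solution (D.2)."""
    ebt = math.exp(b * t)
    return ebt * y0 / (1.0 + (a / b) * y0 * (1.0 - ebt))

samples = [y(0.05 * k) for k in range(200)]
positive = all(v > 0 for v in samples)                              # claim (i)
decreasing = all(v2 < v1 for v1, v2 in zip(samples, samples[1:]))   # claim (ii)
print(positive, decreasing, samples[-1])   # tail value approaches zero
```

Both claims hold on the grid, and the tail of the sampled trajectory is vanishingly small, consistent with claim (iii) of Lemma 1.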
APPENDIX E
MATRIX MEASURE

The following definition of the matrix measure and its properties can be found in Vidyasagar [52].

Definition 1 The matrix measure, also known as the logarithmic derivative, of an induced matrix norm $\|\cdot\|_p$ on $\mathcal{C}^{n \times n}$ is a function $\mu_p : \mathcal{C}^{n \times n} \to \mathcal{R}$ defined by

$$\mu_p(A) = \lim_{\epsilon \to 0^+} \frac{\| I + \epsilon A \|_p - 1}{\epsilon} \qquad (E.1)$$

The matrix measure of $A \in \mathcal{C}^{n \times n}$ corresponding to the 1, 2, and $\infty$ norms is given, respectively, by

$$\mu_{p1}(A) = \max_j \Big\{ a_{jj} + \sum_{i \ne j} |a_{ij}| \Big\}$$

$$\mu_{p2}(A) = \lambda_{\max}\Big( \frac{A^* + A}{2} \Big)$$

$$\mu_{p\infty}(A) = \max_i \Big\{ a_{ii} + \sum_{j \ne i} |a_{ij}| \Big\}$$

Some useful properties of the matrix measure include the following:

- $\mu_p(A + B) \le \mu_p(A) + \mu_p(B)$.
- $-\mu_p(-A) \le \operatorname{Re} \lambda \le \mu_p(A)$, where $\lambda$ is an eigenvalue of $A$.
- $\mu_p(\cdot)$ is a convex function on $\mathcal{C}^{n \times n}$.
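The three measures and the eigenvalue-bound property can be exercised numerically. A minimal sketch in NumPy; the matrix $A$ is an arbitrary example:

```python
import numpy as np

def mu1(A):
    """Matrix measure induced by the 1-norm (column-based)."""
    return max(A[j, j].real + sum(abs(A[i, j]) for i in range(A.shape[0]) if i != j)
               for j in range(A.shape[1]))

def mu2(A):
    """Matrix measure induced by the 2-norm: lambda_max((A* + A)/2)."""
    return np.linalg.eigvalsh((A.conj().T + A) / 2).max()

def muinf(A):
    """Matrix measure induced by the infinity-norm (row-based)."""
    return max(A[i, i].real + sum(abs(A[i, j]) for j in range(A.shape[1]) if j != i)
               for i in range(A.shape[0]))

A = np.array([[-2.0, 1.0], [0.5, -3.0]])
re_eigs = np.linalg.eigvals(A).real
for mu in (mu1, mu2, muinf):
    # Property: -mu(-A) <= Re(lambda) <= mu(A) for every eigenvalue lambda
    assert -mu(-A) <= re_eigs.min() + 1e-12 and re_eigs.max() <= mu(A) + 1e-12
print(mu1(A), mu2(A), muinf(A))      # mu1 = -1.5, muinf = -1.0
```

Note that a negative measure certifies exponential decay of $\|e^{At}\|$ for the corresponding norm, which is exactly how $\mu(\bar{A})$ is used in the proof of Theorem 1.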
APPENDIX F
THE COMPARISON THEOREM

Let a vector-valued function $v(t, s, z) : J \times J \times \mathcal{R}^m \to \mathcal{R}^m$, with $J := [t_0, \infty)$, have the following monotonicity property: for any fixed $t, s$,

$$z_1 \le z_2 \;\Rightarrow\; v(t, s, z_1) \le v(t, s, z_2)$$

Let $z(t)$ be a solution to the inequality

$$z(t) \le z(0) + \int_{t_0}^{t} v(t, s, z(s-h))\, ds$$

Then the maximal solution $r(t)$ of

$$w(t) = z(0) + \int_{t_0}^{t} v(t, s, w(s-h))\, ds$$

satisfies $z(t) \le r(t)$ for $t \ge t_0$.
REFERENCES

[1] Z. Artstein. Linear systems with delayed controls: A reduction. IEEE Transactions on Automatic Control, AC-27:869–879, 1982.

[2] C. T. Baab, J. C. Cockburn, H. A. Latchman, and O. D. Crisalle. Generalization of the Nyquist robust stability margin and its application to systems with real affine parametric uncertainties. International Journal of Robust and Nonlinear Control, 11:1415–1434, 2001.

[3] B. R. Barmish. A generalization of Kharitonov's four-polynomial concept for robust stability problems with linearly dependent coefficient perturbations. IEEE Transactions on Automatic Control, 34:157–165, 1989.

[4] B. R. Barmish. New Tools for Robustness of Linear Systems. Macmillan, New York, 1994.

[5] G. Bartolini. Chattering phenomena in discontinuous control systems. International Journal of Systems Science, 30(12):2471–2481, 1989.

[6] V. R. Basker, K. Hrissagis, and O. D. Crisalle. Variable structure control design for reduced chatter in uncertain time-delay systems. In Proc. 36th IEEE Conference on Decision and Control, volume 4, pages 3234–3236, 1997.

[7] S. P. Bhattacharyya. Robust Control: The Parametric Approach. Prentice-Hall, New Jersey, 1995.

[8] H. Chapellat and S. Bhattacharyya. A generalization of Kharitonov's theorem: Robust stability of interval plants. IEEE Transactions on Automatic Control, 34:306–311, 1989.

[9] H. Chapellat, M. Dahleh, and S. Bhattacharyya. On robust nonlinear stability of interval control systems. IEEE Transactions on Automatic Control, 36:59–67, 1991.

[10] M. S. Chen. Exponential stabilization of a constrained bilinear system. Automatica, 34:989–992, 1998.

[11] M. S. Chen and Y. Z. Chen. Normalised quadratic controls for a class of bilinear systems. In IEE Proceedings on Control Theory and Applications, volume 149, pages 520–524, 2002.
[12] M. S. Chen and S. T. Tsao. Exponential stabilization of a class of unstable bilinear systems. IEEE Transactions on Automatic Control, 45:989–992, 2000.

[13] C. Chiang and F. Kung. Stability analysis of continuous bilinear systems. Journal of the Chinese Institute of Engineers, 17:569–576, 1994.

[14] W. A. Coppel. Stability and Asymptotic Behavior of Differential Equations. Heath, Boston, 1965.

[15] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. McGraw-Hill, New York, 1990.

[16] R. A. DeCarlo, S. H. Zak, and G. P. Matthews. Variable structure control of nonlinear multivariable systems: A tutorial. In Proceedings of the IEEE, volume 76, pages 212–232, 1988.

[17] J. Doyle. Analysis of feedback systems with structured uncertainties. In IEE Proceedings Part D, volume 129, pages 242–250, 1982.

[18] L. Dugard and E. I. Verriest. Stability and Control of Time-Delay Systems. Springer-Verlag, London, 1998.

[19] D. L. Elliott. Bilinear systems. Wiley Encyclopedia of Electrical Engineering, 2:308–323, 1999.

[20] D. Feiqi, L. Youngqing, and F. Zhaoshu. Variable structure control of time-delay systems with retarded state and retarded control. In IEEE International Conference on Systems, Man and Cybernetics, volume 1, pages 102–106, 1996.

[21] M. Fu. Computing the frequency response of linear systems with parametric perturbations. Systems & Control Letters, 15:45–52, 1990.

[22] F. Gouaisbaut, W. Perruquetti, and J. P. Richard. A sliding mode control for linear systems with input and state delays. In Proceedings of the 38th IEEE Conference on Decision and Control, volume 4, pages 4234–4239, 1999.

[23] J. Guojun and S. Wenzhong. Stability of bilinear time-delay systems. IMA Journal of Mathematical Control and Information, 18:53–60, 2001.

[24] D. W. Ho, G. Lu, and Y. Zheng. Global stabilisation for bilinear systems with time delay. IEE Proceedings on Control Theory and Applications, volume 149, pages 89–94, 2002.

[25] K. J. Hu, V. R. Basker, and O. D. Crisalle. Sliding mode control of uncertain input-delay systems. In Proc. of the American Control Conference, volume 1, pages 564–568, 1998.

[26] J. Y. Hung, W. Gao, and J. C. Hung. Variable structure control: A survey. IEEE Transactions on Industrial Electronics, 40(1):2–21, 1993.
[27] S. R. Inamdar, V. R. Kumar, and N. D. Kulkarni. Dynamics of reacting systems in the presence of time-delay. Chemical Engineering Science, 46(3):901–908, 1991.

[28] E. M. Jafarov. Design of sliding mode control for multi-input systems with multiple state delays. In Proc. of the American Control Conference, volume 2, pages 1139–1143, 2000.

[29] B. Ji, H. A. Latchman, and O. D. Crisalle. Interpretation of static-weight H-infinity design approaches for interval plants. In Proc. of the 41st IEEE Conference on Decision and Control, volume 2, pages 1434–1439, 2002.

[30] B. Ji, H. A. Latchman, and O. D. Crisalle. Robust H-infinity stabilization for interval plants. In IEEE Conference on Control Applications / Computer Aided Control System Design, volume 2, pages 1112–1117, 2002.

[31] H. Khalil. Nonlinear Systems. Prentice-Hall, Inc., New Jersey, 1996.

[32] V. L. Kharitonov. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differential Equations, 14:1483–1485, 1979.

[33] A. J. Koshkouei and A. S. Zinober. Sliding mode time-delay systems. In IEEE International Workshop on Variable Structure Control, pages 97–101, 1996.

[34] D. Krupp and Y. B. Shtessel. Chattering-free sliding mode control with unmodeled dynamics. In Proceedings of the American Control Conference, volume 1, pages 530–534, 1999.

[35] H. A. Latchman and O. D. Crisalle. Exact robustness analysis for highly structured frequency domain uncertainties. In Proc. of the American Control Conference, volume 6, pages 3982–3987, 1995.

[36] H. A. Latchman, O. D. Crisalle, and V. R. Basker. The Nyquist robust stability margin: A new metric for the stability of uncertain systems. International Journal of Robust and Nonlinear Control, 7:211–226, 1997.

[37] P. Liu and H. Hung. Stability for bilinear time-delay systems with saturating actuators. In IEEE Proc. of International Symposium on Industrial Electronics, volume 3, pages 1082–1086, 1999.

[38] M. S. Mahmoud. Robust Control and Filtering for Time-Delay Systems. Marcel Dekker, Inc., New York, 1996.

[39] R. R. Mohler. Bilinear Control Processes. McGraw-Hill, New York, 1973.

[40] R. R. Mohler. Nonlinear Systems: Application to Bilinear Control. Prentice-Hall, New Jersey, 1991.
[41] T. Mori, N. Fukuma, and M. Kuwahara. Simple stability criteria for single and composite linear systems with time delays. International Journal of Control, 34:1175–1184, 1981.
[42] S. Niculescu, S. Tarbouriech, J. Dion, and L. Dugard. Stability criteria for bilinear systems with delayed state and saturating actuators. In Proceedings of the 34th IEEE Conference on Decision and Control, volume 2, pages 2064–2069, 1995.
[43] S. Oucheriah. Dynamic compensation of uncertain time-delay systems using variable structure approach. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 42(8):466–469, 1995.
[44] S. Poljak and J. Rohn. Checking robust nonsingularity is NP-hard. Mathematics of Control, Signals, and Systems, 6:1–9, 1993.
[45] Y. Roh and J. Oh. Sliding mode control with uncertainty adaptation for uncertain input-delay systems. In Proceedings of the American Control Conference, volume 1, pages 636–640, 2000.
[46] M. G. Safonov. Stability margins of diagonally perturbed multivariable feedback systems. In IEE Proceedings Part D, volume 129, pages 251–256, 1982.
[47] K. Shyu and J. Yan. Robust stability of uncertain time-delay systems and its stabilization by variable structure control. International Journal of Control, 57(1):237–246, 1993.
[48] A. Sideris. An efficient algorithm for checking the robust stability of a polytope of polynomials. Mathematics of Control, Signals, and Systems, 4:315–337, 1991.
[49] J. E. Slotine. Sliding controller design for nonlinear systems. International Journal of Control, 40(2):421–434, 1984.
[50] J. E. Slotine and W. Li. Applied Nonlinear Control. Prentice-Hall, Inc., New Jersey, 1991.
[51] V. Utkin. Variable structure systems with sliding modes. IEEE Transactions on Automatic Control, AC-22(2):212–222, 1977.
[52] M. Vidyasagar. Nonlinear Systems Analysis. Prentice-Hall, New Jersey, 1978.
[53] L. Wang. Robust strong stabilizability of interval plants: It suffices to check two vertices. Systems & Control Letters, 26:133–136, 1995.
[54] L. Wang. Kharitonov-like theorems for robust performance of interval systems. Journal of Mathematical Analysis and Applications, 279:430–441, 2003.
[55] Y. Xia, J. Han, and Y. Jia. A sliding mode control for linear systems with input and state delays. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 3, pages 3332–3337, 2002.
[56] K. D. Young, V. Utkin, and U. Ozguner. A control engineer's guide to sliding mode control. IEEE Transactions on Control Systems Technology, 7(3):328–342, 1999.
[57] K. Zhou, J. Doyle, and K. Glover. Robust and Optimal Control. Prentice-Hall, New Jersey, 1996.
BIOGRAPHICAL SKETCH

Saleh Al-Shamali was born in Kuwait City in 1973. He obtained his bachelor's degree in electrical and computer engineering at the University of Missouri-Columbia in December 1996. He worked for a year at Kuwait Oil Company (KOC) as a technical engineer. In 1998, he decided to pursue a master's and Ph.D. degree in the controls and systems area. He joined the Electrical and Computer Engineering Department at the University of Florida in Fall 1998. He is aiming to graduate in December 2004.
STABILITY ANALYSIS AND CONTROL DESIGN FOR UNCERTAIN AND TIME-DELAY SYSTEMS

Saleh A. Al-Shamali
(352) 392-2584
Department of Electrical and Computer Engineering
Chair: Haniph A. Latchman
Cochair: Oscar D. Crisalle
Degree: Doctor of Philosophy
Graduation Date: December 2004

This dissertation develops methodologies that enable analysis and control design for real linear and bilinear systems subject to uncertainty and time delay. An indicator for the robust stability of uncertain systems is proposed, namely, the Nyquist Robust Sensitivity Margin, a tool that indicates how large a parameter perturbation can be before it causes instability. Moreover, new control designs are proposed to stabilize linear and bilinear systems under the influence of time delay. A sliding mode control law is designed to stabilize a linear plant affected by time delay. Also, a state feedback control is proposed to stabilize a time-delay bilinear system. The results obtained by the two designs provide quantitative information regarding the largest delay the plant can handle.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Haniph A. Latchman, Chair
Professor of Electrical and Computer Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Oscar D. Crisalle, Cochair
Professor of Chemical Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Tan F. Wong
Assistant Professor of Electrical and Computer Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Norman G. Fitz-Coy
Associate Professor of Mechanical and Aerospace Engineering

This dissertation was submitted to the Graduate Faculty of the College of Engineering and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December 2004

Pramod P. Khargonekar
Dean, College of Engineering

Kenneth J. Gerhardt
Interim Dean, Graduate School

