Citation
Gaussian Mixture Model Based System Identification and Control

Material Information

Title:
Gaussian Mixture Model Based System Identification and Control
Creator:
LAN, JING ( Author, Primary )
Copyright Date:
2008

Subjects

Subjects / Keywords:
Control systems ( jstor )
Dynamical systems ( jstor )
Linear models ( jstor )
Modeling ( jstor )
Neural networks ( jstor )
Noise control ( jstor )
Parametric models ( jstor )
Polynomials ( jstor )
Signals ( jstor )
System identification ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Jing Lan. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
8/31/2006
Resource Identifier:
476186462 ( OCLC )

Full Text

GAUSSIAN MIXTURE MODEL BASED SYSTEM IDENTIFICATION AND CONTROL

By
JING LAN

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

Copyright 2006

by

Jing Lan


To my family.


ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my advisor, Dr. José C. Principe, for his continuous guidance, support, and help throughout the past five years of my Ph.D. study. Without his invaluable encouragement and patience, this work would have been impossible. He gave me the privilege to work in the Computational NeuroEngineering Lab and allowed me the freedom to explore the unknown world, and was always attentive and pertinent in critical moments. My deep appreciation is also extended to Dr. John G. Harris, Dr. John M. Shea, and Dr. Loc Vu-Quoc for their interest and participation on my supervisory committee. My deep recognition goes to Dr. Deniz Erdogmus and Dr. Mark A. Motter for their help and informative comments during my stay in CNEL. I also owe many thanks to all my colleagues in the CNEL group for their discussion of ideas and friendship.

My wife, Yun Zhu, has my deepest appreciation for her great love and unconditional support. Finally, I am grateful to my parents and my brother for their love.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ........ iv

LIST OF TABLES ........ vii

LIST OF FIGURES ........ viii

ABSTRACT ........ x

CHAPTER

1 INTRODUCTION ........ 1

1.1 Mission of System Identification and Control ........ 1
1.2 Local Modeling with Improved Gaussian Mixture Model ........ 5
1.3 Dissertation Outline ........ 8

2 NONLINEAR DYNAMICAL SYSTEM MODELING ........ 10

2.1 Nonlinear Dynamical Systems ........ 10
2.2 Time Embedding State Space Reconstruction ........ 11
2.3 Global Dynamical System Modeling ........ 13
2.3.1 Polynomial Modeling ........ 14
2.3.2 Echo State Network ........ 15
2.4 Local Dynamic Modeling ........ 20
2.5 Summary ........ 22

3 GAUSSIAN MIXTURE MODELS AND LOCAL LINEAR MODELS ........ 24

3.1 Introduction ........ 24
3.2 Maximum Likelihood Estimation Training - EM Algorithm ........ 29
3.3 Estimation of Local Linear Model ........ 32
3.4 Improvement on GMM ........ 36
3.5 Comparison of the GMM with the SOM ........ 38
3.6 Summary ........ 41

4 GAUSSIAN MIXTURE MODEL BASED CONTROL SYSTEM ........ 42

4.1 Introduction of Mixture Model Controller ........ 43
4.2 Inverse Error Dynamic Controller Using GMM ........ 46
4.3 PID Controller ........ 50
4.4 Optimal Robust Controller ........ 52
4.4.1 LLM-based Optimal Set-Point Controller ........ 55
4.4.2 LLM-based Optimal Tracking Controller ........ 59
4.5 Summary ........ 61

5 APPLICATIONS AND DISCUSSIONS ........ 63

5.1 Chaotic Systems ........ 63
5.1.1 The Lorenz System ........ 64
5.1.2 The Duffing Oscillator System ........ 70
5.2 SISO System ........ 75
5.2.1 Linear SISO Mass-Spring System ........ 75
5.2.2 Nonlinear SISO Missile System ........ 78
5.3 Nonlinear MIMO System: LoFLYTE System ........ 85
5.3.1 System Introduction ........ 85
5.3.2 System Identification and Control Experiment ........ 88

6 CONCLUSIONS ........ 97

6.1 Summary ........ 97
6.2 Future Work ........ 98
6.3 Conclusion ........ 99

APPENDIX

A OPTIMIZING PARAMETERS OF ESN ........ 100

B ESN-BASED DAC CONTROLLER ........ 104

REFERENCES ........ 110

BIOGRAPHICAL SKETCH ........ 117

LIST OF TABLES

Table page

3-1 Performance on 32-D LoFLYTE data ........ 38

5-1 Performance on the Lorenz system ........ 67

5-2 Performance on the Duffing oscillator ........ 71

5-3 Performance on the linear mass system ........ 76

5-4 Performance on the missile model ........ 80

5-5 Control performance comparison on missile system considering noise ........ 84

5-6 Performance on the LoFLYTE simulation data ........ 91

5-7 Control performance comparison on the LoFLYTE system considering noise ........ 92

















LIST OF FIGURES

Figure                                                                page

1-1 1-D function estimation by GMM-LLM ........ 5

2-1 Block diagram of the ESN ........ 17

2-2 Distribution of ESN internal weights with maximum spectral radius being 0.9, with unit circle as a reference

3-1 Structure of GMM with LLM

3-2 Explanation of G-SOM algorithm

3-3 Soft combination region using GMM (between dash lines)

4-1 Structure of control system based on GMM-LLM

4-2 T-S fuzzy logic membership function

4-3 Structure of inverse control system

4-4 Pole-placement PID control system

4-5 Optimal robust control system schematic diagram

5-1 Dynamics of the Lorenz system

5-2 System identification performance on the "x" of the Lorenz system

5-3 Distribution of the Lorenz system (dash line) and GMM weights (stars)

5-4 Weights change of 8 GMM kernels

5-5 Performance comparison between SOM and GMM

5-6 Compensation performance

5-7 Dynamics of the Duffing oscillator as an autonomous system

5-8 System identification performance on the Duffing system

5-9 Sample of weights change of 8 GMM kernels

5-10 Control performance of different controllers on the Duffing system

5-11 Dynamics of the mass system

5-12 Sample of system identification performance on the mass system ........ 77

5-13 Sample of weights change of 6 GMM kernels for mass system ........ 78

5-14 Set-point control performance on the mass system ........ 79

5-15 Missile system dynamics ........ 79

5-16 Sample of system identification performance on the missile system ........ 80

5-17 Sample of weights change of 8 GMM kernels for missile system ........ 81

5-18 Performance of PID controller with different modeling approaches ........ 82

5-19 GMM-LLM based control performance on the missile system ........ 83

5-20 ESN control performance on the missile system ........ 84

5-21 General description of an aircraft ........ 86

5-22 Dynamics of the LoFLYTE system ........ 87

5-23 System identification performance on the LoFLYTE system ........ 89

5-24 Sample of weights change of GMM kernels for the LoFLYTE system ........ 90

5-25 Control performance on the LoFLYTE system (set-point, noise free) ........ 93

5-26 Control performance on the LoFLYTE system (tracking, noise free) ........ 94

5-27 Robust control performance on the LoFLYTE system (set-point, 30 dB SNR noise) ........ 95

5-28 Robust control performance on the LoFLYTE system (tracking, 30 dB SNR noise) ........ 96

A-1 Distribution of improved ESN internal weights with maximum spectral radius 0.9, with unit circle as a reference ........ 101

B-1 Demonstration system with ESN reference model and controller ........ 105

B-2 Schematic diagram of control system with ESN as controller based on Gaussian models ........ 108

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

GAUSSIAN MIXTURE MODEL BASED SYSTEM IDENTIFICATION AND CONTROL

By

Jing Lan

August 2006

Chair: José C. Principe
Major Department: Electrical and Computer Engineering

In this dissertation, we present a methodology that combines an improved Gaussian mixture model (GMM) with local linear models (LLM) for dynamical system identification and robust predictive model control. In order to understand the advantage of the mixture model, its structure and training are discussed in detail. A growing self-organizing map is utilized to improve the random initialization of mixture models, which makes the GMM convergence more stable. To increase local modeling capability and decrease modeling error, local linear models are trained based on the GMM as one-step predictors. Following the local modeling approach, a series of controllers are designed to realize a tracking application, among which the optimal robust controller shows better robustness than the other controllers. Five application systems with different dynamics are simulated in order to verify the modeling and control capability of the improved Gaussian mixture model. Through experiments and comparison with self-organizing maps, radial basis functions, and other methodologies, it is shown that the improved GMM-LLM approach is a more flexible modeling approach with higher computation efficiency than its competitors. The Gaussian model algorithm shares with the self-organizing maps the ease of robust controller design.


CHAPTER 1
INTRODUCTION

1.1 Mission of System Identification and Control

System theory, which has evolved into a powerful scientific discipline of wide applicability, deals with the analysis and synthesis of dynamical systems [1]. In particular, the analysis of linear identification and control for nonlinear dynamical systems has been studied by many researchers, leading to a better understanding of various mechanisms including observability, controllability, stability, and robustness [1, 2, 3]. The best developed aspect of the theory treats systems defined by linear operators using well-established techniques based on linear algebra and the theory of linear differential equations. Nevertheless, academic research has concentrated on problems involving the identification, stabilization, and control of nonlinear dynamical systems in the last several decades. These efforts have now matured into a broad group of theories on nonlinear dynamical system applications. Initial efforts in this area pursued parametric approaches, inspired by the established linear system theory, in which the dynamical equations of the system are assumed to be known from physical principles, with some uncertainty in certain parameters. In this framework, the system identification and system control problems are decoupled and therefore can be solved sequentially. Recently, adaptive system identification and control methodologies have been investigated as well, which has led to a good understanding of adaptation in linear systems and a satisfactorily general insight into adaptation in nonlinear control systems [4, 5, 6, 7].

Control theory deals with the manipulation of the behavior of dynamical systems to satisfy certain desired output constraints [8, 9]. A classical parametric design procedure follows the system modeling (identification) and controller selection stages. In the case of traditional identification based on models derived from physical principles, the data are used to estimate the unknown parameters of the physically motivated system, whereas modern approaches stemming from advances in neural network modeling introduce black-box function approximation schemes [4]. The neural network modeling capabilities may be further enhanced using multiple sub-models, in the context of switching between multiple adaptive models, to achieve a closed-loop system that enhances transient behavior and copes with modeling uncertainties better than a single model. Following the system identification stage, depending on the chosen modeling approach, the controller is typically designed using classical techniques based on linear system theory as well as classical or neural-network-based nonlinear techniques [1, 10].

Modeling techniques can be classified into global and local. Global modeling uses a single parametric system and fits it over the whole state space. Global modeling methods estimate the dynamics of the system in one framework. Using a single kernel or multiple kernels, the model is trained with the whole training data set. The most distinguishing feature of global modeling is that it is trained with all the data points, and in the system identification phase all kernels take effect on every step of prediction. Good examples of global modeling are the multi-layer perceptron (MLP) and the radial basis function (RBF) network, even though the latter uses local kernels. When estimating the distribution in the input space, each kernel in the RBF's hidden layer accepts all the data from the training data set. Recurrent neural networks (RNN) are another kind of powerful computational structure for global modeling [3]. But because of their complex inner structure, efficient training algorithms are necessary to achieve successful learning for a given task. Another difficulty for global modeling arises when the dynamical system characteristics vary considerably over the operating regime, effectively bringing the issue of time-varying nonlinear parameters into the model design. Because of the complexity and possible time-variability of the nonlinear dynamical system, global modeling and designing the corresponding controllers is a difficult and time-consuming task.

Local modeling, which utilizes a divide-and-conquer approach to solve complicated problems, breaks the fitting of the system into smaller and easier pieces which can be managed by simpler topologies. Using multiple kernels, a local model is trained in such a way that a kernel only gets training information from those data which are located within the region that the kernel defines. In the system training phase, every input datum is located first, and only the relevant kernel will later take effect on identification. To better understand this point, consider the self-organizing map (SOM) [11] with its winner-takes-all criterion. The local modeling technique can therefore be regarded as a piece-wise modeling approach, and the pieces can be grouped together when it is necessary to approximate global dynamics. Specifically, when each model is linear, the resulting model is a piece-wise linear dynamical approximation to the globally nonlinear dynamical system. The advantages of such a partitioning approach are the following:

1. system identification complexity can be reduced significantly due to the scaling down of the estimation problem from one large task to multiple small and simple tasks;

2. the local models can easily scale up to encompass more volume in the state space by adding new models as new data portions are acquired, which is especially beneficial when the training data cannot cover the whole state space;

3. the design of a control system for the nonlinear system can be reduced to the design of multiple simple local controllers, among which cooperation is possible to generate a single controller for the current plant.

The Gaussian mixture model with local linear models (GMM-LLM) is a local modeling approach, yet GMM training basically follows the global modeling training criterion because of the distributed support of the Gaussian density function. In the GMM approach, the model is generated by a multi-dimensional joint mixture of Gaussian distributions [12]. In other words, the distribution of state vectors is described by a soft combination of several Gaussian distributions, and the combination coefficients are determined by the posterior probability of the current state vector given each Gaussian distribution. Following the Gaussian posterior probability property, every Gaussian is trained on the whole training data set. Yet due to the nature of the nonlinear Gaussian distribution function, the weights will be large when the data are close to the center of some Gaussian, which makes each Gaussian kernel's training emphasize its neighboring data more than the data far away. In the prediction part, it becomes clear that the Gaussian model takes effect locally. The benefit of this nature is that the GMM is globally well trained, while local dynamics are addressed as well. The latter is also the reason why we call the approach local modeling.
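The posterior weighting just described can be sketched in a few lines. This is a minimal illustration under assumed kernel parameters; `responsibilities` and the two-kernel setup are hypothetical names and values, not the dissertation's notation:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate Gaussian density N(x; mean, cov)."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def responsibilities(x, priors, means, covs):
    """Posterior probability of each Gaussian kernel given the state x.

    These posteriors are the soft-combination coefficients: large for
    kernels whose centers lie near x, nearly zero for distant kernels."""
    lik = np.array([p * gaussian_pdf(x, m, c)
                    for p, m, c in zip(priors, means, covs)])
    return lik / lik.sum()

# Two 1-D kernels; a state near the first center weights it most heavily.
priors = [0.5, 0.5]
means = [np.array([0.0]), np.array([5.0])]
covs = [np.eye(1), np.eye(1)]
w = responsibilities(np.array([0.5]), priors, means, covs)
```

The weights sum to one by construction, so every prediction built from them is a convex combination of the kernels' contributions.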

The problem of controller design can be further eased by the selection of local modeling. In this case, the global nonlinear controller can be transformed into multiple piece-wise linear controllers for nonlinear systems, for which many strong design tools are available [8, 9, 13, 14].












Figure 1-1. 1-D function estimation by GMM-LLM. Top: nonlinear function and its linear approximation. Bottom: GMM-LLM pairs.


1.2 Local Modeling with Improved Gaussian Mixture Model

The framework proposed herein is an improved GMM with local linear models, which is shown in Figure 1-1. The GMM uses the Gaussian family to reconstruct the distribution of the state space vectors, applying a soft combination instead of a winner-takes-all criterion. Each Gaussian component represents a certain characteristic in a specific location of the system's dynamic space. Since a Gaussian distribution needs only two parameters, mean and covariance, the computational efficiency of GMM training can be largely increased. Because of this modeling property, GMMs have been widely applied in speech recovery, spectral analysis, etc. Schoner applied the GMM to function approximation using supervised training [15]. By initializing the Gaussian kernels' weights from a growing self-organizing map (GSOM), a more stable convergence toward the global maximum is achieved.










Local modeling is usually based on the nearest-neighbor criterion. A clustering algorithm divides the training data set into a group of smaller sets based on the Euclidean distance or other similar criteria, and a model is trained for each data set. As Farmer and Sidorowich [16] have shown, local linear models provide an effective and accurate approximation despite their simplicity. Local linear models have shown very good performance in comparative studies on time series prediction problems and in many cases have generated more accurate predictions than global approaches [16, 17].

Building local mappings in the reconstructed state space is a time- and memory-consuming process even though the performance can be satisfactory. A simplified approach is to quantize the state space and build local mappings only at positions where prototype vectors are located. This improvement, utilized in the self-organizing map (SOM), has been applied by Principe et al. to model autonomous and non-autonomous systems [18, 19]. Motter also utilized SOM-based modeling in the design of switching controllers [20].

One problem that SOM-based modeling cannot avoid is the quantization error in the modeling phase, due to the Euclidean distance between each prototype vector and the data closest to it. The SOM-based approach assumes that the prototype vectors can represent the distribution of the whole state space and that this distance is negligible, yet that is true only when the number of prototype vectors is very large. Meanwhile, a large number of prototype vectors will lead to further demands on computational resources and training time.

The structure of the GMM consists of a set of Gaussian models and a gating function. The weighting coefficient for a given data point x with respect to each Gaussian kernel is described by the gating function. As the distance between the state and the kernels determines the gating function value, this decreases the modeling error and produces smooth switching compared with the SOM method.

The parameters of the Gaussian mixture are initially unknown and are estimated from incomplete data through the growing-SOM method. This implies that the parameters are updated in a cooperative rather than competitive way. The procedure updates the weighting of the contribution of all data points given a particular Gaussian kernel. The training of the GMM is based on the maximum likelihood criterion through the expectation-maximization (EM) algorithm. Section 3.2 discusses the EM training algorithm in depth. Following the Gaussian distribution density function, one local linear model is generated from local data for each Gaussian kernel. Thanks to the gating rule, the mixture-of-Gaussians framework is able to continuously vary the infrastructure of each piece-wise linear local model.

When the GMM-LLM is compared with the RBF, the advantage of the GMM-LLM is its modeling of local dynamics. Even though both the GMM-LLM and the RBF use a linear combination in the final stage, the mechanisms by which they describe the local dynamics are different. RBF modeling uses a single fixed weighted combination of a group of nonlinear outputs from Gaussian kernels; the centers of the Gaussians are adapted, but not the covariances. The GMM-LLM, in contrast, uses a linear model to represent the local system dynamics, which makes the output change in a linear manner instead of a nonlinear way. Several such local models are distributed throughout the state space, and they are weighted by the gating function to increase performance.
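The one-step GMM-LLM prediction described here, local linear models soft-combined by the gating weights, can be sketched as below. The parameter names (`A`, `b` for each kernel's linear model) are illustrative assumptions, not the dissertation's exact formulation:

```python
import numpy as np

def gmm_llm_predict(x, priors, means, covs, A, b):
    """One-step prediction: gate each kernel's linear model A_k x + b_k
    by the posterior probability of that kernel given the state x."""
    lik = []
    for p, m, c in zip(priors, means, covs):
        diff = x - m
        norm = np.sqrt((2 * np.pi) ** len(m) * np.linalg.det(c))
        lik.append(p * np.exp(-0.5 * diff @ np.linalg.inv(c) @ diff) / norm)
    w = np.array(lik) / np.sum(lik)          # gating weights, sum to 1
    return sum(wk * (Ak @ x + bk) for wk, Ak, bk in zip(w, A, b))

# Sanity check: if every kernel carries the same linear model, the soft
# combination reduces to that model for any state.
priors = [0.5, 0.5]
means = [np.array([0.0]), np.array([5.0])]
covs = [np.eye(1), np.eye(1)]
A = [np.array([[2.0]]), np.array([[2.0]])]
b = [np.array([1.0]), np.array([1.0])]
pred = gmm_llm_predict(np.array([3.0]), priors, means, covs, A, b)
```

Because the gating weights form a convex combination, the predictor interpolates smoothly between neighboring linear models rather than switching abruptly as a winner-takes-all scheme would.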










1.3 Dissertation Outline

In this dissertation, we set up a control framework for the problem of modeling and control of nonlinear dynamical systems. We present a modified Gaussian mixture modeling-based system identification and control framework. The GMM, as a piece-wise linear predictive model, is trained to estimate the distribution of the state space using the expectation-maximization algorithm in order to satisfy the maximum likelihood criterion. Following the mixture Gaussian distribution, local linear models (LLM) are generated based on the mean-square-error (MSE) criterion to make accurate one-step predictions, which are obtained by soft-combinational function evaluation. Under the assumption that the state dynamics are smooth and sufficient data samples are available, the GMM can provide good and efficient prediction performance compared with other competitive methodologies.

Based on the mixture Gaussian reference model, we develop a set of controllers for non-autonomous and nonlinear plants. Starting from the EM training, the system model can be represented by the GMM-LLM and the modeling error is bounded. Several controllers, such as dynamic inverse and pole-placement PID controllers, an optimal robust controller, and an echo state neural controller, are realized and employed using the mixture model for several systems with different dynamics. We emphasize the optimal controller and the echo state neural controller. We will show that the optimal controller exhibits excellent robustness against model uncertainty and external disturbance, and that the ESN controller has better generalization capability.

The dissertation is divided into five parts. In Chapter 2, some general descriptions and definitions of nonlinear dynamical systems and local modeling, time-embedding delay system reconstruction, and dynamical system identification are given. The criterion to select the embedding dimension is discussed. The echo state network, a new kind of recurrent neural network, is presented for later use in controller design.

In Chapter 3, the GMM training methodology is analyzed, including improved GMM training through GSOM initialization, the generation of local linear models (LLM), and the comparison of the GMM-based approach to the SOM-based approach in order to address the computational efficiency of Gaussian modeling.

In Chapter 4, the descriptions of a set of controller design approaches based on the GMM, the controllers' mathematical backgrounds, and their stability issues are discussed in detail.

Chapter 5 gives application examples on a set of real system models, and the main results of GMM-based applications are presented with comparisons and discussed in detail. For the different modeling methods, the modeling performance is compared based on the signal-to-error-ratio criterion. For the controllers, the control performance is compared based on the criteria of rise (fall) time, settling time, overshoot, and the covariance of the control error.

Chapter 6 gives conclusions and contributions and briefly summarizes future work.















CHAPTER 2
NONLINEAR DYNAMICAL SYSTEM MODELING

2.1 Nonlinear Dynamical Systems

General linear and nonlinear system theories provide tools which make it possible to fully estimate a nonlinear dynamical system from observation, under the assumption that noise is zero or negligible. The enabling theoretical insight is the time-delay embedding theorem, which will be briefly reviewed along with some important system-characterization methodology. Reconstruction of the physical state space is explicitly or implicitly fundamental to the prediction and characterization applications in Chapter 5.

Dynamics describe how one system state develops into another state over the course of time. Technically, a dynamical system is a smooth action of the reals or the integers on another object (usually a manifold). "When the reals are acting, the system is called a continuous dynamical system, and when the integers are acting, the system is called a discrete dynamical system" [21]. A dynamical system expressed as a discrete-time evolution rule can be described from the observation of its outputs. The dynamics of the system in state space can be defined as

x(t + 1) = f(x(t))                    (2-1)
y(t) = h(x(t))

where x(t + 1) is the next state for a discrete-time system, x(t) is the system state, f(·) is a nonlinear transfer function usually referred to as the vector field, and h(·) is the output function. If the vector field f(·) is a linear function of the system states, the underlying system is linear; otherwise, the system is nonlinear. It is also assumed that the systems here are deterministic. Thus the transition from the system's state x(t1) at time t1 to the state x(t2) at time t2 is governed by deterministic rules.
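As a concrete instance of (2-1), the sketch below iterates a scalar discrete-time system; the logistic map stands in for the vector field f and full state observation for h, both purely illustrative choices rather than systems studied in this dissertation:

```python
def f(x):
    """Nonlinear vector field: logistic map (illustrative choice)."""
    return 4.0 * x * (1.0 - x)

def h(x):
    """Output function: here the state is observed directly."""
    return x

def simulate(x0, steps):
    """Iterate x(t+1) = f(x(t)) and record the outputs y(t) = h(x(t))."""
    x, ys = x0, []
    for _ in range(steps):
        ys.append(h(x))
        x = f(x)
    return ys

traj = simulate(0.2, 50)
```

Because the system is deterministic, rerunning from the same initial condition reproduces the trajectory exactly, which is what makes reconstruction from output observations possible at all.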

2.2 Time Embedding State Space Reconstruction

Equation (2-1) describes a system that follows the internal governing equations f(·) given a certain initial condition, and the system's dynamics are fully characterized by the manifold encoded in the function f(·). By Takens' theorem, transforming the time series into a state-space form results in a dynamical equivalent of the original space [22]. Since the mathematical description of f(·) is unavailable for most real-world systems, with the formulation of the time embedding theorem Takens presented a methodology to recover an equivalent description of f from observations of the system:

Theorem 1 (Takens). Let M be a compact manifold of dimension m. For pairs (φ, y), where φ: M → M is a smooth diffeomorphism and y: M → R is a smooth function, it is a generic property that the map Φ(φ,y): M → R^(2m+1), defined by

Φ(φ,y)(x) = (y(x), y(φ(x)), ..., y(φ^(2m)(x)))                    (2-2)

is an embedding.

There are several methods for estimating the optimal embedding dimension.
Buzug and Pfister designed two methods called fill factor and local deformation [23]. The first is a procedure that yields a global measure of phase-space utilization for (quasi-)periodic and strange attractors and leads to a maximum separation of trajectories within the phase space. The second describes the local dynamical behavior of points on the attractor and gives a measure of the homogeneity of the local flow. Gao and Zheng proposed the local exponential divergence plot method for a chaotic time series, based on which the embedding dimension, delay time, and the largest Lyapunov exponent are estimated [24].
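The delay map of (2-2) can be sketched directly: from a scalar observation series, stack time-shifted copies into embedding vectors. The function name `delay_embed` and the sine-wave observable are illustrative assumptions, not the dissertation's code:

```python
import numpy as np

def delay_embed(y, dim, tau=1):
    """Build delay vectors (y(t), y(t+tau), ..., y(t+(dim-1)*tau)),
    one row per admissible starting time t, as in (2-2)."""
    n = len(y) - (dim - 1) * tau
    return np.array([[y[t + k * tau] for k in range(dim)]
                     for t in range(n)])

# Embed a scalar observation series in 3 dimensions with delay 2.
y = np.sin(0.1 * np.arange(100))
X = delay_embed(y, dim=3, tau=2)
```

Each row of `X` is one reconstructed state; choosing `dim` is exactly the embedding-dimension question the methods above address.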

We determine the minimum time embedding dimension using the Lipschitz quotient [25, 26]. If the function is assumed to depend on n − 1 inputs while it actually depends on n inputs, the data set may contain several points that are very close in the space spanned by the n − 1 inputs but differ significantly in the nth input. Assuming the system function is smooth, if the (n − 1)-dimensional inputs are close, it is expected that the corresponding outputs are also close. If one or several relevant inputs are missing, outputs corresponding to different inputs will take different values. In this case, it can be concluded that the (n − 1)-dimensional input is not long enough.

The Lipschitz quotient in the 1-D case is defined as

l_ij = |y(i) − y(j)| / |x(i) − x(j)|,   i, j ∈ [1, N], i ≠ j                    (2-3)

where N is the number of samples and x is the input. For the multidimensional case, the equation is extended as

l_ij^n = |y(i) − y(j)| / ||x^n(i) − x^n(j)||,   i, j ∈ [1, N], i ≠ j                    (2-4)

where n is the number of inputs.

The Lipschitz index can be defined as the maximum occurring Lipschitz quotient

l^(n) = max_{i,j} (l_ij^(n)) (2-5)










As discussed before, if n is not high enough, the Lipschitz index will be large. Once all relevant inputs are included, the Lipschitz index will stay nearly constant.
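As an illustration, the Lipschitz index of equations (2-3)-(2-5) can be evaluated by brute force over all sample pairs. The sketch below (the function, the toy system, and all names are our own, not from the text) shows how omitting a relevant input inflates the index:

```python
import numpy as np

def lipschitz_index(inputs, outputs):
    # Maximum Lipschitz quotient |y_i - y_j| / ||x_i - x_j|| over all
    # sample pairs, as in equations (2-3) through (2-5).
    n = len(outputs)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = np.linalg.norm(inputs[i] - inputs[j])
            if dx > 1e-12:  # skip coincident input points
                best = max(best, abs(outputs[i] - outputs[j]) / dx)
    return best

# Toy system depending on two inputs: with only the first input given,
# near-identical inputs map to very different outputs, so the index is large.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * x[:, 0]) + x[:, 1] ** 2
l_missing = lipschitz_index(x[:, :1], y)  # second input omitted
l_full = lipschitz_index(x, y)            # all relevant inputs included
```

With all relevant inputs included the index stays near the Lipschitz constant of the map; with one input missing it grows without bound as the sample density increases.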

2.3 Global Dynamical System Modeling

System identification is a technique to build mathematical models of dynamics based on input-output measurements. An important property of system identification is its capability of making short-term predictions of system dynamics. Normally, the nonlinear dynamical behavior of real-world systems complicates the construction of models that can accurately describe the dynamics. Moreover, knowledge about the inner structure of the system is often unavailable. In this situation, the model training can only be driven by a finite collection of the system's input and output measurements. This kind of identification is referred to as the black-box approach.

Several approaches have been studied for global modeling. One representative of global nonlinear modeling is the time delay neural network (TDNN), which was first developed by Waibel and Lang in 1987 [27]. Using a time-delayed segment of the state as input data, the TDNN is structurally closely related to the multilayer perceptron (MLP). The TDNN has been shown to be a universal mapper in multidimensional functional spaces with finite memory (especially the myopic functional space) [28]. Another method is polynomial modeling, in which polynomials are applied as the basis for the realization of nonlinear mappings; Fuhrmann uses it in geometric control theory [29]. Recurrent neural networks (RNN) represent another nonlinear modeling methodology; researchers use RNNs when regular mathematical equations cannot generate satisfying results [30, 31].

In the following sections, we will introduce polynomial modeling briefly, as we will use its first-order expansion as a linear model in chapter 3. We will then discuss









the echo state network (ESN) with modified design method as a new global modeling

approach.

2.3.1 Polynomial Modeling

Taking a single-input-single-output (SISO) system as an example, the linear input-output polynomial model is the autoregressive moving average with exogenous input (ARX) model

y(n) = b_1 u(n-1) + ... + b_D u(n-D) - a_1 y(n-1) - ... - a_D y(n-D) (2-6)

Polynomial modeling can be extended to a nonlinear ARMAX (NARMAX) model by replacing the linear mapper in (2-6) with a nonlinear function f(·)

y(n) = f(u(n-1), ..., u(n-D), y(n-1), ..., y(n-D)) (2-7)


A more straightforward way to utilize polynomials for nonlinear system identification is to apply a polynomial to approximate the one-step prediction function (2-7) with a low-order expansion, which is called the Kolmogorov-Gabor polynomial model. The benefit of the polynomial model is that all nonlinear coefficient calculations are avoided. A simplified version of the Kolmogorov-Gabor polynomial model that uses less high-order information, also called the Volterra series model, describes the nonlinearity only for the input

y(n) = f[u(n-1), ..., u(n-D)] - a_1 y(n-1) - ... - a_D y(n-D) (2-8)


Thus the second-order, second-degree polynomial function can be expressed as

y(n) = θ_1 + θ_2 u(n-1) + θ_3 u(n-2) + θ_4 y(n-1) + θ_5 y(n-2) (2-9)
     + θ_6 u^2(n-1) + θ_7 u^2(n-2) + θ_8 u(n-1)u(n-2)










With this simplification the number of regressors is reduced, and thus the computation of the prediction is simplified. The price paid is a necessary increase of the order D in order to describe the nonlinearity of the original system. For a linear model, there is no high-order expansion, and the embedding length is increased when necessary.
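Because a model of the form (2-9) is linear in the parameters θ, it can be fit by ordinary least squares on a regressor matrix. The following self-contained sketch uses a simulated system of our own choosing (all coefficients and names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated SISO system: quadratic in the input, linear in past outputs,
# matching the regressor structure of equation (2-9).
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
theta_true = np.array([0.1, 0.8, -0.4, 0.3, -0.2, 0.5, 0.2, -0.3])
for n in range(2, N):
    y[n] = (theta_true[0] + theta_true[1]*u[n-1] + theta_true[2]*u[n-2]
            + theta_true[3]*y[n-1] + theta_true[4]*y[n-2]
            + theta_true[5]*u[n-1]**2 + theta_true[6]*u[n-2]**2
            + theta_true[7]*u[n-1]*u[n-2])

# Build the regressor matrix of (2-9) and solve for theta by least squares
rows = [[1, u[n-1], u[n-2], y[n-1], y[n-2],
         u[n-1]**2, u[n-2]**2, u[n-1]*u[n-2]] for n in range(2, N)]
Phi = np.array(rows)
theta_hat = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
```

On noise-free data the least-squares solution recovers the generating coefficients exactly, which is the practical appeal of keeping the model linear in its parameters.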

2.3.2 Echo State Network

Under the condition that the system model is treated as a black box, time-embedded input-output data incurs some technical problems besides its advantage of simplicity. One of them is the need for a large memory to access the system's history; another is the delayed system response. The Echo State Network (ESN) gives us an alternative for system modeling, which does not need time-embedded data and can supply an instant system response [32, 33, 34]. Compared with other nonlinear neural networks (recurrent networks especially), the ESN has fewer parameters that need to be trained. Moreover, the parameters are linear and they can be trained through the Wiener-Hopf approach [35].

Heterogeneous ordered networks or, more specifically, recurrent neural networks (RNN) are convenient and flexible computational structures applicable to a broad spectrum of problems in system modeling and control. There are many ways to make this statement more precise. For instance, RNNs can be cast as representations of the vector field of a differential system, or be trained to embody some desired attractor dynamics [31]. In those applications, systems without input (autonomous) are modeled. Since we wish to model input-driven systems, we adopt a standard perspective of system theory and view a deterministic, discrete-time dynamical system









as a function, which yields the next system output as a prediction, given the input

and output history [36].

As a special case of the RNN, the ESN was first proposed by Jaeger [36, 37]. ESNs have an interconnected and recurrent topology of nonlinear PEs which constitutes a "reservoir" of rich dynamics. The interesting property of the ESN is that only the memoryless readout needs training, whereas the internal recurrent topology has fixed connections. This key property dramatically reduces the complexity of RNN training to simple linear regression while preserving some of the computational power of the RNN.

We consider the recurrent discrete-time neural network given in figure 2-1 with M input units, N processing units, and L output units. The value of the input units at time n is x(n) = [x_1(n), x_2(n), ..., x_M(n)], that of the internal units is s(n) = [s_1(n), s_2(n), ..., s_N(n)], and that of the output units is y(n) = [y_1(n), y_2(n), ..., y_L(n)]. The system transfer function of the ESN can be expressed as

s(n+1) = f(W_in x(n+1) + W s(n) + W_back y(n))
y(n+1) = f_out(W_out s(n+1)) (2-10)

where, in most cases, the nonlinear output function f_out(·) is omitted and only a linear output is used.
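A minimal sketch of the state update in (2-10), assuming tanh PEs and no output feedback (the sizes, rates, and names here are our own illustrative choices). Scaling the recurrent matrix to a spectral radius below 1 gives the echo state property: the state forgets its initial condition and depends only on the input history.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N_pe = 1, 50  # one input unit, 50 processing units (arbitrary choices)

W_in = rng.uniform(-0.5, 0.5, (N_pe, M))
W = rng.uniform(-0.5, 0.5, (N_pe, N_pe))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9 < 1

def update(s, x):
    # One step of equation (2-10) with f = tanh and no W_back term
    return np.tanh(W_in @ x + W @ s)

# Echo state property: two different initial states driven by the same
# input sequence converge toward the same trajectory.
u = rng.uniform(-1, 1, (300, M))
s_a, s_b = np.zeros(N_pe), rng.uniform(-1, 1, N_pe)
for x in u:
    s_a, s_b = update(s_a, x), update(s_b, x)
```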

The critical difference between ESN and RNN lies in the adaptation of the recur-

rent weights. ESN compensates for not training the recurrent connection weights by

increasing the dimensionality of the hidden layer. In this way, ESN creates a diversity

of dynamics with a large number of PEs in its reservoir and the readout is simply

trained to pick up dynamical components required for a given desired output signal.

Hence, it is crucial for the operation of the ESN to include a reservoir with "enough richness" of dynamics.

Figure 2-1. Block diagram of the ESN.

In most engineering applications of the ESN, such as system identification, the optimal linear mapper is obtained through a recursive algorithm which minimizes the mean square error (MSE) or a higher-order error moment. In the ESN, the echo states form basis functions that are nonlinearly filtered versions of the input signal, and all that the linear readout is doing is to combine these basis functions to create a given desired signal. The span of the ESN can be defined as the set of all echo state functions defined by (2-10), where f_out is linear, over all values of W_out. Here, the critical question is how big the span of the echo states is. This is directly related to the selection of the weights of the ESN reservoir. Currently there are two methods










to generate W. The first method selects W randomly as a sparse matrix subject to the echo state condition ||W|| < 1 [36, 37], where ||·|| denotes the spectral radius (SR). In the second method, W is selected such that maximum entropy is achieved [35]. The echo state condition is a stability condition that guarantees that the states are controlled only by the input, whereas the sparseness is required to avoid excessive correlation between the states by minimizing the connections among them.

Assuming the unknown dynamical system transfer function is g(·), the objective of system identification is to generate an estimated function ĝ(·) such that the prediction error is small,

||ŷ_{k+1} - y_{k+1}|| = ||ĝ(y_k, u_k) - g(y_k, u_k)|| <= ε (2-11)


Let the linear mapper (assuming a single output) be denoted by the weight vector W_out. Considering that the cost criterion is the MSE, the readout vector can be easily computed through the Wiener-Hopf equation as

W_out = R^{-1} P (2-12)

where R = E[s^T s], P = E[s^T d], s denotes the ESN internal states driven by the input sequence, and d is the corresponding desired sequence from the training data.
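A sketch of the Wiener-Hopf readout computation (2-12), using synthetic stand-ins for the echo states (the state matrix, noise level, and names are our own assumptions). The result coincides with ordinary least squares on the state matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "echo states" and a desired signal that is a fixed linear
# combination of them plus a little noise (illustrative only).
n_samples, n_states = 1000, 20
S = rng.normal(size=(n_samples, n_states))          # rows: s(n)
w_true = rng.normal(size=n_states)
d = S @ w_true + 0.01 * rng.normal(size=n_samples)  # desired sequence

# Wiener-Hopf solution of equation (2-12): W_out = R^{-1} P
R = S.T @ S / n_samples          # autocorrelation estimate of E[s^T s]
P = S.T @ d / n_samples          # cross-correlation estimate of E[s^T d]
W_out = np.linalg.solve(R, P)
```

Since the sample-size normalization cancels, solving R^{-1}P is numerically identical to the least-squares readout fit, which is why either view of the training is valid.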

We now present a more general formulation of the training procedure for the case with no input feedforward and no output feedback.

Task. Given a training I/O time series (u_train(n), y_train(n)), n = 1, ..., n_max, where the inputs come from a compact set U, we want an RNN whose output y(n) approximates


















Figure 2-2. Distribution of ESN internal weights with maximum spectral radius being
0.9, with unit circle as a reference.


1. Procure an ESN. Construct an RNN that has echo states in an admissible state set A with respect to the input: (i) set up normalized random input weights, since it has been shown that the echo state property is independent of the input weights; (ii) set up a sigmoid network whose weight matrix W has a spectral radius satisfying |λ_max| < 1. Experience shows that the weight matrix with maximum entropy has the best system identification performance in the sense of the MSE. A sample result is shown in figure 2-2.


2. Run the network with the training data, dismiss the initial transient. Start with an arbitrary network state s(0), and feed the input to the ESN through (2-10) to obtain the internal states s(n). Then dismiss from this run a suitably long initial transient period.

3. Compute the output weights which minimize the training error. Use back-propagation or any favorite linear regression algorithm to train the readout weight matrix. With these weights, the network is ready for use.









2.4 Local Dynamic Modeling

Dynamic modeling seeks to qualify the system that created the time series. Following the discussion in chapter 1, global modeling utilizes the whole training data set to describe the whole system dynamics. The problem, however, is that global modeling needs a large amount of memory, and every data point takes effect on all of the memory. Consequently, global models show some difficulty in bringing the time-varying issue into the design when the system characteristics change considerably over the operating regime.

Local modeling approaches, on the other hand, divide the dynamic space into several parts. Each local model is trained by fitting one part of the data from the state space. The local linear approximation can be considered a simplified alternative to nonlinear modeling. The approach of locally linear ARX models has been widely discussed in the time series literature. It is a state-dependent sequential type of recursive algorithm for identifying dynamic models using simple linear regression or interpolation [2]. Under the assumption that f(·) is smooth in the vicinity of x(n), f(·) can be approximated by the first-order term of its Taylor series expansion,

f(x(n)) ≈ A_n x(n) + b_n (2-13)

where A_n is the Jacobian of f at x(n), and b_n is a constant vector at x(n). The implicit assumption of the method here is that the time-steps for the state dynamics are small. The parameters (A_n, b_n) can be estimated for the local linear mapping in the least squares sense from the local data samples. Casdagli et al. stated that local modeling approaches can provide approximations of higher accuracy than global methods,









especially for systems with complex behavior. Casdagli also stressed that the great

advantage of local modeling is the flexibility in building the model [39].

As the central mission of modeling, the predictive mapping f: R^D -> R can be expressed through time embedding as

x(n+1) = f(x(n)) (2-14)

where f(·) is a deterministic ARMA model with the time-embedded input vector x(n) = [x(n), x(n-1), ..., x(n-D+1)]. With the local modeling method, the state space is partitioned into small regions and a local model is established to describe the dynamics in each region. Thus the local models vary from region to region.

The approach of locally ARMA models has been discussed in the time series literature by Priestley [40]. An improved approach was proposed by Farmer and Sidorowich [16], who also extended ARMA models to locally polynomial approximations of higher order. The first-order expansion, which is a linear model, is an effective local modeling method, and approximations with high-order polynomials are not significantly better than those obtained with first order. In our work, we will use the first-order expansion.
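A first-order local model of this kind can be sketched as follows: fit the (A_n, b_n) of (2-13) by least squares on the nearest neighbors of the query state. The neighborhood size, the toy map, and all names are our own choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(4)

def local_linear_predict(X, y, query, k=20):
    # Fit f(x) ~ a^T x + b on the k nearest neighbors of `query`
    # (first-order Taylor model, equation 2-13) and predict f(query).
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])   # design matrix [x, 1]
    coef = np.linalg.lstsq(A, y[idx], rcond=None)[0]
    return coef[:-1] @ query + coef[-1]

# Smooth 2-D map sampled at random states
X = rng.uniform(-1, 1, (2000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
pred = local_linear_predict(X, y, np.array([0.2, -0.3]))
true = np.sin(0.2) + 0.5 * (-0.3)
```

Because the neighborhood shrinks as the data density grows, the local linear fit tracks a smooth map closely, which is the practical content of the smoothness assumption above.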

In many local model applications, the self-organizing map (SOM) has been utilized to divide the operating region into small parts, where one local model is trained corresponding to one part of the state space [5, 11, 18, 19]. The SOM is appropriate for converting nonlinear relationships of high-dimensional data into simple geometric relationships that preserve the topology in the state space [41]. Principe and Wang modeled a chaotic system with the SOM-based local modeling method [19]. Cho










et al. realized a competitive local controller using the SOM-based local modeling approach [18, 20].

The role of the GMM in system identification is to estimate the distribution of the state space. According to the distribution density of each Gaussian kernel within the GMM, the current state of the system is described by one or several Gaussian models. To complete the predictive modeling, a set of local models is also trained based on the GMM. Following the GMM's criterion, one local model is linked to one Gaussian kernel, and these two make up an identification model pair. The details of the improved GMM construction and training and the corresponding LLM training will be discussed later in chapter 3.

2.5 Summary

Treating the dynamical system as a black box, polynomial modeling and the time embedding data structure build the foundation for the application of the GMM. The simplified polynomial expression of the system dynamics makes linear modeling feasible. However, it also limits the application range of each single model, especially when a large modeling error is present. In chapter 3, the GMM will be used to estimate the application range of each local model, and a comparison will be made between the GMM and the SOM. In chapter 5, GMM-LLM based system identification and modeling will be applied to a series of applications and construct the basis of model predictive control (MPC).

From the nature of the ESN, we can see that one advantage of the ESN is its reduced training in the construction procedure, even though it is no longer linear modeling. Since its internal states are randomly generated, the only part which needs training is its readout vector; no training is necessary for the states. And since the readout is of linear form, the training for the readout vector can be easily done through Wiener-filter-like methods. Thus the ESN will require less training computation compared with other recurrent networks. Another advantage of the ESN is its input structure over the time-embedding data structure. Since the ESN has a large number of internal states and pre-defined feedback, these internal states can be deemed a memory of the input history: for a certain system, the ESN can record it, whereas time embedding may need a longer data structure. We will also show that the ESN with maximum entropy can accomplish the system identification work with minimum MSE.















CHAPTER 3
GAUSSIAN MIXTURE MODELS AND LOCAL LINEAR MODELS

3.1 Introduction

The Gaussian Mixture Model (GMM), also called the mixture of experts model, is a framework for unsupervised learning. The GMM, along with local linear models (LLM), constitutes the infrastructure to solve one of the fundamental problems of our research, the system identification (SI) step. Conceptually, however, we cannot use the GMM alone to solve the SI problem. SI, as the discipline of making mathematical models of unknown dynamical systems, must extract the structure in the joint space (input & desired). The GMM, as an "information" clustering approach, estimates the distribution of data in the input space. The GMM is similar to the Partition Algorithm (PA) (Hilborn & Lainiotis 1969; Lainiotis 1971), which includes a group of Gaussian kernel systems and a gating function [42]; however, it is based on a well defined statistical framework for density estimation. The GMM is also similar to the first layer of the Radial Basis Function (RBF) network used in nonlinear system approximation. In fact the kernels in the RBF are also Gaussian functions, which are spread over the data space using a clustering algorithm [42, 43]. Normally the covariances of these Gaussian kernels are not trained from the data. The RBF output layer combines these internal states linearly into a single projector that is trained with the desired response to extract the information in the joint space. The GMM-LLM approach combines these two basic strategies, by creating a local partition of the input space as in the PA algorithm, and employs multiple local linear models attached to each










partition, instead of nonlinearly combining the intermediate nonlinear states as in the

RBF.

Figure 3-1 depicts the elements of the GMM-LLM architecture. The input data is modeled by a mixture of Gaussians, and the influence of each Gaussian is evaluated by a weight that represents the likelihood of the data being generated by the corresponding Gaussian mode. During training the data from each Gaussian cluster is used to train the LLMs. During testing the input is directly sent to all the LLMs, and their outputs are weighted with the probability that the sample is generated by each model before being combined by the output adder. Note that during testing these weights change with the data, providing a greater flexibility when compared with other local modeling alternatives. This architecture is principled using well established statistical methods, very flexible, and still very simple because it uses only linear models.

The GMM-LLM approach is conceptually an alternative to the Self-Organizing Map (SOM)-LLM reviewed in chapter 2. The SOM clusters the system state space into a group of small regions using a winner-takes-all operator [42]. Each small input region, which is also called a Voronoi tessellation, is associated with one linear model that is trained with the data from the tessellation and that attempts to approximate the desired response when instantiated. Globally the modeling equation can be described as

w_k = { 1, x in cluster k
      { 0, otherwise (3-1)


Using soft combination instead of the winner-takes-all criterion, the GMM does the local modeling in a cooperative manner. Even though it still follows the description of the

















Figure 3-1. Structure of GMM with LLM.
Top: Training schematic, Bottom: Testing schematic.


first part of (3-1), it does not limit the coefficients to be "1" or "0". Consequently, the GMM-LLM approach can reach similar SI performance with fewer PEs compared with the SOM-LLM approach. Moreover, soft combination can generate smooth switching among different models, and can alleviate the difficulty faced by winner-takes-all when it chooses the wrong winner, in which case the corresponding LLM will generate a worse SI result.










Another competitor to the GMM-LLM is the fuzzy logic approach. The most popular fuzzy logic approach is the Takagi-Sugeno (T-S) fuzzy model [44], which is also a local modeling method. Similar to the GMM-LLM, the T-S fuzzy model uses multiple linear models for SI and a soft combination of linear membership functions to weight the outputs from the linear models. The T-S fuzzy model divides the state space into several overlapping regions using a set of simple "if-then" rules, where one rule corresponds to one linear membership function. Besides the order of the membership function, the difference between the T-S fuzzy model and the GMM-LLM lies in the training method. The T-S fuzzy model needs pre-knowledge about the system for both membership function training and linear model training. Unlike the unsupervised training in the GMM, the T-S fuzzy model training is done in a supervised manner, which increases the complexity of modeling.

Assuming the training data set covers the state space of the target system, the state space can be described by the embedded (reconstructed) data in the appropriate dimensional space with an optimal delay. The goal is to use a set of Gaussian kernels to estimate the structure of the eventually complex data distribution in the reconstructed space. The only necessary parameters for a Gaussian distribution are its mean and covariance, which saves time and computation in model training. This simplicity is one reason why the Gaussian density function is chosen as the kernel function.

Setting p(x) as the distribution density of the current state x, p(x) can be expanded in a sum over Gaussian clusters c_k. All the Gaussian clusters cover the whole state space, each defined by (μ_k, Σ_k). The probability density of each cluster reflects the domain of influence of that cluster.

p(x) = sum_{k=1}^{K} g(x|c_k) p(c_k) (3-2)
where g(·) indicates that we explicitly state the form of the conditional Gaussian distribution densities

g(x|c_k) = 1 / ((2π)^{D/2} |Σ_k|^{1/2}) exp(-(1/2)(x - μ_k)^T Σ_k^{-1} (x - μ_k)) (3-3)

where x is the current state, and μ_k and Σ_k are the mean vector and covariance matrix of cluster c_k in the feature space. Also, the summation over all the clusters is normalized by the cluster probabilities p(c_k) to satisfy

sum_{k=1}^{K} p(c_k) = 1 (3-4)

If the elements of the input x are independent of each other or the cross correlation is negligibly small, the covariance matrix can be reduced to a diagonal matrix, which in turn simplifies the computation of the Gaussian density function.

g(x|c_k) = prod_{d=1}^{D} 1 / sqrt(2π σ_{d,k}^2) exp(-(x_d - μ_{d,k})^2 / (2 σ_{d,k}^2)) (3-5)

where D is the input dimension.

Equations (3-3) and (3-5) show that the GMM uses a linear combination of nonlinear estimation models to build a powerful global system model. Meanwhile, the GMM describes the local data features with relatively simple models which satisfy the description of (3-2).

3.2 Maximum Likelihood Estimation Training EM Algorithm

The goal of the GMM approach is to find an optimal representation for the distribution density p(x) of each point while maintaining a low computational cost. In order to simplify the problem, the optimization criterion is set as the maximum log-likelihood function over the whole data set. Figueiredo et al. use a non-parametric EM training approach for Gaussian mixtures [45]. We describe the maximum log-likelihood function as

L(X; μ, Σ) = sum_{n=1}^{N} ln[p(x_n)] (3-6)
           = sum_{n=1}^{N} ln[sum_{k=1}^{K} g(x_n|c_k) p(c_k)]

where N and K are the total numbers of data points and Gaussian clusters, respectively.

Since the Gaussian model parameters are calculated from the observed data set

and describe the local distribution of data points, a recursive search will be necessary

to find the optimal Gaussian model parameters. During the search, the maximum

likelihood function is used as an optimization criterion, so there will be two estimators

combined in each joint update, one for the estimation of likelihood, and one for

distribution of each Gaussian kernel. The Expectation-Maximization (EM) algorithm

[46] is applied to search for the optimal parameters of the Gaussian kernels. The EM

algorithm has gained wide appreciation as an efficient algorithm for probabilistic

networks training. The EM assumes that the known states (the observed data) and

the unknown states (the Gaussian parameters) characterize the whole system model.









As an unsupervised approach, the EM training based GMM only needs the number

of Gaussian kernels K. All other parameters are obtained through training.

The recursive process of the EM algorithm starts with the E-step (Expectation calculation), during which the cluster parameters are assumed correct. The expectation of the log-likelihood (3-6) of the complete conditional probability over the observed samples is computed. According to Bayes' theorem, the posterior probabilities are proportional to the likelihood function,

p(c_k|x_n) ∝ g(x_n|c_k) p(c_k)

So equivalently, the posterior probabilities, which relate each Gaussian cluster to each data point in the conditional probability form p(c_k|x), can be evaluated as

p(c_k|x_n) = g(x_n|c_k) p(c_k) / sum_{i=1}^{K} g(x_n|c_i) p(c_i) (3-7)

The sum in the denominator normalizes among the Gaussian clusters, and the numerator determines the largest possible density function for each data point. Thus the maximum likelihood of the whole data set will monotonically increase.

In the M-step (Maximization calculation), the distribution density function of each data point, which describes the likelihood of the data given a certain cluster, is temporarily fixed. The update improves the parameters of each Gaussian cluster: the unconditional cluster probability p(c_k), and the mean vector and covariance matrix (μ_k, Σ_k). Thus the likelihood is maximized recursively. In equation (3-8), considering that the number of data points is limited and the values are discrete in reality, the integral is replaced by the sum over all training data,

p(c_k) = (1/N) sum_{n=1}^{N} p(c_k|x_n) (3-8)

The mean vector and covariance matrix of each Gaussian cluster are updated according to the stationary points of the likelihood with respect to the unknowns (μ_k, Σ_k). The derivative of the likelihood with respect to the mean is

∂L/∂μ_k = ∂/∂μ_k sum_{n=1}^{N} ln[p(x_n)] (3-9)
        = ∂/∂μ_k sum_{n=1}^{N} ln[sum_{i=1}^{K} p(c_i) g(x_n|c_i)]
        = sum_{n=1}^{N} [p(c_k) g(x_n|c_k) / sum_{i=1}^{K} p(c_i) g(x_n|c_i)] Σ_k^{-1} (x_n - μ_k) = 0

The update of the mean is then obtained through the maximum likelihood criterion:

sum_{n=1}^{N} p(c_k|x_n) μ_k = sum_{n=1}^{N} p(c_k|x_n) x_n (3-10)

μ_k = sum_{n=1}^{N} p(c_k|x_n) x_n / sum_{n=1}^{N} p(c_k|x_n)









The same result is achieved through the integration method as:

μ_k = ∫ x p(x|c_k) dx (3-11)
    ≈ (1 / (N p(c_k))) sum_{n=1}^{N} x_n p(c_k|x_n)
    = sum_{n=1}^{N} x_n p(c_k|x_n) / sum_{n=1}^{N} p(c_k|x_n)

Similarly, the Gaussian cluster covariance is:

Σ_{k,ij} = ∫ (x_i - μ_{k,i})(x_j - μ_{k,j}) p(x|c_k) dx (3-12)
         ≈ sum_{n=1}^{N} p(c_k|x_n)(x_{n,i} - μ_{k,i})(x_{n,j} - μ_{k,j}) / sum_{n=1}^{N} p(c_k|x_n)

The basic model estimation procedure can be summarized into five phases:

1. Pick initial conditions;

2. Estimate the data distribution density p(x|c_k);

3. Estimate the posterior probability of each cluster for each data point, p(c_k|x);

4. Estimate the cluster probability p(c_k) and the Gaussian cluster parameters, mean μ_k and covariance Σ_k;

5. Go back to step (2) until the increment of the likelihood function (3-6) is within a certain range.
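The five phases above can be sketched for a 2-D toy problem with two kernels. The initial means below are hand-picked stand-ins for the G-SOM initialization discussed later, a fixed iteration count stands in for the likelihood-increment stopping rule of phase 5, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two well-separated Gaussian clusters in 2-D (synthetic data)
X = np.vstack([rng.normal([-2, 0], 0.3, (150, 2)),
               rng.normal([+2, 0], 0.3, (150, 2))])
N, D, K = len(X), 2, 2

# Phase 1: initial conditions (hand-picked here)
mu = np.array([[-1.0, 0.5], [1.0, -0.5]])
Sigma = np.array([np.eye(D)] * K)
pc = np.full(K, 1.0 / K)

def gauss(X, m, S):
    # Gaussian density of equation (3-3)
    d = X - m
    inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2 * np.pi) ** D * np.linalg.det(S))
    return norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv, d))

for _ in range(30):  # fixed loop in place of the stopping rule
    # Phases 2-3 (E-step): posteriors p(c_k|x_n), equation (3-7)
    lik = np.stack([pc[k] * gauss(X, mu[k], Sigma[k]) for k in range(K)])
    post = lik / lik.sum(axis=0)
    # Phase 4 (M-step): equations (3-8), (3-10), (3-12)
    Nk = post.sum(axis=1)
    pc = Nk / N
    mu = (post @ X) / Nk[:, None]
    for k in range(K):
        d = X - mu[k]
        Sigma[k] = (d * post[k][:, None]).T @ d / Nk[k]
```

After a few iterations the means settle on the two cluster centers, illustrating the monotone-likelihood behavior described above.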

3.3 Estimation of Local Linear Model

Because the GMM-LLM is a group of globally optimized local models based on the distribution of the training data set, it preserves the topological properties of the target system in the state space. Since the soft combination of Gaussian clusters can be used to weight the










estimation of the current system's dynamics, the dynamic estimation can be extended to a one-step linear mapping ŷ(n+1) = f_opt(x(n)) = L_opt^T x_n, where f_opt is the optimized current local model based on the Gaussian gating function. The LLM is the most simplified reference model of the real system, and it simplifies the design procedure of the controller [16]. From equations (3-2), (3-7), and (3-10)-(3-12), we can thus express the one-step linear prediction function as

ŷ_{n+1} = sum_{k=1}^{K} [g(x_n|c_k) p(c_k) / sum_{i=1}^{K} g(x_n|c_i) p(c_i)] L_k^T x_n (3-13)







where L_k is the kth local linear model (LLM) corresponding to Gaussian cluster k, which has the same dimension as μ_k and x_n, and ŷ_{n+1} is the estimate of y_{n+1}. For simplicity, we denote by p_{k,n} the posterior probability of cluster k corresponding to data point n.

Equation (3-13) can be written in matrix form in order to simplify the calculation of the local linear models L_k:

ŷ_{n+1} = P_n^T L^T x_n (3-14)

where

P_n^T = [p_{1,n}, p_{2,n}, ..., p_{K,n}]

L^T = [L_1, L_2, ..., L_K]_{K x D}
For the training data, the desired signal y_{n+1} is available. Therefore the optimal coefficients for each local model can be determined by minimizing the mean square error criterion (MSE)

min(J) = sum_{n} (y_{n+1} - ŷ_{n+1})^2 (3-15)





The extremum (minimum) happens when the first-order derivative of J with respect to L is zero,

∂J/∂L = sum_{n} (y_{n+1} - ŷ_{n+1}) ∂ŷ_{n+1}/∂L = 0 (3-16)

∂J/∂L_k = sum_{n} (y_{n+1} - ŷ_{n+1}) p_{k,n} x_n = 0 (3-17)

From (3-17), using the result from (3-13), each row of ∂J/∂L can be expanded as

sum_{n} y_{n+1} p_{k,n} x_n = sum_{n} sum_{i=1}^{K} p_{k,n} p_{i,n} x_n x_n^T L_i (3-18)

From (3-18), we obtain the MSE-based optimization function for the local linear models L_k. The difference between the winner-takes-all solution and this solution is that p_{k,n} is not always 0 or 1, and it determines where and how much cluster k takes effect.










In order to obtain the values of all the local models, we can expand equation (3-18) to matrix format to simplify the notation,

P_k = R_k L, k = 1, ..., K (3-19)

where L = [L_1; L_2; ...; L_K] is the stacked vector representation of all the local models, P_k = sum_n y_{n+1} p_{k,n} x_n, and R_k collects the weighted correlation terms sum_n p_{k,n} p_{i,n} x_n x_n^T from (3-18). Repeating (3-19) for all the Gaussian clusters, the local linear models can then be described as

[P_1; P_2; ...; P_K] = [R_1; R_2; ...; R_K] L (3-20)

L = R^{-1} P

Here the kth Gaussian cluster's local linear model consists of elements ((k-1) x D + 1) to (k x D) of the vector L. With all the formulations above, the basic GMM-based local linear model estimation procedure can be summarized as:

1. Accomplish the state modeling using the EM algorithm; obtain the distribution parameters (μ_k, Σ_k) for every Gaussian cluster, the conditional probability p(x|c_k), and the cluster probability p(c_k);

2. Compute the posterior probability p(c_k|x) through Bayes' rule for each cluster;

3. Compute the local linear models using the distribution information from the previous two steps and the least squares criterion;

4. In the testing phase, re-calculate p(x|c_k) and p(c_k|x) given the new data point, then estimate the system output through the local linear models.
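A minimal sketch of the GMM-LLM predictor of equation (3-13) on a 1-D toy system. Here the Gaussian parameters are fixed by hand rather than trained by EM, and each LLM is fit by posterior-weighted least squares, a simplified stand-in for the joint solution of (3-20); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy system with two linear operating regimes meeting at the origin
x = rng.uniform(-2, 2, 400)
y = np.where(x < 0, 0.5 * x, 2.0 * x)

# Hand-fixed Gaussian clusters standing in for a trained GMM
mu = np.array([-1.0, 1.0])
var = np.array([0.25, 0.25])
pc = np.array([0.5, 0.5])

def posteriors(x):
    # Gating function of (3-7): p(c_k|x) for each cluster
    lik = np.stack([pc[k] / np.sqrt(2 * np.pi * var[k])
                    * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                    for k in range(2)])
    return lik / lik.sum(axis=0)

# Fit each LLM y ~ a_k x + b_k by posterior-weighted least squares
post = posteriors(x)
A = np.vstack([x, np.ones_like(x)]).T
models = []
for k in range(2):
    w = np.sqrt(post[k])
    models.append(np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0])

def predict(x_new):
    # Soft combination of the local models, equation (3-13)
    x_new = np.atleast_1d(x_new)
    p = posteriors(x_new)
    outs = np.stack([a * x_new + b for a, b in models])
    return (p * outs).sum(axis=0)
```

Away from the regime boundary one posterior dominates and the prediction follows a single LLM; near the boundary the outputs blend smoothly, which is the smooth-switching property discussed above.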

3.4 Improvement on GMM

Even though the EM algorithm improves the likelihood at every iteration, random initial conditions usually slow down the convergence speed and, many times, make the model converge to a local maximum [47, 48]. In order to optimize the GMM initial conditions, shorten the EM training time, and stabilize the final GMM performance, a Growing Self-Organizing Map (G-SOM) algorithm is applied. Another benefit of the G-SOM is that it supplies a flexible structure to change the number of Gaussian kernels in the GMM. Vlassis and Likas set up a criterion to determine the optimal number of Gaussian kernels [49].

The initial topology of the G-SOM network is a two-dimensional structure with 3 PEs, which will be used to describe clusters of patterns in the D-dimensional data space R^D. For this reason, every PE c has a D-dimensional weight vector w_c indicating its virtual position in R^D. During the self-organizing process, one new PE is added to the map at the end of each training iteration, preserving a triangle connection scheme at all times, as shown in figure 3-2.

The adaptation in the G-SOM model can be formulated as:

1. For every input data point x, locate the winning PE s through the Euclidean distance s = argmin_c ||x - w_c||;

2. Increase the matching performance for s and its direct neighbors N_s: Δw_s = ε_b (x - w_s), Δw_c = ε_n (x - w_c) (for c ∈ N_s);

3. Increase the winning frequency counter τ_s of s with increment Δτ_s = 1;






















(a) (b)

Figure 3-2. Explanation of G-SOM algorithm.
Left: Training of G-SOM; Right: Growing of G-SOM


4. Decrease all PEs' τ counters by a small value so that the mean winning frequency does not change too much;

5. After feeding in all the data, one new PE is added between the PE with the highest τ and its farthest neighbor, and all the winning frequencies are reset to zero; this repeats until the pre-defined PE count is reached.
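The adaptation and growth loop can be sketched as below. This is a simplified sketch under stated assumptions: it treats all non-winning PEs as neighbors (ignoring the triangular connection scheme), and all names and parameter values are illustrative.

```python
import numpy as np

def gsom_train(data, n_pe=10, epochs=5, eps_b=0.1, eps_n=0.01, decay=0.995, seed=0):
    """Grow a map from 3 PEs up to n_pe, inserting one PE per growth step
    between the most frequent winner and the PE farthest from it."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), 3, replace=False)].astype(float)
    while True:
        tau = np.zeros(len(W))                 # winning-frequency counters
        for _ in range(epochs):
            for x in data:
                s = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner
                W[s] += eps_b * (x - W[s])     # adapt the winning PE
                others = np.arange(len(W)) != s
                W[others] += eps_n * (x - W[others])  # adapt "neighbors"
                tau[s] += 1.0
                tau *= decay                   # slowly forget old wins
        if len(W) >= n_pe:
            return W
        q = int(np.argmax(tau))                # most frequent winner
        f = int(np.argmax(np.linalg.norm(W - W[q], axis=1)))
        W = np.vstack([W, 0.5 * (W[q] + W[f])])  # insert a new PE between them
```

Inserting the new PE midway between the busiest PE and its farthest neighbor is what balances the winning frequencies across the map.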

After training, the G-SOM PEs' weight vectors w are used as the initial values μ of the GMM means, with balanced winning frequencies. The GMM covariance matrices are initially set to a large value so that every Gaussian kernel covers the largest possible area; the optimal covariance values are then trained through the EM algorithm. The improved algorithm was extensively tested on the LoFLYTE data that will be fully presented in chapter 5. The LoFLYTE simulation uses a 32-D time-embedding space, and the difference in performance between the conventional GMM and the GMM with G-SOM initialization is shown in table 3-1. Every SER result is an average of five independent experiments. Overall, the G-SOM-initialized GMM achieves system identification performance 2.4 dB better than the randomly initialized GMM, with smaller variance, and decreases the number of EM training iterations.
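The initialization described above can be sketched as follows; the function name and the `scale` factor are assumptions for illustration (the means come from the G-SOM weights, the priors are balanced, and the covariances start deliberately large).

```python
import numpy as np

def init_gmm_from_gsom(W, data, scale=4.0):
    """Initialize GMM parameters from trained G-SOM weights (a sketch):
    means come from the PE weight vectors, priors are balanced, and the
    covariances start large so each kernel initially covers a wide region;
    EM then shrinks them toward their optimal values."""
    K, D = W.shape
    means = W.copy()
    cov = scale * np.atleast_2d(np.cov(data, rowvar=False))  # large shared covariance
    covs = [cov.copy() for _ in range(K)]
    priors = np.full(K, 1.0 / K)
    return means, covs, priors
```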









Table 3-1. Performance on 32-D LoFLYTE data

                 GMM                  GMM with G-SOM
40 dB SNR noise  SER (dB)  Variance   SER (dB)  Variance
p                23.637    0.4338     23.405    0.2589
q                26.012    0.2849     24.396    0.3738
r                24.663    0.3859     22.553    0.1753
u                21.016    0.3574     26.352    0.2877
v                22.682    0.2723     28.544    0.2065
w                22.829    0.3404     25.668    0.1926
Average          23.473    0.3458     25.153    0.2491
Training loops   17                   4
0 noise
p                25.514    0.1968     24.721    0.2173
q                27.828    0.2189     25.838    0.3036
r                22.053    0.3631     25.696    0.1301
u                19.065    0.2817     27.123    0.2085
v                26.530    0.1977     29.444    0.1447
w                23.315    0.2661     25.853    0.1350
Average          24.051    0.2523     26.446    0.1899
Training loops   14                   2


3.5 Comparison of the GMM with the SOM

We will make a detailed comparison of the GMM with the SOM, as the SOM-LLM has been a popular approach for SI and the two methods are so close to each other. The Self-Organizing Map (SOM) is another local modeling algorithm, proposed by Kohonen and extensively utilized in the Computational NeuroEngineering Laboratory with considerable success in system identification and control. Using the clustering idea applied to the reconstructed data in state space, the SOM sets up representative vectors locally, with a neighbor relationship determined by the multi-dimensional Euclidean distance.

Yet the main problems that the SOM cannot overcome are: the proliferation of prototype vectors, in particular in high-dimensional spaces; the sensitivity to noise; the non-smooth switching between models because of the winner-takes-all mechanism; and the local minima entrapment during SOM training [48, 50]. Nothing can be done for the first problem except increasing the network size, which also brings slow training, a requirement for large amounts of data, and many local linear models. Although the SOM is insensitive to noise below the quantization produced by the Voronoi tessellation, when the noise is higher the wrong model can be instantiated. The SOM preserves neighborhoods, so the difference is not drastic, but it contributes to modeling error. Because of the differences between the LLMs, when the models switch there may be transients in the response that also add to the modeling error for more than one time step.

The GMM approach divides the state space into coarser regions, which improves the poor scaling of the SOM with dimension, and also improves the noise robustness. However, each region may not be as well modeled by a linear model as in the case of the SOM; only experimental evidence can judge this trade-off. Instead of using the winner-takes-all criterion, the GMM-LLM combines multiple models for the current data. In the case that the Euclidean distance between the data and the closest Gaussian kernel is large, the maximum likelihood criterion makes sure that more than one kernel takes effect. This phenomenon is shown in figure 3-3. Also, figure 3-3 shows that, compared with other linear membership functions, the use of the nonlinear Gaussian distribution can guarantee smooth switching between models. An important characteristic of the GMM-LLM is that the weighting changes during testing, unlike the SOM-LLM. This helps the overall modeling accuracy.

The local minimum in the SOM training is difficult to control during the annealing process. Also, the SOM cannot control the winning frequency, which may














Figure 3-3. Soft combination region using GMM (between dashed lines)


also provide added difficulties in the clustering. The GMM is also sensitive to local maxima. However, in order to make the GMM converge to the global optimum in a more stable manner, the G-SOM sets the weights such that every cluster is generated with a similar amount of winning frequency [51, 48]. This means that, over the whole training data set, each Gaussian kernel occupies about the same portion as a winner with large output weight. Fritzke, and Chinrungrueng and Séquin, proved the weight-growing criterion and its optimality, respectively [52, 53].

Since the SOM algorithm is derived from heuristic ideas, which leads to a number of limitations, some of the advantages of the GMM over the SOM are listed below.

1. The GMM defines an explicit probability density function in data space to estimate the distribution of the state. In contrast, the SOM uses a winner-takes-all criterion based on the Euclidean distance [18]. The GMM's soft-combination mechanism makes smooth switching possible.

2. In the GMM, the neighborhood-preserving nature of the mapping is an automatic consequence of the choice of Gaussian functions, which improves the GMM's noise robustness. Smooth neighborhood preservation is not guaranteed by the SOM procedure.










3. Through the relative winning frequency criterion from the G-SOM construction, the GMM training avoids the local minima entrapment which the SOM alone cannot overcome.

4. The GMM improves the scalability of the SOM through a coarser clustering of the state space. For the same reason, the training efficiency of the GMM is increased.

3.6 Summary

With the detailed explanation of the EM algorithm, the GMM has been theoretically proven feasible for modeling analysis, and its practical modeling capability can be tested on various systems. The Gaussian distribution makes smooth switching between multiple models possible. Using the soft-combination mechanism, as shown in the comparison above, the GMM can largely decrease the computational burden of the modeling problem compared with SOM-based algorithms. The Gaussian model part is trained in an unsupervised way, as no desired distribution is pre-assigned. The LLM part, as the desired system response is available, is trained in a supervised way to ensure the convergence of the squared error (MSE). Since the GMM-based modeling algorithm can ease the work of controller design, the benefits will be explained in chapter 4, and the simulation results and the detailed performance comparison will be shown in chapter 5.















CHAPTER 4
GAUSSIAN MIXTURE MODEL BASED CONTROL SYSTEM

Within the framework of predictive control, the proposed GMM based local linear

modeling approach simplifies the design of control systems for nonlinear plants. It is

difficult to design globally stable nonlinear controllers with satisfactory performance

at every point in the state space of the closed loop system, which is especially true

in the case of unknown plant dynamics. However, the GMM local linear modeling

approach presented above, coupled with strong controller design techniques [54] and

recent theoretical results on switching control systems [55, 56], achieves the control

goal. The topology of local controllers that naturally arise from the local linear model-

ing approach is illustrated in figure 4-1, where the local controllers are directly linked

to each local model. Within the framework of GMM-LLM, the system dynamics can

be described piece-wise-linearly as


y_{n+1} = L_OPT · X_n
        = Σ_k p(C_k|X_n) L_k · X_n
        = A_0 y_n + A_1 y_{n−1} + ··· + A_D y_{n−D}
          + B_0 u_n + B_1 u_{n−1} + ··· + B_D u_{n−D}          (4-1)

where X_n is the time-embedding vector [···, y_i, ···, u_i, ···]^T, L_OPT stands for the current optimal model coefficients [···, A_i, ···, B_i, ···], the y_i are the system states, and the u_i are the control signals.































Figure 4-1. Structure of Control System based on GMM-LLM


In section 4.1, a brief review on mixture model based controller design will be

given. In section 4.2, the GMM-based dynamical inverse controller is explained in

detail, which is later demonstrated feasible on nonlinear system in chapter 5. A

pole-placement PID control approach will be discussed in section 4.3.

Considering modeling error, measurement noise, and model uncertainty, robust

approaches to control become very appropriate for real-world applications. Therefore

various approaches have been proposed to solve the robust control problem, including

the H∞ approach [57], the Lyapunov approach [58], the geometric approach [59], etc. Lin,

Brandt, and Sun proposed an optimal control based robust control scheme [60, 61],

and we will modify it and fit it into our GMM-LLM framework.

4.1 Introduction of Mixture Model Controller

The design of predictive controllers, especially for large-scale and complex appli-

cations, depends integrally on the model of the physical plant [62, 63, 64]. Indeed, the









model constitutes the vital link between the physical problem in which the optimal

control will be used, and the mathematical realm in which it needs to be designed.

The classical approach to the control design problem assumes a known model, even

if dramatically reduced. In the case that the model complexity is reduced unreal-

istically, the consequence will be the generation of ineffective controls. For global

nonlinear modeling control approach, which is more difficult compared with local

or linear modeling control, neural network based control schemes have been studied

extensively [65, 54, 66].

Using the Partition Algorithm (PA), Lainiotis first introduced the multi-model partition (MMP) methodology for the design of adaptive/intelligent systems, where the control problem is decomposed into a set of more easily derived subproblems [64, 67]. Given a set of local models, MMP designs the adaptive control as

u(t) = Σ_k ω_k u_k(t)          (4-2)

where u_k(t) is the optimal control corresponding to the kth sub-model and ω_k is its weight.
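The weighted combination in (4-2) is a single dot product; a minimal sketch with hypothetical weights and sub-controls:

```python
import numpy as np

# Hypothetical posterior weights for three sub-models and their controls
weights = np.array([0.7, 0.2, 0.1])        # must sum to one
sub_controls = np.array([1.0, 0.5, -0.2])  # u_k from each sub-controller
u = float(weights @ sub_controls)          # u = sum_k w_k u_k
```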

Fuzzy control is another popular design methodology which has attracted wide interest these days [68]. Among fuzzy approaches, the Takagi-Sugeno (T-S) fuzzy logic model, which applies fuzzy rules and membership functions, is a widely studied and applied fuzzy modeling and control methodology. In the T-S fuzzy model, the system is described by a set of if-then rules which have local models in the consequent parts. The ith rule in the model is of the form: If z_1(t) is M_{i1}, and ..., and z_p(t) is M_{ip}, then

ẋ(t) = A_i x(t) + B_i u(t)


























Figure 4-2. T-S fuzzy logic membership function


The effective regions of the if-then rules overlap with each other, and the weight of each rule is measured by a linear membership function, as shown in figure 4-2. The control signal for fuzzy logic control takes the form of (4-2), where ω_k comes from the membership function too. Tanaka, Iwasaki, and Wang use a T-S switching fuzzy logic model to control an R/C hovercraft [69].

Based on the idea of a group of simple controllers instead of a single complex controller, switching control has been applied in recent years. A switching control consists of a high-level discrete-event supervisor and a family of low-level feedback controllers [70, 7]. Following the local modeling theorem, multiple controllers are applied to certain regions of data. Motter used a SOM-based system identification model to design a switching controller for high-speed wind tunnel control [20]. Li and Lee use a SOM structure to cluster the state space, and then apply fuzzy rules to design the control signals [41]. Cho et al. use a SOM and a sliding mode controller to realize robust control [71, 72]. Based on those approaches, our research will try
































Figure 4-3. Structure of Inverse Control System


to improve the performance of the multiple controllers near the intersections of different clusters.

4.2 Inverse Error Dynamic Controller Using GMM

The mission of controller design for nonlinear dynamical systems can be alleviated using the local linear models generated from the GMM. Once the optimal local linear models have been chosen from the input-output data available for system identification, inverse error dynamic controller design can be applied to meet goals based on the desired signal, the state signal, and the local model. Compared with the basic structure of a mixture model controller, inverse control is an extended version of single-model control, since the soft combination happens before the combination of control weights and the inverse is applied to the unique current model. The structure of the controller is shown in figure 4-3.










The principle behind inverse error dynamic control (IEDC) is the asymptotic convergence of the system error. The control methodology can be described as fixing a set of stable poles for the tracking error signal dynamics. If at time n the desired plant output is denoted as d_n, and the actual plant output is y_n, the instantaneous error is defined as e_n = d_n − y_n. The control goal is to guarantee that the error signal follows the 1-D error dynamic function:

e_{n+1} + λ_1 e_n + λ_2 e_{n−1} + ··· + λ_m e_{n−m+1} = 0          (4-3)

The parameter vector λ = [λ_1, ···, λ_m]^T is selected such that the roots of the polynomial 1 + λ_1 r + ··· + λ_m r^m = 0 are all within the unit circle and thus the error dynamics are stable.

If the order of the polynomial is zero, the error function simplifies to e_{n+1} = 0. This error function corresponds to an exact tracking controller, which determines the control input necessary for the next error value to be "0". This strategy uses only the current model information and the current control error, so its response is instantaneous. Obviously, it is not robust to modeling errors or external disturbances, as there is no further model and dynamic information to smooth the disturbance from model uncertainty. With a second-order error dynamic function, the error signal is









filtered first, and the local model based control signal can be obtained as


e_{n+1} + λ_1 e_n + λ_2 e_{n−1} = 0
⇒ [d_{n+1} − y_{n+1}] + λ_1 e_n + λ_2 e_{n−1} = 0
⇒ [d_{n+1} − (f_n X_n + g_n u_n)] + λ_1 e_n + λ_2 e_{n−1} = 0
⇒ u_n = g_n^{−1} [d_{n+1} − f_n X_n + λ_1 e_n + λ_2 e_{n−1}]          (4-4)

where the model [f_n, g_n]^T = L_current = Σ_k p(C_k|X_n) L_k comes from (3-14), and (λ_1 e_n + λ_2 e_{n−1}) comes from the weighted error signals of the last two steps.

For different choices of the order in (4-3), the control law will drive the tracking closed-loop system in various ways; meanwhile, the error signal will always satisfy the linear difference equation given in (4-3). As long as the poles are located within the unit circle, the controller will guarantee the stable convergence of the dynamics. Considering that the modeling error from the GMM-LLM will affect the actual pole locations, the poles in (4-4) cannot be chosen close to one if stability is to be guaranteed.
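A single second-order IEDC step per (4-4) can be sketched as below, assuming a local affine model y_{n+1} = f·x_n + g·u_n; the plant, gains, and poles are illustrative, not from this work. With a perfect model the closed-loop error obeys e_{n+1} = −(λ_1 e_n + λ_2 e_{n−1}).

```python
import numpy as np

def iedc_control(d_next, f, g, x_n, e_n, e_prev, lam1, lam2):
    """Second-order IEDC step: solve
    [d_{n+1} - (f x_n + g u_n)] + lam1*e_n + lam2*e_{n-1} = 0  for u_n."""
    return (d_next - f @ x_n + lam1 * e_n + lam2 * e_prev) / g

# Closed-loop check on a toy 2nd-order plant with a perfect local model;
# lam1, lam2 chosen so z^2 + lam1 z + lam2 has poles at 0.5 and 0.3.
f = np.array([0.9, -0.2])
g = 0.5
lam1, lam2 = -0.8, 0.15
y, e = [0.0, 0.0], [1.0, 1.0]
for _ in range(50):
    x = np.array([y[-1], y[-2]])
    u = iedc_control(1.0, f, g, x, e[-1], e[-2], lam1, lam2)
    y.append(f @ x + g * u)
    e.append(1.0 - y[-1])
```

The error decays at the rate of the slower pole (0.5 here); moving the poles toward zero speeds convergence at the cost of disturbance rejection, as discussed below.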

Now we discuss the stability issue for the controller. From (4-4), the λ_i are coefficients to be designed such that the two poles p_i are within the unit circle, where p_1 + p_2 = −λ_1 and p_1 × p_2 = λ_2. Since the LLMs are trained based on the MSE criterion, the modeling errors at each step can be deemed a bounded i.i.d. random variable with zero mean. It can also be assumed that E[e_M^2] < ∞, which is reasonable if the system identification is done properly and the measurement noise levels are finite.










From the experimental results in the simulation section, we will show that the modeling error power is finite. Under those conditions and assumptions about the nonlinear uncertainties on e_M, the error dynamic function changes to

e_{n+1} + λ_1 e_n + λ_2 e_{n−1} = −(e_{M,n+1} + λ_1 e_{M,n} + λ_2 e_{M,n−1})          (4-5)

where the e_{M,i} are the modeling errors at each step.

We define the right side of equation (4-5) as a. Considering the worst case, when the modeling errors reach their bound, a is set to its upper bound a_u (|a| ≤ |a_u|, with a_u a finite constant). The expression for e_n can be obtained through the Z transform and inverse Z transform to yield

E(z) = [1 / (1 + λ_1 z^{−1} + λ_2 z^{−2})] · [a / (1 − z^{−1})]
e_n → a / [(1 − p_1)(1 − p_2)]  as n → ∞          (4-6)

As n → ∞, the mean of the tracking error is obtained as

lim_{n→∞} E[e_n] = lim_{n→∞} E[ a / ((1 − p_1)(1 − p_2)) ]
               = lim_{n→∞} [(1 + λ_1 + λ_2) / ((1 − p_1)(1 − p_2))] E[−e_{M,n}]
               = E[−e_{M,∞}]          (4-7)

As for the power of the tracking error, the expression can be written as

lim_{n→∞} E[e_n^2] = lim_{n→∞} E[ a^2 / ((1 − p_1)^2 (1 − p_2)^2) ]
                = lim_{n→∞} [(1 + λ_1 + λ_2)^2 / ((1 − p_1)^2 (1 − p_2)^2)] E[e_{M,n}^2]
                = E[e_{M,∞}^2]          (4-8)

since 1 + λ_1 + λ_2 = (1 − p_1)(1 − p_2).










Under the assumption that the modeling error (and measurement noise) contributions e_{M,n} are wide-sense stationary random variables, we can compute the asymptotic (finite) tracking error in terms of the power spectral density of e_{M,n} and the coefficients λ_i. As discussed before, the mean of the modeling error is zero or a small value. Consequently, the modeling error will affect the tracking performance, and the pole-placement problem must be solved considering the trade-off between convergence speed (faster poles, wider pass-band) and disturbance rejection (slower poles, narrower pass-band).

4.3 PID Controller

PID stands for Proportional, Integral, and Derivative. PID control became a traditional control approach in the 20th century because of its simplicity and accuracy. Nowadays, researchers are trying to find new methods to better fine-tune the PID parameters. Considering our system identification framework, the design of the piece-wise linear PID controller will be accomplished once the system identification is finished. Based on the GMM-LLM framework, the pole-placement PID design technique can be realized based on [73]. Lately, Skoczowski et al. proposed a new method based on two-loop model following control (MFC) to improve robustness [74]. The mixture model based PID control scheme is shown in figure 4-4.

Simplifying the model function (4-1) to a second-order time-embedded SISO form

y_{n+1} = a_1 y_n + a_2 y_{n−1} + b_1 u_n + b_2 u_{n−1}

The model's current Z-domain transfer function can be written as

F(z) = B(z)/A(z) = (b_1 z^2 + b_2 z) / (z^2 − a_1 z − a_2)




























Figure 4-4. Pole-placement PID control system.


Define the desired signal and the controller transfer function as D(z) and G(z) = D(z)/C(z), respectively. The closed-loop control signal transfer function U(z) is given by

U(z) = [D(z)/C(z)] (D(z) − Y(z))          (4-9)

From the definition of pole placement, the goal of this approach is to map the coefficients of the closed-loop system function from the open-loop coefficients (a_1, a_2, b_1, b_2) to the redesigned controller coefficients (c_i, d_i), whose characteristic polynomial can be described as

z^2 + χ_1 z + χ_2          (4-10)

The closed-loop system transfer function is then z^{−1}B(z)D(z) / [A(z)C(z) + z^{−1}B(z)D(z)], and the system characteristic polynomial becomes

A(z)C(z) + z^{−1}B(z)D(z)          (4-11)










Using the characteristic polynomial for the system model as in (4-10), along with a second-order observer polynomial term z^2, the design equation for the pole-placement PID is given as

A(z)C(z) + z^{−1}B(z)D(z) = z^2 (z^2 + χ_1 z + χ_2)
(z^2 − a_1 z − a_2)(z − 1)(z + c_1) + z^{−1}(b_1 z^2 + b_2 z)(d_1 z^2 + d_2 z + d_3)
= z^2 (z^2 + χ_1 z + χ_2)          (4-12)

where a (z − 1) factor is added to the denominator of the controller function C(z) in order to filter low-frequency noise. This equation can be solved for c_1 and the d_i. Consequently, the control signal is obtained from (4-12) as

u_n = (1 − c_1)u_{n−1} + c_1 u_{n−2} + d_1 e_n + d_2 e_{n−1} + d_3 e_{n−2}          (4-13)

Given the LLM at the current time step, the PID controller can be easily obtained through (4-13). Its stability is assured, as discussed in the previous section for the dynamic inverse controller. One possible problem under this framework is that the PID coefficients need to be recalculated at every step, as the system's local model and the gating weights from the GMM change.
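Matching coefficients in (4-12) reduces to a 4x4 linear solve for (c_1, d_1, d_2, d_3), one solve per step. The sketch below assumes the model and controller forms given above (C(z) = (z − 1)(z + c_1), D(z) = d_1 z^2 + d_2 z + d_3); the numeric values in the usage are illustrative.

```python
import numpy as np

def pid_pole_placement(a1, a2, b1, b2, chi1, chi2):
    """Solve A(z)C(z) + z^{-1}B(z)D(z) = z^2 (z^2 + chi1 z + chi2)
    for (c1, d1, d2, d3) by matching the z^3..z^0 coefficients."""
    M = np.array([
        [1.0,       b1,  0.0, 0.0],   # z^3 terms
        [-(1 + a1), b2,  b1,  0.0],   # z^2 terms
        [a1 - a2,   0.0, b2,  b1],    # z^1 terms
        [a2,        0.0, 0.0, b2],    # z^0 terms
    ])
    rhs = np.array([chi1 + 1 + a1, chi2 - a1 + a2, -a2, 0.0])
    return np.linalg.solve(M, rhs)
```

The result can be checked by expanding both sides of (4-12) and comparing polynomial coefficients.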

4.4 Optimal Robust Controller

In order to achieve better control performance, we are going to first translate

a robust control problem into an optimal control problem and then apply optimal

control approaches to solve the robust control problem. We will show later that, by

properly choosing a set of cost functions that reflects the modeling uncertainty, noise,

and control requirements, the solution to the optimal control problem will also be a solution to the robust control problem [60, 61].









Definition 4.1: Robust control problem

Find a feedback control law u = k(x) such that x = 0 of the closed-loop system

ẋ = A(x) + B(x)N(x) + B(x)k(x)          (4-14)

is globally asymptotically stable for all admissible perturbations N(x).¹

Definition 4.2: Optimal control problem

Find a feedback control law u = k(x) that minimizes the following cost function

∫_0^∞ (N_max^2(x) + x^T x + u^T u) dt

where N_max^2(x) is the power of the uncertainty compensation, x^T x is the cost of the state change, and u^T u is the cost of the control signals. x^T x and u^T u can be replaced by x^T Q x and u^T R u, with Q and R symmetric and positive semi-definite, to adjust the relative weights of states and controls.

Theorem: Assume that the solution to the optimal control problem exists. Then the solution to the optimal control problem is also a solution to the robust control problem.

Proof: Define

V(x_0) = min_u ∫_0^∞ (N_max^2(x) + x^T x + u^T u) dt

to be the minimum cost of bringing the system ẋ = A(x) + B(x)u from x_0 to the equilibrium "0". The Hamilton-Jacobi-Bellman equation [75] gives

min_u [N_max^2(x) + x^T x + u^T u + V_x(x)(A(x) + B(x)u)] = 0

¹ We assume the perturbation is admissible, so we use the same B(x) as coefficients.









where V_x(x) = ∂V(x)/∂x. Thus, if u = k(x) is the solution to the optimal control problem, then

N_max^2(x) + x^T x + k^T(x)k(x) + V_x(x)(A(x) + B(x)k(x)) = 0
2k^T(x) + V_x(x)B(x) = 0          (4-15)

Since V(x) satisfies

V(x) > 0,  x ≠ 0
V(x) = 0,  x = 0

Also, V̇(x) = dV(x)/dt < 0 for x ≠ 0, because

V̇ = (∂V(x)/∂x)^T (dx/dt)
  = V_x(x)[A(x) + B(x)N(x) + B(x)k(x)]
  = V_x(x)[A(x) + B(x)k(x)] + V_x(x)B(x)N(x)
  = −N_max^2(x) − x^T x − k^T(x)k(x) + V_x(x)B(x)N(x)
  = −N_max^2(x) − x^T x − k^T(x)k(x) − 2k^T(x)N(x)
  = −N_max^2(x) + N^T(x)N(x) − (N(x) + k(x))^T(N(x) + k(x)) − x^T x
  ≤ −x^T x < 0


Thus the conditions of the Lyapunov stability theory are satisfied. The equilibrium x = 0 of (4-14) is globally asymptotically stable for all possible uncertainties N(x), i.e., u = k(x) is a solution to the robust control problem. Consequently, there exists a neighborhood N_c = {x : ||x|| < c} for some c > 0 such that if x(t) enters N_c, then

lim_{t→∞} x(t) = 0          (4-16)

But x(t) cannot remain forever outside N_c; otherwise ||x(t)|| ≥ c for all t ≥ 0. Therefore,

V(x(t)) − V(x(0)) = ∫_0^t V̇(x(τ)) dτ ≤ −∫_0^t x^T x dτ ≤ −c^2 t

Letting t → ∞, we have

V(x(t)) ≤ V(x(0)) − c^2 t → −∞

which contradicts the fact that V(x(t)) ≥ 0 for all x(t). Therefore (4-16) is true no matter where the trajectory begins [60]. Consequently, stability is assured regardless of the initial condition.

4.4.1 LLM-based Optimal Set-Point Controller

Typically, the solution of the optimal control problem for a time-invariant time-delayed (TITD) system is obtained by solving the infinite-dimensional algebraic Riccati equation, which requires the solution of simultaneous partial differential equations. For the mixture model control case, we have proven that the GMM-LLM modeling error is bounded and not larger than the single ARX modeling error. The control error, under the condition that the controller is stable, will asymptotically converge to the modeling error bound. Moreover, as the Gaussian local model changes at every step, the Riccati equation needs to be recalculated at every step.










The computational technique, which was originally proposed by Yashiki and Matsumoto [76] for SISO single-delay systems and later expanded by Klein and Ramirez et al. [77] to MIMO multiple-delay systems, needs to be applied to our one-step prediction model in state variable format for the purpose of controllability checking. The general case of MIMO systems exhibiting multiple delays is written as (4-1), and the corresponding state space model can be expressed as

[y_{1,n+1}; ···; y_{D,n+1}; y_{n+1}] = A X_n + B U_n
= [0 ··· 0 A_1; ···; 0 ··· 0 A_D; I ··· I A_0] [y_{1,n}; ···; y_{D,n}; y_n]
  + [B_1; ···; B_D; B_0] U_n          (4-17)

where only y_n is the actual system state. The remaining vectors y_1, ···, y_D are called pseudo-state vectors, which represent the time-delayed portion of the system states. The pseudo-states are defined as

y_{i,n} = A_i y_{n−i} + B_i u_{n−i},   i = 1, ···, D          (4-18)







A main advantage of this transformation is the controllability test. Defining A and B as the coefficient matrix and vector in (4-17),

A = [0 ··· 0 A_1; ···; 0 ··· 0 A_D; I ··· I A_0],   B = [B_1; ···; B_D; B_0]

the standard state controllability test is simplified to checking the rank of the state matrix S,

rank(S) = rank[B  AB  ···  A^{n−1}B]

where n is the dimension of the state vector. In our experiments, all the LLMs give non-zero results, and since the rank of S is related to the dimension of the LLM, which is relatively low (S = [B, AB] for the 4-D LLM case), controllability is assured. Only in the case when the LLMs have a large portion of close-to-zero coefficients, which also means the LLMs are not well trained, will the state matrix not be full rank.
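The Kalman rank test described above can be sketched in a few lines; the matrices in the usage are illustrative, not from the LoFLYTE models.

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: S = [B, AB, ..., A^{n-1}B] must have full row rank."""
    n = A.shape[0]
    S = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(S) == n
```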

To realize the design of the robust control signal, we start with the cost function J as the basic design rule:

J = Σ_n (X_n^T Q X_n + U_n^T R U_n)          (4-19)

Considering the modeling uncertainty and measurement noise, (4-19) can be modified as

J = Σ_n (N(X_n)^T N(X_n) + X_n^T Q X_n + U_n^T R U_n)          (4-20)









Here the modeling uncertainty and measurement noise are combined with the state itself. In other words, the physical dynamical changes due to the model uncertainty can be recognized as an increment of the system dynamics, which will in turn cost more energy and a larger control signal to overcome. This is a reasonable explanation, as the modeling error appears as a warping of the system states. The solution of this kind of cost function forms the traditional linear quadratic regulator (LQR) problem and can be obtained through the solution of the discrete algebraic Riccati equation (DARE)

A^T X A − X − A^T X B (B^T X B + R)^{−1} B^T X A + Q = 0          (4-21)

There are several methods to compute the unique stabilizing numerical solution P = X of the discrete-time algebraic Riccati equation [77], from which the gain vector and consequently the control signal are computed as

G = (B^T P B + R)^{−1} B^T P A          (4-22)
U_n = −G X_n          (4-23)

where X_n is the re-formulated current input with pseudo-state elements, and A and B are from the state space form of the real system model (or Ā and B̄ from the state space form of the LLM). The schematic diagram of the optimal robust control system is shown in figure 4-5.
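One way to compute the DARE solution is plain fixed-point iteration of the Riccati recursion, sketched below with an illustrative unstable local model; this is one of the several numerical methods mentioned above, not necessarily the one used in [77].

```python
import numpy as np

def solve_dare(A, B, Q, R, iters=1000):
    """Fixed-point iteration of the DARE:
    P = A'PA - A'PB (B'PB + R)^{-1} B'PA + Q."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q
    return P

def lqr_gain(A, B, Q, R):
    """Gain per (4-22): G = (B'PB + R)^{-1} B'PA; the control is U_n = -G X_n."""
    P = solve_dare(A, B, Q, R)
    return np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
```

Since the local model changes at every step, this solve (or an equivalent) has to be repeated per step, which is the computational cost discussed next.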

One possible disadvantage of the optimal robust controller is that the related computation is larger than that of the inverse controller, PID controller, etc. Especially under the condition of the mixture model, theoretically there will always be a change in the local model as the system dynamics change. Since the DARE computation needs































Figure 4-5. Optimal robust control system schematic diagram.


to be finished right after the generation of the Gaussian mixture model, the DARE (4-21) and the control gain matrix (4-22) need to be recalculated at every time step, which might be a challenge in some on-line training environments. Necessary trade-offs need to be made when resources are limited.

4.4.2 LLM-based Optimal Tracking Controller

Designed from the decrement of the cost function, the control signal(s) will make the system dynamics converge to a zero equilibrium point. For many real control problems, we want the system to be capable of taking some action instead of converging to zero. In order to make the system follow non-zero desired tracks, the cost function needs to produce a control gain matrix that accounts for the convergence of not only the modeling uncertainty, but the dynamical error and the change of the control signal as well. Thus the cost function (4-20) needs to be further modified








for state errors and changes of control signals:

J = Σ_n (N(X_n)^T N(X_n) + E_n^T Q E_n + dU_n^T R dU_n)
  = Σ_n (E_n^T Q̄ E_n + dU_n^T R dU_n)          (4-24)

where E_n is the dynamical error of the system states, dU_n is the difference between the current control signals and their previous values, and Q and R are symmetric and positive semi-definite weighting matrices. Again, we assume the modeling uncertainty and noise can be combined with the state errors, since the errors can be described as the distance between the states and the non-zero equilibrium. Correspondingly, the transfer coefficients for the error signal and the control signals need to be changed. Starting from the system state transfer function in (3-14), the error transfer functions can be described as

y_{n+1} = a_1 y_n + a_2 y_{n−1} + b_1 u_n + b_2 u_{n−1}
e_{n+1} = d_{n+1} − y_{n+1}
        = d_{n+1} − (a_1 y_n + a_2 y_{n−1} + b_1 u_n + b_2 u_{n−1})
        = d_{n+1} − a_1(d_n − e_n) − a_2(d_{n−1} − e_{n−1}) − b_1 u_n − b_2 u_{n−1}
        = (d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1}) + a_1 e_n + a_2 e_{n−1} − b_1 u_n
        = a_1 e_n + a_2 e_{n−1} − b_1 (u_n − b_1^{−1}(d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1}))
        = a_1 e_n + a_2 e_{n−1} − b_1 (u_n − ū_n)          (4-25)

Defining ū_n = b_1^{−1}(d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1}) as a pseudo control signal, the new state space transfer function for the error dynamics and the control increment dU_n = u_n − ū_n is

E_{n+1} = Ā E_n + B̄ dU_n,   Ā = [a_1  a_2; 1  0],   B̄ = [−b_1; 0]          (4-26)










Considering the modeling uncertainty, the non-zero set-point control and tracking control problems can be solved through the optimal control law (4-24). The new discrete Riccati equation can be calculated in the same form with the modified coefficients (Ā, B̄) in (4-26). When the gain matrix from the discrete Riccati equation is calculated, the current-step control signal can be expressed as

dU_n = −G E_n
U_n = ū_n + dU_n          (4-27)

In (4-24) and (4-27), we use the upper bound of the modeling error to describe the model uncertainty, while the real model uncertainty and noise are random values which on average are smaller than the bound. Consequently, under small-noise and small-model-uncertainty situations, the transient performance of the optimal controller, compared with other control approaches, might not be the best. However, since the optimal controller considers the long-term effect of disturbances, its robustness will outrun all the other competitors in large-noise cases. In chapter 5, we will simulate and discuss the performance of the optimal controller on different real systems, both in the disturbance-free case and in cases where noise and/or model uncertainty are present. Neglecting the computational cost, the optimal robust controller is the optimal choice for a model-based prediction controller.
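The tracking scheme of (4-25)-(4-27) can be sketched end to end on an illustrative second-order SISO model (the coefficients and weights below are assumptions, and the Riccati solution is obtained by plain value iteration):

```python
import numpy as np

def dare_gain(A, B, Q, R, iters=1000):
    """Value iteration of the Riccati recursion, then the LQR gain."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q
    return np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)

# Illustrative local model y_{n+1} = a1 y_n + a2 y_{n-1} + b1 u_n + b2 u_{n-1}
a1, a2, b1, b2 = 0.9, -0.2, 0.5, 0.1
Abar = np.array([[a1, a2], [1.0, 0.0]])   # error dynamics (4-26)
Bbar = np.array([[-b1], [0.0]])
G = dare_gain(Abar, Bbar, np.eye(2), np.eye(1))

d = 1.0                                   # constant set point to track
y, u, e = [0.0, 0.0], [0.0], [1.0, 1.0]
for _ in range(100):
    u_bar = (d - a1 * d - a2 * d - b2 * u[-1]) / b1   # pseudo control
    dU = float(-G @ np.array([e[-1], e[-2]]))          # dU_n = -G E_n
    u.append(u_bar + dU)                               # U_n = u_bar + dU_n
    y.append(a1 * y[-1] + a2 * y[-2] + b1 * u[-1] + b2 * u[-2])
    e.append(d - y[-1])
```

With the LQR gain stabilizing the error dynamics, the output converges to the set point while the control increment dU_n decays to zero.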

4.5 Summary

GMM based mixture model controllers are close to switching controllers, yet they account for the combining weights, which dramatically decreases their computational cost compared with SOM-like vector quantization methods. For the different controllers, the Gaussian models provide a simple linear description of the current system dynamics as a reference model. Under the condition that the Gaussian modeling systems are stable, all the controllers can give a stable result, where the optimal robust controller will theoretically give the best tracking performance. A plant model is rarely a fully accurate description of the real plant. Neglected dynamics, approximately known parameters, and changes in operating conditions all contribute to plant-modeling errors that can jeopardize controller performance. In the following chapter, we will address the performance of the controllers on real systems, including accuracy and robustness, the performance curve vs. the number of PEs, and the performance comparison of the GMM-LLM with other modeling approaches.

Among all the options, the ESN controller is quite different from the others, and it will be discussed in detail in the appendix. Many papers mention that the training of ADC and/or BPTT algorithms will sometimes inevitably lead to unstable control results because of their computational complexity. Besides the neural network (ESN here), the BPTT length and the change of the learning rate will also affect the final performance. We will start from some nonlinear systems with a "simple" setup, and use them for set-point control.















CHAPTER 5
APPLICATIONS AND DISCUSSIONS

The GMM based local linear modeling and control techniques which have been presented in previous chapters need to be tested on various systems in order to prove their advantages, including feasibility, simplicity, and robustness. As an independent candidate, ESN will be used for system identification and partly for system control. In this chapter, the different GMM-based control approaches will be tested on a set of systems including chaotic time series prediction, synthetic SISO linear system identification, SISO nonlinear system identification and control, and NASA LoFLYTE wave-rider aircraft identification and control. A complete set of simulation and experimental results will be provided from the application of these principles to the listed problems. Among all the systems, the Lorenz system, as a traditional chaotic system with high-order nonlinear dynamics, and the missile system, as a traditional SISO control system, will be discussed in detail with respect to system identification performance and control performance respectively. For all the sample systems, we will assume that the system transfer functions are unknown, and the reference models will be constructed based on I/O measurements.

5.1 Chaotic Systems

The two chaotic systems we present here are the Lorenz system and the Duffing system, which demonstrate irregular, complex behavior. Even though they are deterministic systems, their dynamics show unpredictable character. Such systems have recently received considerable attention in the nonlinear dynamics literature, since removing chaos








Figure 5-1. Dynamics of the Lorenz system
Left:x (top), y (middle), and z (bottom), Right: 3-D display.


can improve system predictability and avoid fatigue failure of the system. Several

methodologies have been applied for the identification and synchronization of a vari-


ety of chaotic systems. For example, a generalized synchronization of chaos through

linear transformation [78], adaptive control and synchronization of chaos using Lya-


punov theory [79], and an adaptive variable structure control system for the tracking

of periodic orbits [80]. The contribution of our work lies in the design of different

control systems based on GMM-LLM approach.

5.1.1 The Lorenz System

One widely studied chaotic system, which is also an autonomous system, is the Lorenz system (Lorenz, 1963), formulated to model the convection process in the atmosphere [81]. The Lorenz system can be described as:

ẋ = σ (y − x)

ẏ = ρ x − y − x z        (5-1)

ż = −β z + x y










where x, y, and z are the measurements of fluid velocity and horizontal and vertical temperature variations. The Lorenz system can exhibit quite complex dynamics depending on the parameters. In our experiment, the parameters are set as σ = 10, ρ = 28, and β = 8/3 to obtain chaotic dynamics, and the first state x is set as the output. Using 4th-order Runge-Kutta integration with time step 0.01, 9000 samples were generated for analysis. Using the Lipschitz index analysis, the embedding dimension is determined to be 3 and the embedding delay τ = 2. The GMM training data is constructed as

Xn = [x(n − 1), x(n − 3), x(n − 5)]        (5-2)
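The data-generation procedure above can be sketched in Python; the initial condition and helper names below are our own choices, while σ, ρ, β, the 0.01 step, and the embedding lags follow the text:

```python
import numpy as np

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Right-hand side of the Lorenz equations (5-1)."""
    x, y, z = s
    return np.array([sigma * (y - x),
                     rho * x - y - x * z,
                     -beta * z + x * y])

def rk4_step(f, s, h):
    """One 4th-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def generate_lorenz(n_samples=9000, h=0.01, s0=(1.0, 1.0, 1.0)):
    """Integrate the system and record the first state x as the output."""
    s = np.array(s0, dtype=float)
    out = np.empty(n_samples)
    for i in range(n_samples):
        s = rk4_step(lorenz_deriv, s, h)
        out[i] = s[0]
    return out

def embed(x, dim=3, delay=2):
    """Build X_n = [x(n-1), x(n-3), x(n-5)] rows as in (5-2)."""
    lags = [1 + delay * i for i in range(dim)]
    start = max(lags)
    return np.array([[x[n - l] for l in lags] for n in range(start, len(x))])
```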


The first 8000 samples from the time series are used to create the reconstructed state space through the eight-kernel GMM algorithm, and local linear models are trained for one-step prediction. Another 1000 samples are used for the signal-to-error ratio (SER) performance test.¹ Figure 5-2 shows that the estimated output is quite close to the original Lorenz signal, and figure 5-3 shows the optimized distribution of the Gaussian kernels in the state space. In order to further compare the identification performance between GMM and other approaches, we plot the performance increment of GMM-LLM as the number of PEs increases. From figure 5-5 and table 5-1, we find that the "soft combination" enables GMM to use fewer PEs. The total amount of

¹ Performance is evaluated by SER = Σn d²(n) / Σn e²(n).
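The SER figure of merit from the footnote can be computed in a few lines; `ser_db` is our own helper name:

```python
import numpy as np

def ser_db(desired, predicted):
    """Signal-to-error ratio in dB: sum(d^2) / sum(e^2), as in the footnote."""
    desired = np.asarray(desired, dtype=float)
    error = desired - np.asarray(predicted, dtype=float)
    return 10.0 * np.log10(np.sum(desired ** 2) / np.sum(error ** 2))
```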






















Figure 5-2. System identification performance on the "x" of the Lorenz system.
Left: using GMM: Original and identified signals (top) and corresponding error (bot-
tom), Right: using ESN model (top) and corresponding error (bottom).


parameters in the SOM-LLM and GMM-LLM approaches can be expressed as

P_SOM-LLM = N × D × 2

P_GMM-LLM = N × D × 3 + N

where N is the number of PEs and D is the dimension of each PE. Considering the comparison of GMM with the SOM-based modeling performance mentioned in chapter 3 and some other methodologies, it can be concluded that, as GMM uses a far smaller number of PEs, the GMM-based modeling algorithm can achieve similar performance with less computational burden.
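The two counting formulas can be sanity-checked against table 5-1 (function names are ours; the SOM entry uses N = 100 from the 10 × 10 map and the GMM entry uses N = 8, both with D = 3):

```python
def som_llm_params(n_pe, dim):
    # each SOM PE stores a prototype and a local linear model: N * D * 2
    return n_pe * dim * 2

def gmm_llm_params(n_pe, dim):
    # per Gaussian kernel: model weights, mean, and covariance terms (N * D * 3),
    # plus one prior weight per kernel (+ N)
    return n_pe * dim * 3 + n_pe
```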

In order to make a complete comparison, along with the traditional methods RBF and TDNN as two references, two 300-PE ESNs are constructed. One ESN is designed with an even distribution of spectral radius [36, 37]; the other comes with an optimized design criterion where a larger part of the internal weights is distributed close to the maximum spectral radius. Both use the same MSE training criterion. In table 5-1 we see that the ESN can make better one-step predictions compared with the other approaches. Since









Table 5-1. Performance on the Lorenz system

Approaches (# of PEs) # of parameters SER (dB)
RBF (50) 203 31.05
TDNN (50) 250 29.23
SOM (10 x 10) 600 33.67
GMM (8) 80 29.16
ESN1 (300, 1D input) 300 35.05
ESN1 (300, 3D input) 300 43.40
ESN2 (300, 1D input) 300 36.54
ESN2 (300, 3D input) 300 45.20



Figure 5-3. Distribution of the Lorenz system (dash line) and GMM weights (star)

the ESN training is focused on the linear readout vector, the training for a larger-size ESN can still be realized within a reasonable amount of time. An interesting phenomenon in the results is that the ESN relies less on the embedding dimension compared with the other approaches, as a one-dimensional input already generates comparable prediction performance. This complies with our theoretical explanation in chapters 2 and 4.
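The "only the linear readout is trained" property of the ESN can be sketched as follows. This is a generic echo-state setup with a fixed random reservoir and a ridge-regression readout; the reservoir size and scaling constants here are illustrative, not the 300-PE configuration used in the text:

```python
import numpy as np

def esn_states(inputs, W, Win):
    """Drive a fixed random reservoir with a 1-D input; collect the echo states."""
    x = np.zeros(W.shape[0])
    states = np.empty((len(inputs), W.shape[0]))
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + Win[:, 0] * u)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained, here by ridge regression."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

rng = np.random.default_rng(0)
n_res = 50                                        # toy reservoir size
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9 < 1
Win = rng.uniform(-0.5, 0.5, (n_res, 1))
```

Because the recurrent weights W stay fixed, enlarging the reservoir only enlarges one linear least-squares problem, which is why training remains fast even for large ESNs.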

Once the GMM-LLM structure is optimized according to the proposed methodology, the controller design can be accomplished using standard linear feedback controller design techniques. Since the Lorenz system is originally an autonomous system,













Figure 5-4. Weights change of 8 GMM kernels.


the control could be realized for dynamic compensation. Given a desired signal dn, the controller needs to find a control signal un such that the system output error converges to zero. Under the condition that the model prediction is close enough to the real dynamics, the Lorenz system will follow the control signal(s) to the equilibrium point:

lim(n→∞) |dn − yn| = 0        (5-3)

For the Lorenz system, the 3-dimensional control signal vector is

un = x̂n+1 + T (dn − xn)        (5-4)

where one control signal corresponds to one state variable, and the predicted state vector x̂n+1 is obtained from the GMM-LLM prediction. T is the error compensation coefficient between [−1, 1], which ensures that the error converges to zero. Following the



































Figure 5-5. Performance comparison between SOM and GMM


Figure 5-6. Compensation performance
Left: stabilizing x (top), y (middle), and z (bottom), Right: synchronizing x, y, and
z (upper three). Control signals start from data point 1000 for both cases.


condition of the desired signals being equilibrium points or a totally different dynamics, we can realize stabilizing and synchronizing control. Figure 5-6 shows stabilizing and synchronizing performance where the controller takes effect at point 1000. We found that for both cases, the controller can compensate the dynamic error very fast and guarantees the convergence. One factor that needs to be emphasized here is that the Lorenz system, being a chaotic system, is very sensitive to appropriate modeling.












Otherwise the controller will not converge the dynamics to the equilibrium if the system identification performance is not close enough to the real values.

5.1.2 The Duffing Oscillator System

We now consider another kind of chaotic oscillator, the Duffing oscillator system. The most general forced form of the Duffing equation is

ẍ + δẋ + (βx³ ± ω₀²x) = γ cos(ωt + φ)        (5-5)

Depending on the parameters chosen, the equation can take a number of special forms. For example, with no damping and no forcing, δ = γ = 0, and taking the plus sign, the equation becomes

ẍ + ω₀²x + βx³ = 0

This equation can display chaotic behavior. For β > 0, the equation represents a "hard spring", and for β < 0, it represents a "soft spring". If β < 0, the phase portrait curves are closed [21].

Without losing generality, we simplify the Duffing oscillator with a control signal. The system function can be rewritten into state-space format as

ẋ₁ = x₂

ẋ₂ = u − δx₂ − ω₀x₁ − βx₁³ + γ cos(ωt)        (5-6)

with the corresponding constant coefficients [δ, ω₀, β, γ] = [0.4, −1.1, 1.0, 1.8] and ω = 1.8 [82, 25, 21], and the control signal is set as |u| < 5.
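The Duffing data generation can be sketched as follows. The initial condition is our own choice, and the sign placement of the ω₀ term follows our reading of (5-6); the coefficients, 0.2 s step, and sample count come from the text:

```python
import numpy as np

DELTA, OMEGA0, BETA, GAMMA, OMEGA = 0.4, -1.1, 1.0, 1.8, 1.8

def duffing_deriv(t, s, u=0.0):
    """State-space Duffing dynamics (5-6) with an additive control input u."""
    x1, x2 = s
    dx1 = x2
    dx2 = u - DELTA * x2 - OMEGA0 * x1 - BETA * x1 ** 3 + GAMMA * np.cos(OMEGA * t)
    return np.array([dx1, dx2])

def rk4(f, t, s, h, u=0.0):
    """One 4th-order Runge-Kutta step of a time-dependent system."""
    k1 = f(t, s, u)
    k2 = f(t + h / 2, s + h / 2 * k1, u)
    k3 = f(t + h / 2, s + h / 2 * k2, u)
    k4 = f(t + h, s + h * k3, u)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def generate_duffing(n=8000, h=0.2, s0=(0.1, 0.0)):
    """Integrate the uncontrolled system and record the state trajectory."""
    s, t = np.array(s0), 0.0
    traj = np.empty((n, 2))
    for i in range(n):
        s = rk4(duffing_deriv, t, s, h)
        t += h
        traj[i] = s
    return traj
```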

Using 4th order Runge-Kutta integration with time step 0.2s, 8000 samples were

generated for analysis. The GMM-LLM is trained with the first 5000 points, and the











Figure 5-7. Dynamics of the Duffing oscillator as an autonomous system.
Left: State space description, Right: Time series.

Table 5-2. Performance on the Duffing oscillator

MODEL (# of PEs) # of parameters SER (dB)
RBF (50) 254 27.94
SOM (64) 512 28.06
ESN (300) 300 30.75
GMM (8) 104 26.46


remaining data is used for modeling testing and control tracking testing. The embedding input is determined as Xk = [xk, xk−1, uk, uk−1]. From experiments, the size of the Gaussian models is set as eight PEs. For the ESN part, the same amount of data is used for both training and testing. The data embedding is set as one dimension (Xk = [xk, uk]ᵀ), and 300 internal weights are used for the echo states.

The identification performance on the Duffing system is shown in figure 5-8, and a sample of the Gaussian distribution is shown in figure 5-9. The SER performance, along with a comparison with single-model performance, is listed in table 5-2. Considering the structural difference, we will mainly compare GMM-LLM with the RBF and SOM based models. We find that GMM uses a far smaller number of models, which means a simpler structure compared with the other modeling paradigms, and the prediction performance is














Figure 5-8. System identification performance on the Duffing system
Left: GMM-LLM results; Right: ESN results.


still very good. Figure 5-9 shows that for a large part of the testing, soft combination is taking effect among the Gaussian models, which complies with our explanation in chapter 3. The ESN, on the other hand, uses a smaller input dimension, and its prediction is one of the best. The modified ESN, from now on, will be our primary option for the echo state setup as it produces better results than the previous design.

Next, we try to realize two control missions based on the Gaussian models. The first sets the system dynamics to the zero equilibrium point; the second makes the system track a different dynamics. While the original dynamics is set by [δ, ω₀, β, γ] = [0.4, −1.1, 1.0, 1.8] and ω = 1.8, the new one is set by [δ, ω₀, β, γ] = [0.41, −1.1, 1.0, 2.0] and ω = 1.9.

For these control missions, we use the PID controller, the sliding mode controller, and the optimal robust controller. Considering the significance of the problem, the robustness comparison of the different controllers will be discussed for the nonlinear SISO system. For the PID controller, the poles are set as p1,2 = 0.6 to ensure stability and a good converging speed. The parameters for the sliding mode controller are set as c1 = 1, c2 = −2.5, εT = 0.0001, qT = 0.3 [83, 84, 85]. For the optimal controller, the gain matrices are designed as


















Figure 5-9. Sample of weights change of 8 GMM kernels.







Q = I,   R = 0.01



We compare the performance of the controllers using a 200 s trajectory with respect to falling time, settling time, and steady-state error. Because of the relatively low-SER system identification performance, none of the controllers can make the Duffing system dynamics converge to the desired track perfectly, as shown in figure 5-10, where 0 s ~ 10 s of the dynamics is displayed. Roughly speaking, the control performance of all three controllers is close. In the set-point control mission, however, the optimal robust controller and the sliding mode controller clearly have shorter settling time and smaller steady-state error than the PID controller. Apart from the rising (falling) time and settling time in the second figure, the PID controller shows a large offset where the system dynamics reach the highly nonlinear range, e.g. at the start and near the extrema. Since the modeling for

















Figure 5-10. Control performance of different controllers on the Duffing system

Top: Set point control, system dynamic starts from 1; bottom: Tracking control, new

track comes from a different system.



the Duffing system does not consider noise, the optimal robust controller does not show a unique advantage here. Considering the robustness of the optimal controller against disturbance, we can expect that it will outperform the other competitors in situations where modeling uncertainty and noise are both present.










5.2 SISO System

5.2.1 Linear SISO Mass-Spring System

Now we start to study single-input-single-output systems. In order to keep the consistency of our examples, we will start with a linear system to check the basic capability of our approach and then follow with nonlinear systems. The linear SISO system example is designed based on a single mass-spring-damper system, where the dynamics of the system can be easily described by two states as

ẋ₁ = x₂

ẋ₂ = (−k x₁ − b x₂ + u) / m        (5-7)

where the xi's are the states, x₁ describes the movement of the mass, y is the output, and u is the external force [86]. In the experiment here, u is limited within ±0.5; the other parameters are set as m = 5, k = 100, and b = 0.1. A sample of the system dynamics is shown in figure 5-11.
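The mass-spring data generation can be sketched as follows; helper names are ours, while m, k, b, the ±0.5 input limit, and the 0.05 s step come from the text:

```python
import numpy as np

M, K, B = 5.0, 100.0, 0.1

def mass_spring_deriv(s, u):
    """Two-state mass-spring-damper dynamics (5-7)."""
    x1, x2 = s
    return np.array([x2, (-K * x1 - B * x2 + u) / M])

def rk4_step(s, u, h=0.05):
    """One 4th-order Runge-Kutta step with input u held over the step."""
    k1 = mass_spring_deriv(s, u)
    k2 = mass_spring_deriv(s + h / 2 * k1, u)
    k3 = mass_spring_deriv(s + h / 2 * k2, u)
    k4 = mass_spring_deriv(s + h * k3, u)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def generate_data(n=5000, seed=0):
    """Excite the plant with an i.i.d. uniform input limited to |u| <= 0.5."""
    rng = np.random.default_rng(seed)
    s = np.zeros(2)
    inputs = rng.uniform(-0.5, 0.5, n)
    states = np.empty((n, 2))
    for i in range(n):
        s = rk4_step(s, inputs[i])
        states[i] = s
    return states, inputs
```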

The number of Gaussian models is chosen as 6 from experiments. The 6-kernel GMM was trained with 5000 samples of input-output pairs obtained from equation (5-7) using 4th-order Runge-Kutta integration with time step Ts = 0.05 s, with u being an i.i.d. uniformly distributed random signal. The system identification data was constructed with an embedding dimension of 4, according to the Lipschitz index, with input and output embeddings both being 2. Thus x(n) = [x₁(n), x₁(n − 1), u(n), u(n − 1)]. For ESN modeling, the input dimension is again decreased to x(n) = [x₁(n), u(n)].

The identified models were tested on 2000 samples of new data, generated using a new sequence of random input signals. The actual plant output, the model predicted














Figure 5-11. Dynamics of the mass system.
top: X1; middle: X2; bottom: control signal.

Table 5-3. Performance on the linear mass system

Model (# of PE) # of parameters SER (dB)
SOM-based (8 x 8 = 64) 512 50.75
RBF (64) 324 47.30
GMM-based (6) 78 47.24
ESN (200) 200 49.05


output and the estimation error for the GMM-based LLM models are provided in figure 5-12. Figure 5-13 shows that cooperative modeling is taking effect during testing. The SER performance is 47.24 dB. The same data is applied to SOM and RBF to make a comparison. Table 5-3 shows that GMM-LLM does not generate the best result. Yet considering the number of PEs used in GMM-LLM, it is a highly efficient modeling method.

The control mission for the plant can be accomplished through a series of controllers. The zero-order inverse controller will directly replace the next-step prediction with the desired signal. The PID controller design is carried out by means of the standard pole-placement technique. The corresponding PID coefficients are determined to













Figure 5-12. Sample of system identification performance on the Mass system.
Left: GMM-LLM results; Right: ESN results.


bring the closed-loop response poles to 0.05 ± i0.3 in order to decrease the transient overshoot, which is verified in cross-validation simulations. The sliding mode controller, as a strong candidate for robust control, is considered as a reference with parameters c1 = 1, c2 = −2.5, εT = 0.0002, qT = 0.5, which are fine-tuned results from simulation. For the optimal robust controller, the gain matrices are designed as

Q = I,   R = 0.25


The set-point control mission is set as follows: the system starts at a value of 0.5, the desired signal is 0 for 3 seconds, and then the desired signal is changed to 0.3. In order to examine the robustness of the different controllers, another set of experiments with 30 dB SNR noise added to the system dynamics as modeling uncertainty is carried out. The final performance is shown in figure 5-14. We can see that in the noise-free case, all controllers have similar settling times and small steady-state errors. The PID controller has a big overshoot and a short falling time. The sliding mode controller has a longer falling time yet small overshoot. The optimal controller has both a short falling time and a small overshoot. In the 30 dB noise case,












Figure 5-13. Sample of weights change of 6 GMM kernels for mass system


all three controllers do not show much difference in falling (rising) time, settling time, and overshoot. The optimal controller obviously outperforms the other two on the criterion of steady-state error.

5.2.2 Nonlinear SISO Missile System

The second SISO example is a nonlinear missile system, whose dynamics can be described by

ẋ₁ = x₂

ẋ₂ = −0.1 cos(x₁) (5x₁ − 4x₁³ + x₁⁵) − 0.5 cos(x₁) u        (5-8)

where the input u is the rudder deflection, and in practice u is limited within ±1, which corresponds to the real deflection range of the rudder axis of ±90 degrees. A sample of the system dynamics is shown in figure 5-15.
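A simulation step for the missile plant can be sketched as follows. The right-hand side below is our reconstruction of (5-8) from the legible fragments (the 0.1 cos(x₁) and 0.5 cos(x₁)u terms); the ±1 input limit and the 0.05 s integration step come from the text:

```python
import numpy as np

def missile_deriv(s, u):
    """Assumed reconstruction of the missile dynamics (5-8)."""
    x1, x2 = s
    dx2 = -0.1 * np.cos(x1) * (5.0 * x1 - 4.0 * x1 ** 3 + x1 ** 5) \
          - 0.5 * np.cos(x1) * u
    return np.array([x2, dx2])

def step(s, u, h=0.05):
    """One 4th-order Runge-Kutta step with the rudder input clamped to |u| <= 1."""
    u = float(np.clip(u, -1.0, 1.0))
    k1 = missile_deriv(s, u)
    k2 = missile_deriv(s + h / 2 * k1, u)
    k3 = missile_deriv(s + h / 2 * k2, u)
    k4 = missile_deriv(s + h * k3, u)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```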












Figure 5-14. Set-point control performance on the mass system.
Left: noise-free case; Right: 30dB SNR noise case.





Figure 5-15. Missile system dynamics.
Left: state space description; Right: X1, X2, and control (top to bottom).



The GMM was trained with 6000 samples of input-output pairs obtained from equation (5-8) using 4th-order Runge-Kutta integration with time step Ts = 0.05 s, which corresponds to 300 seconds of flight time. In order to excite a rich variety of dynamical modes in the plant, the system identification data was generated using a uniformly distributed random input signal within the specified limits [2]. The GMM-based LLM approach developed an eight-mode mixture model, resulting in eight cooperative linear models. The embedding dimensions, according to the Lipschitz index, for input and output were both selected to be 2, resulting in 4-coefficient local models. This












Figure 5-16. Sample of system identification performance on the missile system.
Left: GMM-LLM results; Right: ESN results.

Table 5-4. Performance on the missile model

Model (# of PE) # of parameters SER (dB)
SOM-based (8 x 8 = 64) 512 31.70
RBF (100) 504 33.03
GMM-based (8) 104 31.00
ESN (300) 300 36.25


embedding delay dimension was also chosen in accordance with the dimensionality of the state dynamics.

The identified models were also tested on 50 s of new data (1000 samples), generated using a new sequence of random input signals. The actual plant output, the model predicted output, and the estimation error for the GMM-based LLM models are provided in figure 5-16. The ESN is constructed with 300 PEs, and its input is set to the state without time embedding. The SER performance, compared with 31.7 dB from the 64-PE SOM in table 5-4, is 31 dB [18].

The control mission for the plant can be accomplished through a series of controllers. The zero-order inverse controller will directly replace the next-step prediction













Figure 5-17. Sample of weights change of 8 GMM kernels for missile system


with the desired signal. The PID controller design is carried out by means of the standard pole-placement technique. The corresponding PID coefficients are determined to bring the closed-loop response poles of the plant output to 0.5 ± i0.1. The sliding mode controller, as a strong candidate for robust control, is considered as a reference with parameters c1 = 1, c2 = −1.85, εT = 0.0001, qT = 0.3 [83, 84, 85]. For the optimal robust controller, the gain matrices are designed as

Q = I,   R = 0.55



As we discussed in the previous chapter, control performance is determined by modeling performance as well as controller performance. Under the condition that the controller is stable, the control performance will eventually converge to the minimum of the modeling error. In order to deliver the results more clearly, we show the figures in












Figure 5-18. Performance of PID controller with different modeling approaches.


the following sequence: we show the different-model-same-controller results first to verify the effect of the modeling part, then the same-model-different-controller results.

Figure 5-18 displays, under the noise-free condition, the set-point control performance of the PID controller with different modeling approaches, plus a TDNN controller as a reference. Recalling the system function (5-8), the region around −1.5 is where the nonlinear dynamics takes the largest effect. Figure 5-18 clearly shows that: 1) the single-model PID (PID-MA) has a longer settling time and the largest vibration before convergence because of its modeling performance; 2) TDNN, due to its structural difference, has the shortest falling time yet the longest settling time; 3) the Gaussian mixture model PID controller (MPID-GMM) has very close performance to the SOM multiple model PID controller (MPID-SOM); 4) MPID-SOM gives the most stable performance, yet it needs more training. For now we can conclude that GMM-LLM, when compared with other modeling approaches, can save much computational cost while preserving the final control performance to a large extent.













Figure 5-19. GMM-LLM based control performance on the missile system.
(a): Step response without noise (output and desire); (b): Step response with noise
(output and desire); (c): Tracking response without noise (output and desire); (d):
Tracking response with noise (output and desire).



Now we test the performance of the controllers under the condition that modeling uncertainty and measurement noise are present. The closed-loop nonlinear system is tested in two cases: one with various step changes in the desired output, and one with a smoothly changing desired output, which is the sum of several sinusoids with different amplitudes, phases, and frequencies. To simulate the worst condition, the disturbance is treated as modeling uncertainty. After generating the training data, we add 30 dB SNR noise to the system model during testing. In that case, the modeling









Table 5-5. Control performance comparison on missile system considering noise

Set-point Set-point Tracking Tracking
Error Mean Error Variance Error Mean Error Variance
MPID-GMM -0.0289 0.0272 0.0070 0.0108
Optimal -0.0222 0.0186 0.0048 0.0072
Sliding mode -0.0272 0.0202 -0.0024 0.0101













Figure 5-20. ESN control performance on the missile system.
(a): Noise free set-point performance, (b): Set-point performance with 30dB noise.


effect is negligible and control performance is emphasized. Figure 5-19 shows that the step and tracking performance of all the designed nonlinear control schemes in closed-loop operation with the actual plant model is very satisfactory. The GMM-LLM modeling error is insignificant in the final performance. Compared with the other two approaches, PID needs the least calculation for its coefficients. Considering the details of figure 5-19, as in table 5-5, the differences among those approaches are noticeable, in which the optimal robust controller demonstrates superior disturbance rejection compared with the others.

Finally, we try the ESN controller for set-point control in order to verify its feasibility. Following the structure in chapter 4, we construct a 300-PE ESN controller with a random readout vector. The input to the controller is the current system states without time embedding, and the model information is derived from the Gaussian mixture reference model. The BPTT length is set to 100 steps, which corresponds to 5 seconds in the real system. The whole system is recursively trained in 15 iterations. The final control performance is shown in figure 5-20. In the figure, we can see that the ESN controller can accomplish the regular control mission, yet its performance is not comparable to the optimal robust controller. Since the advantages of the ESN are its generalization capacity, simplicity of training, and its independence of time-embedding input, further investigation needs to be made into ESN based robust controllers.

5.3 Nonlinear MIMO System: LoFLYTE System

The last and most complex application we have considered is a MIMO nonlinear dynamic system, the LoFLYTE system from NASA. The LoFLYTE system has 6 degrees of freedom, 3-dimensional translation and 3-dimensional rotation, and 4-channel inputs. Since the flight motion can be divided into longitudinal motion and lateral motion, the modeling and control mission can be simplified into a 2-input-2-output mission. We will design the MIMO system identification model first, and realize an inverse controller and an optimal robust controller based on it.

5.3.1 System Introduction

The nonlinear MIMO dynamic system discussed here is the NASA LoFLYTE testbed aircraft, an unmanned aerial vehicle (UAV), which is designed and tested by Accurate Automation Corporation (AAC). The LoFLYTE testbed flies slower than the speed of sound, Mach one, but it has the same waverider shape designed for Mach five flight. The term waverider refers to the fact that aircraft of this type ride on the shock waves that they create as they fly above the speed of sound. Other supersonic and hypersonic aircraft suffer reduced performance because they





















Figure 5-21. General description of an aircraft


fight against the shock waves. The waverider shape improves fuel consumption by decreasing air resistance at speeds greater than Mach one. The full-scale aircraft will take off horizontally, then use air-breathing engines to accelerate to a cruising speed of Mach five at a very high altitude, and end its flight by landing on a conventional runway. The task here is to develop modeling and control strategies for LoFLYTE based solely on input-output data.

According to classical aerodynamics, the flight dynamics of any rigid body are determined by movements along and around three axes: roll, pitch (longitudinal motion), and yaw (lateral motion). Despite the strong coupling between system states, the main control effects can still be classified. The elevator δe is the main effect for controlling the longitudinal motion state variables (pitch angle θ and pitch rate q). The rudder δr primarily controls the lateral motion state variables (yaw angle ψ and yaw rate r). The aileron δa mainly controls the roll motion state variables (roll angle φ and roll rate p). Finally, the throttle δt largely controls the aircraft's longitudinal speed, and for some planes deflectable thrust vectors might allow yaw

























Figure 5-22. Dynamics of the LoFLYTE system.
Left: p (top), r (bottom), Right: Control signal, aileron (top), rudder (bottom).


and roll contributions from the engine power in some cases. Under certain symmetry assumptions for the aircraft body, the state dynamics of the rigid-body aircraft can be described as follows:

u̇ = (v r − w q) − g sin θ + Fx/m

v̇ = (w p − u r) + g cos θ sin φ + Fy/m

ẇ = (u q − v p) + g cos θ cos φ + Fz/m

ṗ = [(Iyy − Izz) q r + Ixz (ṙ + p q) + L] / Ixx

q̇ = [(Izz − Ixx) p r + Ixz (r² − p²) + M] / Iyy        (5-9)

ṙ = [(Ixx − Iyy) p q + Ixz (ṗ − q r) + N] / Izz

φ̇ = p + q sin φ tan θ + r cos φ tan θ

θ̇ = q cos φ − r sin φ

ψ̇ = q sin φ sec θ + r cos φ sec θ
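A sketch of the rigid-body derivative computation, with the Ixz cross-inertia terms neglected so that the moment equations decouple (a simplifying assumption made here only for the sketch; the full equations (5-9) keep Ixz):

```python
import numpy as np

def aircraft_deriv(state, forces, moments, m, Ixx, Iyy, Izz, g=9.81):
    """Right-hand side of the rigid-body equations (5-9), assuming Ixz = 0."""
    u, v, w, p, q, r, phi, theta, psi = state
    Fx, Fy, Fz = forces
    L, M, N = moments
    du = v * r - w * q - g * np.sin(theta) + Fx / m
    dv = w * p - u * r + g * np.cos(theta) * np.sin(phi) + Fy / m
    dw = u * q - v * p + g * np.cos(theta) * np.cos(phi) + Fz / m
    dp = ((Iyy - Izz) * q * r + L) / Ixx
    dq = ((Izz - Ixx) * p * r + M) / Iyy
    dr = ((Ixx - Iyy) * p * q + N) / Izz
    dphi = p + (q * np.sin(phi) + r * np.cos(phi)) * np.tan(theta)
    dtheta = q * np.cos(phi) - r * np.sin(phi)
    dpsi = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return np.array([du, dv, dw, dp, dq, dr, dphi, dtheta, dpsi])
```

As a quick consistency check, a level, unaccelerated state with lift exactly cancelling weight (Fz = −mg) yields an all-zero derivative vector.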









where u, v, and w are the speed components of the aircraft along its body axes x, y, and z respectively; p, q, and r are the angular speeds around those axes; and φ, θ, and ψ are the Euler angles that define the rotation matrix between the body coordinate frame and the inertial coordinate frame. The gravity g is along the down direction of the inertial coordinate frame. The throttle, engine power, and aerodynamic effects generate the forces Fx, Fy, and Fz as well as the moments L, M, and N. The quantities m, Ixx, Iyy, Izz, and Ixz are the aircraft mass and moments of inertia, which are determined by the aircraft's geometry.

The LoFLYTE program is an active flight test program at the Air Force Flight Test Center at Edwards Air Force Base. The LoFLYTE aircraft is simulated using software (a C++ version or a Matlab version) by AAC and is assumed to be close to the true plant. Without loss of generality, the throttle is assumed constant and the state variables p, q, r, u, v, w are available for external measurement. The goal of the system identification and control problem is to determine local linear models from the four inputs (aileron, elevator, rudder, and throttle) to the six state variables and to control them in order to track a desired flight trajectory.

5.3.2 System Identification and Control experiment

The experiment here is designed to use the Matlab version of the ACC flight simulator. The control signal is generated by manually flying with a 3-dimensional joystick to imitate a test flight. The system transfer function is assumed unknown, and the state variables (the output data) are obtained from the Matlab output.

In order to simplify the problem, the longitudinal motion of the system, along the x (forward) and z (down) axes together with the pitch rate q, is considered decoupled from the lateral motion, which consists of the y (right) axis, roll rate p, and yaw rate r. Instead of dealing





















Figure 5-23. System Identification performance on the LoFLYTE system.
(a): desired p and GMM-LLM prediction (top), error (bottom); (b): desired r and
GMM-LLM prediction (top), error (bottom);(c): ESN on p (top), error (bottom);
(d): ESN on r (top), error (bottom).


with six variables at the same time, the whole system is broken into two parts, and the problem is simplified to modeling the LoFLYTE's p and r dynamics using input-output data comprising the aileron, rudder, p, and r signals. Figure 5-22 shows the dynamics of the LoFLYTE with the corresponding control signals. The local linear form of the system transfer function can be expressed as


    x_{n+1} = sum_{i=0} A_i x_{n-i} + sum_{i=0} B_i u_{n-i}          (5-10)
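To illustrate how a bank of such local linear models is combined in practice, the sketch below soft-weights two (A, B) pairs by Gaussian posterior (gating) weights. Every kernel parameter and matrix here is made up for illustration; none are taken from the trained LoFLYTE models.

```python
import numpy as np

# Two hypothetical GMM kernels (means, covariances, priors) paired with two
# local linear models x_{n+1} = A x_n + B u_n; all values are illustrative.
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
A = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.5, 0.0], [0.1, 0.6]])]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]

def gaussian_pdf(x, mean, cov):
    d = x - mean
    norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

def predict(x, u):
    """Soft-combined one-step prediction: the gating weights are the
    Gaussian posteriors of the current state, as in the GMM-LLM scheme."""
    w = np.array([p * gaussian_pdf(x, m, c)
                  for p, m, c in zip(priors, means, covs)])
    w /= w.sum()
    return sum(wi * (Ai @ x + (Bi @ u).ravel())
               for wi, Ai, Bi in zip(w, A, B))
```

Near the first kernel's mean, the first local model dominates the prediction; the blend shifts continuously as the state moves toward the second kernel, which is exactly the smooth-switching property discussed in the text.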



Full Text

PAGE 4

I would like to express my sincere gratitude to my advisor, Dr. Jose C. Principe, for his continuous guidance, support, and help throughout the past five years of my Ph.D. study. Without his invaluable encouragement and patience, this work would have been impossible. He gave me the privilege to work in the Computational NeuroEngineering Lab and allowed me the freedom to explore the unknown world, and was always attentive and pertinent in critical moments. My deep appreciation is also extended to Dr. John G. Harris, Dr. John M. Shea, and Dr. Loc Vu-Quoc for their interests and participation on my supervisory committee. My deep recognition goes to Dr. Deniz Erdogmus and Dr. Mark A. Motter for their help and informative comments during my stay in CNEL. I also owe many thanks to all my colleagues in the CNEL group for their discussion of ideas and friendship. My wife, Yun Zhu, has my deepest appreciation for her great love and unconditional support. Finally, I am grateful to my parents and my brother for their love.

PAGE 5

page

ACKNOWLEDGMENTS ............................. iv
LIST OF TABLES ................................. vii
LIST OF FIGURES ................................ viii
ABSTRACT .................................... x

CHAPTER

1 INTRODUCTION .............................. 1
  1.1 Mission of System Identification and Control ............. 1
  1.2 Local Modeling with Improved Gaussian Mixture Model ...... 5
  1.3 Dissertation Outline .......................... 8

2 NONLINEAR DYNAMICAL SYSTEM MODELING ........... 10
  2.1 Nonlinear Dynamical Systems ..................... 10
  2.2 Time Embedding State Space Reconstruction ............ 11
  2.3 Global Dynamical System Modeling .................. 13
    2.3.1 Polynomial Modeling ...................... 14
    2.3.2 Echo State Network ....................... 15
  2.4 Local Dynamic Modeling ........................ 20
  2.5 Summary ................................ 22

3 GAUSSIAN MIXTURE MODELS AND LOCAL LINEAR MODELS .. 24
  3.1 Introduction ............................... 24
  3.2 Maximum Likelihood Estimation Training -- EM Algorithm ..... 29
  3.3 Estimation of Local Linear Model ................... 32
  3.4 Improvement on GMM ......................... 36
  3.5 Comparison of the GMM with the SOM ............... 38
  3.6 Summary ................................ 41

4 GAUSSIAN MIXTURE MODEL BASED CONTROL SYSTEM ..... 42
  4.1 Introduction of Mixture Model Controller ............... 43
  4.2 Inverse Error Dynamic Controller Using GMM ............ 46
  4.3 PID Controller ............................. 50
  4.4 Optimal Robust Controller ....................... 52

PAGE 6

    ........... 55
    4.4.2 LLM-based Optimal Tracking Controller ........... 59
  4.5 Summary ................................ 61

5 APPLICATIONS AND DISCUSSIONS ................... 63
  5.1 Chaotic Systems ............................ 63
    5.1.1 The Lorenz System ....................... 64
    5.1.2 The Duffing Oscillator System ................. 70
  5.2 SISO System .............................. 75
    5.2.1 Linear SISO Mass-Spring System ............... 75
    5.2.2 Nonlinear SISO Missile System ................. 78
  5.3 Nonlinear MIMO System: LoFLYTE System ............. 85
    5.3.1 System Introduction ....................... 85
    5.3.2 System Identification and Control experiment ........ 88

6 CONCLUSIONS ............................... 97
  6.1 Summary ................................ 97
  6.2 Future Work ............................... 98
  6.3 Conclusion ................................ 99

APPENDIX

A OPTIMIZING PARAMETERS OF ESN .................. 100

B ESN-BASED DAC CONTROLLER ..................... 104

REFERENCES ................................... 110

BIOGRAPHICAL SKETCH ............................ 117

PAGE 7

Table                                                            page

3-1 Performance on 32-D LoFLYTE data .................... 38
5-1 Performance on the Lorenz system ..................... 67
5-2 Performance on the Duffing oscillator ................... 71
5-3 Performance on the linear mass system .................. 76
5-4 Performance on the missile model ...................... 80
5-5 Control performance comparison on missile system considering noise ... 84
5-6 Performance on the LoFLYTE simulation data .............. 91
5-7 Control performance comparison on the LoFlyte system considering noise 92

PAGE 8

Figure                                                           page

1-1 1-D function estimation by GMM-LLM. .................. 5
2-1 Block diagram of the ESN. ......................... 17
2-2 Distribution of ESN internal weights with maximum spectral radius being 0.9, with unit circle as a reference. ..................... 19
3-1 Structure of GMM with LLM. ....................... 26
3-2 Explanation of G-SOM algorithm. ..................... 37
3-3 Soft combination region using GMM (between dash line) ......... 40
4-1 Structure of Control System based on GMM-LLM ............ 43
4-2 T-S fuzzy logic membership function ................... 45
4-3 Structure of Inverse Control System .................... 46
4-4 Pole-placement PID control system. .................... 51
4-5 Optimal robust control system schematic diagram. ............ 59
5-1 Dynamics of the Lorenz system ...................... 64
5-2 System identification performance on the "x" of the Lorenz system. ... 66
5-3 Distribution of the Lorenz system (dash line) and GMM weights (star) 67
5-4 Weights change of 8 GMM kernels ..................... 68
5-5 Performance comparison between SOM and GMM ............ 69
5-6 Compensation performance ......................... 69
5-7 Dynamics of the Duffing oscillator as an autonomous system. ...... 71
5-8 System identification performance on the Duffing system ........ 72
5-9 Sample of weights change of 8 GMM kernels ............... 73
5-10 Control performance of different controllers on the Duffing system ... 74
5-11 Dynamics of the mass system. ....................... 76

PAGE 9

     .... 77
5-13 Sample of weights change of 6 GMM kernels for mass system ...... 78
5-14 Set-point control performance on the mass system. ............ 79
5-15 Missile system dynamics. .......................... 79
5-16 Sample of system identification performance on the missile system. ... 80
5-17 Sample of weights change of 8 GMM kernels for missile system ..... 81
5-18 Performance of PID controller with different modeling approaches. ... 82
5-19 GMM-LLM based control performance on the missile system. ...... 83
5-20 ESN control performance on the missile system. .............. 84
5-21 General description of an aircraft ..................... 86
5-22 Dynamics of the LoFLYTE system. .................... 87
5-23 System Identification performance on the LoFLYTE system. ...... 89
5-24 Sample of weights change of GMM kernels for the LoFLYTE system ... 90
5-25 Control performance on the LoFLYTE system (set-point, noise free). .. 93
5-26 Control performance on the LoFLYTE system (tracking, noise free). ... 94
5-27 Robust control performance on the LoFLYTE system (set-point, 30dB SNR noise). .................................. 95
5-28 Robust control performance on the LoFLYTE system (tracking, 30dB SNR noise). .................................. 96
A-1 Distribution of improved ESN internal weights with maximum spectral radius is 0.9, with unit circle as a reference. ................ 101
B-1 Demonstration system with ESN reference model and controller ..... 105
B-2 Schematic diagram of control system with ESN as controller based on Gaussian models ............................... 108

PAGE 10

In this dissertation, we present a methodology of combining an improved Gaussian mixture model (GMM) with local linear models (LLM) for dynamical system identification and robust predictive model control. In order to understand the advantage of the mixture model, its structure and training are discussed in detail. A growing self-organizing map is utilized to improve the random initialization of mixture models, which makes the GMM convergence more stable. To increase local modeling capability and decrease modeling error, local linear models are trained based on GMM as one-step predictors. Following the local modeling approach, a series of controllers are designed to realize a tracking application, among which the optimal robust control shows better robustness over other controllers. Five application systems with different dynamics are simulated in order to verify the modeling and control capability of the improved Gaussian mixture model. Through experiments and comparison with self-organizing maps, radial basis functions, and other methodologies, it is shown that the improved GMM-LLM approach is a more flexible modeling approach with higher

PAGE 11


PAGE 12

[1]. In particular, the analysis of linear identification and control for nonlinear dynamical systems has been studied by many researchers, leading to a better understanding of various mechanisms, including observability, controllability, stability, and robustness [1, 2, 3]. The best developed aspect of the theory treats systems defined by linear operators, using well established techniques based on linear algebra and the theory of linear differential equations. Nevertheless, academic research in the last several decades has concentrated on problems involving the identification, stabilization, and control of nonlinear dynamical systems. These efforts have now matured into a broad group of theories on nonlinear dynamical systems applications. Initial efforts in this area pursued parametric approaches, inspired by the established linear system theory, in which the dynamical equations of the system are assumed to be known from physical principles, with some uncertainty in certain parameters. In this framework, the system identification and system control problems are decoupled, and therefore they can be solved sequentially. Recently, adaptive system identification and control methodologies have been investigated too, which has led to a good understanding of adaptation in linear systems and a satisfactorily general insight into adaptation in nonlinear control systems [4, 5, 6, 7].

PAGE 13

Control theory deals with the manipulation of the behavior of dynamical systems to satisfy certain desired output constraints [8, 9]. A classical parametric design procedure follows the system modeling (identification) and controller selection stages. In the case of traditional identification based on models derived from physical principles, the data are used to estimate the unknown parameters of the physically motivated system, whereas modern approaches stemming from the advances in neural network modeling introduce black-box function approximation schemes [4]. The neural network modeling capabilities may be further enhanced using multiple sub-models, in the context of switching between multiple adaptive models, to achieve closed-loop systems that enhance transient behavior and cope with modeling uncertainties better than a single model. Following the system identification stage, depending on the chosen modeling approach, the controller is designed typically using classical techniques based on linear system theory as well as classical or neural-network-based nonlinear techniques [1, 10].

Modeling techniques can be classified into global and local. Global modeling uses a single parametric system and fits it all over the state space. The global modeling methods estimate the dynamics of the system in one framework. Using a single kernel or multiple kernels, the model is trained with the whole training data set. The most distinguishing feature of global modeling is that it is trained with all the data points, and in the system identification phase all kernels take effect on every step of prediction. Good examples of global modeling are the multi-layer perceptron (MLP) and the radial basis function (RBF), even though the latter uses local kernels. When estimating the distribution in the input space, each kernel in the RBF's hidden layer accepts all the data from the training data set. Recurrent neural networks

PAGE 14

(RNN) are another kind of powerful computational structure for global modeling [3]. But because of their complex inner structure, efficient training algorithms are necessary to achieve successful learning for the given task. Another difficulty for global modeling arises when the dynamical system characteristics vary considerably over the operating regime, effectively bringing the issue of time-varying nonlinear parameters into the model design. Because of the complexity and possible time-variability of the nonlinear dynamical system, global modeling and designing corresponding controllers is a difficult and time-consuming task.

Local modeling, which utilizes a divide-and-conquer approach to solve complicated problems, breaks the fitting of the system into smaller and easier pieces which can be managed by simpler topologies. Using multiple kernels, the local model is trained in such a way that a kernel only gets training information from those data which are located within the region that the kernel defines. In the system training phase, every input datum is located first, and only the "winner" kernel will later take effect on identification. To better understand this point, let us take the self-organizing map (SOM) [11] as an example, with its winner-takes-all criterion. Therefore the local modeling technique can be regarded as a piece-wise modeling approach, and the pieces can be grouped together when it is necessary to approximate global dynamics. Specifically, when each model is linear, the resulting model is a piece-wise linear dynamical approximation to the globally nonlinear dynamical system. The advantages of such a partitioning approach are the following:

1. system identification complexity can be reduced significantly due to the scaling down of the estimation problem from one large task to multiple small and simple tasks;

PAGE 15

2. the local models can easily scale up to encompass more volume in the state space by adding new models as new data portions are acquired, which is especially of benefit when the training data cannot cover the whole state space;

3. the design of a control system for the nonlinear system can be reduced to the design of multiple simple local controllers, among which cooperation is possible to generate a single controller for the current plant.

The Gaussian mixture model with local linear models (GMM-LLM) is a local modeling approach, yet GMM training basically follows the global modeling training criterion because of the distributing property of the Gaussian density function. In the GMM approach, the model is generated by a multi-dimensional joint mixture of Gaussian distributions [12]. In other words, the distribution of state vectors is described by a soft combination of several Gaussian distributions, and the combination coefficients are determined according to the Gaussian posterior probability of the current state vector given a certain Gaussian distribution. Following the Gaussian posterior probability property, every Gaussian is trained by the whole training data set. Yet due to the nature of the nonlinear Gaussian distribution function, the weights will be large when the data are close to the centers of some Gaussian, which makes the Gaussian kernel's training emphasize its neighbor data more than the data far away. In the prediction part, it becomes clear that the Gaussian model takes effect locally. The benefit from this nature is that the GMM is globally well trained, and local dynamics are addressed as well. The latter is also the reason why we call the approach local modeling.

The problem of controller design can be further eased with the selection of local modeling. In this case, the global nonlinear controller can be transformed into multiple piece-wise linear controllers for nonlinear systems, for which many strong design tools are available [8, 9, 13, 14].

PAGE 16

Figure 1-1. 1-D function estimation by GMM-LLM. Top: Nonlinear function and its linear approximation; Bottom: GMM-LLM pairs

An example is shown in Figure 1-1. The GMM uses the Gaussian family to reconstruct the distribution of the state space vectors, applying a soft-combination instead of a winner-takes-all criterion. Each Gaussian component represents a certain characteristic in a specific location of the system dynamic space. Since the Gaussian distribution needs only two parameters, mean and covariance, the computational efficiency can be largely increased in GMM training. Because of this modeling property, GMMs have been widely applied in speech recovery, spectral analysis, etc. Schoner applied GMM for function approximation using supervised training [15]. Initializing the Gaussian kernels' weights from growing self-organizing maps (GSOM), a more stable global maximum convergence is achieved.

PAGE 17

Local modeling is usually based on the nearest-neighbors criterion. A clustering algorithm divides the training data set into a group of smaller sets based on the Euclidean distance or other similar criteria, and a model is trained corresponding to each data set. As Farmer and Sidorowich [16] have already shown, local linear models provide an effective and accurate approximation despite their simplicity. Local linear models have shown very good performance in comparative studies on time series prediction problems, and in many cases have generated more accurate predictions than global approaches [16, 17].

Building local mappings in reconstructed state space is a time and memory consuming process even though the performance can be satisfactory. A simplified approach is to quantize the state space and build local mappings only at positions where prototype vectors are located. This improvement, utilized in the self-organizing map (SOM), has been applied by Principe et al. to model autonomous and non-autonomous systems [18, 19]. Motter also utilized SOM-based modeling for the design of switching controllers [20].

One possible problem the SOM-based modeling cannot avoid is the quantization error in the modeling phase, due to the Euclidean distance between the prototype vector and the data closest to the vector. The SOM-based approach assumes the prototype vectors can represent the distribution of the whole state space and that the distance is negligible, yet that is true only when the number of prototype vectors is very large. Meanwhile, a large number of prototype vectors leads to further requirements on computational resources and training time.

The structure of GMM consists of a set of Gaussian models and a gating function. The weighting coefficient for a certain data point x corresponding to the Gaussian

PAGE 18

kernels is described by the gating function. As the distance between the state and the kernels determines the gating function value, it decreases the modeling error and produces smooth switching compared with the SOM method.

The parameters of the Gaussian mixture are initially unknown, and will be estimated based on incomplete data through the Growing-SOM method. This implies that parameters are updated in a cooperative rather than competitive way. The procedure is an updated weighting of the contribution of all data points given a particular Gaussian kernel. The training of GMM is based on the maximum likelihood criterion through the expectation-maximization (EM) algorithm. Section 3.2 discusses the EM training algorithm in depth. Following the Gaussian distribution density function, one local linear model is generated from local data for each Gaussian kernel. Thanks to the gating rule, the mixture of Gaussians framework is able to continuously vary the infrastructure of each piece-wise linear local model.

When the GMM-LLM is compared with the RBF, the advantage of GMM-LLM is the modeling of local dynamics. Even though both the GMM-LLM and RBF use a linear combination in the final stage, the mechanism by which they describe the local dynamics is different. RBF modeling uses a single fixed weighted combination of a group of nonlinear outputs from Gaussian kernels; the centers of the Gaussians are adapted, but not the covariances. The GMM-LLM, in contrast, uses a linear model to represent the local system dynamics, which makes the output change in a linear manner instead of a nonlinear way. Several such local models exist distributed throughout the state space, and they are weighted by the gating function to increase performance.
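A toy sketch of the EM loop described above, for a 1-D two-kernel mixture on synthetic data; the GSOM initialization discussed in the text is replaced here by a crude spread over the sample range, so this illustrates only the E- and M-steps, not the improved initialization.

```python
import math
import random

random.seed(1)

# synthetic 1-D data drawn from two well-separated clusters
data = [random.gauss(0.0, 0.5) for _ in range(200)] + \
       [random.gauss(4.0, 0.5) for _ in range(200)]

def em_gmm(xs, n_iter=50):
    mu = [min(xs), max(xs)]          # crude initialization (not GSOM)
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each kernel for each point
        resp = []
        for x in xs:
            like = [p * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                    for p, m, v in zip(pi, mu, var)]
            s = sum(like)
            resp.append([l / s for l in like])
        # M-step: re-estimate priors, means, and variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
    return pi, mu, var

pi, mu, var = em_gmm(data)
```

Note the cooperative flavor of the update: every data point contributes to every kernel, weighted by its posterior responsibility, which is exactly the soft-combination property that distinguishes the GMM from a winner-takes-all SOM.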

PAGE 19

Based on the mixture Gaussian reference model, we develop a set of controllers for non-autonomous and nonlinear plants. Starting from the EM training, the system model can be represented by GMM-LLM and the modeling error is bounded. Several controllers, such as dynamic inverse and pole-placement PID controllers, an optimal robust controller, and an echo state neural controller, are realized and employed using the mixture model for several systems with different dynamics. We emphasize the optimal controller and the echo state neural controller. We will show that the optimal controller exhibits excellent robustness against model uncertainty and external disturbance, and that the ESN controller has better generalization capability.

The dissertation is divided into five parts. In Chapter 2, some general descriptions and definitions of nonlinear dynamic systems and local modeling, time-embedding delay system reconstruction, and dynamic system identification are given. The criterion

PAGE 20

to select the embedding dimension will be discussed. The echo state network, a new kind of recurrent neural network, is presented for later use in controller design.

In Chapter 3, the GMM training methodology analysis is presented, which also includes improved GMM training through GSOM initialization, the generation of local linear models (LLM), and the comparison of the GMM-based approach to the SOM-based approach in order to address the computational efficiency of Gaussian modeling.

In Chapter 4, the descriptions of a set of controller design approaches based on GMM, the controllers' mathematical backgrounds, and their stability issues are discussed in detail.

Chapter 5 gives application examples on a set of real system models, and the main results of GMM-based applications are presented with comparisons and discussed in detail. For the different modeling methods, the modeling performance is compared based on the signal-to-error-ratio criterion. For the controllers, the control performance is compared based on the criteria of rising (falling) time, settling time, overshoot, and the covariance of the control error.

Chapter 6 gives the conclusion and contributions, and briefly summarizes future work.

PAGE 21

Dynamics describe how one system state develops into another state over the course of time. Technically, a dynamical system is a smooth function of the reals or the integers on another object (usually a manifold). "When the reals are acting, the system is called a continuous dynamical system, and when the integers are acting, the system is called a discrete dynamical system" [21]. A dynamical system expressed as a discrete time evolution rule can be described from the observation of its outputs. The dynamics of the system in state space can be defined as

    x(t+1) = f(x(t))
    y(t)   = h(x(t))                                    (2-1)

where x(t+1) is the next step state for discrete-time systems, x(t) are the system states, f() is a nonlinear transfer function usually referred to as the vector field,

PAGE 22

and h() is the output function. If the vector field f() is a linear function of the system states, the underlying system is linear; otherwise, the system is nonlinear. It is also assumed that the systems here are deterministic; thus the transition from the system's state x(t1) at time t1 to the state x(t2) at time t2 is governed by deterministic rules.

Equation (2-1) describes a system that follows the internal governing equations f() given a certain initial condition, and the system's dynamics are fully characterized by its manifold encoded in the function f(). Because of Takens' theorem, transforming the time series into a state space form will result in a dynamical equivalent to the original space [22]. Since the mathematical description of f() is unavailable for most real-world systems, with the formulation of the time embedding theorem, Takens presented a methodology to recover an equivalent description of f from observation of the system: [23]. The first is a procedure that yields a global measure of phase-space utilization for

PAGE 23

(quasi)periodic and strange attractors and leads to a maximum separation of trajectories within the phase space. The second describes the local dynamical behavior of points on the attractor and gives a measure of homogeneity of the local flow. Gao and Zheng proposed the local exponential divergence plot method for a chaotic time series, based on which the embedding dimension, delay time, and the largest Lyapunov exponent are estimated [24].

We determine the minimum time embedding dimension using the Lipschitz quotient [25, 26]. If the function is assumed to depend on n-1 inputs and it actually depends on n inputs, the data set may contain several points that are very close in the space spanned by the n-1 inputs but differ significantly in the nth input. Assuming the system function is smooth, if the (n-1)-D inputs are close, it is expected that the outputs from them are also close. If one or several relevant inputs are missing, outputs corresponding to different inputs will take different values. In this case, it can be concluded that the (n-1)-D input is not long enough.

The Lipschitz quotients in the 1-D case are defined as

    q_ij = |y(i) - y(j)| / |x(i) - x(j)|,    i != j
PAGE 24
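The Lipschitz-quotient test just defined can be implemented directly; the brute-force version below (an illustrative O(n^2) sweep over all pairs, not the text's exact procedure) returns the largest quotient for a candidate embedding dimension.

```python
def lipschitz_index(series, dim):
    """Largest Lipschitz quotient |y(i) - y(j)| / ||x(i) - x(j)|| over all
    pairs, where x(i) is a window of `dim` past samples and y(i) is the
    next sample."""
    xs = [series[i:i + dim] for i in range(len(series) - dim)]
    ys = [series[i + dim] for i in range(len(series) - dim)]
    q = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            dist = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j])) ** 0.5
            if dist > 1e-12:
                q = max(q, abs(ys[i] - ys[j]) / dist)
    return q

# toy check: an AR(2) series depends on exactly two past samples, so the
# index stays bounded at dim = 2 but blows up at dim = 1
series = [1.0, -1.0]
for _ in range(30):
    series.append(0.5 * series[-1] + 0.3 * series[-2])
q1, q2 = lipschitz_index(series, 1), lipschitz_index(series, 2)
```

For the AR(2) example, the dim = 2 index is bounded by the coefficient norm sqrt(0.5^2 + 0.3^2), while the dim = 1 index is much larger, which is the behavior the next paragraph describes.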

As discussed before, if n is not big enough, the Lipschitz index will be large. If all relevant inputs are included, the Lipschitz index will stay nearly constant.

Several approaches have been studied for global modeling. One representative of global nonlinear modeling is the time delay neural network (TDNN), which was first developed by Weibel and Lang in 1987 [27]. Using a time delay segment of the state as input data, the TDNN is structurally closely related to the multi-layer perceptron (MLP). The TDNN has been shown to be a universal mapper in multidimensional functional spaces with finite memory (especially myopic functional space) [28]. Another method is polynomial modeling. Polynomial modeling is applied as the basis for the realization of nonlinear mappings; Fuhrmann uses it in geometric control theory [29]. Recurrent neural networks (RNN) represent another kind of nonlinear modeling methodology. Researchers use RNN when regular mathematical equations cannot generate satisfying results [30, 31].

In the following sections, we will introduce polynomial modeling briefly, as we will use its first order expansion as a linear model in chapter 3. And we will discuss

PAGE 25

the echo state network (ESN) with a modified design method as a new global modeling approach.

    y~(n) = b_1 u(n-1) + ... + b_D u(n-D) - a_1 y(n-1) - ... - a_D y(n-D)    (2-6)

The polynomial modeling can be extended to a nonlinear ARMAX (NARMAX) model by replacing the linear mapper in (2-6) with a nonlinear function f():

    y~(n) = f(u(n-1), ..., u(n-D), y(n-1), ..., y(n-D))                      (2-7)

A more straightforward way to utilize polynomials for nonlinear system identification is to apply a polynomial for approximation of the one-step prediction function (2-7) with a low order expansion, which is called the Kolmogorov-Gabor polynomial model. The benefit from the polynomial model is that all the nonlinear coefficient calculations are avoided. A simplified version of the Kolmogorov-Gabor polynomial model uses less high-order information; also called the Volterra series model, it describes the nonlinearity only for the input:

    y~(n) = f[u(n-1), ..., u(n-D)] - a_1 y(n-1) - ... - a_D y(n-D)           (2-8)

Thus the second order, second degree polynomial function can be expressed as

    y~(n) = theta_1 + theta_2 u(n-1) + theta_3 u(n-2) - theta_4 y(n-1) - theta_5 y(n-2)
            + theta_6 u^2(n-1) + theta_7 u^2(n-2) + theta_8 u(n-1) u(n-2)    (2-9)
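Because models of the form (2-6)-(2-9) are linear in their parameters, they can be fit by ordinary least squares. The sketch below simulates a toy plant of exactly the second-order, second-degree form and recovers its coefficients; the minus signs on the output terms are absorbed into the coefficient vector, and every coefficient value is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def regressors(u1, u2, y1, y2):
    # constant, linear input/output, and second-degree input terms
    return [1.0, u1, u2, y1, y2, u1 * u1, u2 * u2, u1 * u2]

# simulate a toy plant of exactly this form (coefficient values are made up)
true_theta = np.array([0.1, 0.6, -0.2, 0.5, -0.1, 0.3, 0.0, 0.2])
u = rng.uniform(-1.0, 1.0, 300)
y = np.zeros(300)
for n in range(2, 300):
    y[n] = true_theta @ regressors(u[n - 1], u[n - 2], y[n - 1], y[n - 2])

# the model is linear in the parameters, so ordinary least squares fits them
X = np.array([regressors(u[n - 1], u[n - 2], y[n - 1], y[n - 2])
              for n in range(2, 300)])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
```

Since the simulated data are noise-free and the regressor matrix has full column rank, the least-squares estimate matches the true coefficients essentially exactly; this is the computational convenience the text attributes to the Kolmogorov-Gabor/Volterra formulation.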

PAGE 26

With this simplification the number of regressors is reduced, and thus the computation of the prediction is simplified. The price paid here is the necessary increase of the order D in order to describe the nonlinearity of the original system. For a linear model, there is no high order expansion, and the embedding length is increased when necessary.

[32, 33, 34]. Compared with other nonlinear neural networks (recurrent networks especially), the ESN has a smaller number of parameters that need to be trained. Moreover, the parameters are linear and they can be trained through the Wiener-Hopf approach [35].

Heterogeneous ordered networks or, more specifically, recurrent neural networks (RNN) are convenient and flexible computational structures applicable to a broad spectrum of problems in system modeling and control. There are many ways to make this statement more precise. For instance, RNNs can be cast as representations of the vector field of a differential system, or be trained to embody some desired attractor dynamics [31]. In those applications, systems without input (autonomous) are modeled. Since we wish to model input-driven systems, we adopt a standard perspective of system theory and view a deterministic, discrete-time dynamical system

PAGE 27

as a function, which yields the next system output as a prediction, given the input and output history [36].

As a special case of RNN, the ESN was first proposed by Jaeger [36, 37]. ESNs have an interconnected and recurrent topology of nonlinear PEs which constitutes a "reservoir of rich dynamics". The interesting property of the ESN is that only the memoryless readout of the ESN needs training, whereas the internal recurrent topology has fixed connections. This key property dramatically reduces the complexity of RNN training to simple linear regression while preserving some of the computational power of RNN.

We consider the recurrent discrete-time neural network as given in figure 2-1 with M input units, N processing units, and L output units. The value of the input units at time n is X(n) = [X_1(n), X_2(n), ..., X_M(n)]^T, that of the internal units is s(n) = [s_1(n), s_2(n), ..., s_N(n)]^T, and that of the output units is y(n) = [y_1(n), y_2(n), ..., y_L(n)]^T. The system transfer function of the ESN can be expressed as

    s(n+1) = f(W s(n) + W_in X(n+1) + W_back y(n))
    y(n+1) = f_out(W_out s(n+1))                              (2-10)

where, for most of the cases, the nonlinear output function f_out() is eliminated, and only a linear output is used.

The critical difference between the ESN and RNN lies in the adaptation of the recurrent weights. The ESN compensates for not training the recurrent connection weights by increasing the dimensionality of the hidden layer. In this way, the ESN creates a diversity of dynamics with a large number of PEs in its reservoir, and the readout is simply trained to pick up the dynamical components required for a given desired output signal. Hence, it is crucial for the operation of the ESN to include a reservoir with "enough

PAGE 28

Figure 2-1. Block diagram of the ESN.

richness" of dynamics. And in most engineering applications of the ESN, e.g., system identification, the optimal linear mapper is obtained through recursive algorithms which minimize the mean square error (MSE) or higher order error momenta.

In the ESN, the echo states form basis functions that are a nonlinearly filtered version of the input signals [35, 38]. The linear independency of the echo states is almost always satisfied because of the nonlinear PEs, except in rare pathological cases when W and W_in have symmetrical entries with respect to any two PEs in the recurrent network. Note that the echo states are not orthogonal to each other, since there are connections between PEs making the states correlated. With this interpretation, all that the linear read-out is doing is to combine the basis functions derived from the input to create a given desired signal. The span of the ESN can be defined as the set of all functions defined by (2-10), where f_out is linear, for all values of W_out. Here, the critical question is how big the span of the echo states is. This is directly related to the selection of the weights of the ESN reservoir. Currently, there are two ways

PAGE 29

to generate W. The first method selects W randomly from a sparse matrix with the help of the echo state condition ||W|| < 1 [36, 37], where ||.|| is the spectral radius (SR). For the second method, W is selected such that the maximum entropy is achieved [35]. The echo state condition is a stability condition that guarantees that the states are only controlled by the input, whereas the sparseness is required to avoid much correlation between the states by minimizing the connections among them.

Assuming the unknown dynamical system transfer function is g(), the objective of system identification is to generate an estimation function g~() such that the prediction error is small,

    min E[ (g(X(n)) - g~(X(n)))^2 ]

Let the linear mapper (assume a single output) be denoted by the weight vector W_out. Considering the cost criterion is MSE, the readout vector can be easily computed through the Wiener-Hopf equation as

    W_out = R^{-1} P

where R = E[s^T s], P = E[s^T d], s is the ESN internal states trained from the input sequence, and d is the corresponding desired sequence from the training data.

We now present a more general formulation of the training procedure for the case with no input feedforward and output feedback.

PAGE 30

Figure 2-2. Distribution of ESN internal weights with maximum spectral radius being 0.9, with unit circle as a reference.

Generate W as shown in figure 2-2, run the network (2-10) to obtain the internal states s, and then dismiss from this run a suitably long initial transient period.
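The generation of W and the Wiener-Hopf readout solution can be sketched as follows; the reservoir size, sparsity, input scaling, and the toy identification target are all illustrative choices rather than values from the text, and a small ridge term is added for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100                                              # reservoir size (illustrative)
W = rng.uniform(-1.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse random
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # rescale to spectral radius 0.9
Win = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    """Fixed (untrained) recurrent dynamics: s <- tanh(W s + Win u)."""
    s, states = np.zeros(N), []
    for un in u:
        s = np.tanh(W @ s + Win * un)
        states.append(s.copy())
    return np.array(states)

# toy identification target: d(n) = 0.5 u(n) + 0.3 u(n-1)
u = rng.uniform(-0.5, 0.5, 1000)
d = 0.5 * u + 0.3 * np.concatenate(([0.0], u[:-1]))
S = run_reservoir(u)[50:]                            # dismiss the initial transient
dd = d[50:]
# Wiener-Hopf-style readout: Wout = R^{-1} P with R = S^T S, P = S^T d
Wout = np.linalg.solve(S.T @ S + 1e-8 * np.eye(N), S.T @ dd)
mse = np.mean((S @ Wout - dd) ** 2)
```

Only `Wout` is trained; W and Win stay fixed, which is the defining property of the ESN discussed above.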

PAGE 31

Local modeling approaches, on the other hand, divide the dynamic space into several parts. Each local model is trained fitting one part of the data from the state space. The local linear approximation can be considered a simplified alternative to nonlinear modeling. The approach of locally linear ARX has been widely discussed in the time series literature. It is a state-dependent sequential type of recursive algorithm for identifying dynamic models using simple linear regression or interpolation [2]. Under the assumption that f() is smooth in the vicinity of x(n), f() can be approximated by the first order term of its Taylor series expansion,

    f(x(n)) ~ a_n^T x(n) + b_n                             (2-13)
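A common concrete realization of this first-order local model is to fit a^T x + b by least squares on the nearest neighbors of each query point. The sketch below does this for a toy smooth map; the neighbor count, sample size, and test function are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def local_linear_predict(X, y, query, k=20):
    """First-order local model: fit f(x) ~ a^T x + b by least squares on
    the k nearest neighbors of the query, then evaluate at the query."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])        # affine design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return float(np.append(query, 1.0) @ coef)

# toy smooth map; the samples stand in for a reconstructed state space
X = rng.uniform(-2.0, 2.0, (2000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
pred = local_linear_predict(X, y, np.array([0.3, -0.5]))
```

With a dense enough sample, the first-order fit is accurate near the query even though the underlying map is nonlinear, which is the point made about local linear models being competitive with higher-order approximations.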

PAGE 32

especially for systems with complex behavior. Casdagli also stressed that the great advantage of local modeling is the flexibility in building the model [39].

As the central mission of modeling, the predictive mapping f: R^D -> R can be expressed through time embedding as

    x(n+1) = f(x(n))

where f() is a deterministic ARMA model with time-embedded input vector x(n) = [x(n), x(n-1), ..., x(n-D+1)]. With the method of local modeling the state space is partitioned into small regions and a local model is established to describe the dynamics in each region. Thus local models vary from region to region.

The approach of locally ARMA models has been discussed in the time series literature by Priestley [40]. An improved approach was proposed by Farmer and Sidorowich [16], who also extended ARMA models to locally polynomial approximations of higher order. The first order expansion, which is a linear model, is an effective local modeling method, and the approximations with high-order polynomials are not significantly better than those obtained with first order. In our work, we will use the first order expansion.

In many local model applications, the self-organizing map (SOM) has been utilized to divide the operating region into small parts, where one local model is trained corresponding to one part of the state [5, 11, 18, 19]. The SOM is appropriate for switching nonlinear relationships of high-dimensional data into simple geometric relationships that preserve the topology in the state space [41]. Principe and Wang modeled a chaotic system with a SOM-based local modeling method [19]. Cho


18, 20].

The role of the GMM in system identification is to estimate the distribution of the state space. According to the distribution density of each Gaussian kernel within the GMM, the current state of the system is described by one or several Gaussian models. To complete the predictive modeling, a set of local models is also trained based on the GMM. Following the GMM's criterion, one local model is linked to one Gaussian kernel, and the two make up the identification model pair. The details of the improved GMM construction and training, and of the corresponding LLM training, are discussed later in chapter 3.

From the nature of the ESN, one advantage is its nearly training-free construction procedure, even though it is no longer linear modeling. Since its internal states are randomly generated, the only part that needs training is the readout vector; no training is necessary for the states. And since the readout is of linear form, the training for the readout vector can be easily done through Wiener-filter-like methods. Thus, through the functionality of "internal states," the ESN requires less training computation compared with other recurrent networks. Another advantage of the ESN is its input structure over the time-embedding data structure. Since the ESN has a "reservoir" of internal states and pre-defined feedback, these internal states can be deemed an "internal delay" applied to the input; the ESN can record the input history of a given system, whereas time embedding may need a longer data structure. We will also show that the ESN with maximum entropy can accomplish the system identification task with minimum MSE.


42], however it is based on a well-defined statistical framework for density estimation. The GMM is also similar to the first layer of the radial basis function (RBF) network used in nonlinear system approximation. In fact, the kernels in the RBF are also Gaussian functions, which are spread over the data space using a clustering algorithm [42, 43]. Normally the covariances of these Gaussian kernels are not trained from the data. The RBF output layer combines these internal states linearly into a single projector that is trained with the desired response to extract the information in the joint space. The GMM-LLM approach combines these two basic strategies by creating a local partition of the input space, as in the PA algorithm, and employing multiple local linear models attached to each partition, instead of nonlinearly combining the intermediate nonlinear states as in the RBF.

Figure 3-1 depicts the elements of the GMM-LLM architecture. The input data is modeled by a mixture of Gaussians, and the influence of each Gaussian is evaluated by a weight that represents the likelihood of the data being generated by the corresponding Gaussian mode. During training, the data from each Gaussian cluster is used to train the LLMs. During testing, the input is sent directly to all the LLMs, and their outputs are weighted by the probability that the sample was generated by each model before being combined by the output adder. Note that during testing these weights change with the data, providing greater flexibility when compared with other local modeling alternatives. This architecture is principled, using well-established statistical methods, very flexible, and still very simple because it uses only linear models.

Figure 3-1. Structure of GMM with LLM. Top: training schematic; Bottom: testing schematic.

The GMM-LLM approach is conceptually an alternative to the self-organizing map (SOM)-LLM reviewed in chapter 2. The SOM clusters the system state space into a group of small regions using a winner-takes-all operator [42]. Each small input region, also called a Voronoi tessellation, is associated with one linear model that is trained with the data from the tessellation and that attempts to approximate the desired response when instantiated. Globally the modeling equation could be described as (3-1). Using soft combination instead of the winner-takes-all criterion, the GMM does the local modeling in a cooperative manner. Even though it still follows the description of the first part of (3-1), it doesn't limit the coefficients to be "1" or "0". Consequently, the GMM-LLM approach can reach similar SI performance with fewer PEs compared with the SOM-LLM approach. Moreover, soft combination generates smooth switching among different models, and can alleviate the difficulty faced by winner-takes-all when it chooses the wrong winner, in which case the corresponding LLM generates a worse SI result.


Another competitor to the GMM-LLM is the fuzzy logic approach. The most popular fuzzy logic approach is the Takagi-Sugeno (T-S) fuzzy model [44], which is also a local modeling method. Similar to the GMM-LLM, the T-S fuzzy model uses multiple linear models for SI and a soft combination of linear membership functions to weight the outputs from the linear models. The T-S fuzzy model divides the state space into several overlapping regions using a set of simple "if-then" rules; one rule corresponds to one linear membership function. Besides the order of the membership function, the difference between the T-S fuzzy model and the GMM-LLM lies in the training method. The T-S fuzzy model needs prior knowledge about the system for both membership-function training and linear-model training. Unlike the unsupervised training in the GMM, the T-S fuzzy model is trained in a supervised manner, which increases the complexity of modeling.

Assuming the training data set covers the state space of the target system, the state space can be described by the embedded (reconstructed) data in the appropriate dimensional space with an optimal delay. The goal is to use a set of Gaussian kernels to estimate the structure of the eventually complex data distribution in the reconstructed space. The only necessary parameters for a Gaussian distribution are its mean and covariance, which saves time and computation during model training. This simplicity is one reason why the Gaussian density function is chosen as the kernel function.

Setting p(x̃) as the distribution density of the current state x̃, p(x̃) can be expanded in a sum over Gaussian clusters c_k. Together the Gaussian clusters cover the whole state space defined by (μ_k, Σ_k), and the probability density of each cluster reflects the domain of influence of that cluster:

p(x̃) = Σ_{k=1}^{K} p(c_k) p(x̃|c_k) = Σ_{k=1}^{K} p(c_k) g(x̃|c_k)    (3-2)

where g(·) indicates that we explicitly state the form of the conditional Gaussian distribution densities,

g(x̃|c_k) = (2π)^{−D/2} |Σ_k|^{−1/2} exp[−(1/2)(x̃ − μ_k)^T Σ_k^{−1} (x̃ − μ_k)]    (3-3)

where x̃ is the current state, and μ_k and Σ_k are the mean vector and covariance matrix of cluster c_k in the feature space. Also, the summation over all the clusters is normalized by the cluster probabilities p(c_k) to satisfy

Σ_{k=1}^{K} p(c_k) = 1    (3-4)

If the elements of the input x̃ are independent of each other, or the cross-correlation is negligibly small, the covariance matrix can be reduced to a diagonal matrix, which in turn simplifies the computation of the Gaussian density function:

g(x̃|c_k) = Π_{d=1}^{D} (2πσ_{d,k}²)^{−1/2} exp[−(x̃_d − μ_{d,k})² / (2σ_{d,k}²)]    (3-5)

where D is the input dimension.

Equations (3-3) and (3-5) show that the GMM uses a linear combination of nonlinear estimation models to build a powerful global system model; meanwhile, the GMM describes the local data features with relatively simple models that satisfy the description of (3-2).

[45]. We describe the maximum log-likelihood function as

L = Σ_{n=1}^{N} ln p(x̃_n) = Σ_{n=1}^{N} ln[Σ_{k=1}^{K} p(c_k) g(x̃_n|c_k)]    (3-6)

where N and K are the total numbers of data points and Gaussian clusters, respectively.

Since the Gaussian model parameters are calculated from the observed data set and describe the local distribution of the data points, a recursive search is necessary to find the optimal Gaussian model parameters. During the search, the maximum likelihood function is used as the optimization criterion, so two estimators are combined in each joint update: one for the estimation of the likelihood, and one for the distribution of each Gaussian kernel. The Expectation-Maximization (EM) algorithm [46] is applied to search for the optimal parameters of the Gaussian kernels. The EM algorithm has gained wide appreciation as an efficient algorithm for probabilistic network training. EM assumes that the known states (the observed data) and the unknown states (the Gaussian parameters) characterize the whole system model.
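As a concrete illustration of the mixture density (3-2) and the Gaussian kernel (3-3), here is a minimal numpy sketch (function and variable names are ours, not from the text):

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """Full-covariance Gaussian density g(x|c_k), equation (3-3)."""
    D = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (-D / 2) * np.linalg.det(cov) ** (-0.5)
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

def gmm_density(x, priors, mus, covs):
    """Mixture density p(x) = sum_k p(c_k) g(x|c_k), equation (3-2)."""
    return sum(p * gaussian_density(x, mu, cov)
               for p, mu, cov in zip(priors, mus, covs))
```

With diagonal covariances, `gaussian_density` reduces to the product form of (3-5).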


As an unsupervised approach, the EM-trained GMM only needs the number of Gaussian kernels K; all other parameters are obtained through training.

The recursive process of the EM algorithm starts with the E-step (expectation calculation), during which the cluster parameters are assumed correct. The expectation of the log-likelihood (3-6) of the complete conditional probability given the observed samples is computed. According to Bayes' theorem, the posterior probabilities are proportional to the likelihood function. So, equivalently, the posterior probabilities, which relate each Gaussian cluster to each data point in the conditional probability form p(c_k|x̃), can be evaluated as

p(c_k|x̃) = p(x̃|c_k) p(c_k) / Σ_{i=1}^{K} p(x̃|c_i) p(c_i)    (3-7)

The sum in the denominator normalizes among the Gaussian clusters, and the numerator determines the largest possible density function for each data point. Thus the maximum likelihood of the whole data set will monotonically increase.

In the M-step (maximization calculation), the distribution density function of each data point, which describes the likelihood of the data given a certain cluster, is temporarily fixed. The update improves the parameters of each Gaussian cluster: the unconditional cluster probability p(c_k), and the mean vector and covariance matrix (μ_k, Σ_k). Thus the likelihood is maximized recursively. In equation (3-8), considering that the number of data points is limited and the values are discrete in reality, the integral is replaced by the sum over all the training data:

p(c_k) = (1/N) Σ_{n=1}^{N} p(c_k|x̃_n)    (3-8)

The mean vector and covariance matrix of each Gaussian cluster are updated according to the stationary points of the likelihood with respect to the unknowns (μ_k, Σ_k). The derivative of the likelihood with respect to the mean gives

∂/∂μ_k Σ_{n=1}^{N} ln[p(x̃_n)] = 0
⟹ Σ_{n=1}^{N} ∂/∂μ_k ln[Σ_{i=1}^{K} p(c_i) g(x̃_n|c_i)] = 0
⟹ Σ_{n=1}^{N} [p(c_k) g(x̃_n|c_k) / p(x̃_n)] Σ_k^{−1} (x̃_n − μ_k) = 0    (3-9)

The mean update is then obtained through the maximum likelihood criterion:

μ_k = Σ_{n=1}^{N} p(c_k|x̃_n) x̃_n / Σ_{n=1}^{N} p(c_k|x̃_n)    (3-10)


The same result is achieved through the integration method as

μ_k = ∫ x̃ p(x̃) p(c_k|x̃) dx̃ / p(c_k)    (3-11)

Similarly, the Gaussian cluster covariance is

Σ_{k,ij} = ∫ (x̃_i − μ_{k,i})(x̃_j − μ_{k,j}) p(x̃) p(c_k|x̃) dx̃ / p(c_k)    (3-12)

The basic model estimation procedure can be summarized in five phases:
1. Pick initial conditions;
2. Estimate the data distribution density p(x̃|c_k);
3. Estimate the posterior probability of each cluster for each data point, p(c_k|x̃);
4. Estimate the cluster probability p(c_k) and the Gaussian cluster parameters, mean μ_k and covariance Σ_k;
5. Go back to step (2) until the increment of the likelihood function (3-6) is within a certain range.
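The five phases above can be sketched as a standard EM loop for the GMM. This is an illustrative implementation; the quantile-based initialization is our choice, not the G-SOM scheme described later in the text:

```python
import numpy as np

def em_gmm(X, K, iters=100, tol=1e-6):
    """EM training of a GMM following the five phases above.
    X: (N, D) data. Returns cluster priors p(c_k), means, covariances."""
    N, D = X.shape
    # Phase 1: initial conditions (means spread over the data by quantiles,
    # broad covariances so every kernel covers a large area).
    mus = np.quantile(X, np.linspace(0.05, 0.95, K), axis=0)
    covs = np.array([np.cov(X.T).reshape(D, D) + 1e-6 * np.eye(D)
                     for _ in range(K)])
    priors = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(iters):
        # Phase 2: conditional densities p(x_n|c_k), equation (3-3).
        dens = np.empty((N, K))
        for k in range(K):
            diff = X - mus[k]
            inv = np.linalg.inv(covs[k])
            norm = (2 * np.pi) ** (-D / 2) * np.linalg.det(covs[k]) ** -0.5
            dens[:, k] = norm * np.exp(-0.5 * np.einsum('nd,de,ne->n', diff, inv, diff))
        # Phase 3: posteriors p(c_k|x_n), equation (3-7) (E-step).
        joint = dens * priors
        post = joint / joint.sum(axis=1, keepdims=True)
        # Phase 4: update p(c_k), means, covariances (M-step), eqs. (3-8)-(3-12).
        Nk = post.sum(axis=0)
        priors = Nk / N
        mus = (post.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mus[k]
            covs[k] = (post[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(D)
        # Phase 5: stop when the log-likelihood (3-6) increment is small.
        ll = np.log(joint.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return priors, mus, covs
```

The small diagonal loading on the covariances keeps the kernels well-conditioned; it is a numerical safeguard, not part of the derivation.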


estimation of the current system dynamics, the dynamic estimation can be extended to a one-step linear mapping ỹ(n+1) = f̃(x̃(n)) = L̃_opt^T x̃_n, where f̃ stands for a certain reference model and L̃_opt is the optimized current local model based on the Gaussian gating function. The LLM is the most simplified reference model of the real system, and it simplifies the design procedure of the controller [16]. From equations (3-2), (3-7), and (3-10)-(3-12), we can thus express the one-step linear prediction function as

ỹ_{n+1} = Σ_{k=1}^{K} [p(x̃_n|c_k) p(c_k) / p(x̃_n)] L̃_k^T x̃_n = Σ_{k=1}^{K} p(c_k|x̃_n) L̃_k^T x̃_n = L̃_opt^T x̃_n    (3-13)

Equation (3-13) can be switched to matrix form in order to simplify the calculation of the local linear models L̃_k:

ỹ_{n+1} = P_n^T L^T x̃_n    (3-14)

where P_n = [p(c_1|x̃_n), ..., p(c_K|x̃_n)]^T collects the gating probabilities and L = [L̃_1, ..., L̃_K] collects the local models.
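The gated prediction (3-13)/(3-14) amounts to a posterior-weighted blend of the local models; a small self-contained sketch (names are illustrative):

```python
import numpy as np

def gmm_llm_predict(x, priors, mus, covs, L):
    """One-step prediction y_{n+1} = sum_k p(c_k|x_n) L_k^T x_n, eqs. (3-13)/(3-14).
    L: (K, D) matrix whose rows are the local linear models L_k."""
    D = len(x)
    dens = np.empty(len(priors))
    for k, (mu, cov) in enumerate(zip(mus, covs)):
        diff = x - mu
        norm = (2 * np.pi) ** (-D / 2) * np.linalg.det(cov) ** -0.5
        dens[k] = norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))
    gate = priors * dens
    gate = gate / gate.sum()        # gating vector P_n, eq. (3-7)
    return gate @ (L @ x)           # P_n^T L^T x_n, eq. (3-14)
```

Because the gate is recomputed for every input, the blend weights change during testing, which is the flexibility the text attributes to the GMM-LLM over winner-takes-all schemes.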


square-error (MSE) criterion,

min_L J,  with J = (1/N) Σ_{n=1}^{N} (y_{n+1} − P_n^T L^T x̃_n)²

Setting the derivative of J with respect to each local model to zero,

∂J/∂L̃_k = −(2/N) Σ_{n=1}^{N} P_{k,n} (y_{n+1} − P_n^T L^T x̃_n) x̃_n = 0,  k = 1, ..., K    (3-17)

From (3-17), using the result from (3-13), each row of ∂J/∂L̃ can be expanded as

Σ_{n=1}^{N} P_{k,n} y_{n+1} x̃_n = Σ_{i=1}^{K} [Σ_{n=1}^{N} P_{k,n} P_{i,n} x̃_n x̃_n^T] L̃_i    (3-18)

From (3-18), we obtain the MSE-based optimization function for the local linear models L̃_k. The difference between the winner-takes-all solution and this solution is that P_{k,n} will not always be 0 or 1; it determines where and how much cluster k takes effect.


In order to obtain the values of all the local models, we can expand equation (3-18) to matrix format to simplify the notation:

P̃_r = [Σ_{n=1}^{N} P_{1,n} P_{r,n} x̃_n x̃_n^T, ..., Σ_{n=1}^{N} P_{K,n} P_{r,n} x̃_n x̃_n^T] [L̃_1; L̃_2; ...; L̃_K] = R_r L̃    (3-19)

Repeating (3-18) for all the Gaussian clusters, the local linear models can then be described as

R L̃ = P̃  ⟹  L̃ = R^{−1} P̃    (3-20)

The GMM-LLM training can be summarized as follows:
1. Accomplish state modeling using the EM algorithm; obtain the distribution parameters (μ_k, Σ_k) for every Gaussian cluster, the conditional probability p(x̃_n|c_k), and the cluster probability p(c_k);
2. Compute the posterior probability p(c_k|x̃) through the Bayesian function for each cluster;
3. Compute the local linear models using the distribution information from the previous two steps and the least-squares criterion;
4. In the testing phase, re-calculate p(x̃_n|c_k) and p(c_k|x̃) given the new data point, then estimate the system output through the local linear models.

[47, 48]. In order to optimize the GMM initial conditions, shorten the EM training time, and stabilize the final GMM performance, a Growing Self-Organizing Map (G-SOM) algorithm is applied. Another benefit of the G-SOM is that it supplies a flexible structure for changing the number of Gaussian kernels in the GMM. Vlassis and Likas set up a criterion to determine the optimal number of Gaussian kernels [49].

The initial topology of the G-SOM network is a two-dimensional structure with 3 PEs, which will be used to describe clusters of patterns in the D-dimensional data vector
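The posterior-weighted least-squares solve of (3-19)/(3-20) can be sketched as follows. Array shapes and names are our choices; the stacked matrix R is assumed nonsingular:

```python
import numpy as np

def fit_llms(X, y, post):
    """Posterior-weighted least squares for the local linear models,
    equations (3-18)-(3-20).
    X: (N, D) inputs, y: (N,) targets, post: (N, K) gating weights P_{k,n}.
    Returns L: (K, D), one local model per row."""
    N, D = X.shape
    K = post.shape[1]
    R = np.zeros((K * D, K * D))
    p = np.zeros(K * D)
    for r in range(K):
        for k in range(K):
            w = post[:, r] * post[:, k]              # P_{r,n} P_{k,n}
            R[r*D:(r+1)*D, k*D:(k+1)*D] = (w[:, None] * X).T @ X
        p[r*D:(r+1)*D] = (post[:, r] * y) @ X        # sum_n P_{r,n} y_{n+1} x_n
    L = np.linalg.solve(R, p)                        # L = R^{-1} P, eq. (3-20)
    return L.reshape(K, D)
```

With a single cluster (all gating weights equal to one), this reduces to ordinary least squares, which is a convenient sanity check.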

Figure 3-2. Explanation of the G-SOM algorithm. Left: training of G-SOM; Right: growing of G-SOM.

4. Decrease all signals' counters by a small value so that the mean winning frequency doesn't change too much;
5. After feeding in all the data, one new PE is added between the PE with the highest counter and its farthest neighbor, until the pre-defined PE count is reached; all the winning frequencies are then set to zero.

After training, the G-SOM PEs' weights w̃ are utilized as the initial values of the GMM means, with balanced winning frequencies. The GMM covariance matrices are initially set to a large value to make every Gaussian kernel cover the largest possible area; the optimal values of the covariance matrices are then trained through the EM algorithm. The improved algorithm was extensively tested on the LoFLYTE data that will be fully presented in Chapter 5. The LoFLYTE simulation uses a 32-D time-embedding space, and the difference in performance between the conventional GMM and the GMM with G-SOM initialization is shown in table 3-1. Every SER result is an average of five independent experiments. Overall, the G-SOM-initialized GMM makes the system identification performance 2.4 dB better than the randomly initialized GMM, with smaller variance, and decreases the number of EM training iterations.
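Only steps 4-5 of the G-SOM recursion survive in the text above; the sketch below fills in the usual growing-SOM ingredients (winner search, winner update, counter bookkeeping) as assumptions of ours, so treat it as a schematic rather than the author's exact algorithm:

```python
import numpy as np

def gsom_init(X, n_pe, lr=0.1, decay=0.999):
    """Hedged sketch of G-SOM initialization of the GMM means.
    Steps 1-3 (winner search, winner update, counters) are assumed;
    steps 4-5 follow the text. Returns n_pe candidate mean vectors."""
    rng = np.random.default_rng(0)
    N, D = X.shape
    W = X[rng.choice(N, 3, replace=False)].copy()      # initial 3-PE topology
    wins = np.zeros(len(W))
    while True:
        for x in X:
            i = np.argmin(((W - x) ** 2).sum(axis=1))  # winner search (assumed)
            W[i] += lr * (x - W[i])                    # move winner toward x (assumed)
            wins[i] += 1
            wins *= decay      # step 4: slowly decrease all counters
        if len(W) >= n_pe:
            break
        # Step 5: insert a PE between the most-winning PE and its farthest neighbor.
        b = int(np.argmax(wins))
        f = int(np.argmax(((W - W[b]) ** 2).sum(axis=1)))
        W = np.vstack([W, (W[b] + W[f]) / 2])
        wins = np.zeros(len(W))
    return W
```

The returned weights would then seed the GMM means, with large initial covariances as described above.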


Table 3-1. Performance on 32-D LoFLYTE data

                       GMM                  GMM with G-SOM
                  SER (dB)  Variance     SER (dB)  Variance
  40 dB SNR noise
  p                23.637    0.4338       23.405    0.2589
  q                26.012    0.2849       24.396    0.3738
  r                24.663    0.3859       22.553    0.1753
  u                21.016    0.3574       26.352    0.2877
  v                22.682    0.2723       28.544    0.2065
  w                22.829    0.3404       25.668    0.1926
  Average                    0.3458       25.153    0.2491
  EM iterations      17                      4
  0 noise
  p                25.514    0.1968       24.721    0.2173
  q                27.828    0.2189       25.838    0.3036
  r                22.053    0.3631       25.696    0.1301
  u                19.065    0.2817       27.123    0.2085
  v                26.530    0.1977       29.444    0.1447
  w                23.315    0.2661       25.853    0.1350
  Average                    0.2523       26.446    0.1899
  EM iterations      14                      2

Yet the major problems that the SOM cannot overcome are: the proliferation of prototype vectors, in particular in high-dimensional spaces; the sensitivity to noise; the non-smooth switching between models because of the winner-takes-all mechanism; and the local minima entrapment during SOM training [48, 50]. Nothing can be done for the first problem except increasing the network size, which also brings slow training, the requirement of large amounts of data, and many local linear models. Although the SOM is insensitive to noise below the quantization produced by the Voronoi tessellation, when the noise is higher the wrong model can be instantiated. The SOM preserves neighborhoods, so the difference is not drastic, but it contributes to modeling error. Because of the differences between the LLMs, when the models switch there may be transients in the response that also add to the modeling error for more than one time step.

The GMM approach divides the state space into coarser regions, which improves the poor scaling of the SOM with dimension and also improves the noise robustness. However, each region may not be as well modeled by a linear model as in the case of the SOM; only experimental evidence can judge this trade-off. Instead of using the winner-takes-all criterion, the GMM-LLM combines multiple models for the current data. When the Euclidean distance between the data and the closest Gaussian kernel is large, the maximum likelihood criterion makes sure that more than one kernel takes effect. This phenomenon is shown in figure 3-3. Also, figure 3-3 shows that, compared with other linear membership functions, the use of the nonlinear Gaussian distribution can guarantee smooth switching between models. An important characteristic of the GMM-LLM is that the weighting changes during testing, unlike the SOM-LLM. This helps the overall modeling accuracy.

Figure 3-3. Soft combination region using GMM (between dashed lines).

The local minimum in SOM training is difficult to control during the annealing process. Also, the SOM cannot control the winning frequency, which may also add difficulties in the clustering. The GMM is likewise sensitive to local maxima. However, in order to make the GMM converge to the global optimum in a more stable manner, the G-SOM sets the initial weights such that every cluster is generated with a similar winning frequency [51, 48]. This means that, for the whole training data set, each Gaussian kernel occupies about the same portion as a winner with large output weight. Fritzke, and Chinrungrueng and Sequin, proved the weight-growing criterion and the optimal validity, respectively [52, 53].

Since the SOM algorithm is derived from heuristic ideas, which leads to a number of limitations, some of the advantages of the GMM over the SOM are listed below.
1. The GMM defines an explicit probability density function in data space to estimate the distribution of the state. In contrast, the SOM uses a winner-takes-all criterion based on the Euclidean distance [18]. The GMM's soft-combination mechanism makes smooth switching possible.
2. In the GMM, the neighborhood-preserving nature of the mapping is an automatic consequence of the choice of Gaussian functions, which improves the GMM's noise robustness. Smooth neighborhood preservation is not guaranteed by the SOM procedure.
3. Through the relative winning-frequency criterion from the G-SOM construction, the GMM training avoids the local minima entrapment which the SOM alone cannot overcome.
4. The GMM improves the scalability of the SOM through a coarser clustering of the state space. For the same reason, the training efficiency of the GMM is increased.


Within the framework of predictive control, the proposed GMM-based local linear modeling approach simplifies the design of control systems for nonlinear plants. It is difficult to design globally stable nonlinear controllers with satisfactory performance at every point in the state space of the closed-loop system, which is especially true in the case of unknown plant dynamics. However, the GMM local linear modeling approach presented above, coupled with strong controller design techniques [54] and recent theoretical results on switching control systems [55, 56], achieves the control goal. The topology of local controllers that naturally arises from the local linear modeling approach is illustrated in figure 4-1, where the local controllers are directly linked to each local model. Within the framework of GMM-LLM, the system dynamics can be described piecewise-linearly as

ỹ_{n+1} = Σ_k ω_k L_k^T X_n = L_OPT^T X_n
        = A_0 y_n + A_1 y_{n−1} + ... + A_D y_{n−D} + B_0 u_n + B_1 u_{n−1} + ... + B_D u_{n−D}    (4-1)


Figure 4-1. Structure of the control system based on GMM-LLM.

In section 4.1, a brief review of mixture-model-based controller design is given. In section 4.2, the GMM-based dynamical inverse controller is explained in detail; it is later demonstrated to be feasible on a nonlinear system in chapter 5. A pole-placement PID control approach is discussed in section 4.3.

Considering modeling error, measurement noise, and model uncertainty, robust approaches to control are very appropriate for real-world applications. Various approaches have therefore been proposed to solve the robust control problem, including the H∞ approach [57], the Lyapunov approach [58], the geometric approach [59], etc. Lin, Brandt, and Sun proposed an optimal-control-based robust control scheme [60, 61], and we will modify it to fit into our GMM-LLM framework.

[62, 63, 64]. Indeed, the model constitutes the vital link between the physical problem in which the optimal control will be used and the mathematical realm in which it needs to be designed. The classical approach to the control design problem assumes a known model, even if dramatically reduced. If the model complexity is reduced unrealistically, the consequence will be the generation of ineffective controls. For global nonlinear modeling control, which is more difficult than local or linear modeling control, neural-network-based control schemes have been studied extensively [65, 54, 66].

Using the Partition Algorithm (PA), Lainiotis first introduced the multi-model partition (MMP) methodology for the design of adaptive/intelligent systems, where the control problem is decomposed into a set of more easily derived subproblems [64, 67]. Given a set of local models, MMP designs the adaptive control as

u(n) = Σ_k ω_k(n) û_k(n)    (4-2)

where û_k(n) is the optimal control corresponding to the k-th sub-model.

Fuzzy control is another popular design methodology which attracts wide interest these days [68]. Among these methods, the Takagi-Sugeno (T-S) fuzzy logic model, which applies fuzzy rules and membership functions, is a widely studied and applied fuzzy modeling and control methodology. In the T-S fuzzy model, the system is described by a set of if-then rules which have local models in the consequent parts. The i-th rule in the model is of the form: If z_1(t) is M_{i,1}, and ..., and z_P(t) is M_{i,P}, then


Figure 4-2. T-S fuzzy logic membership function.

The effective regions of the if-then rules overlap with each other, and the weight of each rule is measured by a linear membership function, as shown in figure 4-2. The control signals for fuzzy logic control take the form of (4-2), where the weight ω_k also comes from the membership function. Tanaka, Iwasaki, and Wang used a T-S switching fuzzy logic model to control an R/C hovercraft [69].

Based on the idea of a group of simple controllers instead of a single complex controller, switching control has been applied in recent years. A switching control consists of a high-level discrete-event supervisor and a family of low-level feedback controllers [70, 7]. Following the local modeling theorem, multiple controllers are applied to certain regions of data. Motter used a SOM-based system identification model to design a switching controller for high-speed wind tunnel control [20]. Li and Lee use a SOM structure to cluster the state space, and then apply fuzzy rules to design the control signals [41]. Cho et al. use a SOM and a sliding-mode controller to realize robust control [71, 72]. Based on those approaches, our research will try to improve the performance of multiple controllers near the intersection of different clusters.

Figure 4-3. Structure of the inverse control system.


The principle behind inverse error dynamic control (IEDC) is the asymptotic convergence of the system error. The control methodology can be described as fixing a set of stable poles for the tracking error signal dynamics. If at time n the desired plant output is denoted d_n and the actual plant output is y_n, the instantaneous error is defined as e_n = d_n − y_n. The control goal is to guarantee that the error signal follows the l-th-order error dynamic function:

e_{n+1} + λ_1 e_n + ... + λ_l e_{n−l+1} = 0    (4-3)

The parameter vector λ̃ = [λ_1, ..., λ_l]^T is selected such that the roots of the polynomial 1 + λ_1 x + ... + λ_l x^l = 0 are all within the unit circle, and thus the error dynamics are stable.

If the order of the polynomial is zero, the error function simplifies to e_{n+1} = 0. This error function corresponds to an exact tracking controller, which determines the necessary control input for the next error value to be "0". This strategy uses only current model information and the current control error, so its response is instantaneous. Obviously, it is not robust to modeling errors or external disturbances, as there is no further model or dynamic information to smooth the disturbance from model uncertainty. With a 2nd-order error dynamic function, the error signal is filtered first, and the local-model-based control signal can be obtained as

u_n = [d_{n+1} + (λ_1 e_n + λ_2 e_{n−1}) − L̃_{n,x}^T x̃_n] / L̃_{n,u}    (4-4)

where the model [L̃_{n,x}, L̃_{n,u}]^T = L̃_current = (P_n^T L̃)^T comes from (3-14), and (λ_1 e_n + λ_2 e_{n−1}) is the weighted error signal of the last two steps.

For different choices of the order in (4-3), the control law will drive the tracking closed-loop system in various ways; meanwhile, the error signal will always satisfy the linear difference equation given in (4-3). As long as the poles are located within the unit circle, the controller will guarantee the stable convergence of the dynamics. Considering that the modeling error from the GMM-LLM will affect the actual pole locations, the poles in (4-4) cannot be chosen close to one if stability is to be guaranteed.

Now we discuss the stability issue for the controller. From (4-4), the λ_i are coefficients to be designed such that the two poles p_i are within the unit circle, where p_1 + p_2 = −λ_1 and p_1 p_2 = λ_2. Since the LLMs are trained based on the MSE criterion, the modeling error at each step can be deemed a bounded i.i.d. random variable with zero mean. It can also be assumed that E[e_{M,n}²] < ∞, which is reasonable if the system identification is done properly and the measurement noise levels are finite.
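A sketch of the second-order control law (4-4); the scalar LLM used in the example below is hypothetical, chosen only to show the closed-loop behavior:

```python
import numpy as np

def iedc_control(d_next, x_tilde, L_x, L_u, e, e_prev, lam1, lam2):
    """Inverse error dynamic control, eq. (4-4): choose u_n so the model
    predicts e_{n+1} = -(lam1*e_n + lam2*e_{n-1})."""
    target = d_next + lam1 * e + lam2 * e_prev   # desired y_{n+1}
    return (target - L_x @ x_tilde) / L_u
```

With an exact model y_{n+1} = L_x^T x̃_n + L_u u_n, the closed-loop error then satisfies e_{n+1} + lam1·e_n + lam2·e_{n-1} = 0, so it decays at the rate set by the chosen poles.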


From the experimental results in the simulation section, we will show that the modeling error power is finite. Under those conditions and assumptions about nonlinear uncertainties on e_n, the error dynamic function is changed to (4-5), where the e_{M,i} are the modeling errors at each step.

We define the right side of equation (4-5) as ε. Considering the worst case, when the modeling errors reach their bound, ε is set to its upper bound ε_u (|ε| ≤ |ε_u| and ε_u is a finite constant). The expression for e_n can be obtained through the Z-transform and inverse Z-transform to yield (4-6). As n → ∞, the mean of the tracking error is obtained as

lim_{n→∞} E[e_n] = lim_{n→∞} E[ε_u / (1 + λ_1 + λ_2)] = ε_u / (1 + λ_1 + λ_2)    (4-7)

As for the power of the tracking error, the expression can be written as

lim_{n→∞} E[e_n²] = lim_{n→∞} E[ε_u² / (1 + λ_1 + λ_2)²] = ε_u² / (1 + λ_1 + λ_2)²    (4-8)


Under the assumption that the modeling error (and measurement noise) contributions e_{M,n} are wide-sense-stationary random variables, we can compute the asymptotic (finite) tracking error in terms of the power spectral density of e_{M,n} and the coefficients λ_i. As discussed before, the mean of the modeling error is zero or a small value. Consequently, the modeling error will affect the tracking performance, and the pole-placement problem must be solved considering the trade-off between convergence speed (faster poles, wider pass-band) and better disturbance rejection (slower poles, narrower pass-band).

[73]. Lately, Skoczowski et al. proposed a new method based on two-loop model-following control (MFC) to improve robustness [74]. The mixture-model-based PID control scheme is shown in figure 4-4.

Simplifying the model function (4-1) to a second-order time-embedded SISO form,

ỹ_{n+1} = a_1 y_n + a_2 y_{n−1} + b_1 u_n + b_2 u_{n−1}

the corresponding transfer function is B(z)/A(z) = (b_1 z + b_2) / (z² − a_1 z − a_2).


Figure 4-4. Pole-placement PID control system.

Define the desired signal and the controller transfer function as D(z) and G(z) = D(z)/C(z), respectively; the closed-loop control-signal transfer function U(z) then follows. From the definition of pole placement, the goal of this approach is to map the coefficients of the closed-loop system function from the open-loop coefficients (a_1, a_2, b_1, b_2) to pre-designed controller coefficients (c_i, d_i), whose characteristic polynomial is given in (4-10). The closed-loop system transfer function is then 1/[1 + zA(z)C(z)/B(z)D(z)], from which the system characteristic polynomial follows.


Using the characteristic polynomial for the system model as in (4-10), along with a second-order observer polynomial term z², the design equation for pole-placement PID is given in (4-12), where a (z − 1) factor is added to the denominator of the controller function C(z) in order to filter the low-frequency noise. This equation can be solved for the c_i and d_i; consequently, the control signal is obtained from (4-12) as (4-13).

Given the LLM at the current time step, the PID controller can easily be obtained through (4-13). Its stability is assured, as discussed in the previous section for the dynamic inverse controller. One possible problem under this framework is that the PID coefficients need to be recalculated at every step, as the system's local model and the gating weights from the GMM change.

[60, 61].


ẋ = A(x) + B(x)N(x) + B(x)k(x)    (4-14)

is globally asymptotically stable for all admissible perturbations N(x).

[75] gives

min_u [N_max²(x) + x^T x + u^T u + V_x^T(x)(A(x) + B(x)u)] = 0


where V_x(x) = ∂V(x)/∂x. Thus, if u = k(x) is the solution to the optimal control problem, then

V_x^T(x)[A(x) + B(x)k(x)] = −N_max²(x) − x^T x − k^T(x)k(x)    (4-15)

Since V(x) satisfies (4-15), we also have V̇(x) = dV(x)/dt < 0 for x ≠ 0, because (using V_x^T(x)B(x) = −2k^T(x))

V̇ = (∂V(x)/∂x)^T (dx/dt)
  = V_x^T(x)[A(x) + B(x)N(x) + B(x)k(x)]
  = V_x^T(x)[A(x) + B(x)k(x)] + V_x^T(x)B(x)N(x)
  = −N_max²(x) − x^T x − k^T(x)k(x) + V_x^T(x)B(x)N(x)
  = −N_max²(x) − x^T x − k^T(x)k(x) − 2k^T(x)N(x)
  = −N_max²(x) + N^T(x)N(x) − (N(x) + k(x))^T(N(x) + k(x)) − x^T x
  ≤ −(N_max²(x) − N^T(x)N(x)) − x^T x
  ≤ −x^T x < 0

Thus the conditions of the Lyapunov local stability theory are satisfied. The equilibrium x = 0 of (4-14) is globally asymptotically stable for all possible uncertainty N(x); i.e., u = k(x) is a solution to the robust control problem. Consequently, there exists a neighborhood N = {x : ‖x‖ < c} for some c > 0 such that if x(t) enters N, then

lim_{t→∞} x(t) = 0    (4-16)

But x(t) cannot remain forever outside N; otherwise ‖x(t)‖ ≥ c for all t ≥ 0. Therefore (4-16) is true no matter where the trajectory begins [60]. Consequently the stability is assured without regard to the initial condition.


The computational technique, originally proposed by Yashiki and Matsumoto [76] for SISO single-delay systems and later extended by Klein and Ramirez et al. [77] to MIMO multiple-delay systems, requires our one-step prediction model to be transformed into state-variable format for the purpose of controllability checking. The general case of MIMO systems exhibiting multiple delays is written as (4-1), and the corresponding state-space model for the MIMO system can be expressed as

[y_{1,n+1}; ...; y_{D,n+1}; y_{n+1}] = [0 ... 0 A_1; ... ; 0 ... 0 A_D; I ... I A_0] [y_{1,n}; ...; y_{D,n}; y_n] + [B_1; ...; B_D; B_0] U_n    (4-17)

where semicolons separate the block rows.


A main advantage of this transformation is the controllability test. Defining Ã and B̃ as the coefficient matrix and vector in (4-17), the controllability matrix S satisfies

rank(S) = rank[B̃, ÃB̃, ..., Ã^{D+Σ_i d_i − 1} B̃]

In our experiments, all the LLMs give non-zero results, and since the rank of S is related to the dimension of the LLM, which is relatively low (S = [B̃, ÃB̃] for the 4-D LLM case), controllability is assured. Only when the LLMs have a large portion of close-to-zero coefficients, which also means the LLMs are not well trained, will the state matrix fail to be full rank.

To realize the design of the robust control signal, we start with the cost function J as the basic design rule:

J = Σ_n (X_n^T Q X_n + U_n^T R U_n)    (4-19)

Considering the modeling uncertainty and measurement noise, (4-19) can be modified as

J = Σ_n (X_n^T Q_1 X_n + U_n^T R U_n)    (4-20)
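The rank condition above can be checked numerically; a small sketch using the standard n-term controllability matrix:

```python
import numpy as np

def controllable(A, B):
    """Rank test on the controllability matrix S = [B, AB, A^2 B, ...]:
    the pair (A, B) is controllable iff S has full row rank."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    S = np.hstack(blocks)
    return np.linalg.matrix_rank(S) == n
```

Applied to the (Ã, B̃) pair of (4-17), this is exactly the test the text relies on before solving the optimal control problem.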


Here the modeling uncertainty and measurement noise are combined with the state itself. In other words, the physical dynamical changes due to model uncertainty can be recognized as an increment of the system dynamics, which in turn costs more energy and a larger control signal to overcome. This is a reasonable interpretation, as modeling error appears as a warping of the system states. The solution of this kind of cost function forms the traditional linear quadratic regulator (LQR) problem and can be obtained through the solution of the discrete algebraic Riccati equation (DARE)

X = A^T X A − A^T X B (R + B^T X B)^{−1} B^T X A + Q    (4-21)

There have been several methods to compute the unique stabilizing numerical solution P = X of the discrete-time algebraic Riccati equation [77], from which the gain vector, and consequently the control signal, are computed as

K = (R + B^T X B)^{−1} B^T X A,  U_n = −K X_n    (4-22)

where X_n is the re-formulated current input with pseudo-state elements, and A and B come from the state-space form of the real system model, or Ã and B̃ from the state-space form of the LLM. The schematic diagram of the optimal robust control system is shown in figure 4-5.

Figure 4-5. Optimal robust control system schematic diagram.

One possible disadvantage of the optimal robust controller is that the related computation is larger than that of the inverse controller, the PID controller, etc. Especially under the condition of a mixture model, theoretically there will always be a change in the local model as the system dynamics change. Since the DARE computation needs to be finished right after the generation of the Gaussian mixture model, the DARE (4-21) and the control gain matrix (4-22) need to be re-calculated at every time step, which might be a challenge in some on-line training environments. Necessary trade-offs need to be made when resources are limited.

For tracking problems, (4-20) needs to be further modified for state errors and changes of the control signals:

J = Σ_n (E_n^T Q_1 E_n + dU_n^T R dU_n)    (4-24)

where E_n is the dynamical error of the system states, dU_n is the difference between the current control signals and their previous values, and Q and R are symmetric, positive semi-definite weighting matrices. Again, we assume the modeling uncertainty and noise can be combined with the state errors, since the errors can be described as the distance between the states and a non-zero equilibrium. Correspondingly, the transfer coefficients for the error signal and control signals need to be changed. Starting from the system state transfer function in (3-14), the error transfer function can be derived as

ỹ_{n+1} = a_1 x_n + a_2 x_{n−1} + b_1 u_n + b_2 u_{n−1}
e_{n+1} = d_{n+1} − y_{n+1} ≈ d_{n+1} − ỹ_{n+1}
        = d_{n+1} − (a_1 x_n + a_2 x_{n−1} + b_1 u_n + b_2 u_{n−1})
        = d_{n+1} − a_1 (d_n − e_n) − a_2 (d_{n−1} − e_{n−1}) − b_1 u_n − b_2 u_{n−1}
        = (d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1}) + a_1 e_n + a_2 e_{n−1} − b_1 u_n
        = a_1 e_n + a_2 e_{n−1} − b_1 [u_n − (1/b_1)(d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1})]    (4-25)

Defining û_n = u_n − (1/b_1)(d_{n+1} − a_1 d_n − a_2 d_{n−1} − b_2 u_{n−1}), the error dynamics reduce to e_{n+1} = a_1 e_n + a_2 e_{n−1} − b_1 û_n, with the correspondingly modified coefficients (A, B) given in (4-26).
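The DARE (4-21) and the gain (4-22) can be obtained by the classical Riccati fixed-point iteration; a numpy-only sketch (a production design would normally use a dedicated solver such as SciPy's `solve_discrete_are`):

```python
import numpy as np

def dare_gain(A, B, Q, R, iters=500, tol=1e-10):
    """Iterate X <- A^T X (A - B K) + Q toward the DARE solution (4-21),
    then return the LQR gain (4-22): K = (R + B^T X B)^{-1} B^T X A,
    with control U_n = -K X_n. Assumes (A, B) stabilizable."""
    X = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
        Xn = A.T @ X @ (A - B @ K) + Q
        if np.max(np.abs(Xn - X)) < tol:
            X = Xn
            break
        X = Xn
    K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
    return K, X
```

For a stabilizable pair (A, B), the closed-loop matrix A − BK is then Schur stable, which is the property the robust controller relies on.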


Considering the modeling uncertainty, the non-zero set-point control and tracking control problems can then be realized through the optimal control law (4-24). The new discrete Riccati equation can be calculated in the same form with the modified coefficients (A, B) in (4-26). Once the gain matrix from the discrete Riccati equation has been calculated, the current-step control signal can be expressed as in (4-27).

In (4-24) and (4-27), we use the upper bound of the modeling error to describe the model uncertainty; in reality, the model uncertainty and noise are random values which on average are smaller than that bound. Consequently, under small noise and model uncertainty, the transient performance of the optimal controller, compared with other control approaches, might not be the best. However, since the optimal controller considers the long-term effect of disturbances, its robustness will outrun all the other competitors in large-noise cases. In chapter 5, we will simulate and discuss the performance of the optimal controller on different real systems, both in the disturbance-free case and with noise and/or model uncertainty present. Neglecting the computational cost, the optimal robust controller is the optimal choice for a model-based prediction controller.


system dynamics as the reference model. Under the condition that the Gaussian modeling systems are stable, all the controllers can give a stable result, where the optimal robust controller will theoretically give the best tracking performance. A plant model is rarely a fully accurate description of the real plant. Neglected dynamics, approximately known parameters, and changes in operating conditions all contribute to plant-modeling errors that can jeopardize controller performance. In the following chapter, we will address these issues, including the performance of the controllers on real systems (accuracy and robustness), the performance curve versus the number of PEs, and the performance comparison of GMM-LLM with other modeling approaches.

Among all the options, the ESN controller is quite different from the others, and it will be discussed in detail in the appendix. Many papers have mentioned that the training of ADC and/or BPTT algorithms will sometimes inevitably lead to unstable control results because of their computational complexity. Besides the neural network (the ESN here), the BPTT length and the change of the learning rate will also affect the final performance. We will start from some nonlinear systems with a "simple" setup and use them for set-point control.


The GMM-based local linear modeling and control techniques presented in the previous chapters need to be tested on various systems in order to prove their advantages, including feasibility, simplicity, and robustness. As an independent candidate, the ESN will be used for system identification and partly for system control. In this chapter, the different GMM-based control approaches are tested on a set of systems including chaotic time-series prediction, synthetic SISO linear system identification, SISO nonlinear system identification and control, and NASA LoFLYTE wave-rider aircraft identification and control. A complete set of simulation and experimental results will be provided from the application of these principles to the listed problems. Among all the systems, the Lorenz system, as a traditional chaotic system with high-order nonlinear dynamics, and the missile system, as a traditional SISO control system, will be discussed in detail with respect to system identification performance and control performance, respectively. For all the sample systems, we will assume that the system transfer functions are unknown, and the reference models will be constructed based on I/O measurements.


Figure 5-1. Dynamics of the Lorenz system. Left: x (top), y (middle), and z (bottom); Right: 3-D display.

can improve system predictability and avoid fatigue failure of the system. Several methodologies have been applied for the identification and synchronization of a variety of chaotic systems, for example, a generalized synchronization of chaos through linear transformation [78], adaptive control and synchronization of chaos using Lyapunov theory [79], and an adaptive variable structure control system for the tracking of periodic orbits [80]. The contribution of our work lies in the design of different control systems based on the GMM-LLM approach [81]. The Lorenz system can be described as:

  ẋ = σ(y − x)
  ẏ = ρx − y − xz
  ż = −βz + xy
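The data-generation procedure described below (4th-order Runge-Kutta at step 0.01, 9000 samples, then delay embedding with dimension 3 and delay 2) can be sketched as follows; function and variable names are our own choices:

```python
import numpy as np

# Hedged sketch: generating the chaotic Lorenz trajectory with the
# parameters used in the text (sigma=10, rho=28, beta=10/3) and building
# the dimension-3, delay-2 embedding of the output state x.
def lorenz(s, sigma=10.0, rho=28.0, beta=10.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), rho * x - y - x * z, -beta * z + x * y])

def rk4_trajectory(s0, n_samples=9000, h=0.01):
    traj = np.empty((n_samples, 3))
    s = np.asarray(s0, dtype=float)
    for i in range(n_samples):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * h * k1)
        k3 = lorenz(s + 0.5 * h * k2)
        k4 = lorenz(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

traj = rk4_trajectory([1.0, 1.0, 1.0])
x = traj[:, 0]                 # the first state is the model output
tau = 2                        # embedding delay from the Lipschitz analysis
X = np.column_stack([x[2 * tau:], x[tau:-tau], x[:-2 * tau]])  # rows [x(n), x(n-tau), x(n-2tau)]
```

The first 8000 embedded rows would then serve as GMM training data and the remainder as the SER test set, mirroring the split described in the text.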


where x, y, and z are the measurements of fluid velocity and horizontal and vertical temperature variations. The Lorenz system can exhibit quite complex dynamics depending on the parameters. In our experiment, the parameters are set as σ = 10, ρ = 28, and β = 10/3 to obtain chaotic dynamics, and the first state x is set as the output. Using 4th-order Runge-Kutta integration with time step 0.01, 9000 samples were generated for analysis. Using the Lipschitz index analysis, the embedding dimension is determined to be 3 and the embedding delay τ = 2, and the GMM training data is constructed from the delay-embedded signal. The first 8000 samples from the time series are used to create the reconstructed state space through the eight-kernel GMM algorithm, and local linear models are trained for one-step prediction. Another 1000 samples are used for a signal-to-error ratio (SER) performance test. Figure 5-2 shows that the estimated output is quite close to the original Lorenz signal, and figure 5-3 shows the optimal distribution of the Gaussian kernels in the state space. In order to further compare the identification performance between GMM and other approaches, we plot the performance increment of GMM-LLM as the number of PEs increases. From figure 5-5 and table 5-1, we find that the "soft combination" enables GMM to use fewer PEs. The total amount of


Figure 5-2. System identification performance on the x state of the Lorenz system. Left: using GMM: original and identified signals (top) and corresponding error (bottom); Right: using the ESN model (top) and corresponding error (bottom).

parameters in the SOM-LLM and GMM-LLM approaches can be compared directly. In order to make a complete comparison, along with the traditional methods RBF and TDNN as two references, two 300-PE ESNs are constructed. One ESN is designed with an even distribution of spectral radius [36, 37]; the other comes with an optimized design criterion where a larger part of the internal weights is distributed along the maximum spectral radius. Both use the same MSE training criterion. In table 5-1 we see that the ESN can make better one-step predictions compared with the other approaches. Since


Table 5-1. Performance on the Lorenz system

  Approaches (# of PEs)    # of parameters   SER (dB)
  RBF (50)                 203               31.05
  TDNN (50)                250               29.23
  SOM (10x10)              600               33.67
  GMM (8)                  80                29.16
  ESN1 (300, 1-D input)    300               35.05
  ESN1 (300, 3-D input)    300               43.40
  ESN2 (300, 1-D input)    300               36.54
  ESN2 (300, 3-D input)    300               45.20

Figure 5-3. Distribution of the Lorenz system (dashed line) and GMM weights (stars).

the ESN training is focused on the linear readout vector, the training of a larger ESN can still be realized within a reasonable amount of time. An interesting phenomenon in the results is that the ESN relies less on the embedding dimension than the other approaches, as a one-dimensional input already generates comparable prediction performance. This complies with our theoretical explanation in chapters 2 and 4.

Once the GMM-LLM structure is optimized according to the proposed methodology, the controller design can be accomplished using standard linear feedback controller design techniques. Since the Lorenz system is originally an autonomous system,


Figure 5-4. Weight changes of the 8 GMM kernels.

the control can be realized through dynamic compensation. Given a desired signal d_n, the controller needs to find a control signal u_n such that the system output error converges to zero. Under the condition that the model prediction is close enough to the real dynamics, the Lorenz system will follow the control signal(s) to the equilibrium point:

  lim_{n→∞} |d_n − y_n| = 0    (5-3)

For the Lorenz system, the 3-dimensional control signal vector is given by equation (5-4), where one control signal corresponds to one state variable, and the predicted state vector x̃_{n+1} is obtained from the GMM-LLM prediction. T is the error-compensation coefficient in [−1, 1], which ensures that the error converges to zero. Following the


Figure 5-5. Performance comparison between SOM and GMM.

Figure 5-6. Compensation performance. Left: stabilizing x (top), y (middle), and z (bottom); Right: synchronizing x, y, and z (upper three). Control signals start from data point 1000 for both cases.

Under the condition that the desired signals are equilibrium points or a totally different dynamics, we can realize stabilizing and synchronizing control, respectively. Figure 5-6 shows the stabilizing and synchronizing performance where the controller takes effect at point 1000. We found that for both cases the controller can compensate the dynamic error very fast and guarantee convergence. One factor that needs to be emphasized here is that the Lorenz system, being a chaotic system, is very sensitive to appropriate modeling.
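The body of equation (5-4) did not survive extraction, so as an illustration only, one plausible per-state compensation law consistent with the surrounding description (desired change minus predicted change, plus T times the current error, with T in [−1, 1]) can be sketched as follows. The formula itself is our assumption, not the thesis's equation:

```python
import numpy as np

# Illustrative (assumed) per-state compensation law; T is the
# error-compensation coefficient constrained to [-1, 1] in the text.
def compensation_control(d_next, x_pred_next, d_now, x_now, T=0.5):
    assert -1.0 <= T <= 1.0
    return (d_next - x_pred_next) + T * (d_now - x_now)

# Driving a 3-D state toward the zero equilibrium from a predicted state
u = compensation_control(np.zeros(3), np.array([0.2, -0.1, 0.05]),
                         np.zeros(3), np.array([0.3, -0.2, 0.1]))
```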


Otherwise the controller will not converge the dynamics to equilibrium if the system identification performance is not close enough to the real values.

The Duffing oscillator is described by

  ẍ + δẋ + (βx^3 ± ω0^2 x) = γ cos(ωt + φ)    (5-5)

Depending on the parameters chosen, the equation can take a number of special forms. For example, with no damping and no forcing, δ = γ = 0, and taking the plus sign, the equation becomes

  ẍ + ω0^2 x + βx^3 = 0

This equation can display chaotic behavior. For β > 0 the equation represents a "hard spring," and for β < 0 it represents a "soft spring." If β < 0, the phase portrait curves are closed [21].

Without losing generality, we simplify the Duffing oscillator with a control signal. The system function can be rewritten into state-space format as equation (5-6), with corresponding constant coefficients [δ, ω0, β, γ] = [0.4, 1.1, 1.0, 1.8] and ω = 1.8 [82, 25, 21], and the control signal is limited as |u| ≤ 5.

Using 4th-order Runge-Kutta integration with time step 0.2 s, 8000 samples were generated for analysis. The GMM-LLM is trained with the first 5000 points, and the


Figure 5-7. Dynamics of the Duffing oscillator as an autonomous system. Left: state-space description; Right: time series.

Table 5-2. Performance on the Duffing oscillator

  MODEL (# of PEs)   # of parameters   SER (dB)
  RBF (50)           254               27.94
  SOM (64)           512               28.06
  ESN (300)          300               30.75
  GMM (8)            104               26.46

remaining data is used for modeling testing and control tracking testing. The embedding dimension is determined as X_k = [y_k, y_{k−1}, u_k, u_{k−1}]^T. From experiments, the size of the Gaussian model set is fixed at eight PEs. For the ESN part, the same amount of data is used for both training and testing. The data embedding is set as "1" dimension (X_k = [y_k, u_k]^T), and 300 internal weights are used for the echo states.

The identification performance on the Duffing system is shown in figure 5-8, and a sample of the Gaussian distribution is shown in figure 5-9. The SER performance, along with a comparison with single-model performance, is listed in table 5-2. Considering the structural difference, we will mainly compare GMM-LLM with the RBF and SOM based models. We find that GMM uses far fewer models, which means a simpler structure than the other modeling paradigms, and the prediction performance is


Figure 5-8. System identification performance on the Duffing system. Left: GMM-LLM results; Right: ESN results.

still very good. Figure 5-9 shows that for a large part of the testing, the soft combination is taking effect among the Gaussian models, which complies with our explanation in chapter 3. The ESN, on the other hand, uses a smaller input dimension, and its prediction is one of the best. The modified ESN, from now on, will be our primary option for the echo-state setup, as it produces better results than its previous design.

Next, we try to realize two control missions based on the Gaussian models. The first sets the system dynamics to the zero equilibrium point; the second makes the system track a different dynamics. While the original dynamics is set with [δ, ω0, β, γ] = [0.4, 1.1, 1.0, 1.8] and ω = 1.8, the new one is set with [δ, ω0, β, γ] = [0.41, 1.1, 1.0, 2.0] and ω = 1.9.

For these control missions, we use the PID controller, the sliding-mode controller, and the optimal robust controller. Considering the significance of the problem, the robustness comparison of the different controllers will be discussed for the nonlinear SISO system. For the PID controller, the poles are set as p_{1,2} = 0.6 to ensure stability and good convergence speed. The parameters for the sliding-mode controller are set as c1 = 1, c2 = 2.5, eT = 0.0001, qT = 0.3 [83, 84, 85]. For the optimal controller, the gain matrix is designed


Figure 5-9. Sample of the weight changes of the 8 GMM kernels.

accordingly. We compare the performance of the controllers over a 200 s trajectory with respect to falling time, settling time, and steady-state error. Because of the relatively low SER system identification performance, none of the controllers can make the Duffing system dynamics converge to the desired track perfectly, as shown in figure 5-10, where 0 s-10 s of the dynamics is shown. Roughly speaking, the control performance of all three controllers is close. In the set-point control mission, however, the optimal robust controller and the sliding-mode controller clearly have shorter settling times and smaller steady-state errors than the PID controller. Apart from the rising (falling) time and settling time in the second figure, the PID controller shows a large offset where the system dynamics reaches the highly nonlinear range, e.g., at the start and near the extrema. Since the modeling for


Figure 5-10. Control performance of different controllers on the Duffing system. Top: set-point control, system dynamics starts from 1; Bottom: tracking control, the new track comes from a different system.

the Duffing system does not consider noise, the optimal robust controller does not show a unique advantage here. Considering the robustness of the optimal controller against disturbance, we can expect that it will outperform its competitors in situations where modeling uncertainty and noise are both present.
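The Duffing plant used in these experiments can be simulated directly from equation (5-5) with the stated coefficients. Where exactly the control input u enters the state equation (5-6) did not survive extraction, so its additive placement below, and the omission of the phase φ, are our assumptions:

```python
import numpy as np

# Hedged sketch: forced Duffing oscillator (hard spring, plus sign) with an
# additive control input clipped to |u| <= 5, integrated by RK4 at the
# 0.2 s step from the text.
DELTA, W0, BETA, GAMMA, OMEGA = 0.4, 1.1, 1.0, 1.8, 1.8

def duffing(t, s, u):
    x, v = s
    u = float(np.clip(u, -5.0, 5.0))
    acc = -DELTA * v - BETA * x**3 - W0**2 * x + GAMMA * np.cos(OMEGA * t) + u
    return np.array([v, acc])

def rk4_step(t, s, u, h=0.2):
    k1 = duffing(t, s, u)
    k2 = duffing(t + h / 2, s + h / 2 * k1, u)
    k3 = duffing(t + h / 2, s + h / 2 * k2, u)
    k4 = duffing(t + h, s + h * k3, u)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.5, 0.0])
for n in range(8000):          # 8000 samples of the autonomous (u = 0) plant
    s = rk4_step(0.2 * n, s, 0.0)
```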


5.2.1 Linear SISO Mass-Spring System

The mass-spring system can be written in state-space form as

  ẋ1 = x2
  ẋ2 = (1/m)(u − kx1 − bx2)    (5-7)

[86]. In the experiments here u is limited within ±0.5; the other parameters are set as m = 5, k = 100, and b = 0.1. A sample of the system dynamics is shown in figure 5-11. The number of Gaussian models is chosen as 6 from experiments. The 6-kernel GMM was trained with 5000 samples of input-output pairs obtained from equation (5-7) using 4th-order Runge-Kutta integration with a time step of Ts = 0.05 s, with u being an i.i.d. uniformly distributed random signal. The system identification data was constructed with an embedding dimension of 4, according to the Lipschitz index, with input and output embeddings both being 2. Thus X̃(n) = [x1(n), x1(n−1), u(n), u(n−1)]. For ESN modeling, the input dimension is again decreased to X̃(n) = [x1(n), u(n)].

The identified models were tested on 2000 samples of fresh data, generated using a new sequence of random input signals. The actual plant output, the model predicted


Figure 5-11. Dynamics of the mass system. Top: x1; middle: x2; bottom: control signal.

Table 5-3. Performance on the linear mass system

  Model (# of PEs)        # of parameters   SER (dB)
  SOM-based (8x8 = 64)    512               50.75
  RBF (64)                324               47.30
  GMM-based (6)           78                47.24
  ESN (200)               200               49.05

output and the estimation error for the GMM-based LLM models are provided in figure 5-12. Figure 5-13 shows that cooperative modeling takes effect during testing. The SER performance is 47.24 dB. The same data is applied to SOM and RBF to make a comparison. Table 5-3 shows that GMM-LLM does not generate the best result; yet considering the number of PEs used, GMM-LLM is a highly efficient modeling method.

The control mission for the plant can be accomplished through a series of controllers. The zero-order inverse controller directly replaces the next-step prediction with the desired signal. The PID controller design is carried out with the standard pole-placement technique. The corresponding PID coefficients are determined to


Figure 5-12. Sample of system identification performance on the mass system. Left: GMM-LLM results; Right: ESN results.

bring the closed-loop response poles to 0.05 + 0.3i and 0.05 − 0.3i in order to decrease the transient overshoot; these choices are verified in cross-validation simulations. The sliding-mode controller, as a strong candidate for robust control, is considered as a reference with parameters c1 = 1, c2 = 2.5, eT = 0.0002, qT = 0.5, which are fine-tuned results from simulation. For the optimal robust controller, the gain matrices are designed accordingly.

The set-point control mission is set up as follows: the system starts at a value of 0.5, the desired signal is 0 for 3 seconds, and then the desired signal is changed to 0.3. In order to assess the robustness of the different controllers, another set of experiments is run with 30 dB SNR noise added to the system dynamics as modeling uncertainty. The final performance is shown in figure 5-14. We can see that in the noise-free case, all controllers have similar settling times and small steady-state errors. The PID controller has a big overshoot and short falling time. The sliding-mode controller has a longer falling time yet small overshoot. The optimal controller has both short falling time and small overshoot. In the 30 dB noise case,


Figure 5-13. Sample of the weight changes of the 6 GMM kernels for the mass system.

The missile system dynamics are given by

  ẋ1 = x2 − 0.1 cos(x1)(5x1 − 4x1^3 + x1^5) − 0.5 cos(x1) u
  ẋ2 = −65x1 + 50x1^3 − 15x1^5 − x2 − 100u    (5-8)
  y = x1
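The plant of equation (5-8) can be integrated exactly as described in the text (4th-order Runge-Kutta at Ts = 0.05 s); a minimal sketch, with names of our own choosing:

```python
import numpy as np

# Sketch of the missile plant of equation (5-8), integrated with RK4 at the
# Ts = 0.05 s step used for data generation in the text.
def missile(s, u):
    x1, x2 = s
    dx1 = x2 - 0.1 * np.cos(x1) * (5*x1 - 4*x1**3 + x1**5) - 0.5 * np.cos(x1) * u
    dx2 = -65*x1 + 50*x1**3 - 15*x1**5 - x2 - 100*u
    return np.array([dx1, dx2])

def step(s, u, h=0.05):
    k1 = missile(s, u)
    k2 = missile(s + h/2*k1, u)
    k3 = missile(s + h/2*k2, u)
    k4 = missile(s + h*k3, u)
    return s + h/6*(k1 + 2*k2 + 2*k3 + k4)

s = np.array([0.1, 0.0])
for _ in range(200):           # 10 s of unforced (u = 0) flight
    s = step(s, 0.0)
```

Replacing the zero input with a uniformly distributed random signal within the specified limits reproduces the identification-data generation described below.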


Figure 5-14. Set-point control performance on the mass system. Left: noise-free case; Right: 30 dB SNR noise case.

Figure 5-15. Missile system dynamics. Left: state-space description; Right: x1, x2, and control (top to bottom).

The GMM was trained with 6000 samples of input-output pairs obtained from equation (5-8) using 4th-order Runge-Kutta integration with a time step of Ts = 0.05 s, which corresponds to 300 seconds of flight time. In order to excite a rich variety of dynamical modes in the plant, the system identification data was generated using a uniformly distributed random input signal within the specified limits [2]. The GMM-based LLM approach developed an eight-mode mixture model, resulting in eight cooperative linear models. The embedding dimensions, according to the Lipschitz index, for input and output were both selected to be 2, resulting in 4-coefficient local models. This


Figure 5-16. Sample of system identification performance on the missile system. Left: GMM-LLM results; Right: ESN results.

Table 5-4. Performance on the missile model

  Model (# of PEs)        # of parameters   SER (dB)
  SOM-based (8x8 = 64)    512               31.70
  RBF (100)               504               33.03
  GMM-based (8)           104               31.00
  ESN (300)               300               36.25

embedding delay dimension was also chosen in accordance with the dimensionality of the state dynamics.

The identified models were also tested on 50 s of fresh data (1000 samples), generated using a new sequence of random input signals. The actual plant output, the model predicted output, and the estimation error for the GMM-based LLM models are provided in figure 5-16. The ESN is constructed with 300 PEs, and the input is set to the state without time embedding. The SER performance is 31 dB, compared with 31.7 dB from the 64-PE SOM in table 5-4 [18].

The control mission for the plant can be accomplished through a series of controllers. The zero-order inverse controller directly replaces the next-step prediction


Figure 5-17. Sample of the weight changes of the 8 GMM kernels for the missile system.

with the desired signal. The PID controller design is carried out with the standard pole-placement technique. The corresponding PID coefficients are determined to bring the closed-loop response poles from the plant output to 0.5 + 0.1i and 0.5 − 0.1i. The sliding-mode controller, as a strong candidate for robust control, is considered as a reference with parameters c1 = 1, c2 = 1.85, eT = 0.0001, qT = 0.3 [83, 84, 85]. For the optimal robust controller, the gain matrices are designed accordingly.

As we discussed in the previous chapter, control performance is determined by modeling performance as well as controller performance. Under the condition that the controller is stable, the control performance will eventually converge to the minimum of the modeling error. In order to deliver the results more clearly, we show the figures in


Figure 5-18. Performance of the PID controller with different modeling approaches.

the following sequence: we first show the different-model, same-controller results to verify the effect of the modeling part, then the same-model, different-controller results.

Figure 5-18 displays, under the noise-free condition, the set-point control performance of the PID controller with different modeling approaches, plus a TDNN controller as a reference. Recalling the system function (5-8), the region around −1.5 is where the nonlinear dynamics takes the largest effect. Figure 5-18 clearly shows that: 1) the single-model PID (PID-MA) has a longer settling time and the largest oscillation before convergence because of its modeling performance; 2) the TDNN, due to its structural difference, has the shortest falling time yet the longest settling time; 3) the Gaussian mixture model PID controller (MPID-GMM) has performance very close to the SOM multiple-model PID controller (MPID-SOM); 4) MPID-SOM gives the most stable performance but needs more training. For now we can conclude that GMM-LLM, compared with other modeling approaches, can save much computational cost while largely preserving the final control performance.
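The discrete PID controllers compared above can be sketched as a standard textbook update; the gains below are placeholders, not the pole-placed values used in the thesis:

```python
# Hedged sketch of a discrete PID update; kp, ki, kd are placeholder gains,
# not the pole-placement results of the experiments above.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.acc = 0.0       # integral accumulator
        self.prev = None     # previous error for the derivative term

    def step(self, error):
        self.acc += error * self.dt
        diff = 0.0 if self.prev is None else (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.acc + self.kd * diff

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
u0 = pid.step(0.5)           # first call: derivative term is zero
```

In the multiple-model setting, each local linear model contributes its own pole-placed gain set, and the per-model control outputs are blended with the same soft-combination weights used for prediction.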


Figure 5-19. GMM-LLM based control performance on the missile system. (a): Step response without noise (output and desired); (b): Step response with noise (output and desired); (c): Tracking response without noise (output and desired); (d): Tracking response with noise (output and desired).

Now we test the performance of the controllers under the condition that modeling uncertainty and measurement noise are present. The closed-loop nonlinear system is tested in two cases: one with various step changes in the desired output, and one with a smoothly changing desired output, which is the sum of several sinusoids with different amplitudes, phases, and frequencies. To simulate the worst condition, the disturbance is treated as modeling uncertainty. After generating the training data, we add 30 dB SNR noise to the system model during testing. In that case, the modeling


Table 5-5. Control performance comparison on the missile system considering noise

                 Set-point    Set-point       Tracking     Tracking
                 Error Mean   Error Variance  Error Mean   Error Variance
  MPID-GMM       -0.0289      0.0272          0.0070       0.0108
  Optimal        -0.0222      0.0186          0.0048       0.0072
  Sliding mode   -0.0272      0.0202          -0.0024      0.0101

Figure 5-20. ESN control performance on the missile system. (a): Noise-free set-point performance; (b): Set-point performance with 30 dB noise.

effect is negligible and control performance is emphasized. Figure 5-19 shows that the step and tracking performance of all the designed nonlinear control schemes in closed-loop operation with the actual plant model is very satisfactory. The GMM-LLM modeling error is insignificant in the final performance. Compared with the other two approaches, PID needs the least computation for its coefficients. Considering the details of figure 5-19, as listed in table 5-5, the differences among the approaches are noticeable, with the optimal robust controller demonstrating superior disturbance rejection compared with the others.

Finally, we try the ESN controller for set-point control in order to verify its feasibility. Following the structure in chapter 4, we construct a 300-PE ESN controller with a random readout vector. The input to the controller is the current system states without time embedding, and the model information is


derived from the Gaussian mixture reference model. The BPTT length is set to 100 steps, which corresponds to 5 seconds in the real system. The whole system is recursively trained over 15 iterations. The final control performance is shown in figure 5-20. In the figure, we can see that the ESN controller can accomplish the regular control mission, yet its performance is not comparable to the optimal robust controller. Since the advantages of the ESN are its generalization capacity, its simplicity of training, and its independence from time-embedded input, further investigation needs to be made into an ESN-based robust controller.


Figure 5-21. General description of an aircraft.

fight against the shock waves. The waverider shape improves fuel consumption by decreasing air resistance at speeds greater than Mach one. This full-scale aircraft will take off horizontally, then use air-breathing engines to accelerate to a cruising speed of Mach five at a very high altitude, and it will end its flight by landing on a conventional runway. The task here is to develop modeling and control strategies for LoFLYTE based solely on input-output data.

According to classical aerodynamics, the flight dynamics of any rigid body are determined by movements along and around three axes: roll, pitch (longitudinal motion), and yaw (lateral motion). Despite the strong coupling between system states, the main control effects can still be classified. The elevator δe is the main effector for controlling the longitudinal motion state variables (pitch angle, θ, and pitch rate, q). The rudder δr primarily controls the lateral motion state variables (yaw angle, ψ, and yaw rate, r). The aileron δa mainly controls the roll motion state variables (roll angle, φ, and roll rate, p). Finally, the throttle δt largely controls the aircraft's longitudinal speed, and for some planes, deflectable thrust vectors might allow yaw


Figure 5-22. Dynamics of the LoFLYTE system. Left: p (top), r (bottom); Right: control signals, aileron (top), rudder (bottom).

and roll contributions from the engine power in some cases. Under certain symmetry assumptions for the aircraft body, the state dynamics of the rigid-body aircraft can be described as follows:

  u̇ = −(wq − vr) − g sin θ + Fx/m
  v̇ = −(ur − wp) + g cos θ sin φ + Fy/m
  ẇ = −(vp − uq) + g cos θ cos φ + Fz/m
  ṗ = [(Iyy − Izz)qr + Ixz(ṙ + pq) + L]/Ixx
  q̇ = [(Izz − Ixx)rp + Ixz(r^2 − p^2) + M]/Iyy
  ṙ = [(Ixx − Iyy)pq + Ixz(ṗ − qr) + N]/Izz
  φ̇ = p + q sin φ tan θ + r cos φ tan θ
  θ̇ = q cos φ − r sin φ
  ψ̇ = q sin φ sec θ + r cos φ sec θ


where u, v, and w are the speed components of the aircraft along its body axes x, y, and z, respectively; p, q, and r are the angular speeds about those axes; and φ, θ, and ψ are the Euler angles that define the rotation matrix between the body coordinate frame and the inertial coordinate frame. The gravity g acts along the down direction of the inertial frame. The throttle, engine power, and aerodynamic effects generate the forces Fx, Fy, and Fz as well as the moments L, M, and N. Here m, Ixx, Iyy, and Izz are the aircraft mass and moments of inertia, respectively, which are determined by the aircraft's geometry.

The LoFLYTE program is an active flight test program at the Air Force Flight Test Center at Edwards Air Force Base. The LoFLYTE aircraft is simulated using software (a C++ version or a Matlab version) by ACC and is assumed to be close to the true plant. Without loss of generality, the throttle is assumed constant and the state variables p, q, r, u, v, w are available for external measurement. The goal of the system identification and control problem is to determine local linear models from the four inputs (aileron, elevator, rudder, and throttle) to the six state variables and to control them in order to track a desired flight trajectory.

In order to simplify the problem, the longitudinal motion of the system, along axis x (forward) and z (down) with pitch rate q, is considered decoupled from the lateral motion, consisting of the axis y (right), roll rate p, and yaw rate r. Instead of dealing


Figure 5-23. System identification performance on the LoFLYTE system. (a): desired p and GMM-LLM prediction (top), error (bottom); (b): desired r and GMM-LLM prediction (top), error (bottom); (c): ESN on p (top), error (bottom); (d): ESN on r (top), error (bottom).

with six variables at the same time, the whole system is broken up into two parts, and the problem is simplified to modeling LoFLYTE's p and r dynamics using input-output data which includes aileron, rudder, p, and r. Figure 5-22 shows the dynamics of the LoFLYTE under the corresponding control signals. The local linear format of the system transfer function can be expressed as


Sample of the weight changes of the GMM kernels for the LoFLYTE system.

  x_{n+1} = A_i x_n + B_i u_n

where x_n = [p_n, r_n]^T, u_n = [δ_{a,n}, δ_{r,n}]^T, and A_i, B_i come from the LLM W̃_n. The training data is constructed accordingly. Through the LoFlyte Matlab model, 6000 data points in total are generated: 4000 are used as training data, 2000 are used for system identification testing, and the last 1000 of the testing data are used as desired signals for controller testing. A 10-cluster GMM is trained on the training data set, and two sets of LLMs are calculated from the data, using the p and r next-step signals as desired signals. Correspondingly, one 300-PE ESN is constructed for the signals p and r with coupled input and without time embedding. The modeling performance is listed in table 5-6. Compared with RBF, GMM is still a comparable and computationally efficient method. Again we emphasize that because of its computational efficiency, GMM-LLM brings a simple, piecewise-linear platform for a nonlinear robust control approach.
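The soft combination that GMM-LLM uses throughout these experiments can be sketched as a responsibility-weighted sum of local linear predictions. The normalization and shapes below are our own reading of the scheme, not the thesis's exact code:

```python
import numpy as np

# Hedged sketch of the GMM-LLM soft combination: the one-step prediction is
# the posterior-responsibility-weighted sum of the local linear model outputs.
def gauss(x, mean, cov):
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt(np.linalg.det(2 * np.pi * cov))
    return np.exp(-0.5 * d @ inv @ d) / norm

def gmm_llm_predict(x, means, covs, priors, A):
    """A[i] is the coefficient vector of the i-th local linear model."""
    w = np.array([p * gauss(x, m, c) for p, m, c in zip(priors, means, covs)])
    w = w / w.sum()                      # posterior responsibilities
    return sum(wi * (a @ x) for wi, a in zip(w, A))

# Two toy kernels in a 2-D regressor space
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
A = [np.array([0.9, 0.1]), np.array([0.5, 0.5])]
y = gmm_llm_predict(np.array([0.2, 0.1]), means, covs, priors, A)
```

Because the weights are a convex combination, the blended prediction always lies between the individual local model outputs, which is what the "weight change" figures in this chapter visualize over time.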


Table 5-6. Performance on the LoFLYTE simulation data

  MODEL (# of PEs)   # of parameters   SER on p (dB)   SER on r (dB)
  RBF (100)          908 each          41.33           49.98
  SOM (64)           1536              42.17           50.74
  GMM (10)           330               40.09           50.58
  ESN (300)          600               44.55           53.62

We now study the robust control problem with GMM-LLM local modeling. Based on GMM-LLM, we will apply a second-order inverse controller and the optimal robust controller. When we design the controllers, we assume minimum coupling between the lateral and longitudinal movements. Here we perform a simulation to control the roll rate (p) and the yaw rate (r) of the LoFlyte by aileron and rudder, setting the elevator to zero and the throttle to a certain constant. The design of the optimal robust controller follows the details in the previous chapter. For the inverse controller, once the LLMs are available for p and r along with their desired values, the second-order dynamic inverse controller can be expressed as


error. The cost functions for the optimal robust controller for both dimensions are designed accordingly.

Table 5-7. Control performance comparison on the LoFlyte system considering noise

            Set-point       Set-point       Tracking        Tracking
            Error Var. (p)  Error Var. (r)  Error Var. (p)  Error Var. (r)
  Inverse   0.0802          0.0026          0.1833          0.0060
  Optimal   0.0144          0.0002          0.0322          0.0018

The control mission is divided into set-point regulation, which sets the system dynamics to a set of fixed values, and arbitrary output tracking, which comes from part of the data collected in the previous section. To assess robustness, 30 dB SNR noise is added to the dynamics of the system as modeling uncertainty. The comparison of control performance is shown in figures 5-25, 5-26, 5-27, and 5-28, with details in table 5-7. It is clear that in the noise-free case, both controllers have similar transient and long-term performance. In the first five seconds, we find strong coupling taking effect such that the two parameters converge at about the same time. In the case of noise, however, the optimal robust controller has an obvious advantage over the second-order inverse controller. As shown in the comparison figures 5-27 and 5-28, the control result from the optimal robust controller has a clearly smaller error variance when 30 dB noise is present in both dimensions as modeling uncertainty.


Figure 5-25. Control performance on the LoFLYTE system (set-point, noise-free). (a) Inverse controller performance; (b) Optimal robust controller performance. (Track of p (top-left), track of r (top-right), control signal δa (bottom-left), and control signal δr (bottom-right).)


Figure 5-26. Control performance on the LoFLYTE system (tracking, noise-free). (a) Inverse controller performance; (b) Optimal robust controller performance. (Track of p (top-left), track of r (top-right), control signal δa (bottom-left), and control signal δr (bottom-right).)


Figure 5-27. Robust control performance on the LoFLYTE system (set-point, 30 dB SNR noise). (a) Inverse controller performance; (b) Optimal robust controller performance. (Track of p (top-left), track of r (top-right), control signal δa (bottom-left), and control signal δr (bottom-right).)


Figure 5-28. Robust control performance on the LoFLYTE system (tracking, 30 dB SNR noise). (a) Inverse controller performance; (b) Optimal robust controller performance. (Track of p (top-left), track of r (top-right), control signal δa (bottom-left), and control signal δr (bottom-right).)


Local modeling approaches have attracted large interest because they have the ability to explain the local dynamics of a nonlinear system, which is difficult especially in cases where the nonlinear dynamical system characteristics vary quickly throughout the state space. In a number of cases, especially under the black-box approach, local modeling has been proven to be a more effective approximation methodology than global approaches.


Another important factor of this scheme is the use of a Growing-SOM-like algorithm for the initial distribution calculation of the GMM's weights. With the help of Growing-SOM, the first benefit is that the convergence time and speed are globally optimized. Second, since the Growing-SOM's weights and the GMM's means hold the same physical meaning, the GMM is provided with a flexible cluster-growing framework, which is extremely useful when adding more Gaussian clusters becomes necessary.

Among the several LLM-based controllers, the optimal robust controller gives one of the best results we can obtain. The problem for reference-model-based predictive control is that when the reference model has error, the control will have error too. Especially when modeling uncertainty and measurement noise are present, the control performance decreases dramatically. The optimal robust controller considers the disturbance effects in a long-term sense, and thus reduces the noise effect to the largest extent.

1. Besides the conclusions on precision and robustness analysis for controllers based on GMM-LLM, adaptive disturbance cancellation approaches should be further explored, as model precision is always the prior concern in our modeling-control framework.

2. ESN modeling does not need embedding at the input. Yet ESN controller training is still not a straightforward solution for many problems. Stable convergence of ESN controller training will be an interesting question to ask.

3. Optimal local modeling. GMM can decrease quantization error, yet LLM still needs to make a trade-off between the effective region and the curse of dimensionality.


Nonlinear modeling needs a more complex training effort, but it has a more flexible modeling capability.

4. Other controller design methodologies. The accomplished controller design follows the same soft-combination criterion from Gaussian local modeling, which is trained through the input space. An innovative attempt could be made to design the controller through the input-output space.


The current parameter design procedure for the ESN follows Jaeger's stability criterion enhanced with Ozturk's entropy criterion, which sets the spectral radius less than one and maximizes the entropy of the internal weights. The advantage of the above method is its simplicity: the only major factors we can control are the number of internal weights and the maximum spectral radius. Based on Ozturk's approach, the eigenvalues of the ESN will be evenly distributed within the circle defined by the maximum spectral radius. Since we do not have any further control over the internal weights, the optimal ESN performance is largely limited by its size and eigenvalue distribution. Considering the computational cost of the readout weights and possible memory limits in certain experimental environments, the ESN's size cannot be increased beyond reasonable limits. Under the condition of stability, the maximum spectral radius also cannot be set to a value larger than "1". Thus there is one possible modification that can be made to the ESN design: the distribution of its spectral radius.

According to the definition, the maximum spectral radius of the ESN corresponds to its memory length. When the embedding length requirement is larger than the ESN memory length, the ESN starts to "forget" the input. What we want is to increase the average memory length of the ESN without increasing the related training cost.

From figure 2-2, the weights with the largest spectral radius within the ESN only account for a small portion of the whole set. We can also understand those locations as poles


Figure A-1. Distribution of the improved ESN internal weights with maximum spectral radius 0.9, with the unit circle as a reference.

of the linearized system. Since the poles of the target system are unknown, we found in experiments that the modeling performance will decrease when there is not a "reasonable" number of ESN poles located near the system poles. We propose here a "dominant pole" approximation for ESN design, i.e., we will place most of the poles near the largest spectral radius. Considering that the target system might have several poles (eigenvalues) distributed near the maximum value, and that the maximum poles dominate the internal embedding length, a larger number of poles close to the maximum eigenvalue could be a good answer for system identification. Also, we found that when a large portion of the ESN poles are close to the maximum spectral radius, the weight entropy does not decrease appreciably compared with the maximum entropy from evenly distributed poles. Meanwhile, in cases where the dominant pole is a reasonable approximation, the ESN modeling performance is also improved. Comparing


figure 2-2 and figure A-1, we find that more than 80% of the poles in figure A-1 are equal in magnitude to the maximum spectral radius.

The realization of such an optimized ESN weight matrix can easily be achieved through multiple-root design. First, we design a regular ESN, then calculate the characteristic polynomial of its eigenvalues. We set a_i = 0 for a set of indices i (i > d) in order to make the polynomial have multiple complex roots with identical magnitude and different phase. The new polynomial s^n + a_1 s^{n−1} + ... + a_d s^{n−d} = 0 then determines how many poles are complex with magnitudes close to the maximum eigenvalue. In order to keep the poles of the new polynomial as the poles of the ESN, we re-arrange the weight matrix as in (A-1). The new matrix will hold the same eigenvalues as the polynomial, and since we can design the roots of the polynomial, we "design" the ESN's internal weights. One possible problem with the modified ESN design is that it may not be appropriate for all dynamic-system modeling problems. Cases where the dominant-pole approximation is valid should yield better performance; for instance, when the task is limited by long-term internal memory, the new criterion is appropriate. In cases where the design requires modeling


of different time constants, the proposed ESN's modeling capacity might decrease relative to the old approach.
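The multiple-root idea of appendix A can be illustrated with a companion matrix: the companion form of s^n − r^n has n eigenvalues of identical magnitude r and evenly spaced phases. Choosing this particular polynomial is our own simplification of the re-arrangement described above:

```python
import numpy as np

# Hedged illustration of the multiple-root design: build a companion matrix
# whose characteristic polynomial is s^n - r^n, so all n eigenvalues (the
# "ESN poles") sit on the circle of radius r (the maximum spectral radius).
def companion(coeffs):
    """Companion matrix of the monic polynomial s^n + c1 s^(n-1) + ... + cn."""
    n = len(coeffs)
    W = np.zeros((n, n))
    W[0, :] = -np.asarray(coeffs, dtype=float)
    W[1:, :-1] = np.eye(n - 1)          # sub-diagonal of ones
    return W

n, r = 20, 0.9
coeffs = np.zeros(n)
coeffs[-1] = -r ** n                    # characteristic polynomial s^n - r^n
W = companion(coeffs)
radii = np.abs(np.linalg.eigvals(W))    # all equal to r = 0.9
```

Mixing a few nonzero low-order coefficients back into `coeffs` moves a minority of poles off the circle, which matches the "more than 80% at the maximum radius" distribution shown in figure A-1.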


We have discussed the modeling properties of the ESN in chapter 2. Considering its RNN-related modeling capability, the ESN can act as both model approximator and controller, as shown in figure B-1. When the ESN is designed for system identification, as we will show in the following sections, it can make short-term prediction more easily than the GMM-LLM framework, and the ESN's modeling performance will not be worse than that of Gaussian modeling because of its structural advantages. In terms of model-description simplicity, however, the ESN is not an optimal choice for system identification. Under the local-modeling control framework, the LLM gives the simplest linear description of the system dynamics. Through ESN modeling, it would take much more effort to back-propagate the model information from the ESN internal weights. In some cases, the back-propagated information may diverge from the actual system dynamics, and the reference model loses its local modeling capability. Most of the time, a lower-order estimate of the nonlinear model will not satisfy the precision requirement. Consequently, we will focus on ESN-based controller design in this section.

Currently there are few papers discussing applications of the ESN to control [32, 33]. Structurally, ESN control can be grouped with neural control. In both papers, the ESN controller is trained when the desired control signal is available, thereby reducing the control problem setting to that of the standard modeling problem. In other words, the ESN is trying to model another controller instead of controlling the target


Figure B-1. Demonstration system with ESN reference model and controller.

system. Though in some applications inference of the desired control signal can be done, in many others such inference is either infeasible or unreliable [34].

Model-based control is a real challenge for the ESN. The main difficulty comes from the requirement that the desired controls be known for ESN training. The desired control values can sometimes be inferred from the desired outputs if the transformation from the controls to the outputs has a single-valued inverse and relative degree one. However, the plant model might not be invertible analytically. Furthermore, calculating the desired control signal numerically might be time-consuming, brittle, and prone to converging to local minima. Generally speaking, determining desired control signals should be replaced by the use of back-propagation through time (BPTT) to compute the sensitivities of the plant model outputs to changes in its control inputs. The incurred additional computational complexity may all but nullify the main advantage of the ESN.

An alternative scheme of indirect control with the ESN would place the ESN under the method of derivative adaptive critics (DAC), such as dual heuristic programming


(DHP). DAC is derived from adaptive critic designs (ACD), which include heuristic dynamic programming (HDP), DHP, globalized dual heuristic programming (GDHP), and others. Prokhorov in [3] showed that a similarity exists between a particular form of BPTT and the popular DAC formulation for general neural networks. This similarity enables us to create a hybrid BPTT- and ESN-based DAC in which the DAC provides derivative estimates from future time steps.

We sketch an alternative scheme of indirect control with the ESN. Without linearizing the system function, many control systems can be expressed as affine in the controls u(n):

x(n+1) = H(x(n)) + G(x(n)) u(n),   (B-1)

where H is the vector function and G is the control matrix. Setting the instantaneous performance measure as in (4-19) with Q > 0, R > 0, an implementable parameterization of the controller may be expressed using the ESN with output vector as the product

u(n) = W_out^T s(n),   (B-2)

where s(n) is the reservoir state. The discrete closed-loop system runs forward, in the BPTT manner, with the sequence of control signals {u(n)}, n = n_0, ..., n_max, provided by the ESN controller with a constant initial readout vector W_out. In order to train the ESN's readout vector, we employ the dual equations as the desired ESN output


by going backwards from n = n_max, where we set the endpoint desired value λ(n+1), to the starting point n = n_0. The ESN readout-vector training errors can then be derived from the ESN output. The error is fed into the MSE training criterion to update the ESN readout weights. The whole procedure is as follows: run the simulation forward to collect data (system outputs and ESN outputs); go backward to update W_out with the errors (B-3) and (B-4); repeat until W_out converges. Preliminary experiments indicate that the ESN-based derivative adaptive critic is viable, especially for control of distributed-parameter systems.

As for the model information H and G, the ESN-based approach has two options. One is the approach used by the other controllers discussed in previous sections, in which the GMM-LLM supplies a piecewise local linear estimate. The GMM-LLM part brings an additional computational requirement, yet the whole framework is designed in a more traditional style, as shown in Figure B-2. The other option is to use one ESN for both system identification and control. In this case the ESN has a single column of inputs and two independent readout weight matrices, corresponding to the state prediction ỹ(n+1) and the control cost prediction λ(n+1). For the state prediction part, the ESN model function can be rewritten from (2-10) as

ỹ(n+1) = W_out^T f[W_in X(n) + W s(n)].   (B-5)
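To make the forward/backward training procedure described above concrete, the following sketch applies it to a deliberately simplified setting: the plant is scalar and linear, and the ESN controller is collapsed to a single trainable readout weight w acting directly on the state, so the backward (dual) sweep stays short. The plant coefficients a and b, the cost weights Q and R, the horizon, the learning rate, and the iteration count are all illustrative assumptions, not values from this work.

```python
import numpy as np

# Forward/backward (BPTT-style) training of a scalar feedback gain w,
# standing in for the ESN readout W_out.  All constants are illustrative.
a, b, Q, R = 0.95, 0.5, 1.0, 0.1       # plant x(n+1) = a x(n) + b u(n)
n_max, lr = 30, 0.2                    # horizon and step size
w = 0.0                                # trainable readout weight

for it in range(300):
    # Forward pass: run the closed loop, recording states and controls.
    x = np.empty(n_max + 1); x[0] = 1.0
    u = np.empty(n_max)
    for n in range(n_max):
        u[n] = w * x[n]
        x[n + 1] = a * x[n] + b * u[n]
    # Backward pass: dual variables lam(n) = dJ/dx(n), from n_max down,
    # accumulating the gradient of the summed cost Q x^2 + R u^2 w.r.t. w.
    lam, grad = 0.0, 0.0
    for n in reversed(range(n_max)):
        lam_n = 2 * Q * x[n] + 2 * R * u[n] * w + lam * (a + b * w)
        grad += (2 * R * u[n] + lam * b) * x[n]
        lam = lam_n
    w -= lr * grad / n_max             # MSE-style readout update

print(w)  # a negative feedback gain; the closed loop a + b*w is stable
```

Each iteration mirrors the procedure in the text: a forward rollout of the closed loop, then a backward sweep that propagates the dual variables and yields the update for the readout weight.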


Figure B-2. Schematic diagram of the control system with the ESN as controller, based on Gaussian models

Considering the small random values of the input weights W_in, the state prediction can be locally modified as a first-order Taylor expansion

ỹ(n+1) = W_out^T [f(W s(n)) + f'(W s(n)) W_in X(n)].   (B-6)

Considering that X(n) = [x(n); u(n)]^T, and that f'(·) = cosh^{-2}(·) for f = tanh, the local model information H and G can be derived from (B-6) as

[H; G]^T = W_out^T W_in / cosh^2(W s(n)).

We note that the calculation of the local model information [H; G] is based mainly on the ESN parameters. A first-order estimate cannot guarantee the precision


of the reference model, yet this kind of calculation does not bring much computational cost. Considering the high modeling efficiency of the GMM-LLM framework, we will make use of model information from the GMM-LLM for the ESN controller in the following chapter.
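The first-order extraction in (B-6) can be illustrated on a toy ESN: with f = tanh, so that f'(z) = cosh^{-2}(z), the sensitivity of the prediction to the extended input X(n) = [x(n); u(n)] is W_out^T diag(cosh^{-2}(W s(n))) W_in, whose blocks give the local H and G. The sketch below forms this Jacobian and checks it against a finite difference; all sizes and weight values are illustrative assumptions.

```python
import numpy as np

# Toy ESN: f = tanh, small random input weights, random reservoir state.
rng = np.random.default_rng(1)
N, n_x, n_u = 20, 2, 1                 # reservoir, state, control dims

W_in = 0.1 * rng.standard_normal((N, n_x + n_u))
W = 0.5 * rng.standard_normal((N, N))
W_out = rng.standard_normal((N, n_x))  # readout for the state prediction
s = rng.standard_normal(N)             # current reservoir state s(n)

# Jacobian of the prediction w.r.t. X(n), evaluated at X(n) = 0:
# W_out^T diag(cosh^{-2}(W s(n))) W_in, shape (n_x, n_x + n_u).
z = W @ s
J = W_out.T @ (W_in / np.cosh(z)[:, None] ** 2)
H_loc, G_loc = J[:, :n_x], J[:, n_x:]  # local linear model blocks [H; G]

# Finite-difference check of the same sensitivity.
def esn_out(X):
    return W_out.T @ np.tanh(W_in @ X + W @ s)

eps = 1e-6
X0 = np.zeros(n_x + n_u)
fd = np.column_stack([(esn_out(X0 + eps * e) - esn_out(X0)) / eps
                      for e in np.eye(n_x + n_u)])
print(np.max(np.abs(fd - J)))          # close to zero
```

The split of J into H_loc and G_loc corresponds to the partition of X(n) into state and control components, matching the [H; G] expression above.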


[1] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1(1), pp. 698–706, March 1990.

[2] J. Lan, J. Cho, D. Erdogmus, J. C. Principe, M. Motter, and J. Xu, "Local linear PID controllers for nonlinear control," International Journal of CIS, Special Issue on Nonlinear Adaptive PID Control, vol. 33(1), pp. 26–35, January 2005.

[3] J. Si, A. G. Barto, W. B. Powell, and D. C. Wunsch, Learning and Approximate Dynamic Programming, John Wiley & Sons, New York, 2004.

[4] D. Erdogmus, J. Cho, J. Lan, M. Motter, and J. C. Principe, "Adaptive local linear modeling and control of nonlinear dynamical systems," in Intelligent Control Systems Using Computational Intelligence Techniques, A. Ruano, Ed., pp. 121–152. IEE, 2004.

[5] J. C. Principe, L. Wang, and M. A. Motter, "Local dynamic modeling with self-organizing maps and applications to nonlinear system identification and control," in Proceedings of the IEEE, 1998, pp. 2240–2258.

[6] J. Zhang and A. J. Morris, "Recurrent neuro-fuzzy networks for nonlinear process modeling," IEEE Transactions on Neural Networks, vol. 10(2), pp. 313–326, March 1999.

[7] M. Zhang and T. Tarn, "A hybrid switching control strategy for nonlinear and underactuated mechanical systems," IEEE Transactions on Automatic Control, vol. 48(10), pp. 1777–1782, October 2003.

[8] J. D. Boskovic and K. S. Narendra, "Comparison of linear, nonlinear, and neural network based adaptive controllers for a class of fed-batch fermentation processes," Automatica, vol. 31(6), pp. 817–840, 1995.

[9] C. T. Chen, Introduction to Linear System Theory, Holt, Rinehart, and Winston, New York, 1970.

[10] W. T. Miller, "Real-time neural network control of a biped walking robot," IEEE Control Systems Magazine, vol. 14(1), pp. 41–48, February 1994.

[11] T. Kohonen, Self-Organizing Maps, Springer, New York, 1995.

[12] S. C. Douglas, Exact Expectation Analysis of the LMS Adaptive Filter for Correlated Gaussian Input Data, Ph.D. dissertation, MIT, Cambridge, MA, 2000.


[13] R. C. Dorf and R. H. Bishop, Modern Control Systems, Addison-Wesley, New York, NY, 1998.

[14] K. Ogata, Modern Control Engineering, Prentice Hall, New York, NY, 2001.

[15] B. Schoner, "Probabilistic characterization and synthesis of complex driven systems," in Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, Minneapolis, MN, 1993, pp. 519–522.

[16] J. D. Farmer and J. J. Sidorowich, "Predicting chaotic time series," Physical Review Letters, vol. 59(8), pp. 845–848, 1987.

[17] T. Sauer, "Time series prediction by using delay coordinate embedding," in Santa Fe Institute Studies in the Sciences of Complexity, pp. 175–193. Addison-Wesley, Boston, MA, 1997.

[18] J. Cho, J. Lan, G. Thampi, J. C. Principe, and M. A. Motter, "Identification of aircraft dynamics using a SOM and local linear models," in IEEE International Conference MWSCAS, Tulsa, OK, 2002, pp. 148–151.

[19] J. C. Principe and L. Wang, "Nonlinear time series modeling with self-organizing feature maps," in Proceedings of Neural Networks for Signal Processing, Cambridge, MA, 1995, pp. 11–20.

[20] M. A. Motter, Control of the NASA Langley 16-Foot Transonic Tunnel with the Self-Organizing Feature Map, Ph.D. dissertation, University of Florida, Gainesville, FL, 1997.

[21] E. W. Weisstein, "Dynamical system," MathWorld website, 2006.

[22] F. Takens, "Detecting strange attractors in turbulence," Dynamical Systems and Turbulence, 1981, Springer Lecture Notes in Mathematics.

[23] T. Buzug and G. Pfister, "Optimal delay time and embedding dimension for delay-time coordinates by analysis of the global static and local dynamical behavior of strange attractors," Physical Review, vol. 10(15), pp. 7073–7084, May 1992.

[24] J. Gao and Z. Zheng, "Direct dynamical test for deterministic chaos and optimal embedding of a chaotic time series," Physical Review, vol. 5, pp. 3807–3814, May 1994.

[25] X. He and H. Asada, "A new method for identifying orders of input-output models for nonlinear dynamic systems," in Proceedings of the American Control Conference, San Francisco, CA, 1999, pp. 2520–2523.

[26] A. M. Fraser and H. L. Swinney, "Independent coordinates for strange attractors from mutual information," Physical Review A, vol. 33(2), pp. 1134–1140, February 1986.


[27] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time delay neural networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37(3), pp. 328–339, July 1989.

[28] I. W. Sandberg and L. Xu, "Uniform approximation of multi-dimensional myopic maps," IEEE Transactions on Circuits and Systems, vol. 44(6), pp. 477–485, June 1997.

[29] P. Fuhrmann, "Duality in polynomial models with some applications to geometric control theory," IEEE Transactions on Automatic Control, vol. 26(1), pp. 284–295, February 1981.

[30] J. He and O. P. Malik, "An adaptive power system stabilizer based on recurrent neural networks," IEEE Transactions on Energy Conversion.

[31] M. Kimura and R. Nakano, "Learning dynamical systems by recurrent neural networks from orbits," Neural Networks, vol. 11(9), pp. 1589–1600, 1998.

[32] J. Hertzberg, H. Jaeger, and F. Schonherr, "Learning to ground fact symbols in behavior-based robots," in Proceedings of the 15th European Conference on Artificial Intelligence, Lyon, France, July 2002, pp. 708–712.

[33] P. Joshi and W. Maass, "Movement generation with circuits of spiking neurons," Neural Computation, vol. 17(8), pp. 1715–1738, 2005.

[34] D. Prokhorov, "Echo state networks: Appeal and challenges," in IEEE International Joint Conference on Neural Networks, 2005, pp. 1463–1466.

[35] M. C. Ozturk, D. Xu, and J. C. Principe, "Analysis and design of echo state networks," Neural Computation, vol. in press, 2006.

[36] H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks," Tech. Rep. GMD Report 148, German National Research Center for Information Technology, http://www.faculty.iu-bremen.de/hjaeger/pubs.html, June 2001.

[37] H. Jaeger, "Short term memory in echo state networks," Tech. Rep. GMD Report 152, German National Research Center for Information Technology, http://www.faculty.iu-bremen.de/hjaeger/pubs.html, August 2001.

[38] Y. N. Rao, S. P. Kim, J. C. Sanchez, D. Erdogmus, J. C. Principe, J. M. Carmena, M. A. Lebedev, and M. A. Nicolelis, "Learning mapping in brain machine interfaces with echo state networks," in IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005, pp. 233–236.
[39] M. Casdagli, "Nonlinear prediction of chaotic time series," Physica D, vol. 35, pp. 335–356, 1989.


[40] M. B. Priestley, "State-dependent models: A general approach to nonlinear time series analysis," Journal of Time Series Analysis, vol. 1(1), pp. 47–71, 1980.

[41] C. Li and C. Lee, "Self-organizing neuro-fuzzy system for control of unknown plants," IEEE Transactions on Fuzzy Systems, vol. 11(1), pp. 135–150, February 2003.

[42] S. Haykin, Neural Networks, Prentice Hall, New Jersey, July 1998.

[43] J. Minkoff, Signal Processing Fundamentals and Applications for Communications and Sensing Systems, Artech House, Boston, MA, 2002.

[44] T. Takagi and M. Sugeno, "Fuzzy identification of systems and its application to modeling and control," IEEE Transactions on Systems, Man, and Cybernetics, vol. 15(1), pp. 116–132, 1985.

[45] M. A. F. Figueiredo and A. K. Jain, "Unsupervised learning of finite mixture models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24(3), pp. 381–396, March 2002.

[46] A. Dempster, N. Laird, and D. Rubin, "Maximum-likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, vol. Series B 39(1), pp. 1–38, 1977.

[47] C. M. Bishop and M. Svensen, "GTM: The generative topographic mapping," Neural Computation, vol. 10(1), pp. 215–234, 1998.

[48] G. Patane and M. Russo, "The enhanced LBG algorithm," IEEE Transactions on Neural Networks, vol. 14(9), pp. 1219–1237, November 2001.

[49] N. Vlassis and A. Likas, "A kurtosis-based dynamic approach to Gaussian mixture modeling," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 29(4), pp. 393–399, July 1999.

[50] Y. Linde, A. Buzo, and R. Gray, "An algorithm for vector quantizer design," IEEE Transactions on Communications, vol. 28(1), pp. 84–94, January 1980.

[51] A. Gersho, "Asymptotically optimal block quantization," IEEE Transactions on Information Theory, vol. 25(7), pp. 373–380, July 1979.

[52] C. Chinrungrueng and C. Sequin, "Optimal adaptive k-means algorithm with dynamic adjustment of learning rate," IEEE Transactions on Neural Networks, vol. 6(1), pp. 157–169, 1995.

[53] B. Fritzke, "Growing cell structures - a self-organizing network for supervised and unsupervised learning," IEEE Transactions on Neural Networks, vol. 7(9), pp. 1441–1460, 1994.


[54] M. M. Polycarpou, "Stable adaptive neural control scheme for nonlinear systems," IEEE Transactions on Automatic Control, vol. 41(3), pp. 447–451, March 1996.

[55] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45(9), pp. 1669–1686, September 2000.

[56] F. Delmotte, L. Dubois, and P. Borne, "Adaptive multi-model controller using trust," in IEEE International Conference on Systems, Man and Cybernetics, 1995, pp. 4155–4160.

[57] B. Boulet, B. A. Francis, P. C. Hughes, and T. Hong, "Uncertainty modeling and experiments in H-infinity control of large flexible space structures," IEEE Transactions on Control Systems Technology, vol. 51(5), pp. 504–519, September 1997.

[58] D. Hyland and D. Bernstein, "The majorant Lyapunov equation: A nonnegative matrix equation for robust stability and performance of large scale systems," IEEE Transactions on Automatic Control, vol. 32(11), pp. 1005–1013, November 1978.

[59] G. Conte and A. Perdon, "A geometric approach to the theory of 2-D systems," IEEE Transactions on Automatic Control, vol. 33(10), pp. 946–950, October 1988.

[60] F. Lin, R. D. Brand, and J. Sun, "Robust control of nonlinear systems: Compensating for uncertainty," Int. Journal on Control, vol. 56(6), pp. 1453–1495, 1992.

[61] F. Lin and R. D. Brand, "An optimal control approach to robust control of robot manipulators," IEEE Transactions on Robotics and Automation, vol. 14(1), pp. 69–77, February 1998.

[62] L. Chen and K. S. Narendra, "Intelligent control using neural networks and multiple models," in Proceedings of the 41st IEEE Conference on Decision and Control, 2002, pp. 1357–1362.

[63] D. G. Lainiotis, "Partitioning: A unifying framework of adaptive systems, I: Estimation, II: Control," Proceedings of the IEEE, vol. 64(8), pp. 1126–1142, August 1976.

[64] D. G. Lainiotis, "Multi-model partitioning: the multi-model evolutionary framework for intelligent control," in Proceedings of the 2000 IEEE International Symposium, July 2000, pp. 15–20.

[65] K. S. Narendra, "Neural networks for control: Theory and practice," Proceedings of the IEEE, vol. 84(10), pp. 1385–1406, 1996.


[66] D. A. White and D. A. Sofge, Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, Van Nostrand Reinhold, New York, 1993.

[67] B. Eulrich, D. Andrisani, and D. G. Lainiotis, "Partitioning identification algorithms," IEEE Transactions on Automatic Control, vol. 25(3), pp. 521–528, June 1980.

[68] T. Yin and C. S. G. Lee, "Fuzzy model-reference adaptive control," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25(12), pp. 1606–1615, December 1995.

[69] K. Tanaka, M. Iwasaki, and H. O. Wang, "Switching control of an R/C hovercraft: Stabilization and smooth switch," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 31(6), pp. 853–863, December 2001.

[70] J. P. Hespanha, "Stabilization of nonholonomic integrators via logic-based switching," Automatica, vol. 35, pp. 385–393, March 1999.

[71] J. Cho, Local Modeling Paradigm Based on Self-Organizing Map for Nonlinear Nonautonomous Systems, Ph.D. dissertation, University of Florida, Gainesville, FL, December 2004.

[72] J. Y. Hung, W. Gao, and J. C. Hung, "Variable structure control: A survey," IEEE Transactions on Industrial Electronics, vol. 40(1), pp. 2–22, February 1993.

[73] R. E. Brown, G. N. Maliotis, and J. A. Gibby, "PID self-tuning controller for aluminum rolling mill," IEEE Transactions on Industry Applications, vol. 29(3), pp. 578–583, 1993.

[74] S. Skoczowski, S. Domek, K. Pietrusewicz, and B. Broel-Plater, "A method for improving the robustness of PID control," IEEE Transactions on Industrial Electronics, vol. 52(6), pp. 1669–1676, December 2005.

[75] J. N. Tsitsiklis, "Efficient algorithms for globally optimal trajectories," IEEE Transactions on Automatic Control, vol. 40(9), pp. 1528–1538, September 1995.

[76] S. Yashiki and N. Matsumoto, "Stabilization and optimal regulator problem for time-delay systems based on the 2-D Riccati matrix equation," Electronics and Communications in Japan, Part 3, vol. 81(1), pp. 1–12, January 1998.

[77] E. J. Klein and W. F. Ramirez, "State controllability and optimal regulator control of time-delayed systems," Int. Journal of Control, vol. 74(3), pp. 281–289, 2001.

[78] T. Yang and L. O. Chua, "Generalized synchronization of chaos via linear transformations," Int. Journal of Bifurcation and Chaos, vol. 9(1), pp. 215–219, February 1999.


[79] T. Yang, C. Yang, and L. Yang, "Detailed study of adaptive control of chaotic systems with unknown parameters," Dynamics and Control, vol. 8(3), pp. 255–267, March 1998.

[80] X. Yu, "Tracking inherent periodic orbits in chaotic dynamic systems via adaptive variable time-delayed self control," IEEE Transactions on Circuits and Systems I, vol. 46(11), pp. 1408–1411, November 1999.

[81] Y. Tian and X. Yu, "Adaptive control of chaotic dynamical systems using invariant manifold approach," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47(10), pp. 1537–1542, October 2000.

[82] J. Gonzalez, R. Femat, J. Alvarez-Ramirez, R. Aguilar, and M. Barron, "A discrete approach to the control and synchronization of a class of chaotic oscillators," IEEE Transactions on Circuits and Systems I, vol. 46(9), pp. 1139–1144, September 1999.

[83] A. Bartoszewicz, "Discrete-time quasi-sliding-mode control strategies," IEEE Transactions on Industrial Electronics, vol. 45(4), pp. 633–637, August 1998.

[84] W. Gao, Y. Wang, and A. Homaifa, "Discrete-time variable structure control systems," IEEE Transactions on Industrial Electronics, vol. 42(2), pp. 117–122, April 1995.

[85] L. C. Westphal, "Lessons from an example in 'On the stability of discrete-time sliding mode control systems'," IEEE Transactions on Automatic Control, vol. 44(7), pp. 1444–1445, July 1999.

[86] E. A. Ooten and W. Singhose, "Command generation with sliding mode control for flexible systems," in Proceedings, 6th International Workshop on Advanced Motion Control, Honolulu, HI, 2000, pp. 64–68.


Jing Lan was born in Beijing, People's Republic of China, on June 16, 1973. He received his B.S. degree from Beijing University of Aeronautics and Astronautics, majoring in automatic control, in 1996. He received his M.S. degree at Tsinghua University, majoring in automation, in 1999. Since 2001, he has been with the Computational NeuroEngineering Laboratory at the University of Florida to pursue his Ph.D. degree. His research interests include nonlinear system identification, Gaussian mixture model prediction, and neural networks and adaptive filters in signal processing.