Citation
Global optimization algorithms for adaptive infinite impulse response filters

Material Information

Title:
Global optimization algorithms for adaptive infinite impulse response filters
Creator:
Lai, Ching-An ( Author, Primary )
Place of Publication:
Gainesville, Fla.
Publisher:
University of Florida
Publication Date:
Copyright Date:
2002
Language:
English

Subjects

Subjects / Keywords:
Adaptive filters ( jstor )
Algorithms ( jstor )
Cost functions ( jstor )
Data smoothing ( jstor )
Entropy ( jstor )
Error rates ( jstor )
IIR filters ( jstor )
Local minimum ( jstor )
Signals ( jstor )
System identification ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Lai, Ching-An. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
12/27/2005
Resource Identifier:
53334197 ( OCLC )



Full Text










GLOBAL OPTIMIZATION ALGORITHMS FOR ADAPTIVE INFINITE IMPULSE RESPONSE FILTERS

















By

CHING-AN LAI


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2002














ACKNOWLEDGMENTS

First and foremost, I wish to acknowledge my advisor, Dr. José C. Principe, for providing excellent guidance throughout the development of this dissertation. I also wish to thank Deniz Erdogmus for the invaluable discussions on information theory.

I also wish to thank the members of my committee, Dr. Haniph A. Latchman, Dr. John M. M. Anderson, Dr. Yuguang Fang, and Dr. Murali Rao, for their insightful comments on this dissertation. I would also like to thank my former advisor, Dr. William W. Edmonson, for his kind support of my study.














TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

1.1 Motivation
1.2 Literature Survey
1.2.1 Adaptive Filtering
1.2.2 Optimization Method
1.2.3 Proposed Optimization Method
1.3 Outline

2 ADAPTIVE IIR FILTERING

2.1 Introduction
2.2 System Identification with the Adaptive IIR Filter
2.3 System Identification with Kautz Filter

3 STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING

3.1 Introduction
3.2 Convolution Function Smoothing
3.3 Derivation of the Gradient Estimate
3.4 LMS-SAS Algorithm
3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS
3.6 Normalized LMS Algorithm
3.7 Relationship between LMS-SAS and NLMS Algorithms
3.8 Simulation Results
3.9 Comparison of LMS-SAS and NLMS Algorithm
3.10 Conclusion

4 INFORMATION THEORETIC LEARNING

4.1 Introduction
4.2 Entropy and Mutual Information
4.3 Adaptive IIR Filter with Euclidean Distance Criterion
4.4 Parzen Window Estimator and Convolution Smoothing Function
4.4.1 Similarity
4.4.2 Difference
4.5 Analysis of Weak Convergence to the Global Optimum for ITL
4.6 Contour of Euclidean Distance Criterion
4.7 Simulation Results
4.8 Comparison of NLMS and ITL Algorithms
4.9 Conclusion

5 RESULTS

5.1 System Identification with Kautz Filter
5.2 Nonlinear Equalization
5.3 Conclusion

6 CONCLUSION AND FUTURE RESEARCH

6.1 Conclusion
6.2 Future Research

REFERENCES

BIOGRAPHICAL SKETCH














LIST OF TABLES


3-1 NLMS algorithm
3-2 System identification of reduced order model
3-3 Example I for system identification
3-4 Example II for system identification
3-5 Example III for system identification
4-1 System identification of adaptive IIR filter by NLMS and ITL algorithms
4-2 LP for both MSE and ITL criteria
5-1 System identification of Kautz filter model
5-2 LP for both MSE and ITL criteria in the Kautz example


LIST OF FIGURES


2-1 Adaptive filter model
2-2 Block diagram of the system identification configuration
2-3 Kautz filter model
3-1 Smoothed function using Gaussian pdf
3-2 Step size μ(n) for SAS algorithm
3-3 Global convergence of θ in the GLMS algorithm
3-4 Global convergence of θ in the LMS-SAS algorithm
3-5 Global convergence of θ in the NLMS algorithm
3-6 Local convergence of θ in the LMS algorithm
3-7 Local convergence of θ in the GLMS algorithm
3-8 Local convergence of θ in the LMS-SAS algorithm
3-9 Contour of MSE
3-10 Weight (top) and |∇θ y(n)| (bottom)
4-1 Convergence characteristics of weight for Example I by ITL
4-2 Euclidean distance of Example I
4-3 Entropy of Example I
4-4 Euclidean distance of Example II
4-5 Convergence characteristics of weight for Example II by ITL
5-1 Convergence characteristics of weight for Kautz filter by LMS algorithm
5-2 Convergence characteristics of weight for Kautz filter by LMS-SAS algorithm
5-3 Convergence characteristics of weight for Kautz filter by NLMS algorithm
5-4 Convergence characteristics of weight for Kautz filter by ITL algorithm
5-5 Impulse response
5-6 Channel equalization system
5-7 Convergence characteristics of adaptive algorithms for a nonlinear equalizer
5-8 Performance comparison of global optimizations for nonlinear equalizer
5-9 Average BER for a nonlinear equalizer














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

GLOBAL OPTIMIZATION ALGORITHMS FOR ADAPTIVE INFINITE IMPULSE RESPONSE FILTERS

By

Ching-An Lai

December 2002

Chair: José C. Principe
Major Department: Electrical and Computer Engineering

The major goal of this dissertation is to develop global optimization algorithms for adaptive IIR filters. Since the performance surface of adaptive IIR filters is nonconvex with respect to the filter coefficients, conventional gradient-based algorithms can easily be trapped at an unacceptable local optimum. We need to exploit global optimization methods in adaptive IIR filtering and overcome the problem of converging to the local minima, preserving stability throughout adaptation.

One approach for adaptive IIR filtering uses stochastic approximation with convolution smoothing (SAS). We modify the perturbing noise by multiplying it with its cost function. The modified algorithm results in better performance when compared to the original algorithm. We also analyze the global optimization behavior of the proposed algorithm by analyzing the transition probability density of escaping from a local minimum.

A gradient estimation error can be used to act as the perturbing noise, provided it is properly normalized. Consequently, another approach for global IIR filter optimization is the normalized LMS (NLMS) algorithm. The behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.







Another novel approach for global optimization arises from using an entropy criterion for the training of adaptive systems. Our approach uses Renyi's entropy associated with the Parzen window estimator to estimate the pdf directly from a set of samples. The kernel size of the Parzen window estimator is an important parameter in the global optimization procedure. We propose to start the training with a large kernel size, and then slowly decrease this parameter to a predetermined suitable value. We show that the finite sample size in the estimation works as an additive uncorrelated white noise source that allows the training algorithm to converge to the global minimum of the cost function.

One issue in the identification of autoregressive moving average (ARMA) systems is the choice of filter structures that avoid instabilities during training. Here we use the class of orthogonal filters called Kautz filters for ARMA modeling. The proposed global optimization algorithms have been applied to system identification with Kautz filters and to nonlinear equalization to show their global optimum search capability.














CHAPTER 1
INTRODUCTION

1.1 Motivation

The objective of this dissertation is to develop global optimization algorithms for adaptive infinite impulse response (IIR) filtering by using stochastic approximation with convolution smoothing (SAS) and information theoretic learning (ITL).

This work is particularly motivated by the following facts.

* Adaptive filtering has wide application in the digital signal processing, communication, and control fields. A finite impulse response (FIR) filter [1, 2] is a simple structure for adaptive filtering and has been extensively developed. Recently researchers have attempted to use IIR structures because they perform better than FIR structures with the same number of coefficients. However, some major drawbacks inherent to adaptive IIR structures are slow convergence, possible convergence to biased or unacceptable suboptimal solutions, and the need for stability monitoring.

* Stochastic approximation methods [3] have the property of converging to the global optimum with probability one as the number of iterations tends to infinity. These methods are based on a random perturbation to find the absolute optimum of the cost function. In particular, the method of stochastic approximation with convolution smoothing has been successful in several applications and has been empirically shown to be efficient in converging to the global optimum in terms of computation and accuracy. The convolution smoothing function can "smooth out" a nonconvex objective function by convolving it with a suitable probability density function (pdf). At the beginning of adaptation, the variance of the pdf is set to a sufficiently large value, such that the convolution smoothing turns the nonconvex objective function into a convex function. Then the variance is slowly reduced to zero, whereby the smoothed objective function returns to the original objective function as the algorithm converges to the global optimum. This variance is determined by a cooling schedule parameter. The cooling schedule is a critical factor in global optimization, because it affects the performance of the global search capability.

" Convolution smoothing has been used exclusively with the mean square error (XISE)
criterion. MSE has been used extensively in the theory of adaptive systems because
of its analytical simplicity and the common assumption of Gaussian distributed
error. However, recently more sophisticated applications (such as independent
component analysis and blind source separation) require a criterion that considers
higher-order statistics for the training of adaptive systems. The computational neural
engineering laboratory studied entropy cost function [4]. Shannon first introduced







entropy of a given probability distribution function, which provides a measure of the
average information in that distribution. By using the Parzen window estimator, we can estimate the pdf directly from a set of samples. It is quite straightforward
to apply the entropy criterion to the system identification framework [5]. As shown in this thesis, the kernel size of the Parzen window estimator becomes an important
parameter in the global optimization procedure. Deniz et al. [6] conjectured that for a sufficiently large kernel size, the local minima of the error entropy criterion
can be eliminated. It was suggested that starting with a large kernel size, and then
slowly decreasing this parameter to a predetermined suitable value, the training
algorithm can converge to the global minimum of the cost function. The error entropy
criterion considered by Deniz et al. [6], however, does not consider the mean of the error signal, since entropy is invariant to translation. Here we modify the criterion
and study the reason why annealing the kernel size produces global optimization
algorithms.

1.2 Literature Survey

We surveyed the literature in the areas of adaptive filtering, optimization methods, and the mathematics used in the analysis of the algorithms.

1.2.1 Adaptive Filtering

Numerous algorithms for adaptive filtering have been proposed in the literature [7, 8], especially for system identification [9, 10]. Some valuable general papers on the topic of adaptive filtering are presented by Johnson [11], Shynk [12], Gee et al. [13], and Netto [14]. Johnson's paper focused on the common theoretical basis between adaptive filtering and system identification. Shynk's paper dealt with various algorithms of adaptive IIR filtering, their error formulations, and their realizations. Netto's paper presented the characteristics of the most commonly used algorithms for adaptive IIR filtering in a simple and unified framework. Recently a full book was published on IIR filters [15].

The major goal of an adaptive filtering algorithm is to adjust the adaptive filter coefficients in order to minimize a given performance criterion. The literature on adaptive filtering can be classified into three categories: adaptive filter structures, adaptive algorithms, and applications.

Adaptive filter structure. The choice of the adaptive filter structure affects the computational complexity and the convergence speed. Basically, there are two kinds of adaptive filter structures.

- Adaptive FIR filter structure. The most commonly used adaptive FIR filter structure is the transversal filter, which implements an all-zero filter in canonic direct form (without any feedback). For this adaptive FIR filter structure, the output is a linear combination of the adaptive filter coefficients. The performance surface of the objective cost function is quadratic [1], which yields a single optimal point. Alternative adaptive FIR filter structures [16] improve performance in terms of computational complexity [17, 18] and convergence speed [19, 20].

- Adaptive IIR filter structure. White [21] first presented an implementation of an adaptive IIR filter structure. Later, many articles were published in this area. For simple implementation and easy analysis, most adaptive IIR filter structures use the canonic direct form realization. Some other realizations have also been presented to overcome drawbacks of the canonic direct form realization, such as the slow convergence rate and the need for stability monitoring [22]. Commonly used realizations are cascade [23, 24], lattice [25, 26], and parallel [27, 28] realizations. Other realizations have also been presented recently by Shynk et al. [29] and Jenkins et al. [30].

Algorithm. An algorithm is a procedure used to adjust the adaptive filter coefficients in order to minimize the cost function. The algorithm determines several important features of the whole adaptive procedure, such as computational complexity, convergence to suboptimal solutions, biased solutions, the objective cost function, and the error signal. Early local adaptive filter algorithms were the Newton method, the quasi-Newton method, and the gradient method. Newton's method seeks the minimum of a second-order approximation of the objective cost function. The quasi-Newton method is a simplified version of the Newton method that uses a recursively calculated estimate of the inverse of a second-order matrix. The gradient method searches for the minimum of the objective cost function by tracking the direction opposite to the gradient vector of the objective function [31]. It is well known that the step size controls stability, convergence speed, and misadjustment [1]. For FIR adaptive filtering, local methods were sufficient since the optimization was linear in the weights; in IIR adaptive filtering this is no longer the case. The most commonly known approaches for adaptive IIR filtering are the equation error algorithm [32], the output error algorithm [12, 11], and composite algorithms [33, 34] such as the Steiglitz-McBride algorithm [35].
- The main characteristics of the equation error algorithm are unimodality of the mean-square-error (MSE) performance surface (because of the linear relationship between the signal and the adaptive filter coefficients), good convergence, and guaranteed stability. However, it yields a biased solution in the presence of noise.

- The main characteristics of the output-error algorithm are the possible existence of multiple local minima, which affect the convergence speed, an unbiased global optimal solution even in the presence of noise, and the requirement of stability checking during adaptation.

- The composite error algorithm attempts to combine the good individual
characteristics of both output error algorithm and equation error algorithm
[36]. Consequently, many papers were written to overcome the problem mentioned
above.







- Cousseau et al. [37] proposed an orthogonal filter to overcome the instability
problem of adaptive IIR filters, while Radenkovic et al. [38] used an output error
method to avoid it.

- The quadratic constraint equation error method [39] was proposed to remove the biased solutions of equation-error adaptive IIR filters [40, 41]. New composite adaptive IIR algorithms are presented in the literature [42, 36].

Application. Adaptive filtering has been successful in many applications, such as echo cancellation, noise cancellation, signal detection, system identification, channel equalization, and control. Some useful information about adaptive filtering applications appears in the literature [1, 2, 43].

In this dissertation, we focus on adaptive IIR filter algorithms for system identification.

1.2.2 Optimization Method

There are two adaptation methodologies for IIR filters: gradient descent and global optimization. The most commonly used method is gradient descent, such as the least mean square (LMS) algorithm [1]. These methods are well established for the adaptation of FIR filters and have the advantage of being less computationally expensive. The problem with gradient descent methods is that they might converge to a local minimum, and local minima normally imply poor performance. This problem can be overcome through global optimization methods. Such global optimization algorithms include simulated annealing (SA) [44], genetic algorithms [45], random methods [46], and stochastic approximation [3]. However, global optimization methods suffer from computational complexity, especially for high-order adaptive filters.

Several recent researchers have modified global optimization algorithms to improve their performance. Khargonekar [47] used an adaptive random search algorithm for the global optimization of control systems. This type of global optimization algorithm propagates a collection or a simplex of points but uses more geometrically intuitive heuristics. The most commonly used direct search method for optimization is the Nelder-Mead algorithm [46]. Despite the popularity of the Nelder-Mead algorithm, it does not provide any guarantee of convergence or performance. Recent studies relied on numerical results to determine the effectiveness of the algorithm. Duan proposed






the shuffled complex evolution algorithm [48], which uses several Nelder-Mead simplex algorithms running in parallel (that also share information with each other). Tang [49] proposed a random search that partitions the search region of the objective function into a certain number of subregions. Tang [49] showed that the adaptive partitioned random search in general can provide a better-than-average solution within a modest number of function evaluations.

Yim [50] used a genetic algorithm in his adaptive IIR filtering algorithm for active noise control. He showed that genetic algorithms overcome the problem of converging to a local minimum that affects gradient descent algorithms. Wah [51] improved constrained simulated annealing, a discrete global minimization algorithm with asymptotic convergence to discrete constrained global minima with probability one. The algorithm is based on the necessary and sufficient conditions for discrete constrained local minima in the theory of discrete Lagrange multipliers; he extended this algorithm to solve nonlinear continuous constrained optimization problems. Maryak [52] injected extra noise terms into the recursive algorithm, which may allow the algorithm to escape local optimum points and ensure global convergence. The amplitude of the injected noise is decreased over time (a process called annealing), so that the algorithm can finally converge to the global optimum point. He argued that, in some cases, the naturally occurring error in the gradient approximation effectively acts as injected noise that promotes convergence of the algorithm to the global optimum. Treadgold [53] combined gradient descent with the global optimization technique of simulated annealing (SA); this combination escapes local minima and can improve training time. Staus [54] used a spatial branch and bound methodology to solve the global optimization problem. The spatial branch and bound technique is not practical for identification, but advances in convex algorithm design using interior point methods, exploitation of structure, and faster computing speeds have altered this picture: large problems, including interesting classes of identification problems, can now be solved efficiently. Fujita [55] proposed a method (taking advantage of the chaotic behavior of a nonlinear dissipative system) that has inertia and nonlinear damping terms. The time history of the system, whose energy function corresponds to the objective function of the unconstrained optimization problem, converges to the global minimum of the energy function by means of appropriate control of the parameters dominating the occurrence of chaos. However, none of these global optimization techniques can rival gradient descent in terms of efficiency in the number of computations; therefore, in this thesis we revisit the problem of stochastic gradient descent for IIR filtering.

1.2.3 Proposed Optimization Method

The proposed global optimization methods in this dissertation are based on stochastic approximation methods applied to the MSE cost function and on information theoretic learning. Stochastic approximation represents a simple approach to minimizing a nonconvex function, based on a randomly distributed process for evaluating the search space [56]. In particular, two methods were investigated. The first method [57] is implemented by adding random perturbations to the estimate of the system's dynamic equation. The variance of the random fluctuation must decay according to a specific annealing schedule, which can ensure convergence to a global optimum. The goal of the early large perturbations is to allow the system to quickly escape from local minima. The second method is based on stochastic approximation with convolution smoothing [56]. The objective of convolution smoothing is to smooth out the nonconvex objective function by convolving it with a noise probability density function (pdf). Also in this method, the variance of the pdf must decay according to a cooling schedule. The amount of smoothing is proportional to the variance of the noise pdf. The idea of this method is to create a sufficient amount of smoothing at the beginning of the optimization process so that the outcome is a convex performance surface. When the variance of the noise pdf is gradually reduced to zero, the performance surface gradually converges to the original nonconvex form. Both of these methods use the MSE cost function.

We also propose annealing the kernel size in entropy optimization. Entropy can be estimated directly from data using the Parzen estimator if Renyi's entropy definition is used [58, 59]. It is also possible to derive a gradient-based algorithm to search for the minimum of this new cost function. Recently, Erdogmus [4, 5] used ITL in adaptive signal processing. We developed a global optimization algorithm for entropy minimization by annealing the kernel size (similar to the stochastic approximation with convolution smoothing method for the MSE criterion). We showed that this is equivalent to adding an additive noise source to the theoretical cost function. However, the two methods differ, since the kernel function smooths the entropy cost function.

1.3 Outline

In Chapter 2, the basic ideas of adaptive filters and adaptive algorithms are reviewed. In particular, we review the LMS algorithm for adaptive IIR filtering, which is the basic form of our proposed algorithms. Since we focus on global optimization algorithms for adaptive IIR filtering, some important properties of global optimization for system identification are reviewed. The system identification framework with Kautz filters is also presented.

In Chapter 3, we introduce the stochastic approximation with convolution smoothing (SAS) technique and apply it to adaptive IIR filtering. Similar to the GLMS algorithm by Srinivasan [56], we derive the LMS-SAS algorithm. The global optimization behavior of the LMS-SAS algorithm is analyzed by evaluating the transition probability density of escaping from a steady-state point for the scalar case. Because of the noisy gradient estimate, the behavior of the NLMS algorithm with decreasing step size is shown to be similar to that of the LMS-SAS algorithm from a global optimization perspective. The global search capabilities of the LMS-SAS and NLMS algorithms are then compared.

In Chapter 4, the entropy criterion is proposed as an alternative to MSE for adaptive IIR filtering. The definition of entropy (and mutual information) is first reviewed. By using the Parzen window estimator for the error pdf, the steepest descent algorithm (ITL algorithm) with the entropy criterion is derived for the system identification framework of adaptive filtering. The weak global optimal convergence of the ITL algorithm is illustrated with simulation examples. Finally, we compare the performance of the ITL algorithm with that of the LMS-SAS and NLMS algorithms in terms of global optimization capability.






In Chapter 5, the associated LMS, LMS-SAS, NLMS, and ITL algorithms for the Kautz filter are first derived, and the global optimization performance of the proposed algorithms for Kautz filters is compared. Finally, the associated algorithms are applied to nonlinear equalization. In Chapter 6, we conclude the dissertation and outline future work.














CHAPTER 2
ADAPTIVE IIR FILTERING

2.1 Introduction

Figure 2-1 shows the basic block diagram of an adaptive filter. At each iteration, a sampled input signal x(n) is passed through an adaptive filter to generate the output signal y(n). This output signal is compared to a desired signal d(n) to generate the error signal e(n). Finally, an adaptive algorithm uses this error signal to adjust the adaptive filter coefficients in order to minimize a given objective function. The most widely used filter is the finite impulse response (FIR) filter structure.

In recent years, active research has attempted to extend the FIR filter into the more general infinite impulse response configuration, which offers potential performance improvements and lower computational cost than equivalent FIR filters [60]. However, some practical problems still exist in the use of adaptive IIR filters. As the error surface of IIR filters is usually multimodal with respect to the filter coefficients, learning algorithms for IIR filters can easily be trapped at local minima and be unable to converge to the global optimum [1]. One of the common learning algorithms for adaptive filtering is the gradient-based algorithm, for instance the least-mean-square (LMS) algorithm [61]. The algorithm aims to find the minimum point of the error


Figure 2-1: Adaptive filter model.







surface by moving in the direction of the negative gradient. Like most of the steepest descent algorithms, it may lead the filter to a local minimum when the error surface is multimodal. In addition, the convergence behavior of the LMS algorithm depends heavily on the choices of step size and the initial values of filter coefficients.

Learning algorithms such as maximum likelihood [62], LMS [1], least squares [2], and recursive least squares [2] are well established for the adaptation of FIR filters. In particular, gradient-descent algorithms (such as LMS) are very suitable for adaptive FIR filtering, where the error surface is unimodal and quadratic. Generally, LMS is the best choice for many applications of adaptive signal processing [1], because of its simplicity, its ease of computation, and the fact that it does not require off-line gradient estimation from data. It is also possible to extend the LMS algorithm to adaptive IIR filters; however, it may face the local minimum problem when the error surface is multimodal. The LMS algorithm adapts the weight (filter coefficient) vector along the negative gradient of the mean-square-error performance surface until the minimum of the MSE is reached. In the following, we present the formulation of the IIR-LMS algorithm. The IIR filter kernel in direct form is constructed as

y(n) = \sum_{i=0}^{L} a_i x(n-i) + \sum_{j=1}^{M} b_j y(n-j)    (2-1)

Let the weight vector θ and the regressor X(n) be defined as

θ = [a_0, \ldots, a_L, b_1, \ldots, b_M]^T    (2-2)

X(n) = [x(n), \ldots, x(n-L), y(n-1), \ldots, y(n-M)]^T    (2-3)


and d(n) is the desired output. The output is

y(n) = θ^T(n) X(n)    (2-4)


We can write the error e(n) as

e(n) = d(n) - y(n) = d(n) - θ^T(n) X(n)    (2-5)







So the gradient of the MSE with respect to θ is

∇_θ ξ = \frac{\partial E[e^2(n)]}{\partial θ}    (2-6)

      = 2 E\left\{ e(n) \left[ \frac{\partial e(n)}{\partial a_0}, \ldots, \frac{\partial e(n)}{\partial a_L}, \frac{\partial e(n)}{\partial b_1}, \ldots, \frac{\partial e(n)}{\partial b_M} \right]^T \right\}    (2-7)

      = -2 E\{ e(n)\, ∇_θ y(n) \}    (2-8)

Let us define

∇_θ y(n) = \left[ \frac{\partial y(n)}{\partial a_0}, \ldots, \frac{\partial y(n)}{\partial a_L}, \frac{\partial y(n)}{\partial b_1}, \ldots, \frac{\partial y(n)}{\partial b_M} \right]^T    (2-9)

From Equation (2-1), we obtain

∇_θ y(n) = [x(n), \ldots, x(n-L), y(n-1), \ldots, y(n-M)]^T + \sum_{j=1}^{M} b_j ∇_θ y(n-j)    (2-10)

         = X(n) + \sum_{j=1}^{M} b_j ∇_θ y(n-j)    (2-11)


The instantaneous gradient estimate is given by

\hat{∇}_θ ξ = -2 e(n) ∇_θ y(n)    (2-12)


Based on the gradient descent algorithm, the coefficient update is

θ(n+1) = θ(n) - μ \hat{∇}_θ ξ    (2-13)


Therefore, in IIR-LMS, the coefficient update becomes

θ(n+1) = θ(n) + 2μ [d(n) - y(n)] ∇_θ y(n)    (2-14)


where 2μ is a constant step size.

For each value of n, Equation (2-4) produces the filter output, and Equations (2-10) and (2-14) are then used to compute the next set of coefficients θ(n+1). Regarding computational complexity, the IIR-LMS algorithm as described in Equations (2-4) through (2-14) requires approximately (L + M)(L + 2) calculations per iteration, while FIR-LMS requires only 2N calculations per iteration for a filter of length N.


























Figure 2-2: Block diagram of the system identification configuration.


Being one of the gradient-descent algorithms, the LMS algorithm may lead the filter to a local minimum when the error surface is multimodal, and the performance of the LMS algorithm depends heavily on the initial choices of step size and weight vector.
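To make the recursions concrete, the following sketch implements Equations (2-4), (2-10), and (2-14) for a direct-form adaptive IIR filter; the function name, zero initialization, and fixed step size are illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def iir_lms(x, d, L, M, mu=0.01):
    """Adapt a direct-form IIR filter with the IIR-LMS rule.

    theta = [a_0, ..., a_L, b_1, ..., b_M]; grad_hist stores grad_theta y(n-j)
    for j = 1..M, as required by the recursion of Equation (2-10).
    """
    theta = np.zeros(L + 1 + M)
    grad_hist = [np.zeros(L + 1 + M) for _ in range(M)]
    y = np.zeros(len(x))
    for n in range(len(x)):
        xs = [x[n - i] if n - i >= 0 else 0.0 for i in range(L + 1)]
        ys = [y[n - j] if n - j >= 0 else 0.0 for j in range(1, M + 1)]
        X = np.array(xs + ys)                      # regressor of Equation (2-3)
        y[n] = theta @ X                           # output, Equation (2-4)
        grad = X + sum(theta[L + 1 + j] * grad_hist[j] for j in range(M))  # Equation (2-10)
        e = d[n] - y[n]                            # error, Equation (2-5)
        theta = theta + 2 * mu * e * grad          # update, Equation (2-14)
        grad_hist = [grad] + grad_hist[:-1]
    return theta, y
```

In practice, the stability check described next would be applied to the feedback coefficients after each update.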

Stability check. Jury's stability test [63] was used in this thesis. This stability test ensures that all roots of the feedback polynomial lie inside the unit circle. Since the test does not reveal which poles are unstable, the polynomial must be factored to obtain this information; once factored, any unstable set of weights can easily be projected back into the unit circle. If the polynomial order is larger than 2 (M > 2), the factorization becomes computationally expensive, so the difficulty of the stability check is the polynomial factorization.
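As an illustration of projecting unstable weights back into the unit circle, the sketch below factors the feedback polynomial numerically and shrinks any offending root; it is a simple root-based check under the direct-form convention of Equation (2-1), not Jury's tabular test itself.

```python
import numpy as np

def project_stable(b, margin=0.99):
    """Project feedback coefficients b = [b_1, ..., b_M] into the stable region.

    The feedback polynomial of Equation (2-1) is z^M - b_1 z^(M-1) - ... - b_M;
    any root on or outside the unit circle is shrunk to radius `margin`.
    """
    b = np.asarray(b, dtype=float)
    poles = np.roots(np.concatenate(([1.0], -b)))
    if np.all(np.abs(poles) < 1.0):
        return b                                    # already stable
    poles = np.where(np.abs(poles) >= 1.0, margin * poles / np.abs(poles), poles)
    return -np.poly(poles)[1:].real                 # back to [b_1, ..., b_M]
```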

To simplify the stability check, one may use the cascade of first- or second-order sections instead of the canonical direct form. In particular, the stability of the Kautz filter, a structure of cascades of second-order sections with complex poles, is easily checked.

2.2 System Identification with the Adaptive IIR Filter

In the system identification configuration, the adaptive algorithm adapts the coefficients of the filter such that the adaptive filter matches the unknown system as closely as possible. Figure 2-2 is a general block diagram of the adaptive system







identification configuration, where the unknown system is described as


y(n) = \left[ \frac{B(z^{-1})}{A(z^{-1})} \right] x(n) + v(n)    (2-15)

where A(z^{-1}) = 1 - \sum_{i=1}^{n_a} a_i z^{-i} and B(z^{-1}) = \sum_{j=0}^{n_b} b_j z^{-j} are polynomials, and x(n) and v(n) are the input signal and the perturbation noise, respectively. The adaptive filter is described as

\hat{y}(n) = \left[ \frac{\hat{B}(z^{-1})}{\hat{A}(z^{-1})} \right] x(n)    (2-16)

where \hat{A}(z^{-1}) = 1 - \sum_{i=1}^{\hat{n}_a} \hat{a}_i z^{-i} and \hat{B}(z^{-1}) = \sum_{j=0}^{\hat{n}_b} \hat{b}_j z^{-j}. The issues in system identification with adaptive filters are usually divided into the following:

* Adaptive filter order:
(a) insufficient order: n^* < 0;
(b) strictly sufficient order: n^* = 0;
(c) more than sufficient order: n^* > 0;
where n^* = \min[(\hat{n}_a - n_a), (\hat{n}_b - n_b)]. In many cases, (b) and (c) are grouped into one class, called sufficient order, where n^* \geq 0.

* Identification type:
(a) without additional noise;
(b) with additional noise correlated with the input signal;
(c) with additional noise uncorrelated with the input signal.

The basic objective of the adaptive filter is to adapt its coefficients such that it describes the unknown system in an equivalent form. The equivalence is usually measured by an objective function W(n) of the input, the available unknown system output, and the adaptive filter output signals. The objective function W(n) must satisfy the following properties in order to be a consistent definition:

* Nonnegativity: W(n) \geq 0.
* Optimality: W(n) = 0 at the optimal solution.

There are many ways to describe an objective function that satisfies the optimality and nonnegativity properties. The following forms of the objective function are the most commonly used in deriving adaptive algorithms:







" Mean square error (MSE) W[(n)]- E[F2(n)]. � Least square (LS) W[ (n)] - 71 2(n -) " Instantaneous square error (ISV) W[F(n)] -2(n). In a strict sense, MSE is a theoretical value that is not easy estimated. In practice, it can be approximated by the other two objective functions. In general, ISV is easily implemented but it is heavily affected by perturbation noise. Later we present the entropy of the error as another objective function, but first we must discuss MSE.

The adaptive algorithm attempts to minimize the mean square value of the output error signal, where the output error is given by the difference between the unknown system output and the adaptive filter output. That is,

e(n) = \left[ \frac{B(z^{-1})}{A(z^{-1})} - \frac{\hat{B}(z^{-1})}{\hat{A}(z^{-1})} \right] x(n) + v(n)    (2-17)


The gradient of the objective function estimate with respect to the adaptive filter coefficients is given by

∇_θ[e^2(n)] = 2 e(n) ∇_θ[e(n)] = -2 e(n) ∇_θ[\hat{y}(n)]    (2-18)

with

\frac{\partial \hat{y}(n)}{\partial \hat{a}_i} = \hat{y}(n-i) + \sum_{k=1}^{\hat{n}_a} \hat{a}_k \frac{\partial \hat{y}(n-k)}{\partial \hat{a}_i}, \qquad \frac{\partial \hat{y}(n)}{\partial \hat{b}_j} = x(n-j) + \sum_{k=1}^{\hat{n}_a} \hat{a}_k \frac{\partial \hat{y}(n-k)}{\partial \hat{b}_j}    (2-19)

where θ is the adaptive filter coefficient vector.

This equation requires a relatively large memory allocation to store data. In practice, a small-step approximation that considers the adaptive filter coefficients to be slowly varying can overcome this problem [64]. Therefore, by using the small-step approximation, the adaptive algorithm is described as

θ(n+1) = θ(n) + μ e(n) φ(n)    (2-20)

where φ(n) = \{\hat{y}(n-i);\ x(n-j)\} for i = 1, \ldots, \hat{n}_a; j = 0, \ldots, \hat{n}_b, and μ is a small step size. The adaptive algorithm is characterized by the following properties.







Property 1 [65]. The Euclidean square norm of the coefficient-error vector \|θ^*(n) - θ(n)\|^2 is convergent if μ satisfies

0 < μ < \frac{2}{φ^T(n) φ(n)}    (2-21)

Property 2 [31, 66, 67]. The stationary points of the MSE performance surface are given by

E\left\{ \left[ \frac{\hat{A}(z^{-1},n) B(z^{-1}) - A(z^{-1}) \hat{B}(z^{-1},n)}{A(z^{-1}) \hat{A}(z^{-1},n)} x(n) \right] \left[ \frac{\hat{B}(z^{-1},n)}{\hat{A}^2(z^{-1},n)} x(n-i) \right] \right\} = 0    (2-22)

E\left\{ \left[ \frac{\hat{A}(z^{-1},n) B(z^{-1}) - A(z^{-1}) \hat{B}(z^{-1},n)}{A(z^{-1}) \hat{A}(z^{-1},n)} x(n) \right] \left[ \frac{1}{\hat{A}(z^{-1},n)} x(n-j) \right] \right\} = 0    (2-23)

In practice, only the stable stationary points, the so-called equilibria, are of interest, and usually these points are classified as follows.

* Degenerated points: the equilibrium points where
\hat{B}(z^{-1}, n) = 0 for \hat{n}_b < \hat{n}_a, or
\hat{B}(z^{-1}, n) = L(z^{-1}) \hat{A}(z^{-1}, n) for \hat{n}_b \geq \hat{n}_a, where L(z^{-1}) = \sum_{k=0}^{\hat{n}_b - \hat{n}_a} l_k z^{-k}.

* Nondegenerated points: all equilibria that are not degenerated points.

The equilibrium points that influence the form of the error performance surface have the following property.

Property 3 [12]. If n^* \geq 0, all global minima of the MSE performance surface are given by

\hat{A}^*(z^{-1}) = A(z^{-1}) C(z^{-1}), \qquad \hat{B}^*(z^{-1}) = B(z^{-1}) C(z^{-1})    (2-25)

where C(z^{-1}) = \sum_{k=0}^{n^*} c_k z^{-k}. This means that all global minimum solutions contain the polynomials describing the unknown system plus a common factor C(z^{-1}) present in both the numerator and the denominator polynomials of the adaptive filter.







Property 4 [68]. If n^* \geq 0, all equilibrium points that satisfy the strictly positive realness condition

\mathrm{Re}\left[ \frac{\hat{A}^*(z^{-1})}{A(z^{-1})} \right] > 0, \quad |z| = 1    (2-26)

are global minima.

Property 5 [68]. Let the input signal x(n) be given by x(n) = \left[ \frac{F(z^{-1})}{G(z^{-1})} \right] w(n), where F(z^{-1}) = \sum_{k=0}^{n_f} f_k z^{-k} and G(z^{-1}) = 1 + \sum_{k=1}^{n_g} g_k z^{-k} are coprime polynomials, and w(n) is white noise. Then if

n^* \geq n_f \quad \text{and} \quad \hat{n}_b - n_a + 1 \geq n_g    (2-27)

all equilibrium points are global minima.

This property is actually the most commonly used result for unimodality of the MSE performance surface in the case of identification with sufficient order models. It has two important consequences:

* If \hat{n}_a = n_a = 1 and \hat{n}_b \geq n_b \geq 1, then there is only one equilibrium point, which is the global minimum.

* If x(n) is white noise (n_f = n_g = 0) and the orders of the adaptive filter are strictly sufficient (\hat{n}_a = n_a, \hat{n}_b = n_b, and \hat{n}_b - n_a + 1 \geq 0), then there is only one equilibrium point, which is the global minimum.

Nayeri [69] further investigated this property and obtained a less restrictive sufficient condition to guarantee unimodality when the input signal is white noise and the order of the adaptive filter exactly matches that of the unknown system. The result is given as

Property 6 [69]. If x(n) is a white noise sequence (n_f = n_g = 0) and the orders of the adaptive filter are strictly sufficient (\hat{n}_a = n_a, \hat{n}_b = n_b, and \hat{n}_b - n_a + 2 \geq 0), then there is only one equilibrium point, which is the global minimum.

There is another important property which is







Property 7 [67]. All degenerated equilibrium points are saddle points, and their existence implies multimodality (the existence of stable local minima) of the performance surface if either \hat{n}_a > n_b = 0 or \hat{n}_a \geq 1.

This property is also valid for the insufficient order cases.

In 1981, Stearns [70] conjectured that if n^* \geq 0 and the input signal x(n) is white noise, then the performance surface defined by the MSE objective function is unimodal. This conjecture was believed valid until Fan offered numerical counterexamples in 1989 [71].

The most important characteristic of IIR adaptation is the possible existence of multiple local minima, which can affect the overall convergence. Moreover, the global minimum solution is unbiased by the presence of zero-mean perturbation noise in the unknown system output signal. Another important characteristic of IIR adaptation is the requirement for stability checking during the adaptive process. This stability checking requirement can be simplified by choosing an appropriate adaptive filter realization.

2.3 System Identification with Kautz Filter

One of the major drawbacks of adaptive IIR filtering is the stability issue. Since the filter parameters change during adaptation, a practical approach is to use cascades of first- and second-order ARMA sections, where stability can still be checked simply and locally. A principled way to achieve the expansion of general ARMA systems is through orthogonal filter structures [72]. Here we use Kautz filters, because they are very versatile (cascades of second-order sections with complex poles, but still with a reasonable number of parameters). The Kautz filter, which can be traced back to the original work of Kautz [73], is based on the discrete-time Kautz basis functions. The Kautz filter is a generalized feedforward filter which produces an output y(n) = φ(n, ζ)^T θ, where θ is the set of weights and the entries of φ(n, ζ) are the outputs of first-order IIR filters with a complex pole at ζ [74]. Stability of the Kautz filter is easily guaranteed if the pole is located within the unit circle (that is, |ζ| < 1). Although the







adaptation is linear in the weights θ_i, it is nonlinear in the poles, yielding a nonconvex optimization problem with local minima.
The continuous-time Kautz basis functions are the Laplace transforms of continuous-time orthonormal exponential functions, which can be traced back to the original work of Kautz [73]. The discrete-time Kautz basis functions are the Z-transforms of discrete-time orthonormal exponential functions [74]. The discrete-time Kautz basis functions are described as

\Phi_{2k}(z, ζ_k) = \frac{\sqrt{(1 - ζ_k \bar{ζ}_k)\,|1 + ζ_k|^2 / 2}\,(z^{-1} - 1)}{(1 - ζ_k z^{-1})(1 - \bar{ζ}_k z^{-1})} \prod_{l=0}^{k-1} \frac{(z^{-1} - ζ_l)(z^{-1} - \bar{ζ}_l)}{(1 - ζ_l z^{-1})(1 - \bar{ζ}_l z^{-1})}    (2-28)

\Phi_{2k+1}(z, ζ_k) = \frac{\sqrt{(1 - ζ_k \bar{ζ}_k)\,|1 - ζ_k|^2 / 2}\,(z^{-1} + 1)}{(1 - ζ_k z^{-1})(1 - \bar{ζ}_k z^{-1})} \prod_{l=0}^{k-1} \frac{(z^{-1} - ζ_l)(z^{-1} - \bar{ζ}_l)}{(1 - ζ_l z^{-1})(1 - \bar{ζ}_l z^{-1})}    (2-29)

where ζ_k = α_k + jβ_k, (ζ_k, \bar{ζ}_k) is the kth pair of complex conjugate poles, |ζ_k| < 1 for stability, and the total number of basis functions is always even.

The orthonormality of the discrete-time Kautz basis functions is expressed as

\frac{1}{2\pi j} \oint \Phi_p(z, ζ_k)\, \Phi_q(1/z, ζ_k)\, \frac{dz}{z} = \delta_{p,q}    (2-30)

where the integration contour is the unit circle and the integrand is analytic in the exterior of the circle. All pairs of complex conjugate poles can be combined into real second-order sections to reduce the degrees of freedom; the resulting basis functions can be described as discrete-time two-pole Kautz basis functions. The corresponding Kautz filter structure is shown in Figure 2-3, where


y(n) = φ(n)^T θ    (2-31)

φ(n) = [φ_0(n), \ldots, φ_{K-1}(n)]^T    (2-32)

K_{2k}(z, ζ) = K_{2k-2}(z, ζ) A(z, ζ)    (2-33)

K_{2k+1}(z, ζ) = K_{2k-1}(z, ζ) A(z, ζ)    (2-34)

















Figure 2-3: Kautz filter model.


K_0(z, ζ) = \kappa_0 \frac{z^{-1} - 1}{(1 - ζ z^{-1})(1 - ζ^* z^{-1})}    (2-35)

K_1(z, ζ) = \kappa_1 \frac{z^{-1} + 1}{(1 - ζ z^{-1})(1 - ζ^* z^{-1})}    (2-36)

A(z, ζ) = \frac{(z^{-1} - ζ)(z^{-1} - ζ^*)}{(1 - ζ z^{-1})(1 - ζ^* z^{-1})}    (2-37)

\kappa_0 = \sqrt{\frac{(1 - ζ ζ^*)\,|1 + ζ|^2}{2}}    (2-38)

\kappa_1 = \sqrt{\frac{(1 - ζ ζ^*)\,|1 - ζ|^2}{2}}    (2-39)

Here ζ is a complex pole (that is, ζ = α + jβ) and ζ^* is its complex conjugate.
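A minimal sketch of computing the Kautz basis outputs and the filter output of Equation (2-31), assuming the second-order sections K_0, K_1 and the all-pass block A(z, ζ) in the forms given by Equations (2-35) through (2-39); function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def kautz_outputs(x, zeta, n_pairs):
    """Basis outputs phi_0 .. phi_{2*n_pairs-1} of a Kautz filter with pole zeta."""
    alpha, r = zeta.real, abs(zeta)
    den = [1.0, -2.0 * alpha, r ** 2]                    # (1 - zeta z^-1)(1 - conj(zeta) z^-1)
    k0 = np.sqrt((1 - r ** 2) * abs(1 + zeta) ** 2 / 2)  # kappa_0, Equation (2-38)
    k1 = np.sqrt((1 - r ** 2) * abs(1 - zeta) ** 2 / 2)  # kappa_1, Equation (2-39)
    phi = [lfilter(k0 * np.array([-1.0, 1.0]), den, x),  # K_0, Equation (2-35)
           lfilter(k1 * np.array([1.0, 1.0]), den, x)]   # K_1, Equation (2-36)
    ap_num = [r ** 2, -2.0 * alpha, 1.0]                 # all-pass A(z, zeta), Equation (2-37)
    for _ in range(1, n_pairs):
        phi.append(lfilter(ap_num, den, phi[-2]))        # phi_{2k}   = A * phi_{2k-2}, Eq. (2-33)
        phi.append(lfilter(ap_num, den, phi[-2]))        # phi_{2k+1} = A * phi_{2k-1}, Eq. (2-34)
    return np.vstack(phi)

x = np.random.randn(200)
phi = kautz_outputs(x, zeta=0.6 + 0.3j, n_pairs=3)
theta = np.random.randn(phi.shape[0])
y = theta @ phi                                          # Kautz filter output, Equation (2-31)
```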














CHAPTER 3
STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING

3.1 Introduction

Adaptive filtering has become a major research area in digital signal processing, communication, and control, with many applications such as adaptive noise cancellation, echo cancellation, adaptive equalization, and system identification [1, 2]. For simplicity, finite impulse response (FIR) structures are used for adaptive filtering and have many mature practical implementations. However, infinite impulse response structures can reduce computational complexity and increase accuracy. Unfortunately, IIR filtering has some drawbacks, such as slow convergence, possible convergence to biased or unacceptable suboptimal solutions, and the need for stability monitoring. The major issue is that the objective function of IIR filtering with respect to the filter coefficients is usually multimodal. The traditional gradient search method may converge to a local minimum depending on its initial conditions. Other unresolved problems of adaptive IIR filtering are discussed by Johnson [11] and Regalia [15].

Several methods have been proposed for the global optimization of adaptive IIR filtering [75, 45, 76]. Srinivasan et al. [56] used stochastic approximation with convolution smoothing (SAS) in a global optimization algorithm [3, 76, 77] for adaptive IIR filtering. They showed that the smoothing behavior can be achieved by appending a variable perturbing noise source to the error signal. Here, we modify this perturbing noise by multiplying it with its cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm in this dissertation, results in better global optimization performance than the original algorithm by Srinivasan et al. We have also analyzed the behavior of the global optimization algorithms by looking at the transition probability density of escaping from a steady-state point.








Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, an error in estimating the gradient naturally occurs. This gradient estimation error, when properly normalized, can be used to act as the perturbing noise. Consequently, another approach to global IIR filter optimization is the normalized LMS (NLMS) algorithm. The behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

3.2 Convolution Function Smoothing

According to Styblinski [3], a multi-optimal function f(θ), θ ∈ R^n, can be represented as a superposition of a convex function (i.e., one having just a single minimum) and other multi-optimal functions that add some "noise" to the convex function. The objective of convolution smoothing can be viewed as "filtering out" the noise and performing minimization on the "smoothed" convex function (or on a family of such functions) in order to reach the global optimum. Since the optimum of the smoothed convex function does not, in general, coincide with the global function minimum, a sequence of optimization steps is required, with the amount of smoothing eventually reduced to zero in the neighborhood of the global optimum. The smoothing process is performed by averaging f(θ) over some region of the parameter space R^n using the proper weighting (or smoothing) function h(η, β) defined below. Formally, let us introduce a vector of random perturbations η ∈ R^n and add η to θ, thus creating the convolution function

\hat{f}(θ, β) = \int_{R^n} h(η, β)\, f(θ - η)\, dη = \int_{R^n} h(θ - η, β)\, f(η)\, dη    (3-1)

Hence,

\hat{f}(θ, β) = E_η[f(θ - η)]    (3-2)

where \hat{f}(θ, β) is the smoothed approximation to the original multi-optimal function f(θ), and the kernel function h(η, β) is the pdf used to sample η. Note that \hat{f}(θ, β) can be regarded as an averaged version of f(θ) weighted by h(η, β). The parameter β controls the dispersion of h, i.e., the degree of smoothing of f(θ) (e.g., β can control the standard deviation of the components of η). E_η[f(θ - η)] is the expectation








with respect to the random variable η. Therefore, an unbiased estimator of \hat{f}(θ, β) is the average

\hat{f}(θ, β) = \frac{1}{N} \sum_{i=1}^{N} f(θ - β η_i)    (3-3)

where η_i is sampled with the pdf \hat{h}(η), and h(η, β) = \frac{1}{β^n} \hat{h}(η / β).

The kernel function h(η, β) should have the following properties:

* h(η, β) = \frac{1}{β^n} \hat{h}(η / β) is piecewise differentiable with respect to η.
* \lim_{β \to 0} h(η, β) = δ(η) (Dirac's delta functional).
* h(η, β) is a pdf.

Under these conditions, \lim_{β \to 0} \hat{f}(θ, β) = \int_{R^n} δ(η) f(θ - η)\, dη = f(θ).
Numerous pdfs satisfy the above conditions, e.g., the Gaussian, uniform, or Cauchy pdfs. Let us consider the function f(x) = 5x^4 - 16x^2 + 5x, which is continuous and differentiable and has two separated minima. Figure 3-1 shows the smoothed function, which is the convolution between f(x) and a Gaussian pdf.
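The smoothed curve of Figure 3-1 can be approximated numerically with the Monte Carlo average of Equation (3-3); in the sketch below, the sample count and the β values are arbitrary illustrative choices.

```python
import numpy as np

def f(x):
    return 5 * x**4 - 16 * x**2 + 5 * x            # bimodal example from the text

def smoothed(x, beta, n_samples=5000):
    """Monte Carlo estimate of f_hat(x, beta) = E[f(x - beta*eta)], eta ~ N(0, 1)."""
    eta = np.random.randn(n_samples)
    return np.mean(f(x - beta * eta))

xs = np.linspace(-3, 3, 121)
for beta in (0.0, 0.5, 1.0, 2.0):                  # beta = 0 recovers the original function
    curve = [smoothed(x, beta) for x in xs]
    print(f"beta={beta}: minimizer near x={xs[int(np.argmin(curve))]:.2f}")
```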

Observations. Smoothing is able to eliminate the local minima of \hat{f}(θ, β) if β is sufficiently large. When β → 0, then \hat{f}(θ, β) → f(θ); this should happen at the end of optimization to provide convergence to the true function minimum. Our objective now is to solve the optimization problem of minimizing the smoothed functional \hat{f}(θ, β) as β → 0. In general, the modified optimization problem can be viewed as

\min_{θ \in R^n} \hat{f}(θ, β) \quad \text{as } β \to 0.
Similarity with simulated annealing algorithms. The development of the simulated annealing method was motivated by the behavior of mechanical systems with a very large number of degrees of freedom. According to the general principles of physics, any such system will, given the necessary freedom, tend toward the state of minimum energy. Therefore, a mathematical model of the behavior of such a system contains a method for minimizing a certain function, namely the total energy of the system. Simulated annealing is a convenient way to find the global minimum of a function that has many minima. The method is a biased random walk that samples the objective function in the space of the independent variables. It is executed in the following manner. Starting at a randomly chosen initial point, the corresponding value of the





























Figure 3-1: Smoothed function using Gaussian pdf.








objective function is calculated. Next, a random point is chosen on the surface of the unit n-dimensional sphere around the current point, and the new corresponding value of the objective function is calculated. If the step is beneficial, that is, the new objective value is smaller than the previous one, the new point is unconditionally accepted. If the step is detrimental in terms of cost, that is, the new objective value is larger than the previous one, the new point is accepted according to a temperature-dependent probability function; the lower the temperature, the smaller the probability of a transition to a higher energy state. Therefore, the simulated annealing method is often viewed in terms of the energy of a "particle" at any given temperature, and lowering the temperature reduces the particle energy. Let us consider the analogous interpretation of convolution function smoothing. Perturbing θ can be viewed as adding some noise energy to the particle: the larger β is, the larger the energy. Thus, reducing β in convolution function smoothing is similar to lowering the temperature in the simulated annealing algorithm.

3.3 Derivation of the Gradient Estimate

When the SAS technique is applied to the IIR-LMS algorithm, we require the gradient of the functional \hat{f}(θ, β), that is, ∇_θ \hat{f}(θ, β). Under the assumption that the gradient of the functional f(θ) is known, the unbiased single-sided gradient estimate of the smoothed functional \hat{f}(θ, β) can be represented as

∇_θ \hat{f}(θ, β) = \frac{1}{N} \sum_{i=1}^{N} ∇_θ f(θ - β η_i)    (3-4)

where the expected value is replaced by the empirical average. Likewise, the unbiased double-sided gradient estimate of the smoothed functional \hat{f}(θ, β) can be represented as

∇_θ \hat{f}(θ, β) = \frac{1}{2N} \sum_{i=1}^{N} [∇_θ f(θ + β η_i) + ∇_θ f(θ - β η_i)]    (3-5)

In order to implement either Equation (3-4) or (3-5), we would need to evaluate the gradient at many points in the neighborhood of the operating point θ, yielding effectively an off-line iterative global optimization algorithm. We will combine the concept of the SAS gradient estimate with the LMS optimization procedure to develop an on-line iterative global optimization algorithm.

The key to implementing a practical algorithm for adaptive IIR filters is to develop an on-line gradient estimate ∇_θ e(θ), where e(θ) is the error between the desired signal and the output of the adaptive IIR filter. Here we use the SAS-derived single-sided gradient estimate together with the LMS algorithm, where the gradient estimate is

∇_θ \hat{e}(θ, β) = \frac{1}{N} \sum_{i=1}^{N} ∇_θ e(θ - β η_i)    (3-6)


A major characteristic of the LMS algorithm is its simplicity. We keep this attribute by setting N = 1 in Equation (3-6) and substituting the sequential presentation of data for the neighborhood averaging, as is done in the LMS algorithm. Hence, we obtain the one-sample gradient estimate

∇_θ \hat{e}(θ, β) = ∇_θ e(θ - β η)    (3-7)


This equation is iterated for each input sample. Theoretically, Equation (3-7) shows that the on-line version of SAS is given by the gradient value at a randomly selected point in the neighborhood of the present operating point. The variance of the neighborhood is controlled by β, which decreases along the adaptation procedure. Implementing Equation (3-7) requires two filters: one for computing the input-output relationship and the other for computing the gradient estimate at the perturbed point (θ - βη). For large-order systems, this requirement is impractical. We investigate the following simplification, which involves the representation of the gradient estimate at (θ - βη) as a Taylor series around the operating point. That is,

∇_θ e(θ - βη) = e'(θ) - βη\, e''(θ) + \frac{(βη)^2}{2!} e'''(θ) - \cdots    (3-8)

Under this representation, we can use the same filter to compute both the input-output relationship and the gradient estimate. As a first approximation, we keep only the first two terms and assume a diagonal Hessian. This results in the following gradient estimate

∇_θ e(θ - βη) \approx ∇_θ e(θ) - βη    (3-9)

This extreme approximation assumes that the second derivative of the gradient vector is independent of θ, so that its variance is constant throughout the adaptation process. The second term βη on the right-hand side of the above equation can be interpreted as a perturbing noise, which is the essential term for avoiding convergence to a local minimum.

Recall that the GLMS algorithm is

θ(n+1) = θ(n) - μ(n) e(n) ∇_θ e(n, θ) - β(n) η    (3-10)

where the appended perturbation noise source is β(n)η.

3.4 LMS-SAS Algorithm

Srinivasan used Equation (3-9) to estimate the gradient in the Global LMS (GLMS) algorithm of Equation (3-10) [56]. Similar to the GLMS algorithm, we now derive the novel LMS-SAS algorithm. Adaptive IIR filtering based on the gradient search essentially minimizes the mean-square difference between a desired sequence d(n) and the output of the adaptive filter y(n). The development of the GLMS and LMS-SAS algorithms involves evaluating the MSE objective function, which can be described as

ξ(θ) = (1/2) E[ε²(n)] = (1/2) E{[d(n) − y(n)]²}    (3-11)
where E is the statistical expectation. The output signal of the adaptive IIR filter, representing a direct-form realization of a linear system, is


y(n) = a₀ x(n) + ⋯ + a_{N−1} x(n − N + 1) + b₁ y(n − 1) + ⋯ + b_{M−1} y(n − M + 1)    (3-12)


which can be rewritten as

y(n) = θᵀ(n) φ(n)    (3-13)







where θ(n) is the parameter vector and φ(n) is the input vector,

θ(n) = [a₀(n), …, a_{N−1}(n), b₁(n), …, b_{M−1}(n)]ᵀ    (3-14)
φ(n) = [x(n), …, x(n − N + 1), y(n − 1), …, y(n − M + 1)]ᵀ    (3-15)

The MSE objective function is

ξ(n, θ) = (1/2) E{[d(n) − θᵀ(n)φ(n)]²}    (3-16)

Now we use the instantaneous value as an estimate of the expectation, E{ε²(n)} ≈ ε²(n), such that

ξ(n, θ) = (1/2) ε²(n, θ) = (1/2) [d(n) − θᵀ(n)φ(n)]²    (3-17)

Considering the LMS algorithm, we must estimate the gradient vector with respect to the parameters θ:

∇_θ ξ(n, θ) = (1/2) ∇_θ [ε²(n, θ)] = ε(n, θ) ∇_θ ε(n, θ) = ε(n, θ) [∂ε(n, θ)/∂a_i ; ∂ε(n, θ)/∂b_i]ᵀ    (3-18)

The partial derivative term ∂ε(n, θ)/∂a_i is evaluated as

∂ε(n, θ)/∂a_i = −{ Σ_{k=1}^{M−1} b_k ∂y(n − k)/∂a_i + x(n − i) }    (3-19)

Similarly, the partial derivative term ∂ε(n, θ)/∂b_i is evaluated as

∂ε(n, θ)/∂b_i = −{ Σ_{k=1}^{M−1} b_k ∂y(n − k)/∂b_i + y(n − i) }    (3-20)
From Equation (3-9), we obtain

∇_θ ε(n, θ − βη) ≈ ∇_θ ε(n, θ) − βη    (3-21)

Using the above equation, we obtain the adaptive steepest-descent algorithm as

θ(n+1) = θ(n) − μ(n) ε(n) ∇_θ ε(n, θ − βη)    (3-22)
       = θ(n) − μ(n) ε(n) ∇_θ ε(n, θ) + μ(n) ε(n) βη    (3-23)








where the third term μ(n)ε(n)βη on the right-hand side is the appended perturbation noise source, η represents a single additive random source, μ(n) is the step size, which decreases over the iterations, and ε(n) is the error between the desired output signal and the output signal of the adaptive IIR filter.

The difference between LMS-SAS and GLMS resides in the form of the appended perturbation noise source: we have modified the appended noise source by multiplying it with the error. This modification brings the error into the noise term, which is in principle a better approximation to the Taylor series expansion in Equation (3-8) than Equation (3-9). We can therefore foresee better results.
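The update in Equations (3-22)-(3-23) can be prototyped in a few lines. The following Python sketch adapts the reduced-order filter H_a(z) = b/(1 − a z⁻¹) used later in Section 3.8; the schedule choices (mu0, beta0 and their decay) and the gradient recursion are illustrative assumptions, not the exact settings of the experiments.

    import numpy as np

    def lms_sas(x, d, mu0=0.1, beta0=0.1, seed=0):
        """Minimal LMS-SAS sketch for H_a(z) = b / (1 - a z^-1) (assumed settings)."""
        rng = np.random.default_rng(seed)
        n_iter = len(x)
        a, b = 0.0, 0.0                                # parameters theta = [a, b]
        y_prev, dy_da_prev, dy_db_prev = 0.0, 0.0, 0.0
        for n in range(n_iter):
            mu = mu0 * (1.0 - n / n_iter)              # decreasing step size mu(n)
            beta = beta0 * (1.0 - n / n_iter)          # decreasing perturbation size beta(n)
            y = b * x[n] + a * y_prev                  # filter output
            e = d[n] - y                               # error epsilon(n)
            # recursive output gradients (first-order case of Equations (3-19), (3-20))
            dy_da = y_prev + a * dy_da_prev
            dy_db = x[n] + a * dy_db_prev
            eta = rng.standard_normal(2)               # single random perturbation source
            # Equation (3-23): gradient step plus error-scaled perturbation mu*e*beta*eta
            a += mu * e * dy_da + mu * e * beta * eta[0]
            b += mu * e * dy_db + mu * e * beta * eta[1]
            y_prev, dy_da_prev, dy_db_prev = y, dy_da, dy_db
        return a, b

In a system identification run, x would be the excitation sequence and d the output of the unknown system.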

3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS

In this section, we obtain the transition probability of escaping out of a local minimum by solving a pair of partial differential equations, called the Fokker-Planck (diffusion) equations. We follow the lines of Wong [78]. We can write the LMS-SAS algorithm as the Itô integral


θ_t = θ_a + ∫ₐᵗ m(θ_s, s) ds + ∫ₐᵗ σ(θ_s, s) dW_s    (3-24)


where m(θ, s) and σ(θ, s) denote, respectively, the drift and diffusion terms induced by Equation (3-23), and W_s is a Wiener process (3-25).

Let {θ_t, a ≤ t ≤ b} be a Markov process, and denote

P(θ, t | θ₀, t₀) = P(θ_t ≤ θ | θ_{t₀} = θ₀)    (3-26)


We call P(θ, t | θ₀, t₀) the transition function of the process.

We first discuss the simple case in which θ is a scalar and then the more involved case in which θ is a vector.

θ is a scalar.

If there is a function p(θ, t | θ₀, t₀) such that

P(θ, t | θ₀, t₀) = ∫_{−∞}^{θ} p(x, t | θ₀, t₀) dx    (3-27)







then we call p(θ, t | θ₀, t₀) the transition density function. Since {θ_t, a ≤ t ≤ b} is a Markov process, P(θ, t | θ₀, t₀) satisfies the Chapman-Kolmogorov equation

P(θ, t | θ₀, t₀) = ∫ P(θ, t | z, s) dP(z, s | θ₀, t₀)    (3-28)

We now assume the crucial conditions on {θ_t, a ≤ t ≤ b} which make the derivation of the diffusion equation possible. Define, for a positive ε̄,

M_k(θ, t; ε̄, Δ) = ∫_{|y−θ| ≤ ε̄} (y − θ)ᵏ dP(y, t + Δ | θ, t),   k = 0, 1, 2    (3-29)

M₃(θ, t; ε̄, Δ) = ∫_{|y−θ| ≤ ε̄} |y − θ|³ dP(y, t + Δ | θ, t)    (3-30)

We assume that the Markov process {θ_t, a ≤ t ≤ b} satisfies the following conditions as Δ → 0:

(1/Δ)[1 − M₀(θ, t; ε̄, Δ)] → 0    (3-31)
(1/Δ) M₁(θ, t; ε̄, Δ) → m(θ, t)    (3-32)
(1/Δ) M₂(θ, t; ε̄, Δ) → σ²(θ, t)    (3-33)
(1/Δ) M₃(θ, t; ε̄, Δ) → 0    (3-34)

It is clear that if 1 − M₀(θ, t; ε̄, Δ) → 0, then by dominated convergence,

P(|θ_{t+Δ} − θ_t| > ε̄) = ∫ [1 − M₀(θ, t; ε̄, Δ)] dP(θ, t) → 0    (3-35)

In addition, suppose that the transition function P(θ, t | θ₀, t₀) satisfies the following condition:

Assumption. For each (θ, t), P(θ, t | θ₀, t₀) is once differentiable in t₀ and three times differentiable in θ₀, and the derivatives are continuous and bounded at (θ₀, t₀).
Kolmogorov [79] has derived the Fokker-Planck equation

∂p(θ, t | θ₀, t₀)/∂t = (1/2) ∂²/∂θ² [σ²(θ, t) p(θ, t | θ₀, t₀)] − ∂/∂θ [m(θ, t) p(θ, t | θ₀, t₀)],   b > t > t₀ > a    (3-36)







The initial condition to be imposed is

∫ f(θ) p(θ, t | θ₀, t₀) dθ → f(θ₀),   ∀ f ∈ S    (3-37)

that is, p(θ, t₀ | θ₀, t₀) = δ(θ − θ₀). Substituting Equation (3-24) into the Fokker-Planck equation, we get

∂p(θ, t)/∂t = μ(t) ∂/∂θ { (1/2) ∂/∂θ [ε(θ) p(θ, t)] − ∇_θV(θ) p(θ, t) }    (3-38)

If p(θ, t) is a product p(θ, t) = g(t) W(θ) φ(θ), reflecting the independence among the quantities, then we have

W(θ) φ(θ) dg(t)/dt = g(t) μ(t) d/dθ { (1/2) d/dθ [ε(θ) W(θ) φ(θ)] − ∇_θV(θ) W(θ) φ(θ) }    (3-39)

Let W(θ) be any positive solution of the equation

(1/2) d/dθ [ε(θ) W(θ)] = ∇_θV(θ) W(θ)    (3-40)

then

W(θ) φ(θ) dg(t)/dt = g(t) μ(t) (1/2) d/dθ [ ε(θ) W(θ) dφ(θ)/dθ ]    (3-41)

Therefore

[1/(g(t) μ(t))] dg(t)/dt = [1/(W(θ) φ(θ))] (1/2) d/dθ [ ε(θ) W(θ) dφ(θ)/dθ ]    (3-42)

The two sides, being functions of different variables, must be constant in order for the equality to hold. Setting this constant to −λ, we get

[1/(g(t) μ(t))] dg(t)/dt = −λ    (3-43)

g(t) = e^{−λ ∫_{t₀}^{t} μ(s) ds}    (3-44)

p(θ, t) = e^{−λ ∫_{t₀}^{t} μ(s) ds} W(θ) φ_λ(θ)    (3-45)

where φ_λ(θ) satisfies the Sturm-Liouville equation

(1/2) d/dθ [ ε(θ) W(θ) dφ_λ(θ)/dθ ] + λ W(θ) φ_λ(θ) = 0    (3-46)







Under rather general conditions, it can be shown that every solution p(θ, t) can be represented as a linear combination of such products. Since p(θ, t | θ₀, t₀) is a function of t, t₀, θ, θ₀, it must have the form

p(θ, t | θ₀, t₀) = W(θ) ∫ e^{−λ ∫_{t₀}^{t} μ(s) ds} φ_λ(θ) φ*_λ(θ₀) dλ    (3-47)

where φ*_λ(θ₀) is the complex conjugate of φ_λ(θ₀). Here we want to know the transition probability of the process escaping from a steady-state solution θ*, at which ∇V(θ*) = 0. From Equation (3-40), we obtain

ε(θ*) W(θ*) = c    (3-48)

where c is a constant. The Sturm-Liouville equation becomes

(ε(θ*)/2) d²φ(θ)/dθ² + λ φ(θ) = 0    (3-49)

Let ν = √(2λ/ε(θ*)); then φ_λ(θ) = e^{jνθ} are the bounded solutions. And we know that

e^{−λ ∫_{t₀}^{t} μ(s) ds} = e^{−(ν²/2) ε(θ*) T}    (3-50)

where T = ∫_{t₀}^{t} μ(s) ds. By the inversion formula of the Fourier integral, we obtain

(1/2π) ∫ e^{−(ν²/2) ε(θ*) T} e^{jν(θ−θ*)} dν = [1/√(2π ε(θ*) T)] exp(−(θ − θ*)²/(2 ε(θ*) T))    (3-51)

From Equation (3-47), we get the transition probability of the process escaping out of the valley as

p(θ, t | θ*, t₀) = [1/√(2π ε(θ*) ∫_{t₀}^{t} μ(s) ds)] exp( −(θ − θ*)² / (2 ε(θ*) ∫_{t₀}^{t} μ(s) ds) )
               = G(θ − θ*, ε(θ*) ∫_{t₀}^{t} μ(s) ds)    (3-52)

where G(θ, σ²) is a Gaussian function with zero mean and variance σ².

Summary. Equation (3-52) is the final transition probability of the process escaping from the steady state θ*. The conditional density p(θ, t | θ*, t₀) is determined by θ − θ*, μ(n), and ε(θ*). Because we use a monotonically decreasing μ(n), the algorithm will decrease the probability of the process jumping out of a valley over the iterations. From Equation (3-52), the transition probability of the process escaping from a local minimum is larger than that from the global minimum, because the error ε(θ*) at the global minimum is smaller than at any local minimum. Thus, the algorithm will stay most of its time near the global valley and will eventually converge to the global minimum. Equation (3-52) also shows that the larger θ − θ* is, i.e., the larger the valley around the steady-state point θ*, the less probable it is for the process to escape from that steady-state point.
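To make Equation (3-52) concrete, the probability of leaving a valley of half-width Δ around θ* can be approximated by the Gaussian tail mass outside [θ* − Δ, θ* + Δ]. The following sketch evaluates this for assumed, illustrative values of ε(θ*), Δ, and the step-size integral; none of these numbers come from the experiments.

    from math import erfc, sqrt

    def escape_probability(eps_at_min, mu_integral, half_width):
        """Tail mass of G(theta - theta*, eps(theta*) * int mu(s) ds) outside the valley."""
        variance = eps_at_min * mu_integral
        return erfc(half_width / sqrt(2.0 * variance))

    # Illustrative comparison: a shallow local minimum vs. a deeper global minimum.
    print(escape_probability(eps_at_min=0.5, mu_integral=0.1, half_width=0.3))   # local
    print(escape_probability(eps_at_min=0.1, mu_integral=0.1, half_width=0.3))   # global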

θ is a vector.

Returning to the original case, in which θ is a vector, we must solve the following Fokker-Planck equation:

∂p(θ, t)/∂t = (1/2) ∇_θᵀ[μ(t) ε(θ) ∇_θ p(θ, t)] − ∇_θᵀ[μ(t) ∇_θ V(θ, t) p(θ, t)]    (3-53)

Similarly, we want to know the transition probability of escaping from a steady-state solution θ*, at which ∇V(θ*) = 0; Equation (3-53) then simplifies accordingly (3-54). Imposing the strict constraint that p(θ, t) is a product,

p(θ, t) = g(t) φ(θ) = g(t) φ₁(θ₁) φ₂(θ₂) ⋯ φ_{N+M−1}(θ_{N+M−1})    (3-55)

we have

[1/(g(t) μ(t))] dg(t)/dt = (ε(θ*)/2) ∇²φ(θ)/φ(θ)    (3-56)

The two sides, being functions of different variables, must be constant; setting this constant to −λ, then

[1/(g(t) μ(t))] dg(t)/dt = −λ    (3-57)

(ε(θ*)/2) ∇²φ(θ)/φ(θ) = −λ    (3-58)

Similarly, Equation (3-58) can be separated as

(ε(θ*)/2) φ₁″(θ₁)/φ₁(θ₁) = −λ₁
(ε(θ*)/2) φ₂″(θ₂)/φ₂(θ₂) = −λ₂
⋮
(ε(θ*)/2) φ″_{N+M−1}(θ_{N+M−1})/φ_{N+M−1}(θ_{N+M−1}) = −λ_{N+M−1}    (3-59)

where Σ_{i=1}^{N+M−1} λ_i = λ.

Let ν_i = √(2λ_i/ε(θ*)); then φ_i(θ_i) = e^{jν_iθ_i}, for i = 1, 2, …, N+M−1, are the bounded solutions. From Equation (3-47), we get the transition probability of the process escaping out of the valley as

p(θ, t | θ*, t₀) = Π_{i=1}^{N+M−1} (1/2π) ∫ e^{−(ν_i²/2) ε(θ*) ∫_{t₀}^{t} μ(s) ds} e^{jν_i(θ_i − θ_i*)} dν_i
               = Π_{i=1}^{N+M−1} G(θ_i − θ_i*, ε(θ*) ∫_{t₀}^{t} μ(s) ds)    (3-60)

Under the constraint of factorization, the same arguments used for the scalar case also hold for the vector case. However, the φ_i(θ_i), i = 1, 2, …, N+M−1, are not, in general, independent of each other, so p(θ, t) must also include correlated terms besides the independent product term. Therefore the actual transition probability p(θ, t | θ*, t₀) is larger than Equation (3-60). In the more realistic case of dependence, the Fokker-Planck equation becomes very complicated, and it is not easy to find the transition function from a steady-state point.

3.6 Normalized LMS Algorithm

Because in practice we use the instantaneous gradient instead of the theoretical gradient, an estimation error naturally occurs. This gradient error can act as the appended perturbing noise. After reviewing the normalized LMS (NLMS) algorithm [2], we show that the global optimization behavior of the NLMS algorithm is similar to that of the LMS-SAS algorithm because of the noisy gradient estimate. As a result, the NLMS algorithm can also be used for global optimization.







Consider the problem of minimizing the squared Euclidean norm of

δθ(n+1) = θ(n+1) − θ(n)    (3-61)

subject to the constraint

θᵀ(n+1) ∇_θ y(n) = d(n)    (3-62)

To solve this constrained optimization problem, we use the method of Lagrange multipliers. The squared norm of δθ(n+1) is

‖δθ(n+1)‖² = δθᵀ(n+1) δθ(n+1)
           = [θ(n+1) − θ(n)]ᵀ[θ(n+1) − θ(n)]
           = Σ_{k=0}^{N} |θ_k(n+1) − θ_k(n)|²    (3-63)

The constraint of Equation (3-62) can be represented as

Σ_{k=0}^{N} θ_k(n+1) ∇_θ y_k(n) = d(n)    (3-64)

The cost function J(n) for the optimization problem is formulated by combining Equations (3-63) and (3-64) as

J(n) = Σ_{k=0}^{N} |θ_k(n+1) − θ_k(n)|² + λ [ d(n) − Σ_{k=0}^{N} θ_k(n+1) ∇_θ y_k(n) ]    (3-65)

where λ is a Lagrange multiplier. After differentiating the cost function J(n) with respect to the parameters and setting the result to zero, we obtain

2[θ_k(n+1) − θ_k(n)] = λ ∇_θ y_k(n),   k = 0, 1, …, N    (3-66)

Multiplying both sides of the above equation by ∇_θ y_k(n) and summing over k from 0 to N, we obtain

λ = [2/Σ_{k=0}^{N} ∇_θ y_k(n)²] [ Σ_{k=0}^{N} θ_k(n+1) ∇_θ y_k(n) − Σ_{k=0}^{N} θ_k(n) ∇_θ y_k(n) ]
  = [2/‖∇_θ y(n)‖²] [ θᵀ(n+1) ∇_θ y(n) − θᵀ(n) ∇_θ y(n) ]    (3-67)







Table 3-1: NLMS algorithm

  y(n) = Σ_{i=0}^{N−1} a_i x(n − i) + Σ_{j=1}^{M−1} b_j y(n − j)
  θ(n) = [a₀(n), …, a_{N−1}(n), b₁(n), …, b_{M−1}(n)]ᵀ
  φ(n) = [x(n), …, x(n − N + 1), y(n − 1), …, y(n − M + 1)]ᵀ
  y(n) = θᵀ(n) φ(n),   ε(n) = d(n) − y(n)
  ∇_θ y(n) = φ(n) + Σ_j b_j ∇_θ y(n − j)
  θ(n+1) = θ(n) + [μ/‖∇_θ y(n)‖²] ε(n) ∇_θ y(n)

Substituting the constraint of Equation (3-62) into Equation (3-67), we obtain

λ = [2/‖∇_θ y(n)‖²] [ d(n) − θᵀ(n) ∇_θ y(n) ]    (3-68)

Define the error ε(n) = d(n) − θᵀ(n) ∇_θ y(n). We further simplify λ as

λ = [2/‖∇_θ y(n)‖²] ε(n)    (3-69)

By substituting the above equation into Equation (3-66), we obtain

δθ_k(n+1) = [1/‖∇_θ y(n)‖²] ∇_θ y_k(n) ε(n),   k = 0, 1, …, N    (3-70)

For adaptive IIR filtering, the above equation can be formulated as

δθ(n+1) = [μ/‖∇_θ y(n)‖²] ∇_θ y(n) ε(n)    (3-71)

or equivalently, we may write

θ(n+1) = θ(n) + [μ/‖∇_θ y(n)‖²] ∇_θ y(n) ε(n)    (3-72)

This is the so-called NLMS algorithm, summarized in Table 3-1, where the initial conditions are randomly chosen.

Computational complexity. The computational complexity of the NLMS algorithm is (N + M)(M + 3). Compared to the computational complexity of the original LMS algorithm, which is (M + N)(N + 2), the NLMS algorithm is almost as simple; it only requires a little extra computation.
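A compact realization of Table 3-1 for the first-order model H_a(z) = b/(1 − a z⁻¹) is sketched below; the step size, its decay, and the small regularizer added to the squared gradient norm are illustrative assumptions.

    import numpy as np

    def nlms_iir(x, d, mu0=0.1, eps_reg=1e-8):
        """Minimal NLMS sketch for H_a(z) = b / (1 - a z^-1) (assumed settings)."""
        a, b = 0.0, 0.0
        y_prev, dy_da_prev, dy_db_prev = 0.0, 0.0, 0.0
        n_iter = len(x)
        for n in range(n_iter):
            mu = mu0 * (1.0 - n / n_iter)             # decreasing step size
            y = b * x[n] + a * y_prev
            e = d[n] - y
            dy_da = y_prev + a * dy_da_prev           # recursive gradient of y w.r.t. a
            dy_db = x[n] + a * dy_db_prev             # recursive gradient of y w.r.t. b
            norm2 = dy_da**2 + dy_db**2 + eps_reg     # ||grad y(n)||^2 (regularized)
            a += mu * e * dy_da / norm2               # Equation (3-72)
            b += mu * e * dy_db / norm2
            y_prev, dy_da_prev, dy_db_prev = y, dy_da, dy_db
        return a, b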







3.7 Relationship between LMS-SAS and NLMS Algorithms

In this section, we show that the behavior of the NLMS algorithm is similar to that of the LMS-SAS algorithm from a global optimization perspective. Here we follow the lines of Widrow et al. [1] and assume that the algorithm will converge to the vicinity of a steady-state point.

From Equation (3-18), we know that the estimated gradient vector is

∇̂ξ(θ(n)) = −ε(n) ∇_θ y(n)    (3-73)

Define N(n) as the vector of gradient estimation noise at the nth iteration and ∇ξ(θ(n)) as the true gradient vector. Thus

∇̂ξ(θ(n)) = ∇ξ(θ(n)) + N(n),   N(n) = ∇̂ξ(θ(n)) − ∇ξ(θ(n))    (3-74)

If we assume that the NLMS algorithm has converged to the vicinity of a local steady-state point θ*, then ∇ξ(θ(n)) will be close to zero. Therefore the gradient estimation noise will be

N(n) ≈ ∇̂ξ(θ(n)) = −ε(n) ∇_θ y(n)    (3-75)

The covariance of the noise is given by

cov[N(n)] = E[N(n) Nᵀ(n)] = E[ε²(n) ∇_θ y(n) ∇_θ yᵀ(n)]    (3-76)

We assume that ε²(n) is approximately uncorrelated with ∇_θ y(n) (the same assumption as [1]); thus, near the local minimum,

cov[N(n)] ≈ E[ε²(n)] E[∇_θ y(n) ∇_θ yᵀ(n)]    (3-77)

We rewrite the NLMS algorithm as

θ(n+1) = θ(n) − [μ(n)/‖∇_θ y(n)‖²] ∇̂ξ(θ(n))    (3-78)







Substituting Equation (3-74) into the above equation, we obtain

θ(n+1) = θ(n) − [μ(n)/‖∇_θ y(n)‖²] (∇ξ(θ(n)) + N(n))    (3-79)
       = θ(n) − [μ(n)/‖∇_θ y(n)‖²] ∇ξ(θ(n)) − [μ(n)/‖∇_θ y(n)‖²] N(n)    (3-80)

where the last term is the appended perturbing noise. Its covariance, from Equation (3-77), is

cov[N(n)/‖∇_θ y(n)‖²] = cov[N(n)]/‖∇_θ y(n)‖⁴ ≈ E[ε²(n)] E[∇_θ y(n) ∇_θ yᵀ(n)]/‖∇_θ y(n)‖⁴ ≈ E[ε²(n)] A    (3-81)

where A is a unit-norm matrix. Thus the NLMS algorithm near any local or global minimum has the variance of the perturbing random noise determined solely by μ(n) and ε(n). This behavior is very different from the conventional LMS algorithm with monotonically decreasing step size, where the perturbation noise is determined by μ(n), ε(n), and ∇_θ y(n). Therefore, in the LMS algorithm the variance near a steady-state point is small because the noisy gradient ε(n)∇_θ y(n) vanishes there. Hence the LMS algorithm has a small probability of escaping from any local minimum because of the small variance of the noisy gradient.

On the other hand, notice that the variance of the perturbing random noise in the LMS-SAS algorithm is governed by μ(n)ε(n)βη, which is also independent of the gradient and controlled by both μ(n) and ε(n). Therefore, we can anticipate that the global optimization behavior of the NLMS algorithm near local minima is similar to that of the LMS-SAS algorithm. Far away from local minima, the behavior of LMS-SAS and NLMS is expected to be rather different.

3.8 Simulation Results

In this section, we compare the performances of the LMS, LMS-SAS, and NLMS algorithms in terms of their capability to find the global optimum of IIR filters in a system identification framework. According to the properties of adaptive algorithms discussed in Chapter 2, we set up a system identification example whose MSE criterion performance surface has one local and one global minimum. In this example, we











Table 3-2: System identification of reduced order model

                                           Number of hits
  Method                                   Global minimum {0.906, −0.311}   Local minimum {−0.519, 0.114}
  LMS with μ = 0.001                       40                               60
  LMS with μ₂(n)                           10                               90
  GLMS with β(n) and μ = 0.001             60                               40
  LMS-SAS with μ₂(n) for n_max = 20000     93                               7
  LMS-SAS with μ₂(n) for n_max = 40000     100                              0
  NLMS with μ₁(n)                          100                              0
  NLMS with μ₂(n)                          99                               1
  NLMS with μ₃(n)                          98                               2

will identify the following unknown system

H(z) = (0.05 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (3-82)

by a reduced-order IIR adaptive filter of the form

H_a(z) = b/(1 − a z⁻¹)    (3-83)


The main goal is to determine the values of the coefficients {a, b} of the above equation such that the MSE is minimized to the global minimum. The excitation signal is chosen to be random Gaussian noise with zero mean and unit variance. There exist two minima of the MSE criterion performance surface, the local minimum at {a, b} = {−0.519, 0.114} and the global minimum at {a, b} = {0.906, −0.311}. Here we use three types of annealing schedule for the step size (see Figure 3-2, which shows that one is linear, one is sub-linear, and the other is supra-linear),

μ₁(n) = 0.1 cos(nπ/2n_max)
μ₂(n) = 0.1 − 0.1 n/n_max,   n ≤ n_max = 20000    (3-84)
μ₃(n) = 2 μ₂(n) − μ₁(n)

The cooling schedule parameter for the GLMS algorithm is a decreasing function of the iterations, β(n) = 100/n.
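The three step-size schedules of Equation (3-84) and the GLMS cooling schedule are simple functions of the iteration index; a small sketch such as the one below (with n_max = 20000, as in the text) can be used to generate and plot them.

    import numpy as np

    n_max = 20000
    n = np.arange(1, n_max + 1)

    mu1 = 0.1 * np.cos(n * np.pi / (2 * n_max))   # cosine schedule mu_1(n)
    mu2 = 0.1 - 0.1 * n / n_max                   # linear schedule mu_2(n)
    mu3 = 2 * mu2 - mu1                           # schedule mu_3(n)
    beta = 100.0 / n                              # GLMS cooling schedule beta(n)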

Table 3-2 shows the comparison of the number of global and local minimum hits by the various algorithms. The results are given by 100 Monte Carlo simulations with random initial






















Figure 3-2: Step size μ(n) for the SAS algorithm.

conditions of θ at each run. The convergence characteristics of θ toward the global minimum for the GLMS, LMS-SAS, and NLMS algorithms are shown in Figures 3-3, 3-4, and 3-5, respectively. The adaptation processes in which θ approaches the local minimum for the LMS, GLMS, and LMS-SAS algorithms are also depicted in Figures 3-6, 3-7, and 3-8, respectively, where θ is initialized to a point near the local minimum. Based on the simulation results, we can summarize the performance as follows:

• Figure 3-6 and rows 1, 2 in Table 3-2 show that the LMS algorithm is likely to converge to its nearby local minimum, so global optimization cannot be guaranteed.

• Figures 3-3, 3-7 and row 3 in Table 3-2 show that the GLMS algorithm might jump to the global minimum or converge to the local minimum, depending on the cooling schedule.

• Figures 3-4, 3-8 and rows 4, 5 in Table 3-2 show that the LMS-SAS algorithm could converge to the global minimum with a proper step size. Even though the LMS-SAS



















Figure 3-3: Global convergence of θ in the GLMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-4: Global convergence of θ in the LMS-SAS algorithm. A) Weight θ; B) Contour of θ.


algorithm stays most of its time near the global minimum, it still has probability of converging to the local minimum.

Figure 3-5 and rows 6, 7, 8 in Table 3-2 show that the NLMS algorithm with a proper step size, similarly to the LMS-SAS algorithm, can converge to the global minimum. Figures 3-4 and 3-5 also show that the NLMS algorithm stays much longer in the global minimum valley than the other algorithms. These figures also show that the step size of the NLMS algorithm does not play as crucial a role as the cooling schedule of the GLMS algorithm.

3.9 Comparison of LMS-SAS and NLMS Algorithm

Recall that the LMS-SAS algorithm is described as

θ(n+1) = θ(n) − μ(n) ε(n) ∇_θ ε(n, θ) + μ(n) ε(n) βη    (3-85)















Figure 3-5: Global convergence of θ in the NLMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-6: Local convergence of θ in the LMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-7: Local convergence of θ in the GLMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-8: Local convergence of θ in the LMS-SAS algorithm. A) Weight θ; B) Contour of θ.

On the other hand, the NLMS algorithm is

θ(n+1) = θ(n) + [μ(n)/‖∇_θ y(n)‖²] ∇_θ y(n) ε(n)    (3-86)

The LMS-SAS algorithm adds a perturbing noise to avoid converging to local minima, while the NLMS algorithm uses the inherent gradient estimation noise to avoid converging to local minima. Two different types of step size, μ(n) and μ(n)/‖∇_θ y(n)‖², are used by LMS-SAS and NLMS, respectively. Therefore, in order to compare the performance of both algorithms fairly in terms of global optimization, we set up the three following experiments.

Here we use the same system identification scheme, i.e., we identify three unknown systems,

Example I:   H_I(z) = (0.05 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (3-87)
Example II:  H_II(z) = (0.2 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (3-88)
Example III: H_III(z) = (0.3 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (3-89)

by a reduced-order adaptive filter of the form

H(z) = b/(1 − a z⁻¹)    (3-90)
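The two-dimensional MSE surface over {a, b} that underlies Figure 3-9 can be estimated numerically by filtering a long white-noise record through the unknown system and through candidate reduced-order models. The sketch below does this for Example I; the grid resolution and record length are illustrative assumptions.

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)                        # white Gaussian excitation
    d = lfilter([0.05, -0.4], [1.0, -1.1314, 0.25], x)   # Example I unknown system

    a_grid = np.linspace(-0.95, 0.95, 61)
    b_grid = np.linspace(-1.0, 1.0, 61)
    mse = np.empty((len(a_grid), len(b_grid)))
    for i, a in enumerate(a_grid):
        for j, b in enumerate(b_grid):
            y = lfilter([b], [1.0, -a], x)               # reduced-order model b/(1 - a z^-1)
            mse[i, j] = np.mean((d - y) ** 2)            # MSE criterion for this (a, b)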




















Figure 3-9: Contour of MSE. A) Example I; B) Example II; C) Example III.

Figure 3-10: Weight (top) and ‖∇_θ y(n)‖ (bottom) in A) Example I; B) Example II; C) Example III.


The main goal is to determine the values of the coefficients {a, b} of the above equation such that the MSE is minimized (to the global minimum). The excitation input is chosen to be random Gaussian noise with zero mean and unit variance. Figure 3-9 depicts the contours of the MSE criterion performance surface for Examples I, II, and III. Here, the step size for the NLMS algorithm is chosen to be a linearly decreasing function,

μ_NLMS(n) = 0.1 (1 − 2.5 × 10⁻⁵ n)

Step sizes for the LMS-SAS algorithm are a family of linearly decreasing functions,

μ_LMS-SAS(n) = k (1 − 2.5 × 10⁻⁵ n),
k ∈ {0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5}    (3-91)

where we vary the initial step size k but preserve the same annealing rate.











Table 3-3: Example I for system identification

                                        Number of hits
  Method                                Global minimum {0.906, −0.311}   Local minimum {−0.519, 0.114}
  LMS with μ_NLMS(n)                    485                              515
  NLMS with μ_NLMS(n)                   996                              4
  LMS-SAS with μ_LMS-SAS(n), k = 0.01   494                              506
  LMS-SAS with μ_LMS-SAS(n), k = 0.02   875                              125
  LMS-SAS with μ_LMS-SAS(n), k = 0.03   952                              48
  LMS-SAS with μ_LMS-SAS(n), k = 0.04   947                              53
  LMS-SAS with μ_LMS-SAS(n), k = 0.05   931                              69
  LMS-SAS with μ_LMS-SAS(n), k = 0.06   976                              24
  LMS-SAS with μ_LMS-SAS(n), k = 0.07   976                              24
  LMS-SAS with μ_LMS-SAS(n), k = 0.08   974                              26
  LMS-SAS with μ_LMS-SAS(n), k = 0.09   974                              26
  LMS-SAS with μ_LMS-SAS(n), k = 0.1    960                              40
  LMS-SAS with μ_LMS-SAS(n), k = 0.2    840                              160
  LMS-SAS with μ_LMS-SAS(n), k = 0.3    835                              165
  LMS-SAS with μ_LMS-SAS(n), k = 0.4    780                              220
  LMS-SAS with μ_LMS-SAS(n), k = 0.5    765                              235


Tables 3-3, 3-4, and 3-5 show the simulation results of global and local minimum hits for the LMS, LMS-SAS, and NLMS algorithms. The value of ‖∇_θ y(n)‖ is depicted in Figure 3-10. The larger ‖∇_θ y(n)‖ is, the smaller the increments used by the algorithm, i.e., the lower the probability of the algorithm escaping from a steady-state point. In Examples I and II, the global minimum valley has a sharper slope than the local valley; therefore, Tables 3-3 and 3-4 show that the NLMS algorithm has a higher probability of obtaining the global minimum than the other algorithms in these cases. In Example III, the local minimum valley has a sharper slope than the global valley; therefore, Table 3-5 shows that the NLMS algorithm has a lower probability of obtaining the global minimum than the other algorithms in this case.

3.10 Conclusion

Several methods have been proposed for the global optimization of adaptive IIR filtering. We modified the perturbing noise in the GLMS algorithm by multiplying it by the error. The modified algorithm, which is referred to as the LMS-SAS algorithm













Table 3-4: Example II for system identification

                                        Number of hits
  Method                                Global minimum   Local minimum
  LMS with constant μ                   20               80
  NLMS with μ_NLMS(n)                   89               11
  LMS-SAS with μ_LMS-SAS(n), k = 0.01   20               80
  LMS-SAS with μ_LMS-SAS(n), k = 0.02   9                91
  LMS-SAS with μ_LMS-SAS(n), k = 0.04   2                98
  LMS-SAS with μ_LMS-SAS(n), k = 0.06   1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.08   1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.09   1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.1    1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.2    2                98
  LMS-SAS with μ_LMS-SAS(n), k = 0.3    1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.4    1                99
  LMS-SAS with μ_LMS-SAS(n), k = 0.5    2                98


Table 3-5: Example III for system identification

                                        Number of hits
  Method                                Global minimum   Local minimum
  LMS with constant μ                   92               8
  NLMS with μ_NLMS(n)                   90               10
  LMS-SAS with μ_LMS-SAS(n), k = 0.01   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.02   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.04   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.06   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.08   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.09   100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.1    100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.2    100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.3    100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.4    100              0
  LMS-SAS with μ_LMS-SAS(n), k = 0.5    100              0







in this dissertation, results in better performance in global optimization than the original algorithm.

From the diffusion equation, we have derived the transition probability of the LMS-SAS algorithm escaping from a steady-state point. Since the error at the global minimum is always smaller than at a local minimum, the transition probability of the algorithm escaping from a local minimum is always larger than that from the global minimum. Hence, the algorithm will stay most of the time near the global minimum and eventually converge to the global minimum.

Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, an estimation error naturally occurs. This gradient estimation error, when properly normalized, can act as the perturbing noise. We have shown that the behavior of the NLMS algorithm with decreasing step size near a minimum is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends strongly on the shape of the cost function. The sharper a local minimum, the less likely the NLMS algorithm is to escape from that steady-state point. On the other hand, the broader the valley around a local minimum, the more difficult it is for the algorithm to escape from that valley.














CHAPTER 4
INFORMATION THEORETIC LEARNING

4.1 Introduction

The mean square error criterion has been extensively used in the field of adaptive systems [80] because of its analytical simplicity and the assumption of a Gaussian distribution for the error. Since the Gaussian distribution is totally characterized by its first and second order statistics, the MSE criterion can extract all the information from a set of data under that assumption. However, the assumption of a Gaussian distribution is not always true. Therefore, a criterion that considers higher-order statistics is necessary for the training of adaptive systems. Shannon [81] first introduced the entropy of a given probability distribution function, which provides a measure of the average information in the distribution. By using the Parzen window estimator [82], we can estimate the pdf directly from a set of data. It is quite straightforward to apply the entropy criterion to the system identification framework [6, 5]. The pdf of the error signal between the desired signal and the output signal of the adaptive filter must be as close as possible to a delta distribution, δ(·). Hence, the supervised training problem becomes an entropy minimization problem, as suggested by Erdogmus et al. [6].

The kernel size of the Parzen window estimator is an important parameter in the global optimization procedure. It was conjectured by Erdogmus et al. [6] that for a sufficiently large kernel size, the local minima of the error entropy criterion can be eliminated. It was suggested that starting with a large kernel size, and then slowly decreasing this parameter to a predetermined suitable value, the training algorithm can converge to the global minimum of the cost function. The error entropy criterion considered by Erdogmus et al. [6], however, does not consider the mean of the error signal, since entropy is invariant to translation. In this dissertation, we propose a modification to the error entropy criterion, in order to take this point into account.







The proposed criterion with annealing of the kernel size is then shown to exhibit the conjectured global optimization behavior in the training of IIR filters.

4.2 Entropy and Mutual Information

Shannon [81] defined the entropy of a probability distribution P = {p₁, p₂, …, p_N} as

H_S(P) = Σ_{k=1}^{N} p_k log(1/p_k),   Σ_{k=1}^{N} p_k = 1,  p_k ≥ 0    (4-1)

which measures the average amount of information contained in a random variable X with probabilities p_k = P(X = x_k), k = 1, 2, …, N, at the values x₁, x₂, …, x_N. A message contains no information if it is completely known; the more information it contains, the less predictable it is. Information theory has broad application in the field of communication systems [83]. But entropy can be defined in a more general form. According to Renyi [58], the mean of the real numbers x₁, x₂, …, x_N with positive weighting p₁, p₂, …, p_N has the form

x̄ = φ⁻¹( Σ_{k=1}^{N} p_k φ(x_k) )    (4-2)

where φ(x) is a Kolmogorov-Nagumo function, an arbitrary continuous and strictly monotonic function.

An entropy measure H generally obeys the following formula:

H = φ⁻¹( Σ_{k=1}^{N} p_k φ(I(p_k)) )    (4-3)

where I(p_k) = log(1/p_k) is Hartley's information measure [84].

In order to satisfy the additivity condition, φ(·) can be either φ(x) = x or φ(x) = 2^{(1−α)x}. When φ(x) = x, the entropy measure becomes Shannon's entropy. When φ(x) = 2^{(1−α)x}, the entropy measure becomes Renyi's entropy of order α, denoted as

H_{Rα} = [1/(1−α)] log( Σ_{k=1}^{N} p_k^α ),   α > 0 and α ≠ 1    (4-4)







The well-known relationship between Shannon's and Renyi's entropy is

H_{Rα} ≥ H_S ≥ H_{Rβ},   1 > α > 0 and β > 1    (4-5)

lim_{α→1} H_{Rα} = H_S    (4-6)

In order to further relate Renyi's and Shannon's entropy, the distance of P = (p₁, p₂, …, p_N) to the origin P = (0, 0, …, 0) is defined as

V_α = Σ_{k=1}^{N} p_k^α    (4-7)

where V_α is called the α-norm of the probability distribution [85].

Renyi's entropy in terms of V_α is

H_{Rα} = [1/(1−α)] log(V_α)    (4-8)

Renyi's entropy of order α is thus associated with a different α-norm, and Shannon's entropy can be viewed as the limiting case α → 1 of the probability distribution norm. Renyi's entropy is essentially a monotonic function of the distance of the probability distribution to the origin. The quantity H_{R2} = −log Σ_k p_k² is called the quadratic entropy because of the quadratic form in the probabilities.

We can further extend the entropy definition to a continuous random variable Y with pdf f_Y(y) as [58]:

H_{Rα} = [1/(1−α)] log( ∫ f_Y^α(z) dz )    (4-9)

H_{R2} = −log( ∫ f_Y(z)² dz )    (4-10)

It is important to mention that Renyi's quadratic entropy involves the use of the square of the pdf.

Because the Shannon entropy is defined as a weighted sum of the logarithm of the pdf, it is difficult to use directly in an information-theoretic criterion. Since we cannot use the pdf directly (unless its form is known a priori), we use nonparametric estimators; the Parzen window method [82] is used in this dissertation. The Parzen window estimator is a kernel-based estimator,

f̂_Y(z) = (1/N) Σ_{i=1}^{N} κ(z − y_i)    (4-11)

where y_i ∈ ℝ^M are the observed samples and κ(·) is a kernel function. The Parzen window estimator can be viewed as a convolution of the kernel function with the observed samples. The kernel function in this dissertation is chosen to be the Gaussian function,

κ_σ(z) = G(z, σ²) = [1/(2πσ²)^{M/2}] exp(−zᵀz/(2σ²))    (4-12)
Here, we further develop an ITL criterion to estimate the mutual information among random variables. Mutual information quantifies the statistical dependence between pairs of random variables and is therefore also very important in engineering problems.

Mutual information is defined in terms of Shannon's entropy as I(x, y) = H(y) − H(y|x), which is not easily estimated from samples. An alternative measure of the divergence between two probability density functions f(x) and g(x) is the Kullback-Leibler (KL) divergence [86], defined as

K(f, g) = ∫ f(x) log[ f(x)/g(x) ] dx    (4-13)

Similarly, Renyi's divergence measure of order α for two pdfs f(x) and g(x) is

H_{Rα}(f, g) = [1/(α−1)] log ∫ f(x)^α / g(x)^{α−1} dx    (4-14)

The relation between the KL divergence and Renyi's divergence measure is

lim_{α→1} H_{Rα}(f, g) = K(f, g)    (4-15)

The KL divergence between two random variables Y₁ and Y₂ essentially measures the divergence between the joint pdf and the product of the marginal pdfs. That is,

I_S(Y₁, Y₂) = KL( f_{Y₁Y₂}(z₁, z₂), f_{Y₁}(z₁) f_{Y₂}(z₂) )
           = ∫∫ f_{Y₁Y₂}(z₁, z₂) log[ f_{Y₁Y₂}(z₁, z₂) / (f_{Y₁}(z₁) f_{Y₂}(z₂)) ] dz₁ dz₂    (4-16)







where f_{Y₁Y₂}(z₁, z₂) is the joint pdf and f_{Y₁}(z₁), f_{Y₂}(z₂) are the marginal pdfs. Because the divergence measures mentioned above are non-quadratic in the pdfs, they cannot easily be estimated with the information potential. The following distance measures between two pdfs, which contain only quadratic terms in the pdfs, are more practical.

• The distance measure based on the Euclidean difference of vectors inequality is

‖x‖² + ‖y‖² − 2xᵀy ≥ 0    (4-17)

• The distance measure based on the Cauchy-Schwartz inequality is

log[ (xᵀx)(yᵀy)/(xᵀy)² ] ≥ 0    (4-18)

Using the Cauchy-Schwartz inequality, the distance measure between two pdfs f(x) and g(x) is

I_CS(f, g) = log[ (∫f(x)²dx)(∫g(x)²dx) / (∫f(x)g(x)dx)² ]    (4-19)

It is obvious that I_CS(f, g) ≥ 0, and the equality holds if and only if f(x) = g(x).

Similarly, using the Euclidean distance, the distance measure between two pdfs f(x) and g(x) is

I_ED(f, g) = ∫ (f(x) − g(x))² dx = ∫f(x)²dx + ∫g(x)²dx − 2∫f(x)g(x)dx    (4-20)

It is also obvious that I_ED(f, g) ≥ 0, and the equality holds if and only if f(x) = g(x).
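Both I_ED and I_CS reduce to combinations of the three quadratic integrals ∫f², ∫g², and ∫fg, each of which has a pairwise-kernel sample estimate analogous to the information potential above. A sketch of the Euclidean-distance version between two scalar sample sets follows; the kernel size is again an assumed value.

    import numpy as np

    def cross_information_potential(a, b, sigma):
        """Estimate of integral f_a(x) f_b(x) dx with Gaussian kernels of size sigma."""
        diffs = a[:, None] - b[None, :]
        return np.mean(np.exp(-diffs**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2))

    def euclidean_pdf_distance(a, b, sigma=0.5):
        """I_ED(f_a, f_b) = int f_a^2 + int f_b^2 - 2 int f_a f_b (Equation (4-20))."""
        return (cross_information_potential(a, a, sigma)
                + cross_information_potential(b, b, sigma)
                - 2.0 * cross_information_potential(a, b, sigma))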

4.3 Adaptive IIR Filter with Euclidean Distance Criterion

The system identification scheme for the adaptive IIR filter is shown in Figure 2-1. The output signal of the adaptive IIR filter in the canonic direct-form realization is

y(n) = Σ_{i=1}^{N−1} a_i(n) y(n − i) + Σ_{j=0}^{M−1} b_j(n) x(n − j)    (4-21)

The error signal e(n) is the difference between the desired signal d(n) and the output signal y(n) of the adaptive IIR filter,

e(n) = d(n) − y(n)    (4-22)








It is obvious that the goal of the algorithm is to adjust the weights such that the error pdf f_e is as close as possible to a delta distribution δ(·). Hence, the Euclidean distance criterion for the adaptive IIR filters is defined as

I_ED(f_e) = ∫ (f_e(ε) − δ(ε))² dε = ∫ f_e²(ε) dε − 2 f_e(0) + c    (4-23)

where c stands for the portion of this Euclidean distance measure that does not depend on the weights of the adaptive system. Notice that the integral of the square of the error pdf appears exactly as in the definition of Renyi's quadratic entropy. Therefore, it can be estimated directly from N samples by a Parzen window estimator with a Gaussian kernel of variance σ², exactly as described in [6, 5],

f̂_e(ε) = (1/N) Σ_{i=1}^{N} κ(ε − e_i, σ²)    (4-24)

If N → ∞, then f̂_e(ε) = f_e(ε) * κ(ε, σ²), where * denotes the convolution operator. Thus, using a Parzen window estimator for the error pdf is equivalent to adding an independent random noise with pdf κ(ε, σ²) to the error. The error, with the additive noise, becomes d − y + n = (d + n) − y. This is similar to injecting a random noise into the desired signal, as suggested by Wang et al. [87]. The advantage of our approach is that we do not explicitly generate noise samples; we simply take advantage of the estimation noise produced by the Parzen estimator, which, as demonstrated above, works as an additive, independent noise source. The kernel size, which controls the variance of the hypothetical noise term, should be annealed during the adaptation, just like the variance of the noise injected by Wang et al. [87]. From the injected-noise point of view, the algorithm behaves similarly to the well-known stochastic annealing algorithm: the noise added to the desired signal backpropagates through the error gradient, resulting in perturbations of the weight updates proportional to the weight sensitivity. However, since our algorithm does not explicitly use a noise signal, its operation is more similar to convolutional smoothing. For a sufficiently large kernel size,







the local minima of the ITL criterion are eliminated by smoothing of the performance surface. Thus, by starting with a large kernel size, the algorithm can approach the global minimum, avoiding any local minima that would have existed if the kernel size were small. Since the global minimum of the error entropy criterion with a large kernel size does not, in general, coincide with the true global minimum, annealing the kernel size is required. This is equivalent to gradually reducing the amount of noise injected into the desired signal to a small suitable value. At the end, the algorithm with the small kernel size can converge to the true global minimum.

By substituting the Parzen window estimator for the error pdf in the integral of Equation (4-23), and recognizing that the convolution of two Gaussian functions is also a Gaussian, we obtain the ITL criterion as (after dropping all the terms that are independent of the weights)

I_ED(f̂_e) = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} κ(e_i − e_j, 2σ²) − (2/N) Σ_{i=1}^{N} κ(e_i, σ²)    (4-25)

The gradient vector ∂I_ED(f̂_e)/∂θ to be used in the steepest descent algorithm is obtained as

∂I_ED(f̂_e)/∂θ = (1/(2N²σ²)) Σ_{i=1}^{N} Σ_{j=1}^{N} (e_i − e_j) κ(e_i − e_j, 2σ²) [∂y(i)/∂θ − ∂y(j)/∂θ]
              − (2/(Nσ²)) Σ_{i=1}^{N} e_i κ(e_i, σ²) ∂y(i)/∂θ    (4-26)

where the gradient ∂y/∂θ is given by

∂y(n)/∂θ = φ(n) + Σ_{i=1}^{N−1} a_i(n) ∂y(n − i)/∂θ    (4-27)

and φ(n) = [y(n−1), y(n−2), …, y(n−N), x(n), x(n−1), …, x(n−M)]ᵀ.
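A direct transcription of Equations (4-25)-(4-27) over a sliding window of errors gives a batch-mode weight update. The sketch below computes the cost and its gradient with respect to a generic parameter vector, given the errors e_i and the per-sample output sensitivities ∂y(i)/∂θ; window length, kernel size, and step size are assumptions for illustration.

    import numpy as np

    def itl_cost_and_grad(e, dy_dtheta, sigma):
        """Euclidean-distance ITL cost (4-25) and its gradient (4-26).

        e          : (N,) window of errors e_i
        dy_dtheta  : (N, P) output sensitivities dy(i)/dtheta
        """
        N = len(e)
        de = e[:, None] - e[None, :]                                            # e_i - e_j
        k2 = np.exp(-de**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)    # kappa(., 2 sigma^2)
        k1 = np.exp(-e**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)     # kappa(., sigma^2)
        cost = k2.mean() - 2.0 * k1.mean()
        dsens = dy_dtheta[:, None, :] - dy_dtheta[None, :, :]                   # dy(i) - dy(j) sensitivities
        grad = (de[:, :, None] * k2[:, :, None] * dsens).sum(axis=(0, 1)) / (2 * N**2 * sigma**2)
        grad -= 2.0 * (e[:, None] * k1[:, None] * dy_dtheta).sum(axis=0) / (N * sigma**2)
        return cost, grad

The weights would then be updated by steepest descent as θ ← θ − μ·grad.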

4.4 Parzen Window Estimator and Convolution Smoothing Function

4.4.1 Similarity

In the ITL algorithm, the Parzen window estimator estimates the error pdf as a function of the weights from a set of samples. As the number of samples tends to infinity, the estimated pdf is equivalent to the actual pdf convolved with the kernel function κ_σ(x) used by the Parzen window estimator. The behavior of the ITL algorithm is similar to that of the SAS technique, in which the smoothed cost function is obtained by convolving the cost function with a smoothing function h_β(x). Recall that the smoothing function h_β(x) should have the following properties:

• h_β(x) = (1/βⁿ) h(x/β) is piecewise differentiable with respect to x.
• lim_{β→0} h_β(x) = δ(x) (Dirac's delta functional).
• h_β(x) is a pdf.

The kernel function in this dissertation is chosen to be the Gaussian function

κ_σ(x) = G(x, σ²) = [1/(2πσ²)^{n/2}] exp(−xᵀx/(2σ²))    (4-28)

It is obvious that κ_σ(x) = (1/σⁿ) κ₁(x/σ), that lim_{σ→0} κ_σ(x) = δ(x), and that κ_σ(x) is a Gaussian pdf. Hence κ_σ(x) satisfies the properties of a smoothing function.

The objective of the convolution smoothing function is to smooth the nonconvex cost function. The parameter β controls the dispersion of h_β(x), and therefore the degree of cost-function smoothing. In the beginning stage of the optimization, β is set to be large so that h_β(x) can smooth out all the local minima of the cost function. Since the global minimum of the smoothed cost function does not coincide with the global minimum of the original cost function, β is then slowly decreased to zero. As a result, the smoothed cost function gradually returns to the original cost function and the algorithm can converge to the global minimum of the actual cost function.

Therefore κ_σ(x) plays the same role as h_β(x) in smoothing the nonconvex cost function. The parameter σ controls the dispersion of κ_σ(x), which controls the degree of cost-function smoothing. Similarly, σ is set to be large and then slowly decreased toward zero. Therefore the ITL algorithm with a properly annealed parameter σ can converge to the global minimum.
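In practice, this annealing can be any slowly decreasing schedule ending at a small floor value; the linear-plus-floor form used later in the experiments (for example σ² = σ₀²(1 − cn) + σ²_min) can be sketched as follows, with the constants shown being assumptions.

    def kernel_size_schedule(n, n_max, sigma2_start=3.0, sigma2_floor=0.5):
        """Linearly annealed kernel variance with a floor (illustrative constants)."""
        decay = max(0.0, 1.0 - n / n_max)
        return sigma2_start * decay + sigma2_floor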








4.4.2 Difference

For the SAS algorithm, the smoothed cost function can be expressed as

Ĵ(θ, β) = E_η̃[ ∫ J(e) f_e(e; θ − βη̃) de ]    (4-29)

which is an expectation with respect to the random variable η̃, whose standard deviation is controlled by β. Hence, the smoothed cost function can be regarded as an averaged version of the actual cost function.

For the ITL algorithm, we instead change the shape of the error pdf through the Parzen window estimator at each particular point θ, and thus change the cost function at each point θ. The estimated cost function is

Ĵ(θ, σ) = ∫ J(e) f_{e+ν}(e; θ) de    (4-30)

where ν is a Gaussian noise with zero mean and variance σ². We conclude that the SAS method adds noise to the weights in order to force the algorithm to converge to the global minimum, while the ITL algorithm adds noise to the error in order to force the algorithm to converge to the global minimum. The additive noise added to the error affects the variance of the weight updates proportionally to the sensitivity of each weight, ∂e/∂θ_j. This means that a single noise source is translated internally into different noise strengths for each weight.

4.5 Analysis of Weak Convergence to the Global Optimum for ITL

Similar to the analysis of weak convergence to the global optimum for LMS-SAS, we obtain the transition probability of the ITL algorithm escaping from a local minimum by solving a pair of Fokker-Planck equations. The pdf f_e is estimated from Equation (4-24). If N → ∞, then f̂_e(ε) = f_e(ε) * κ(ε, σ²), where * denotes the convolution operator. Thus, using a Parzen window estimator for the error pdf is equivalent to adding an independent random noise with pdf κ(ε, σ²) to the error. We therefore define the perturbed error as

ε̃(n) = ε(n) + N(n)    (4-31)

where N is the additive noise. The gradient of the cost function used in the steepest descent algorithm is

∂J/∂θ = (∂J/∂ε̃)(∂ε̃/∂θ) = (∂J/∂ε̃) ∂[ε + N(θ)]/∂θ    (4-32)

where J is the cost function. For the ITL algorithm, the cost function is

J = I_ED(f_e) = ∫ (f_e(ε) − δ(ε))² dε    (4-33)

Therefore

∂J/∂ε̃ ∝ (f_e(ε) − δ(ε))²    (4-34)


Here we write the ITL algorithm as Ito's integral as
t t
ot oa + j. m(0, s)ds + J a(0, s)dW (4-35)

Where
o (4-36)
,7(0t, t) - p(t)Nv(0) (f,(F)-_6(_))2

With the similar derivation of Equation (3-52) for the LMS-SAS algorithm, we obtain the transition probability of the ITL algorithm escaping out a local minimum for the scalar 0 case as

p(0, t0*, to)
1 exp( 1 (0 0*)2 )
/2TrN(O)(f,(F) - 6())2 jtl(~l 2 N (0)(f, (F)- 6(_)) 2 fto p (s) d


G(O-O*,N(O)(f(F) -J(_))2 fp(s)ds) (4-37)

Remark. Equation (4-37) is the transition probability of the ITL algorithm escaping from the steady state θ*. The density p(θ, t | θ*, t₀) is determined by (θ − θ*), μ(n), N(θ), and (f_e(ε) − δ(ε))². Because we use an annealed kernel size, i.e., an annealed variance of N(θ), the ITL algorithm will decrease the probability of jumping out of a valley over the iterations. The transition probability of the process escaping from the global minimum is smaller than that from a local minimum because (f_e(ε) − δ(ε))² takes its smallest value at the global minimum. Thus, the algorithm will stay most of its time near the global valley and will eventually converge to the global minimum.

4.6 Contour of Euclidean Distance Criterion

The Euclidean distance criterion for the adaptive IIR filters is defined as

I_ED(f_e) = ∫ (f_e(ε) − δ(ε))² dε = ∫ f_e²(ε) dε − 2 f_e(0) + c    (4-38)


If the input signal is Gaussian, N(μ_x, σ_x²), then the desired signal will also be Gaussian, N(μ_d, σ_d²), and the output signal of the adaptive filter will be Gaussian as well, N(μ_y, σ_y²). Here we want to calculate the analytical expression of the Euclidean distance for the simulation example of the system identification framework, with the unknown system

H(z) = (b₁ + b₂ z⁻¹)/(1 − a₁ z⁻¹ − a₂ z⁻²)    (4-39)

identified by the reduced-order adaptive filter

H_a(z) = b/(1 − a z⁻¹)    (4-40)

Here the desired output signal is realized as

d(i) = b₁ x(i) + b₂ x(i−1) + a₁ d(i−1) + a₂ d(i−2)    (4-41)

Then

μ_d = [(b₁ + b₂)/(1 − a₁ − a₂)] μ_x    (4-42)

Taking the variance of both sides of Equation (4-41), we obtain

R_d(0) = (b₁² + b₂² + 2b₁b₂a₁) R_x(0) + (a₁² + a₂²) R_d(0) + 2a₁a₂ R_d(1)    (4-43)

where R_d(τ) and R_x(τ) are the covariance functions of the desired output signal and of the input signal, respectively. Shifting Equation (4-41) by one sample, we obtain

d(i+1) − a₁ d(i) = a₂ d(i−1) + b₁ x(i+1) + b₂ x(i)    (4-44)






Taking the covariance of Equations (4-41) and (4-44), we obtain

R_d(1) − a₁ R_d(0) = (b₁b₂ + b₁b₂a₂) R_x(0) + a₁a₂ R_d(0) + a₂ R_d(1)    (4-45)

From Equations (4-43) and (4-45), we obtain

R_d(0) = [ (b₁² + b₂² + 2b₁b₂a₁)(1 − a₂) + 2b₁b₂a₁a₂ ] / [ (1 + a₂)(1 − a₁ − a₂)(1 + a₁ − a₂) ] R_x(0)    (4-46)

Similarly, we can calculate (μ_y, σ_y²) for the output signal of the adaptive filter as

μ_y = [b/(1 − a)] μ_x    (4-47)

y(i) = b x(i) + a y(i−1)    (4-48)

Taking the variance of the above equation, we obtain

R_y(0) = b² R_x(0) + a² R_y(0)    (4-49)

so that

R_y(0) = [b²/(1 − a²)] R_x(0)    (4-50)

We can also calculate the covariance between the desired output signal and the output signal of the adaptive filter as follows. Taking the covariance of Equations (4-41) and (4-48), we obtain

R_dy(0) = (b₁b + b₂ab) R_x(0) + a₁a R_dy(0) + a₂a R_dy(−1)    (4-51)

Taking the covariance of Equation (4-41) and y(i−1) = b x(i−1) + a y(i−2), we obtain

R_dy(1) = (b₂b + a₁b₁b) R_x(0) + a₁a R_dy(1) + a₂a R_dy(0)    (4-52)

From Equations (4-51) and (4-52), we obtain

R_dy(0) = [ (b₁ + b₂a)(1 − a₁a) − a₂a(b₂ + b₁a₁) ] b R_x(0) / [ (1 − a₁a)² + (a₂a)² ]    (4-53)







Finally, we obtain

μ_e = μ_d − μ_y    (4-54)

σ_e² = R_d(0) + R_y(0) − 2 R_dy(0) + σ²    (4-55)

where σ_e² is increased by σ², corresponding to the Gaussian kernel function of the Parzen window estimator. The Euclidean distance is then calculated as

I_ED(f_e) = 1/(√(4π) σ_e) − [2/(√(2π) σ_e)] exp(−μ_e²/(2σ_e²))    (4-56)
Figure 4-2 shows the contours of the analytical expression for the ITL criterion (for comparison, Figure 4-3 shows the contours of the analytical expression for the entropy term ∫ f_e²(ε) dε alone). The convergence characteristics of the adaptation process for the filter coefficients towards the global optimum are shown in Figure 4-1. In the beginning of the adaptation process, the estimated error variance σ_e is large because of the significantly large value of the kernel size σ in the Gaussian kernel function of the Parzen window estimator. Therefore, the first term on the right-hand side of Equation (4-56) is considerably smaller than the second term and can be neglected in the beginning stage of the adaptation process. We observe that the second term drives the solution toward μ_e = μ_d − μ_y = 0 while the kernel size σ, and hence σ_e², is large. The straight line in Figure 4-1b is the line μ_e = μ_d − μ_y = 0. It is clear from Figure 4-1 that the weight track of the ITL algorithm converges towards the line μ_e = μ_d − μ_y = 0, as predicted by the theoretical analysis given above. When the size σ² of the Gaussian kernel function slowly decreases during adaptation, the ITL cost function gradually converges back to the original one, which might exhibit local minima.

4.7 Simulation Results

We present simulation results using a system identification formulation of adaptive IIR filtering. We identify the following unknown systems.

Example I:   H_I(z) = (0.05 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (4-57)






























Figure 4-1: Convergence characteristics of weight for Example I by ITL. A) Weight; B) Contour of weight.

Figure 4-2: Euclidean distance of Example I in A) σ² = 0; B) σ² = 1; C) σ² = 2; D) σ² = 3.

Figure 4-3: Entropy ∫ f_e²(ε) dε of Example I in A) σ² = 1; B) σ² = 2; C) σ² = 3; D) σ² = 4; E) σ² = 5; F) σ² = 6; G) σ² = 7; H) σ² = 8; I) σ² = 9.







Example II:  H_II(z) = (0.05 + 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (4-58)

by the following reduced-order adaptive IIR filter

H_a(z) = b/(1 − a z⁻¹)    (4-59)

The main goal is to determine the values of the coefficients {a, b} such that the Euclidean distance criterion is minimized. If we assume that the error pdf is Gaussian,

f_e(ε) = [1/(√(2π) σ̂)] exp(−(ε − μ_e)²/(2σ̂²))    (4-60)

then we can derive the estimated Euclidean distance as

I_ED = 1/√(4π(σ̂² + σ²)) − [2/√(2π(σ̂² + σ²))] exp(−μ_e²/(2(σ̂² + σ²)))    (4-61)

Thus we plot the contours of the Euclidean distance criterion performance surface for different σ for Examples I and II in Figures 4-2 and 4-4, respectively. They show that the local minima of the Euclidean distance criterion performance surface disappear for a large kernel size. Thus, by carefully controlling the kernel size, the algorithm can converge to the global minimum.

The input signal is random Gaussian noise with zero mean and unit variance. There exist several minima of the Euclidean distance criterion performance surface with a small kernel size in both examples. However, there exists a sole global minimum of the Euclidean distance criterion surface with a sufficiently large kernel size. In this simulation, the kernel size is chosen to be sufficiently large in the starting stage, and then slowly decreased to a predetermined small value, which is a trade-off between low bias and low variance. In this way, the algorithm can converge to the global minimum. The step size for the algorithm is a constant value of 0.002. The simulation results are based on 100 Monte Carlo runs with randomly chosen initial weights at each run. The simulation results show that the algorithm converges to the global minimum 100% of the time for both examples. The convergence














































Figure 4-4: Euclidean distance of Example II in A) σ² = 0; B) σ² = 1; C) σ² = 2; D) σ² = 3.

Figure 4-5: Convergence characteristics of weight for Example II by ITL. A) Weight; B) Contour of weight.


characteristics of the adaptation process, with the weights approaching the global minimum, are shown in Figures 4-1 and 4-5, respectively, where the initial weights are chosen at a point near the local minimum.

4.8 Comparison of NLMS and ITL Algorithms

The NLMS algorithm uses the MSE criterion, while the ITL algorithm uses the Euclidean distance (entropy) criterion. Both algorithms achieve global optimization, although the two optima differ slightly in weight space, as will be explained later. Here, we want to compare the performance of these two algorithms in terms of their global optimum searching capability.

Here we use the same system identification scheme, i.e., we identify the unknown systems

Example I:   H_I(z) = (0.05 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (4-62)
Example II:  H_II(z) = (0.2 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (4-63)
Example III: H_III(z) = (0.3 − 0.4 z⁻¹)/(1 − 1.1314 z⁻¹ + 0.25 z⁻²)    (4-64)

by a reduced-order adaptive filter of the form

H(z) = b/(1 − a z⁻¹)    (4-65)







Table 4-1: System identification of adaptive IIR filter by NLMS and ITL algorithms

                Number of hits (global/local)
  Method        Example I   Example II   Example III
  LMS           36/64       20/80        92/8
  LMS-SAS       96/4        1/99         100/0
  NLMS          100/0       89/11        90/10
  ITL           100/0       100/0        100/0


The main goal is to determine the values of the coefficients {a, b} of the above equation such that the MSE is minimized (to the global minimum). The input signal is chosen to be random Gaussian noise with zero mean and unit variance. The step size of the LMS-SAS and NLMS algorithms is chosen to be a linearly decreasing function, μ(n) = 0.1(1 − 5 × 10⁻⁵ n), and a constant step size μ = 0.001 is used for the LMS and ITL algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function, σ² = 0.3(1 − 5 × 10⁻⁵ n) + 0.5.

Table 4-1 shows the comparison of the number of global and local minimum hits by the algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of θ at each run. It is clear from Table 4-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms.

In order to understand the behavior of the ITL solution, we investigate the Lₚ norms of the impulse response error vectors between the optimal solutions obtained with the MSE and the ITL criteria. Assuming that the infinite impulse response of the unknown system, given by h_i, i = 0, …, ∞, and the infinite impulse response of the trained adaptive filter, given by h_{ai}, i = 0, …, ∞, can both be truncated at M samples while preserving most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion Lₚ = ( Σ_{i=0}^{M} |h_i − h_{ai}|ᵖ )^{1/p}    (4-66)

Table 4-2 shows the impulse response Lₚ error norms for the adaptive IIR filters trained with the MSE and ITL criteria. We see from these results that the ITL criterion behaves more like a minimax-type algorithm, as it provides a smaller L_∞ norm for the impulse response error than MSE, which yields an L₂ norm error minimization.
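Equation (4-66) is a standard p-norm of the truncated impulse response difference and can be computed directly, for example from the first M samples of the two impulse responses (obtained here, as an assumption, by filtering a unit impulse).

    import numpy as np
    from scipy.signal import lfilter

    def impulse_response_error_norm(num1, den1, num2, den2, M=100, p=2):
        """L_p norm of the truncated impulse response difference (Equation (4-66))."""
        impulse = np.zeros(M); impulse[0] = 1.0
        h1 = lfilter(num1, den1, impulse)
        h2 = lfilter(num2, den2, impulse)
        if np.isinf(p):
            return np.max(np.abs(h1 - h2))
        return np.sum(np.abs(h1 - h2) ** p) ** (1.0 / p)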







Table 4-2: Lₚ for both MSE and ITL criteria

  p     1      2      3      4      5      10     100    1000   ∞
  MSE   0.94   0.29   0.24   0.22   0.22   0.22   0.22   0.22   0.22
  ITL   1.59   0.37   0.26   0.22   0.21   0.18   0.17   0.17   0.17


If the MSE solution is desired, the NLMS can be chosen or, if a more robust search is desired, the ITL can be used first. After the ITL converges, the LMS algorithm should be started from the ITL solution to seek the global optimum of the MSE. As demonstrated, the ITL and MSE global minima are close to each other.

4.9 Conclusion

We have proposed an adaptive IIR filter training algorithm, referred to as the ITL algorithm, which is based on minimizing Renyi's quadratic entropy using a nonparametric pdf estimator, Parzen windowing. By exploiting the kernel size used in the Parzen window estimator, we force the proposed algorithm to converge to the global minimum of the performance surface. We compared the performance of the ITL algorithm with that of the LMS-SAS and NLMS algorithms with decreasing step size, which are capable of finding the global optimum, and conclude from simulations that the ITL algorithm is superior.

The solution of the ITL criterion is different from that of the MSE optimization; however, their minima lie in the same region of weight space. Therefore, for a more robust global search, we recommend using ITL and, when it converges, switching to the MSE cost using as initial conditions the weight values found with ITL.












CHAPTER 5
RESULTS

In order to demonstrate the effectiveness of the proposed global optimization methods, they are applied to two practical examples: system identification with a Kautz filter and nonlinear equalization.

5.1 System Identification with Kautz Filter

It is known that the LMS algorithm updates the filter weights along the direction of the negative gradient of the objective function. Hence, the LMS algorithm for the Kautz filter becomes


Δθ_k = −μ ∂E/∂θ_k = μ e(n) φ_k(n)    (5-1)

Δα = −μ ∂E/∂α = μ e(n) Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂α    (5-2)

Δβ = −μ ∂E/∂β = μ e(n) Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂β    (5-3)

where μ is the step size. The gradient terms ∂φ_k(n)/∂α and ∂φ_k(n)/∂β are given by

recursions obtained by differentiating the Kautz basis recursion with respect to the pole parameters ζ = α + jβ: each derivative ∂φ_k(n)/∂α and ∂φ_k(n)/∂β is computed recursively from φ_k(n−1), φ_k(n−2), φ_{k−2}(n), φ_{k−2}(n−1), φ_{k−2}(n−2), the corresponding derivatives at the previous time steps, and the input u(n), with the pole entering through the factors √(1 − (α² + β²)) and (1 ± α)² + β² (Equations (5-4) through (5-10)). Collecting the parameter sensitivities, the gradient of the filter output is

∇y(n) = [ φ₀(n), …, φ_d(n), Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂α, Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂β ]ᵀ


Hence, the NLMS algorithm becomes

Δθ_k = [μ/‖∇y(n)‖²] e(n) φ_k(n)    (5-11)

Δα = [μ/‖∇y(n)‖²] e(n) Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂α    (5-12)

Δβ = [μ/‖∇y(n)‖²] e(n) Σ_{k=0}^{d} θ_k ∂φ_k(n)/∂β    (5-13)

Here, consider the system identification example by Silva [88], which uses the reference transfer function

H(z) = (0.0890 − 0.2199 z⁻¹ + 0.2866 z⁻² − 0.2199 z⁻³ + 0.0890 z⁻⁴)/(1 − 2.6918 z⁻¹ + 3.5992 z⁻² − 2.4466 z⁻³ + 0.8288 z⁻⁴)    (5-14)

The input signal is a colored noise generated by passing white noise, with mean 0 and variance 1, through a first-order filter with a decay factor of 0.8. We









Table 5-1: System identification of Kautz filter model

            Number of hits
  Method    Global minimum   Local minimum
  ITL       100              0
  NLMS      99               1
  LMS-SAS   58               42
  LMS       48               52


consider the normalized least-error criterion (NMSE)

NMSE = 10 log₁₀ [ Σ_{n=1}^{N} (y(n) − ŷ(n))² / Σ_{n=1}^{N} y²(n) ]    (5-15)

where ŷ is the estimated output of the Kautz filter. The global optimum of the objective function is at ζ = 0.6212 + j0.5790, which yields a normalized criterion 12.5 dB smaller than that of the FIR filter (ζ = 0). This agrees with the result by Silva [88]. The step size is chosen to be a linearly decreasing function, μ(n) = 0.4(1 − 5 × 10⁻⁵ n), for both the LMS-SAS and NLMS algorithms, and constant at 0.002 for both the ITL and LMS algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function of the iterations, σ² = 3(1 − 2.5 × 10⁻⁵ n) + 0.5. Table 5-1 shows the comparison of the number of global and local minimum hits by the ITL, NLMS, LMS-SAS, and LMS algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of θ and ζ at each run. It is clear from Table 5-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms. Single characteristic weight tracks representative of each algorithm, LMS, LMS-SAS, NLMS, and ITL, are shown in Figures 5-1, 5-2, 5-3, and 5-4, respectively. Figure 5-5 depicts the closeness between the impulse response of the unknown system and the impulse responses of the optimized Kautz filters determined with the MSE and ITL criteria.

In order to understand better the meaning of the ITL solution, we investigate the L_p norms of the impulse response error vectors between the optimal solutions obtained by the MSE and the ITL criteria. Assuming that the infinite impulse response of the unknown system, given by h_i, i = 0, \ldots, \infty, and the infinite impulse response of the trained adaptive filter, given by \hat{h}_i, i = 0, \ldots, \infty, can both be truncated at M, yet preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion:   L_p = \left( \sum_{i=0}^{M} | h_i - \hat{h}_i |^p \right)^{1/p}    (5-16)

Figure 5-1: Convergence characteristics of the weights for the Kautz filter by the LMS algorithm. A) Weight \theta; B) Weight \xi (\alpha + j\beta).

Figure 5-2: Convergence characteristics of the weights for the Kautz filter by the LMS-SAS algorithm. A) Weight \theta; B) Weight \xi (\alpha + j\beta).

Figure 5-3: Convergence characteristics of the weights for the Kautz filter by the NLMS algorithm. A) Weight \theta; B) Weight \xi (\alpha + j\beta).

Figure 5-4: Convergence characteristics of the weights for the Kautz filter by the ITL algorithm. A) Weight \theta; B) Weight \xi (\alpha + j\beta).

Figure 5-5: Impulse responses of the unknown system and of the Kautz filters trained with the MSE and ITL criteria.

Figure 5-6: Channel equalization system.

Table 5-2: L_p for both MSE and ITL criteria in the Kautz example

p        1      2      3      4      10     100    1000   ∞
MSE    0.530  0.080  0.052  0.046  0.042  0.042  0.042  0.042
ITL    0.573  0.086  0.054  0.045  0.039  0.039  0.039  0.039

Table 5-2 shows the impulse response L_p error norms for the Kautz filters trained with the MSE and ITL criteria after successful convergence. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller L_\infty norm for the impulse response error compared to MSE, which yields an L_2 error norm minimization.
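As a quick illustration of the criterion in equation (5-16), the sketch below computes the truncated impulse response error norms for any p, including the limiting L_\infty case reported in Table 5-2; the array arguments are assumed to be the impulse responses already truncated at M, and factoring out the maximum error is only a numerical-stability choice for large p, not part of the original development.

```python
import numpy as np

def impulse_response_lp(h, h_hat, p):
    """L_p error norm between truncated impulse responses, equation (5-16)."""
    err = np.abs(np.asarray(h, float) - np.asarray(h_hat, float))
    m = err.max()
    if m == 0.0:
        return 0.0
    if np.isinf(p):
        return m                      # limiting (minimax) case, L_infinity
    # factor out the maximum error for numerical stability at large p
    return m * np.sum((err / m) ** p) ** (1.0 / p)

# Example usage, as in Table 5-2:
# for p in [1, 2, 3, 4, 10, 100, 1000, np.inf]:
#     print(p, impulse_response_lp(h, h_hat, p))
```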

5.2 Nonlinear Equalization

In band-limited data communication systems, each transmitted symbol is deteriorated by the intersymbol interference (ISI) effect. Adaptive equalizers placed in the receiver are used to cope with the ISI effect. Figure 5-6 describes the channel equalization system. When an equalizer is used in a data communication system, a sequence of i.i.d. digital symbols \{s_k\} is sent by the transmitter through a channel exhibiting nonlinear distortion, thus generating the output sequence \{x_k\}. The objective of the equalizer is to recover, by inversion, the original sequence from the received sequence \{x_k\}. In this example, the received signal at the input of the equalizer is







described as

x_i = \sum_{k=0}^{n_c} h_k s_{i-k} + e_i    (5-17)

where the transmitted symbol sequence s_i is an equiprobable binary sequence \{\pm 1\}, h_k are the channel coefficients, and e_i is Gaussian noise with zero mean and variance \sigma^2.

The equalizer estimates the value of a transmitted symbol as

\hat{s}_{i-d} = \mathrm{sgn}(y_i) = \mathrm{sgn}(w^T x_i)    (5-18)

where y_i = w^T x_i is the output of the equalizer, w = [w_0, \ldots, w_{m-1}]^T is the vector of equalizer coefficients, and x_i = [x_i, \ldots, x_{i-m+1}]^T is the vector of observations.

The output of the equalizer using a multilayer perceptron (MLP) with one hidden layer of n neurons is given by

y_i = w_2^T \tanh(W_1 x_i + b_1) + b_2    (5-19)

where W_1 is the n \times m matrix connecting the input layer with the hidden layer, b_1 is the n \times 1 vector of biases for the hidden neurons, w_2 is the n \times 1 vector of weights connecting the hidden layer to the output neuron, and b_2 is the bias of the output neuron.
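A minimal sketch of the equalizer output and decision in equations (5-18) and (5-19) is given below for the MLP(7,3,1) topology used in this example; the random weight initialization, the variable names, and the seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 3                                  # input taps and hidden neurons: MLP(7,3,1)
W1 = 0.1 * rng.standard_normal((n, m))       # input-to-hidden weights
b1 = np.zeros(n)                             # hidden biases
w2 = 0.1 * rng.standard_normal(n)            # hidden-to-output weights
b2 = 0.0                                     # output bias

def equalizer_output(x_vec):
    """Equation (5-19): y_i = w2^T tanh(W1 x + b1) + b2."""
    return w2 @ np.tanh(W1 @ x_vec + b1) + b2

def decide(x_vec):
    """Equation (5-18): hard decision on the equalizer output."""
    return np.sign(equalizer_output(x_vec))
```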

Consider the example by Santamaria et al. [89], where the nonlinear channel is composed of a linear channel followed by a memoryless nonlinearity. The linear channel considered is H(z) = 0.3482 + 0.8704 z^{-1} + 0.3482 z^{-2}, and the static nonlinear function is z = x + 0.2 x^2 - 0.1 x^3, where x is the linear channel output. The nonlinear equalizer is an MLP with 7 neurons in the input layer and 3 neurons in the hidden layer [MLP(7,3,1)], and the equalization delay is d = 4. A short window of N = 5 error samples is used to minimize the error criterion.
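The following sketch generates received samples for this example by passing an equiprobable binary sequence through the linear channel H(z), the static nonlinearity, and additive Gaussian noise as in equation (5-17); the noise level, the number of symbols, and the function name are illustrative assumptions.

```python
import numpy as np

def generate_channel_data(num_symbols=4000, noise_std=0.1, seed=0):
    """Binary source through the nonlinear channel of Santamaria et al. [89]."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1.0, 1.0], size=num_symbols)         # equiprobable +/-1 symbols
    h = np.array([0.3482, 0.8704, 0.3482])                # linear channel H(z)
    lin = np.convolve(s, h)[:num_symbols]                 # linear channel output
    z = lin + 0.2 * lin**2 - 0.1 * lin**3                 # memoryless nonlinearity
    x = z + noise_std * rng.standard_normal(num_symbols)  # additive Gaussian noise
    return s, x
```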
The gradient \partial J/\partial w = (\partial J/\partial y)(\partial y/\partial w) is used for the backpropagation training of the nonlinear equalizer, where the term \partial y/\partial w is determined by the topology and the term \partial J/\partial y is determined by the error signal. Therefore, the proposed global optimization techniques can be used in this nonlinear equalization; they are referred to as the stochastic gradient (SG), stochastic gradient with SAS (SG-SAS), normalized stochastic gradient (NSG), and ITL algorithms, respectively.
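To make this decomposition concrete, the sketch below computes the topology-dependent terms, i.e., the derivatives of the MLP output (5-19) with respect to each set of parameters; multiplying them by the error-dependent term of whichever criterion is in use gives the full gradient. The function name, the cost symbol J, and the returned tuple layout are illustrative assumptions.

```python
import numpy as np

def mlp_output_and_gradients(x_vec, W1, b1, w2, b2):
    """Output y of equation (5-19) and its derivatives with respect to the
    MLP parameters (the topology-dependent terms used in backpropagation)."""
    h = np.tanh(W1 @ x_vec + b1)           # hidden layer activations
    y = w2 @ h + b2
    dy_db2 = 1.0                           # dy/db2
    dy_dw2 = h                             # dy/dw2
    dh = (1.0 - h ** 2) * w2               # tanh'(.) scaled by output weights
    dy_db1 = dh                            # dy/db1
    dy_dW1 = np.outer(dh, x_vec)           # dy/dW1[i, j] = w2[i]*(1-h[i]^2)*x[j]
    return y, dy_dW1, dy_db1, dy_dw2, dy_db2
```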







The step size is chosen to be a constant of 0.2 for the SG, SG-SAS, and ITL algorithms, and a linearly decreasing function \mu(n) = 0.2(1 - n/n_{max}) for the NSG algorithm, where n_{max} is the maximum number of iterations. A linearly decreasing function \sigma^2 = 3(1 - n/n_{max}) + 0.1 is chosen for the kernel size of the ITL algorithm.

Figure 5-7: Convergence characteristics of the adaptive algorithms for a nonlinear equalizer.

Figure 5-7 depicts the convergence of the MSE evaluated over the sliding window for the four algorithms, and we conclude that the ITL algorithm provides the fastest convergence. Figure 5-8 depicts the performance comparison of the SG, SG-SAS, NSG, and ITL algorithms for the nonlinear equalizer over 100 Monte Carlo runs, in terms of the final solutions. This figure shows that both the NSG and ITL algorithms succeeded in obtaining the global minimum. Figure 5-9 shows the average bit error rate (BER) curves. The BER was evaluated by counting errors at several signal-to-noise ratios (SNR) after transmitting the symbols. This figure shows that all algorithms provide the same result for the adequate solutions; however, the NSG algorithm provides the best results for the worst solutions.
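A sketch of the BER evaluation described above is shown below; it simply counts decision errors at a given SNR. The helper callables decide and generate_data (for instance, the sketches given earlier), the assumption of unit symbol power, and the block length are illustrative choices, not part of the original text.

```python
import numpy as np

def estimate_ber(snr_db, decide, generate_data, num_symbols=100_000, m=7, d=4, seed=1):
    """Monte Carlo BER estimate: count decision errors at a given SNR."""
    noise_std = np.sqrt(1.0 / 10 ** (snr_db / 10))   # unit symbol power assumed
    s, x = generate_data(num_symbols, noise_std, seed)
    errors = 0
    for i in range(m - 1, num_symbols):
        x_vec = x[i - m + 1:i + 1][::-1]             # observation vector [x_i, ..., x_{i-m+1}]
        if decide(x_vec) != s[i - d]:                # compare with the delayed symbol
            errors += 1
    return errors / (num_symbols - m + 1)
```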








Figure 5-8: Performance comparison of the global optimization algorithms for the nonlinear equalizer.

Figure 5-9: Average BER for a nonlinear equalizer. A) Over the whole 100 Monte Carlo runs; B) over the 10 best solutions of MSE; C) over the 10 median solutions of MSE; D) over the 10 worst solutions of MSE.


5.3 Conclusion

We have proposed the combination of Kautz filters and an alternative information theoretic adaptation criterion based on Renyi's quadratic entropy. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their globally optimal values. We have also investigated the performance of the proposed criterion and the associated steepest descent algorithm in IIR filter adaptation. We have designed the proposed information theoretic learning algorithm, which is shown to converge to the global minimum of the performance surface. The proposed algorithm successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability.

The performance of this ITL algorithm was compared with that of the more traditional LMS variants, which were shown in the previous chapter to exhibit an improved probability of avoiding local minima. Nevertheless, none of them were as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a







































smaller L_\infty error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. If the designer requires a minimum L_2 error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve L_2 error norm minimization.

The proposed global optimization algorithms have also been successfully applied to another practical example, nonlinear equalization. The simulation results show that the ITL algorithm achieves better performance than the other algorithms.














CHAPTER 6
CONCLUSION AND FUTURE RESEARCH

6.1 Conclusion

In this study, we focused on the development of global optimization algorithms for adaptive IIR filtering. Both the MSE and the error entropy criteria have been used as the cost function for adaptive IIR filter training.

Srinivasan et al. used the stochastic approximation with convolution smoothing technique in order to obtain a global optimization algorithm for adaptive IIR filtering. They showed that smoothing can be achieved by adding a variable perturbing noise source to the LMS algorithm. We have modified this perturbing noise by multiplying it by the cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm, results in better global optimization performance than the original algorithm.

From the diffusion equation, we have derived, for the single-parameter case, the transition probability of the LMS-SAS algorithm escaping from a local minimum. Since the cost at the global minimum is always smaller than at the other local minima, the transition probability of escaping from a local minimum is always larger than that of escaping from the global minimum. Thus, the algorithm stays most of its time near the global minimum and eventually converges to it.

Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, an error in estimating the gradient naturally occurs. This gradient estimation error can act as the perturbing noise. We have shown that the behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends entirely on the shape of the cost function surface. The sharper the local minima, the less likely







the NLMS algorithm is to escape from this steady-state point. On the other hand, the larger the valley surrounding a steady-state point, the more difficult it is for the algorithm to escape from that valley.

We have investigated another cost function, based on entropy, to find the global optimum of IIR filters. Based on a previous conjecture that annealing the kernel size in the non-parametric estimator of Renyi's entropy achieves global optimization, we have designed the proposed information theoretic learning algorithm, which is shown to converge to the global minimum of the performance surface for various adaptive filter topologies. The proposed algorithm successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability. This behavior has been found in many examples.

The performance of this ITL algorithm was compared with that of the more traditional LMS variants, which are known to exhibit an improved probability of avoiding local minima. Nevertheless, none of them were as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller L_\infty error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. If the designer requires a minimum L_2 error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve L_2 error norm minimization.

One of the major drawbacks in adaptive IIR filtering is the stability issue. We use Kautz filters because their stability is easily guaranteed as long as the poles of the Kautz filter are located within the unit circle. In this dissertation, we proposed the combination of Kautz filters and an alternative information theoretic adaptation criterion based on Renyi's quadratic entropy. Kautz filters have been used in the past for system identification [90] of ARMA models, but the poles have been kept fixed during adaptation. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their globally optimal values.







6.2 Future Research

In this dissertation, we have analyzed the weak global optimal convergence of the algorithms with the MSE criterion by looking at the transition function of the process, assuming that the weight \theta is a scalar. More work on the transition function of the process in the general case, where \theta is a vector, is needed in order to complete the analysis of the weak global optimal convergence of the algorithms with the MSE criterion.

We have observed that the ITL criterion yields a smaller L_\infty error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. This minimax property of the proposed ITL criterion deserves further research.

Another observation is that linear scheduling of the kernel size helps achieve the global minimum. In annealing-based global optimization algorithms, the scheduling of the parameters to be annealed is a major issue. In stochastic annealing, it is known that exponential annealing (at a sufficiently slow rate) guarantees global convergence. In IIR filter adaptation using ITL, we used linear annealing of the kernel size and, in all examples, successful global optimization results were obtained. More work is required in the ITL algorithm to select appropriately the smallest kernel size, which was here set with the rule-of-thumb properties [91].

The ITL adaptation used a batch approach, but we believe that the on-line versions discussed by Erdogmus et al. [92] could also display the same global optimization properties. The on-line versions of ITL adaptation need further study.

In addition, a general analytical proof that explains the 100% global optimization capability of the proposed algorithm is necessary in order to complete the theoretical work. This, however, stands as a challenging future research project.














REFERENCES


[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.

[2] S. S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, 1986.

[3] M. A. Styblinski and T. S. Tang, "Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing,"
Neural Networks, vol. 3, pp. 467-483, 1990.

[4] J. C. Principe and D. Erdogmus, "From adaptive linear to information filtering,"
in Proceedings of Symposium 2000 on Adaptive Systems for Signal Processing, Communications, and Control, Lake Louise, Alberta, Canada, Oct. 2000, pp.
99 104.

[5] D. Erdogmus, K. Hild, and J. C. Principe, "Blind source separation using Renyi's mutual information," IEEE Signal Processing Letters, vol. 8, no. 6, pp. 174 176,
June 2001.

[6] D. Erdogmus and J. C. Principe, "Generalized information potential criterion for adaptive system training," IEEE Transactions on Neural Networks, (to appear)
September 2002.

[7] K. J. Åström and P. Eykhoff, "System identification-A survey," Automatica, vol.
AC-27, no. 4, pp. 123 162, Aug. 1971.

[8] B. Friedlander, "System identification techniques for adaptive signal processing,"
IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-30, no.
2, pp. 240 246, Apr. 1982.

[9] L. Ljung, System Identification Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.

[10] T. Söderström, L. Ljung, and I. Gustavsson, "A theoretical analysis of recursive
identification methods," Automatica, vol. 14, no. 3, pp. 193-197, May 1978.

[11] C. R. Johnson, "Adaptive IIR filtering: Current results and open issues," IEEE
Transactions on Information Theory, vol. IT-30, no. 2, pp. 237 250, Mar. 1984.

[12] J. J. Shynk, "Adaptive IIR filtering," IEEE Transactions on Acoustics, Speech, and
Signal Processing, vol. 6, no. 2, pp. 4 21, Apr. 1989.

[13] S. Gee and M. Rupp, "A comparison of adaptive IIR echo canceller hybrids,"
Proceedings. International Conference Acoustics, Speech, and Signal Processing,
1991.







[14] S. L. Netto, P. S. Diniz, and P. Agathoklis, "Adaptive IIR filter algorithms for
system identification : A general framework," IEEE Transactions on Education,
vol. 38, pp. 54 66, Feb 1995.

[15] P. A. Regalia, Adaptive IIR Filtering in Signal Processing and control, Marcel
Dekker, New York, NY, 1995.

[16] M. Dentino, J. M. McCool, and B. Widrow, "Adaptive filtering in the frequency
domain," Proceedings IEEE, vol. 66, no. 12, pp. 1658-1659, Dec. 1978.

[17] E. R. Ferrara, "Fast implementation of LMS adaptive filters," IEEE Transactions
on Acoustics, Speech, and Signal Processing, vol. ASSP-28, no. 4, pp. 474-475, Aug.
1980.

[18] T. K. Woo, "HRLS: a more efficient RLS algorithm for adaptive FIR," IEEE
Communications Letters, vol. 5, no. 3, pp. 81-84, March 2001.

[19] D. F. Marshall, W. K. Jenkins, and J. J. Murphy, "The use of orthogonal
transforms for improving performance of adaptive filters," IEEE Transactions
on Circuit and System, vol. 36, no. 4, pp. 474 484, Apr. 1989.

[20] S. S. Narayan and A. M. Peterson, "Frequency domain least-mean-square
algorithm," Proceedings IEEE, vol. 69, no. 1, pp. 124 126, Jan. 1981.

[21] S. A. White, "An adaptive recursive digital filter," in Proceedings 9th Asilomar
conference Circuit System Computer, pp. 21 25, 1975.

[22] R. A. David, "An adaptive recursive digital filter," in Proceedings 15th Asilomar
conference Circuit System Computer, pp. 175 179, 1981.

[23] B. D. Rao, "Adaptive IIR filtering using cascade structures," in Proceedings 27th
Asilomar conference on Signal System Computer, vol. 1, pp. 185 188, 1993.

[24] J. K. Juan, J. G. Harris, and J. C. Principe, "Locally recurrent network with
multiple time-scales," IEEE Proceedings on Neural Networks for Signal Processing,
vol. VII, pp. 645-653, 1997.

[25] P. A. Regalia, "Stable and efficient lattice algorithms for adaptive IIR filtering,"
IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 375-388, Feb. 1992.

[26] R. L. Valcarce and F. P. Gonales, "Adaptive lattice filtering revisited convergence
issues and new algorithms with improved stability properties," IEEE Transactions
on Signal Processing, vol. 49, no. 4, pp. 811 821, April 2001.

[27] J. J. Shynk, "Adaptive IIR filtering using parallel-form realization," IEEE
Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 4, pp. 519
533, Apr. 1989.

[28] J. E. Cousseau and P. S. R. Diniz, "Alternative parallel realization for adaptive IIR
filters," in Proceedings Intcrnational Symposium Circuits System, pp. 1927-1930,
1990.







[29] J. J. Shynk and R. P. Gouch, "Frequency domain adaptive pole-zero filters,"
Proceedings IEEE, vol. 73, no. 10, pp. 1526 1528, Oct. 1985.

[30] B. E. Usevitch and W. K. Jenkin, "A cascade implementation of a new IIR
adaptive digital filter with global convergence and improved convergence rates," in
Proceedings International Symposium Circuits System, pp. 2140 2143, 1989.

[31] D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Wiley, MA:
Addison, 1973.

[32] J. Lin and R. Unbehauen, "Bias-remedy least mean square equation error algorithm
for IIR parameters recursive estimation," IEEE Transactions on Signal Processing,
vol. 40, pp. 62 69, Jan 1992.

[33] H. Fan and W. K. Jenkins, "A new adaptive IIR filter," IEEE Transactions on
Circuit and System, vol. CAS-33, no. 10, pp. 939 947, Oct. 1986.

[34] H. Fan and M. Doroslovacki, "On global convergence of Steiglitz-McBride adaptive
algorithm," IEEE Transactions on Circuit and System, vol. 40, no. 2, pp. 73 87,
Feb. 1993.

[35] K. Steiglitz and L. E. McBride, "A technique for the identification of linear
systems," IEEE Transactions on Automatic Control, vol. AC-10, pp. 461 464, 1965.

[36] S. L. Netto and P. Agathoklis, "A new composite adaptive IIR algorithm," in
Proceedings 28th Asilomar conference on Signal System Computer, vol. 2, pp.
1506 1510, 1994.

[37] J. E. Cousseau, L. Salama, L. Donale, and S. L. Netto, "Orthonormal adaptive IIR
filter with polyphase realization," in Proceedings of ICIES'99 Electronics, Circuit
and Systems, vol. 2, pp. 835 838, 1999.

[38] M. Radenkovic and T. Bose, "Global stability of adaptive IIR filters based the
output error method," in Proceeings of ICIES'99 Electronics, Circuit and Systems,
vol. 1, pp. 663 667, 1999.

[39] P. L. Hsu, T. Y. Tsai, and F. C. Lee, "Applications of a variable step size algorithm
to QCEE adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 46,
no. 6, pp. 1685 1688, Jun. 1999.

[40] W. J. Song and H. C. Shin, "Bias-free adaptive IIR filtering," in Proceeding IEEE
International Conference on Acoustics, Speech, and Signal Proceeding, vol. 1, pp.
109 112, 2000.

[41] K. C. Ho and Y. T. Chan, "Bias removal in equation-error adaptive IIR filters,"
IEEE Transactions on Signal Processing, vol. 43, pp. 51 62, Jan. 1995.

[42] M. C. Hall and P. M. Hughes, "The master-slave IIR filter adaptation algorithm,"
in Proceeding IEEE International Symposium on Circuit, System, vol. 3, pp.
2145 2148, 1988.








[43] J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive
Filters, Wiley, New York, 1987.

[44] I. O. Bohachevsky, M. E. Hohnson, and M. L. Stein, "Generalized simulated
annealing for function optimization," American statistical association and the
American society for quality control, vol. 28, pp. 209 217, Aug. 1986.

[45] S. C. Ng, S. H. Leung, C. Y. Chung, A. Luk, and W. H. Lau, "The genetic search
approach- A new learning algorithm for adaptive IIR filtering," IEEE Signal
Processing Magazine, pp. 39 46, Nov. 1996.

[46] J. A. Nelder and R. Mead, "A simplex method for function minimization," Computer
Journal, vol. 7, pp. 308-313, 1965.

[47] P. P. Khargonekar and A. Yoon, "Random search based optimization algorithm in
control analysis and design," in Proceeding of the American Control Conference,
Jun. 1999, pp. 383 387.

[48] Q. Duan, S. Sorooshian, and V. Gupta, "Shuffled complex evolution algorithm,"
Water Resources Research, vol. 28, pp. 1015 1031, 1992.

[49] Z. B. Tang, "Adaptive partitioned random search to global optimization," IEEE
Transactions on Automatic Control, vol. 39, pp. 2235 2244, Nov. 1994.

[50] K. H. Yim, J. B. Kim, T. P. Lee, and D. S. Ahn, "Genetic adaptive IIR filtering
algorithm for active noise control," in IEEE International Fuzzy Systems
Conference Proceedings, Aug. 1999, pp. III 1723 1728.

[51] B. W. Wah and T. Wang, "Constrained simulated annealing with applications
in nonlinear continuous constrained global optimization," in Proceeding 11th
IEEE International Conference on Tools with Artificial Intelligence, Nov. 1999, pp.
381 388.

[52] J. L. Maryak and D. C. Chin, "A conjecture on global optimization using
gradient-free stochastic approximation," in Proceeding of the 1998 IEEE
ISIC/CIRA/ISAS Joint Conference, Sep. 1998, pp. 441 445.

[53] N. K. Treadgold and T. D. Gedeon, "Simulated annealing and weight decay in
adaptive learning : The SARPROP algorithm," IEEE Transactions on Neural
Network, vol. 9, pp. 662 668, July 1998.

[54] G. H. Staus, L. T. Biegler, and B. E. Ydstie, "Global optimization for
identification," in Proceeding of the 36th Conference on Decision and Control,
Dec. 1997, pp. 3010 3015.

[55] T. Fujita, T. Watanabe, K. Yasuda, and R. Yokoyama, "Global optimization
method using intermittency chaos," in Proceeding of the 36th Conference on
Decision and Control, Dec. 1997, pp. 1508 1509.

[56] W. Edmonson, J. Principe, K. Srinivasan, and C. Wang, "A global least square
algorithm for adaptive IIR filtering," IEEE Transactions on Circuit and System,
vol. 45, pp. 379 383, Mar. 1998.







[57] J. M. Thomas, J. P. Reilly, and Q. Wu, "Real time analog global optimization with
constraints: Application to the direction of arrival estimation problem," IEEE
Transactions on Circuit and System, vol. 42, pp. 233 243, Mar. 1995.

[58] A. Renyi, "Some fundamental questions of information theory- selected papers of
Alfred Renyi," Akademia Kiado,Budapest, vol. 2, pp. 565 580, 1976.

[59] A. Renyi, A Diary on Information Theory, Wiley, New York, 1987.

[60] C. F. Cowan and P. M. Grant, Adaptive Filters, Prentice-Hall, 1985.

[61] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, "Stationary and
nonstationary learning characteristics of the LMS adaptive filter," Proceedings
IEEE, vol. 64, pp. 1151 1162, Aug. 1976.

[62] J. M. Mendel, Lesson in Digital Estimation Theory, Prentice-Hall, Englewood
Cliffs,NJ, 1987.

[63] E. I. Jury, Theory and Applications of the Z-Transform Method, Wiley, New York,
1964.

[64] T. C. Hsia, "A simplified adaptive recursive filter design," Proceedings IEEE, vol.
69, no. 9, pp. 1153 1155, Sept 1981.

[65] G. C. Goodwin and K. S. Sin, Adaptive Filtering Prediction and Control,
Prentice-Hall, Englewood Cliffs, NJ, 1984.

[66] T. Söderström, "On the uniqueness of maximum likelihood identification,"
Automatica, vol. 14, no. 3, pp. 231 244, Mar. 1975.

[67] M. Nayeri, H. Fan, and W. K. Jenkins, "Some characteristics of error surfaces for
insufficient order adaptive IIR filters," IEEE Transactions on Acoustics, Speech,
and Signal Processing, vol. 38, no. 7, pp. 1222 1227, July 1990.

[68] T. Soderstrom and P. Stoica, "Some properties of the output error method,"
Automatica, vol. 18, pp. 1692 1716, Dec. 1982.

[69] M. Nayeri, "Uniqueness of msoe estimates in IIR adaptive filtering; a search for
necessary conditions," in International Conference Acoustics, Speech, and Signal
Processing, 1989, pp. 1047 1050.

[70] S. D. Stearns, "Error surfaces of recursive adaptive filters," IEEE Transactions on
Acoustics, Speech, and Signal Processing, vol. ASSP-29, no. 4, pp. 763 766, June
1981.

[71] F. Hong and M. Nayeri, "On the error surface of sufficient order adaptive IIR
filters: Proofs and counterexamples to a unimodality conjecture," Proceedings IEEE
Transaction on Acoustics, Speech, and Signal Processing, vol. 37, pp. 1436 1442,
Sep. 1989.

[72] R. Roberts and C. Mullis, Digital Signal Processing, Addison-Wesley, 1987.







[73] W. H. Kautz, "Transient synthesis in the time domain," IRE Transactions on
Circuit Theory, vol. 1, pp. 22 39, Sept. 1954.

[74] P. W. Broome, "Discrete orthonormal sequences," J. Assoc. Comput. Machinery,
vol. 12, no. 2, pp. 151 168, Dec. 1965.

[75] G. A. Williamson and S. Zimmermann, "Globally convergent adaptive IIR filter
based on fixed pole locations," IEEE Transactions on Signal Processing, vol. 44, pp.
1418 1427, Jun. 1996.

[76] P. M. Pardalos and R. Horst, Introduction to Global Optimization, Norwood, MA:
Kluwer, 1989.

[77] H. Robins and S. Monro, "A stochastic approximation method," Annals of
Mathematical Statistics, vol. 22, pp. 400 407, 1951.

[78] E. Wong and B. Hajek, Stochastic Processes in Engineering Systems, Springer,
1985.

[79] A. N. Kolmogorov, "Über die analytischen Methoden in der
Wahrscheinlichkeitsrechnung," Mathematische Annalen, vol. 104, pp.
415-458, 1931.

[80] S. Haykin, Introduction to Adaptive filters, MacMillan, NY, 1984.

[81] C. E. Shannon, "A mathematical theory of communication," Bell System Technical
Journal, vol. 27, pp. 379-423, 623-656, 1948.

[82] E. Parzen, "On the estimation of a probability density function and the mode,"
Annals of Mathematical Statistics, vol. 33, pp. 1065, 1962.

[83] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 1991.

[84] R. V. Hartley, "Transmission of information," Bell System Technical Journal, vol.
7, 1928.

[85] G. Golub and F. Van Loan, Matrix Computation, John Hopkins Press, 1989.

[86] S. Kullback, Information Theory and Statistics, Dover Publications Inc., New York,
1968.

[87] C. Wang and J. C. Principe, "Training neural networks with additive noise in
the desired signal," IEEE Transactions on Neural Networks, vol. 10, no. 6, pp.
1511 1517, Nov. 1999.

[88] T. O. Silva, "Optimality conditions for truncated Kautz networks with two
periodically repeating complex conjugate poles," IEEE Transactions on Automatic
Control, vol. 40, pp. 342 346, Feb 1995.

[89] I. Santamaría, D. Erdogmus, and J. C. Principe, "Entropy minimization for
supervised digital communication channel equalization," IEEE Transactions on
Signal Processing, vol. 50, no. 5, pp. 1184 1192, May 2002.





87

[90] B. Wahlberg, "System identification using Kautz models," IEEE Transactions on
Automatic Control, vol. 39, no. 6, pp. 1276 1282, Jun. 1994.

[91] D. Erdogmus and J. C. Principe, "An error-entropy minimization algorithm for
supervised training of nonlinear adaptive systems," IEEE Transactions on Signal
Processing, vol. 50, no. 7, pp. 1780 1786, July 2002.

[92] D. Erdogmus and J. C. Principe, "An on-line adaptation algorithm for adaptive
system training with minimum error entropy: stochastic information gradient," in International Conference on ICA and Signal Separation, San Diego, CA, Dec. 2001,
pp. 7 12.














BIOGRAPHICAL SKETCH

Ching-An Lai was born in Chia-L Taiwan, August 2, 1963. He earned his bachelor's degree in physics from the Chinese Military Academy, Taiwan, in 1985 and his master's degree in electrical engineering from the Chung-Chen Institute of Technology, Taiwan, in 1992. He began his Ph.D. program in the Electrical and Computer Engineering Department of the University of Florida in 1995. He pursued his Ph.D. degree in the field of adaptive filters. Currently, he is an instructor at the Chinese Military Academy.





PAGE 10

CHAPTER1 INTRODUCTION 1.1Motivation Theobjectiveofthisdissertationistodevelopglobaloptimizationalgorithmsfor adaptiveinniteimpulseresponse(IIR)lteringbyusingthestochasticapproximation withconvolutionsmoothingfunction(SAS)andinformationtheoreticlearning(ITL). Thisworkisparticularlymotivatedbythefollowingfacts. Adaptivelteringhaswideapplicationinthedigitalsignalprocessing,communication, andcontrolelds.Aniteimpulseresponse(FIR)lter[1,2]isasimplestructure foradaptivelteringandhasbeenextensivelydeveloped.Recentlyresearchershave attemptedtouseIIRstructuresbecausetheyperformbetterthanFIRstructures withthesamenumberofcoecients.However,somemajordrawbacksinherent toadaptiveIIRstructuresareslowconvergence,possibleconvergencetoabiasor unacceptablesuboptimalsolutions,andtheneedforstabilitymonitoring. Stochasticapproximationmethods[3]havethepropertyofconvergingtotheglobal optimumwithaprobabilityofone,asthenumberofiterationstendstoinnity. Thesemethodsarebasedonarandomperturbationtondtheabsoluteoptimum ofthecostfunction.Inparticular,themethodofstochasticapproximationwith convolutionsmoothinghasbeensuccessfulinseveralapplications.Ithasbeen empiricallyproventobeecientinconvergingtotheglobaloptimumintermsof computationandaccuracy.Theconvolutionsmoothingfunctioncan\smoothout" anonconvexobjectivefunctionbyconvolvingitwithasuitableprobabilitydensity function(pdf).Inthebeginningofadaptation,thevarianceofthepdfissettoa sucientlargevalue,suchthattheconvolutionsmoothingfunctioncan\smoothout" thenonconvexobjectivefunctionintoaconvexfunction.Thenthevarianceisslowly reducedtozero,wherebythesmoothedobjectivefunctionreturnstotheoriginal objectivefunction,asthealgorithmconvergestotheglobaloptimum.Suchvariance isdeterminedbyacoolingscheduleparameter.Thiscoolingscheduleisacritical factoringlobaloptimization,becauseitaectstheperformanceoftheglobalsearch capability. Convolutionsmoothinghasbeenusedexclusivelywiththemeansquareerror(MSE) criterion.MSEhasbeenusedextensivelyinthetheoryofadaptivesystemsbecause ofitsanalyticalsimplicityandthecommonassumptionofGaussiandistributed error.However,recentlymoresophisticatedapplications(suchasindependent componentanalysisandblindsourceseparation)requireacriterionthatconsiders higher-orderstatisticsforthetrainingofadaptivesystems.Thecomputationalneural engineeringlaboratorystudiedentropycostfunction[4].Shannonrstintroduced 1

PAGE 11

2 entropyofagivenprobabilitydistributionfunction,whichprovidesameasureofthe averageinformationinthatdistribution.ByusingtheParzenwindowestimator, wecanestimatethepdfdirectlyfromasetofsamples.Itisquitestraightforward toapplytheentropycriteriontothesystemidenticationframework[5].Asshown inthisthesis,thekernelsizeoftheParzenwindowestimatorbecomesanimportant parameterintheglobaloptimizationprocedure.Denizetal.[6]conjecturedthat forasucientlylargekernelsize,thelocalminimaoftheerrorentropycriterion canbeeliminated.Itwassuggestedthatstartingwithalargekernelsize,andthen slowlydecreasingthisparametertoapredeterminedsuitablevalue,thetraining algorithmcanconvergetotheglobalminimumofthecostfunction.Theerrorentropy criterionconsideredbyDenizetal.[6],however,doesnotconsiderthemeanofthe errorsignal,sinceentropyisinvarianttotranslation.Herewemodifythecriterion andstudythereasonwhyannealingthekernelsizeproducesglobaloptimization algorithms. 1.2LiteratureSurvey Wesurveyedtheliteratureintheareasofadaptiveltering,optimizationmethod, andmathematicsusedintheanalysisofthealgorithm. 1.2.1AdaptiveFiltering Numerousalgorithmsofadaptivelteringareproposedintheliterature[7,8], especiallyforsystemidentication[9,10].Somevaluablegeneralpapersonthetopic ofadaptivelteringarepresentedbyJohnson[11],Shynk[12],Geeetal.[13]and Netto[14].Johnson'spaperfocusedonthecommontheoreticalbasisbetweenadaptive lteringandsystemidentication.Shynk'spaperdealtwithvariousalgorithmsof adaptiveIIRlteringfortheirerrorformulaandrealization.Neto'spaperpresented thecharacteristicsofthemostcommonlyusedalgorithmsforadaptiveIIRlteringina simpleanduniedframework.RecentlyafullbookwaspublishedonIIRlters[15]. Themajorgoalofanadaptivelteringalgorithmistoadjusttheadaptivelter coecientsinordertominimizeagivenperformancecriterion.Literatureabout adaptivelteringcanbeclassiedintothreecategories:adaptivelterstructures, adaptivealgorithms,andapplications. Adaptivelterstructure. Thechoiceoftheadaptivelterstructuresaectthe computationalcomplexityandtheconvergencespeed.Basically,therearetwokindof adaptivelterstructures. { AdaptiveFIRlterstructure. ThemostcommonlyusedadaptiveFIRlter structureisthetransversallterwhichimplementsanall-zerolterwithacanonic directform(withoutanyfeedback).ForthisadaptiveFIRlterstructure,the

PAGE 12

3 outputisalinearcombinationoftheadaptiveltercoecients.Theperformance surfaceoftheobjectivecostfunctionisquadratic[1]whichyieldsasingleoptimal point.AlternativeadaptiveFIRlterstructures[16]improveperformanceinterms ofcomputationalcomplexity[17,18]andconvergencespeed[19,20]. { AdaptiveIIRlterstructure. White[21]rstpresentedanimplementation ofanadaptiveIIRlterstructure.Later,manyarticleswerepublishedinthis area.Forsimpleimplementationandeasyanalysis,mostadaptiveIIRlter structuresusethecanonicdirectformrealization.Someotherrealizationsare alsopresentedtoovercomesomedrawbacksofcanonicdirectformrealization,like slowconvergencerateandtheneedforstablemonitoring[22].Commonlyused realizationsarecascade[23,24],lattice[25,26],andparallel[27,28]realizations. OtherrealizationshavealsobeenpresentedrecentlybyShynketal.[29]andJenkin etal.[30]. Algorithm. Analgorithmisaprocedureusedtoadjustadaptiveltercoecients inordertominimizethecostfunction.Thealgorithmdeterminesseveralimportant featuresofthewholeadaptiveprocedure,suchascomputationalcomplexity, convergencetosuboptimalsolutions,biasedsolutions,objectivecostfunction anderrorsignal.EarlylocaladaptivelteralgorithmswereNewtonmethod, Quasi-Newtonmethod,andgradientmethod.Newton'smethodseekstheminimum ofasecond-orderapproximationoftheobjectivecostfunction.Quasi-Newtonisa simpleversionoftheNewtonmethodusingarecursivelycalculatedestimateofthe inverseofasecond-ordermatrix.Thegradientmethodsearchestheminimumof theobjectivecostfunctionbytrackingtheoppositedirectionofthegradientvector oftheobjectivefunction[31].Itiswellknownthatthestepsizecontrolsstability, convergencespeed,andmisadjustment[1].ForFIRadaptiveltering,localmethods weresucientsincetheoptimizationwaslinearintheweights.HoweverinIIR adaptivelteringthisisnolongerthecase.Themostcommonlyknownapproaches foradaptiveIIRlteringareequationerroralgorithm[32],outputerroralgorithm [12,11],andcompositealgorithms[33,34]suchastheSteiglitz-McBridealgorithm [35]. { Themaincharacteristicsoftheequationerroralgorithmareunimodalityofthe Mean-Square-Error(MSE)performancesurfacebecauseofthelinearrelationship ofthesignalandtheadaptiveltercoecients,goodconvergence,andguaranteed stability.However,itcomesalongwithabiasedsolutioninthepresenceofnoise. { Themaincharacteristicsoftheoutput-erroralgorithmarethepossibleexistenceof themultiplelocalminima,whichaecttheconvergencespeed,anunbiasedglobal optimalsolutioneveninthepresenceofnoise,andtherequirementofstability checkingduringtheadaptiveprocessing. { Thecompositeerroralgorithmattemptstocombinethegoodindividual characteristicsofbothoutputerroralgorithmandequationerroralgorithm [36].Consequently,manypaperswerewrittentoovercometheproblemmentioned above.

PAGE 13

4 { Cousseauetal.[37]proposedanorthogonalltertoovercometheinstability problemofadaptiveIIRlters,whileRadenkovicetal.[38]usedanoutputerror methodtoavoidit. { Thequadraticconstraintequationerrormethod[39]wasproposedtoremovethe biasedsolutionsfortheequation-erroradaptiveIIRlters[40,41].Newcomposite adaptiveIIRalgorithmsarepresentedinliterature[42,36]. Application. Adaptivelteringhasbeensuccessfulinmanyapplications,such asechocancellation,noisecancellation,signaldetection,systemidentication, channelequalization,andcontrol.Someusefulinformationaboutadaptiveltering applicationappearsintheliterature[1,2,43]. Inthisdissertation,wefocusonadaptiveIIRlteralgorithmsforsystem identication. 1.2.2OptimizationMethod TherearetwoadaptationmethodologiesforIIRlters:gradientdescentandglobal optimization.Themostcommonlyusedmethodisthegradientdescentmethod,such asleastmeansquare(LMS)[1].Thesemethodsarewellestablishedfortheadaptation ofFIRltersandhavetheadvantageofbeinglesscomputationallyexpensive.The problemwithgradientdescentmethodsisthattheymightconvergetoanylocal minima.Thelocalminimanormallyimplypoorperformance.Thisproblemcanbe overcomethroughglobaloptimizationmethods.Suchglobaloptimizationalgorithms includesimulatedannealing(SA)[44],geneticalgorithm[45],randommethod[46],and stochasticapproximation[3].However,globaloptimizationmethodshavetheproblemof computationalcomplexity,especiallyforhighorderadaptivelter. Severalrecentresearchershavemodiedglobaloptimizationalgorithmstoimprove theirperformance.Khargonekar[47]usedanadaptiverandomsearchalgorithmfor theglobaloptimizationofcontrolsystems.Thistypeofglobaloptimizationalgorithm propagatesacollectionorasimplexofpointsbutusesmoregeometricallyintuitive heuristics.Themostcommonlyuseddirectsearchmethodforoptimizationisthe Nelder-Meadalgorithm[46].DespitethepopularityoftheNelder-Meadalgorithm,it doesnotprovideanyguaranteeofconvergenceorperformance.Recentstudiesrelied onnumericalresultstodeterminetheeectivenessofthealgorithm.Duanproposed

PAGE 14

5 theshuedcomplexevolutionalgorithm[48],whichusesseveralNelder-Meadsimplex algorithmsrunninginparallel(thatalsoshareinformationwitheachother).Tang[49] proposedarandomsearchthatpartitionsthesearchregionoftheobjectivefunctioninto acertainnumberofsubregions.Tang[49]showedthattheadaptivepartitionedrandom searchingeneralcanprovideabetter-than-averagesolutionwithinamodestnumberof functionevaluations. Yim[50]usedageneticalgorithminhisadaptiveIIRlteringalgorithmforactive noisecontrol.Heshowedthatgeneticalgorithmsovercometheproblemofconvergingto thelocalminimumforgradientdecentalgorithms.Wah[51]improvedconstrained simulatedannealing,adiscreteglobalminimizationalgorithmwithasymptotic convergencetodiscreteconstrainedglobalminimawithaprobabilityofone.The algorithmisbasedonthenecessaryandsucientconditionsfordiscreteconstrained localminimainthetheoryofdiscreteLagrangemultipliers.Heextendedthisalgorithm tosolvenonlinearcontinuousconstrainedoptimizationproblems.Maryak[52]injected extranoisetermsintotherecursivealgorithm,whichmayallowthealgorithmto escapethelocaloptimumpoints,andensureglobalconvergence.Theamplitudeofthe injectednoiseisdecreasedovertime(aprocesscalledannealing),sothatthealgorithm cannallyconvergetotheglobaloptimumpoint.Hearguesthat,insomecases,the naturallyoccurringerrorinthegradientapproximationeectivelyintroducedinjected noisethatpromotesconvergenceofthealgorithmtotheglobaloptimum.Treadgold[53] combinedgradientdescentandtheglobaloptimizationtechniqueofsimulatedannealing (SA).Thiscombinationescapeslocalminimaandcanimprovetrainingtime.Staus[54] usedspatialbranchandboundmethodologytosolvetheglobaloptimizationproblem. Thespatialbranchandboundtechniqueisnotpracticalforidentication.Advances inconvexalgorithmdesignusinginteriorpointmethods,exploitationofstructure,and fastercomputingspeedshavealteredthispicture.Largeproblems,includinginteresting classesofidenticationproblemscanbesolvedeciently.Fujita[55]proposedamethod (takingadvantageofchaoticbehaviorofthenonlineardissipationsystem)thathas inertiaandnonlineardampingterms.Thetimehistoryofthesystem,whoseenergy

PAGE 15

6 functioncorrespondstotheobjectivefunctionoftheunconstrainedoptimization problem,convergesattheglobalminimaofenergyfunctionofthesystembymeansof appropriatecontrolofparametersdominatingoccurrenceofchaos.Howevernoneof theseglobaloptimizationtechniquescanrevealgradientdescentintermsofeciency innumberofcomputations.thereforeinthisthesiswerevisittheproblemofstochastic gradientdescentforIIRltering. 1.2.3ProposedOptimizationMethod Theproposedglobaloptimizationmethodsinthisdissertationarebasedon stochasticapproximationmethodsontheMSEcostfunctionandininformation theoreticlearning.Thestochasticapproximationrepresentsasimpleapproachto minimizinganonconvexfunction,whichisbasedonarandomlydistributedprocess inevaluatingthesearchspace[56].Inparticular,twomethodswereinvestigated.The rstmethod[57]isimplementedbyaddingrandomperturbationsestimateofthe system'sdynamicequation.Varianceoftherandomuctuationmustdecayaccording toaspecicannealingschedule,whichcanensureconvergencetoaglobaloptimum. Thegoaloftheearlylargeperturbationsistoallowthesystemtoquicklyescape fromthelocalminima.Thesecondmethodisbasedonstochasticapproximationwith convolutionsmoothing[56].Theobjectiveofconvolutionsmoothingistosmoothout thenonconvexobjectivefunctionbyconvolutingitwithanoiseprobabilitydensity function(pdf).Alsointhismethod,thevarianceofthepdfmustdecayaccordingto acoolingschedule.Theamountofsmoothingisproportionaltothevarianceofthe noisepdf.Theideaofthismethodistocreateasucientamountofsmoothinginthe beginningoftheoptimizationprocesssothattheoutcomeisaconvexperformance surface.Whenthevarianceofthenoisepdfisgraduallyreducedtozero,the performancesurfacegraduallyconvergestotheoriginalnonconvexform.Bothof thesemethodsusetheMSEcostfunction. Wealsoproposeannealingthekernelsizeinentropyoptimization.Entropy canbeestimateddirectlyfromdatausingtheParzenestimationifRenyi'sentropy denitionsareissued[58,59].Itispossibletoalsoderiveagradient-basedalgorithmto

PAGE 16

7 searchtheminimumofthisnewcostfunction.Recently,Erdogmus[4,5]usedITLin adaptivesignalprocessing.Wedevelopedaglobaloptimizationalgorithmforentropy minimizationbyannealingkernelsize(similartothestochasticapproximationwith convolutionsmoothingmethodforMSEcriterion).Weshowedthatthisisequivalent toaddinganadditivenoisesourcetothetheoreticalcostfunction.Howeverthetwo methodsdiersincethekernelfunctionsmoothstheentropycostfunction. 1.3Outline InChapter2,thebasicideaofanadaptivelterandadaptivealgorithmis reviewed.Especially,wereviewedtheLMSalgorithmforadaptiveIIRltering,which isthebasicformofourproposalalgorithms.Sincewefocusonglobaloptimization algorithmsforadaptiveIIRltering,someimportantpropertiesonglobaloptimization forsystemidenticationarereviewed.ThesystemidenticationframeworkwithKautz ltersisalsopresented. InChapter3,weintroducethestochasticapproximationwithconvolution smoothing(SAS)techniqueandapplyittoadaptiveIIRltering.Similartothe GLMSalgorithmbySrinivasan[56],wederivetheLMS-SASalgorithm.Theglobal optimizationbehavioroftheLMS-SASalgorithmhasbeenanalyzedbyevaluatingthe transitionprobabilitydensityofescapingoutfromasteadystatepointforthescalar case.Becauseofthenoisyestimategradient,thebehavioroftheNLMSalgorithmwith decreasingstepsizeisshowntobesimilartothatoftheLMS-SASalgorithmfroma globaloptimizationperspective.TheglobalsearchcapabilityofLMS-SASandNLMS algorithmsarethencompared. InChapter4,theentropycriterionisproposedasanalternativetoMSEfor adaptiveIIRltering.Thedenitionofentropy(mutualinformation)isrstreviewed. ByusingtheParzenwindowestimatorfortheerrorpdf,thesteepestdescentalgorithm (ITLalgorithm)withtheentropycriterionforthesystemidenticationframeworkof adaptivelteringisderived.TheweakglobaloptimalconvergenceofITLalgorithmin simulationexamplesisgiven.Finally,wecomparetheperformanceoftheITLalgorithm withthatofLMS-SASandNLMSalgorithmsintermsofglobaloptimizationcapability.

PAGE 17

InChapter5,theassociatedLMS,LMS-SAS,NLMS,andITLalgorithmsfortheKautzlterarerstderived.Similarly,wecomparetheglobaloptimizationperformanceofproposedglobaloptimizationalgorithmsfortheKautzlters.Finally,theassociatedalgorithmsareappliedtononlinearequalization.InChapter6,weconcludethedissertationandoutlinefuturework.

PAGE 18

CHAPTER2 ADAPTIVEIIRFILTERING 2.1Introduction Figure2-1showsthebasicblockdiagramofanadaptivelter.Ateachiteration, asampledinputsignal x ( n )ispassedthroughanadaptiveltertogeneratetheoutput signal y ( n ).Thisoutputsignaliscomparedtoadesiredsignal d ( n )togeneratethe errorsignal ( n ).Finally,anadaptivealgorithmusesthiserrorsignaltoadjustthe adaptiveltercoecientsinordertominimizeagivenobjectivefunction.Themost widelyusedlteristheniteimpulseresponse(FIR)lterstructure. Inrecentyears,activeresearchhasattemptedtoextendtheFIRlterintothe moregeneralinniteimpulseresponsecongurationthatoerspotentialperformance improvementsandlesscomputationalcostthanequivalentFIRlters[60].However, somepracticalproblemsstillexistintheuseofadaptiveIIRlters.Astheerror surfaceofIIRltersisusuallymultimodalwithrespecttotheltercoecients,learning algorithmsforIIRlterscaneasilybetrappedatlocalminimaandbeunableto convergetotheglobaloptimum[1].Oneofthecommonlearningalgorithmsfor adaptivelteringisthegradient-basedalgorithm,forinstancetheleast-mean-square algorithm(LMS)[61].Thealgorithmaimstondtheminimumpointoftheerror Adaptive Filter ? x(n)y(n) d(n) + ( n ) Adaptive Algorithm Figure2-1:Adaptiveltermodel. 9

PAGE 19

10 surfacebymovinginthedirectionofthenegativegradient.Likemostofthesteepest descentalgorithms,itmayleadtheltertoalocalminimumwhentheerrorsurface ismultimodal.Inaddition,theconvergencebehavioroftheLMSalgorithmdepends heavilyonthechoicesofstepsizeandtheinitialvaluesofltercoecients. Learningalgorithmssuchasmaximumlikelihood[62],LMS[1],least-square[2], andrecursive-least-square[2]arewellestablishedfortheadaptationofFIRlters. Inparticular,thegradient-descentalgorithms(suchasLMS)areverysuitablefor adaptiveFIRltering,iftheerrorsurfaceisunimodalandquadratic.Generally,LMS isthebestchoiceformanyapplicationsofadaptivesignalprocessing[1],becauseofits simplicity,itseaseofcomputation,andthefactthatitdoesnotrequireo-linegradient estimationsofdata.ItisalsopossibletoextendtheLMSalgorithmtoadaptiveIIR lters;however,itmayfacethelocalminimumproblemwhentheerrorsurfaceis multimodal.TheLMSalgorithmadaptstheweight(ltercoecients)vectoralongthe negativegradientofthemean-square-errorperformancesurfaceuntiltheminimumof theMSEisreached.Inthefollowing,wewillpresenttheformulationoftheIIR-LMS algorithm.TheIIRlterkernelindirectformisconstructedas y ( n )= L X i =0 a i x ( n i )+ M X j =1 b j y ( n j )(2-1) Lettheweightvector X ( n )bedenedas =[ a 0 ; ;a L ;b 1 ; ;b M ] T (2-2) X ( n )=[ x ( n ) ; ;x ( n L ) ;y ( n 1) ; ;y ( n M )] T (2-3) and d ( n )isthedesiredoutput.Theoutputis y ( n )= T ( n ) X ( n )(2-4) Wecanwritetheerror as ( n )= d ( n ) y ( n )= d ( n ) T ( n ) X ( n )(2-5)

PAGE 20

Sothegradientisr=@"2 @(2-6)=2"(n)[@"(n) Letusdenery(n)=[@y(n) FromEquation(2-1),obtainry(n)=[x(n);;x(nL);y(n1);;y(nM)]T+[MXj=1bj@y(nj) Wherethegradientestimateisgivenbyr=2"(n)ry(n)(2-12) Basedonthegradientdescentalgorithm,thecoecientsupdateis(n+1)=(n)r(2-13) Therefore,inIIR-LMS,thecoecientupdatebecomes(n+1)=(n)+2[d(n)y(n)]ry(n)(2-14) where2isaconstantstepsize. Foreachvalueofn,Equation(2-4)producesthelteroutputandEquation(2-10)and(2-14)arethenusedtocomputethenextsetofcoecients^(n+1).Regardingthecomputationalcomplexity,theIIR-LMSalgorithmasdescribedinEquation(2-4)through(2-14)requiresapproximately(L+M)(L+2)calculationsforeachiterationwhiletheFIR-LMSrequiresonly2Ncalculationsforeachiteration(withlterlength

PAGE 21

12 B ( z 1 ) A ( z 1 ) ? x(n) + v ( n ) ^ B ( z 1 ) ^ A ( z 1 ) ? ^ y ( n ) y ( n ) + ( n ) Adaptive Algorithm Figure2-2:Blockdiagramofthesystemidenticationconguration. = N ).Beingoneofthegradient-descentalgorithms,theLMSalgorithmmayleadthe ltertoalocalminimumwhenerrorsurfaceismultimodal,andtheperformanceofthe LMSalgorithmwilldependheavilyontheinitialchoicesofstepsizeandweightvector. Stabilitycheck. Jury'sstabilitytest[63]wasusedinthisthesis.Thisstability testensurethatallrootslieinsidetheunitcircle.Sincethetestdoesnotrevealwhich polesareunstable,thepolynomialmustbefactoredtoobtainthisinformation.Ifthe polynomialorderislargerthen2( M> 2),thetestbecomescomputationallyexpensive. Ifthiswasdone,anyunstablesetofweightscouldeasilybeprojectedbackintotheunit circle.Thedicultyofthestabilitycheckispolynomialfactorization. Tosimplifythestabilitycheck,onemayusethecascadeofrst-orsecond-order sectionsinsteadofthecanonicaldirectform.Inparticular,thestabilityoftheKautz lter,astructureofcascadesofsecond-ordersectionswithcomplexpoles,iseasily checked. 2.2SystemIdenticationwiththeAdaptiveIIRFilter Inthesystemidenticationconguration,theadaptivealgorithmadaptsthe coecientsoftheltersuchthattheadaptiveltermatchestheunknownsystem ascloselyaspossible.Figure2-2isageneralblockdiagramoftheadaptivesystem

PAGE 22

whereA(z1)=1Pnai=1aiziandB(z1)=Pnbj=1bjzjarepolynomials,andx(n)andv(n)aretheinputsignalandtheperturbationnoise,respectively.Theadaptivelterisdescribedas^y(n)=[^B(z1) ^A(z1)]x(n)(2-16) where^A(z1)=1P^nai=1^aiziand^B(z1)=P^nbj=1^bjzj.Theissuesinsystemidenticationwithadaptiveltersareusuallydividedintothefollowing: insucientorder:n<0;{ strictlysucientordern=0;{ morethansucientordern>0; wheren=min[(na^na);(nb^nb)].Inmanycases,features(b)and(c)aregroupedinoneclass,calledsucientorder,wheren0. withoutadditionalnoise;{ withadditionalnoisecorrelatedwiththeinputsignal;{ withadditionalnoiseuncorrelatedwiththeinputsignal; Thebasicobjectivefunctionoftheadaptivelteristoadaptthecoecientsoftheadaptiveltersuchthatitdescribestheunknownsysteminanequivalentform.TheequivalenceisusuallydeterminedbyanobjectivefunctionW(n)oftheinput,availableunknownsystemoutput,andtheadaptivelteroutputsignals.TheobjectivefunctionW(n)mustsatisfythefollowingpropertiesinordertottheconsistentdenition: Therearemanywaystodescribeanobjectivefunctionthatsatisestheoptimalityandnonnegativityproperties.Thefollowingformsoftheobjectivefunctionarethemostcommonlyusedinderivingtheadaptivealgorithm:


14 Meansquareerror(MSE) W [ ( n )]= E [ 2 ( n )]. Leastsquare(LS) W [ ( n )]= 1 N +1 P N i =1 2 ( n i ) Instantaneoussquareerror(ISV) W [ ( n )]= 2 ( n ). Inastrictsense,MSEisatheoreticalvaluethatisnoteasyestimated.Inpractice, itcanbeapproximatedbytheothertwoobjectivefunctions.Ingeneral,ISViseasily implementedbutitisheavilyaectedbyperturbationnoise.Laterwepresentthe entropyoftheerrorasanotherobjectivefunction,butrstwemustdiscussMSE. Theadaptivealgorithmattemptstominimizethemeansquarevalueoftheoutput errorsignal,wheretheoutputerrorisgivenbythedierencebetweentheunknown systemandtheadaptivelteroutputsignal.Thatis, ( n )=[ B ( z 1 ) A ( z 1 ) ^ B ( z 1 ) ^ A ( z 1 ) ] x ( n )+ v ( n )(2-17) Thegradientoftheobjectivefunctionestimatewithrespecttotheadaptivelter coecientsisgivenas r ^ [ 2 ( n )]=2 ( n ) r ^ [ ( n )]=2 ( n ) r ^ [^ y ( n )](2-18) with r ^ [^ y ( n )]= 2 6 4 ^ y ( n i )+ P ^ na k =1 ^ a k ( n ) ^ y ( n k ) @ ^ a i j ^ a i =^ a i ( n ) x ( n j )+ P ^ na k =1 ^ a k ( n ) ^ y ( n k ) @ ^ b j j ^ b j = ^ b j ( n ) 3 7 5 (2-19) where istheadaptiveltercoecientvector. Thisequationrequiresarelativelylargememoryallocationtostoredata.In practice,asmallstepapproximationthatconsiderstheadaptiveltercoecients slowlyvaryingcanovercomethisproblem[64].Therefore,byusingthesmallstep approximation,theadaptivealgorithmisdescribedas ^ ( n +1)= ^ + ( n ) ( n )(2-20) where ( n )= f ^ y ( n i ) j x ( n j ) g T for i =1 ; ; ^ na ; j =1 ; ; ^ nb ,and isasmallstep sizethatsatisesthefollowingproperty.Theadaptivealgorithmischaracterizedbythe followingproperties:


15 Property1 [65] TheEuclideansquare-normoftheerrorparametervectordenedby k ^ ( n ) ( n ) k isconvergentif satises 0 2 k ^ ( n ) k 2 (2-21) Property2 [31,66,67] ThestationarypointsoftheMSEperformancesurfacearegiven by E f [ ^ A ( z 1 ;n ) B ( z 1 ) A ( z 1 ;n ) ^ B ( z 1 ) A ( z 1 ;n ) ^ A ( z 1 ;n ) ] x ( n ) gf [ ^ B ( z 1 ;n ) ^ A 2 ( z 1 ;n ) ] x ( n j ) g =0(2-22) E f [ ^ A ( z 1 ;n ) B ( z 1 ) A ( z 1 ;n ) ^ B ( z 1 ) A ( z 1 ;n ) ^ A ( z 1 ;n ) ] x ( n ) gf [ 1 ^ A ( z 1 ;n ) ] x ( n j ) g =0(2-23) Inpractice,onlythestablestationarypoints,socalledequilibria,areofinterestand usuallythesepointsareclassiedas Degeneratedpoint:Thedegeneratedpointsaretheequilibriumpointswhere 8 > < > : ^ B ( z 1 ;n )=0:^ nb< ^ na ^ B ( z 1 ;n )= L ( z 1 ) ^ A ( z 1 ;n ):^ nb ^ na (2-24) where L ( z 1 )= P nb na k =0 l k z k Nondegeneratedpoints:Alltheequilibriathatarenotdegeneratedpoints. Theequilibriumpointsthatinuencetheformoftheerrorperformancesurface havethefollowingproperty. Property3 [12] If n 0 ,allglobalminimaoftheMSEperformancesurfacearegiven by 8 > < > : ^ A ( z 1 )= A ( z 1 ) C ( z 1 ) ^ B ( z 1 )= B ( z 1 ) C ( z 1 ) (2-25) where C ( z 1 )= P n k =0 c k z k .Itmeansthatallglobalminimumsolutionshaveincluded thepolynomialsdescribingtheunknownsystemplusacommfactor C ( z 1 ) presentinthe numeratoranddenominatorpolynomialsoftheadaptivelter.


16 Property4 [68] If n 0 ,allequilibriumpointsthatsatisfythestrictlypositive realnesscondition Re [ ^ A ( z 1 ) A ( z 1 ) ] > 0: j z j =1(2-26) areglobalminima. Property5 [68] Lettheinputsignal x ( n ) begivenby x ( n )=[ F ( z 1 ) G ( z 1 ) ] w ( n ) ,where F ( z 1 )= P nf k =0 f k z k and G ( z 1 )=1 P ng k =1 g k z k arecoprimepolynomials,and w ( n ) isawhitenoise.Thenif 8 > < > : n nf ^ nb ^ na +1 ng (2-27) allequilibriumpointsareglobalminima. ThispropertyisactuallythemostcommonusedresultfortheunimodalityoftheMSE performancesurfaceincasesofidenticationwithsucientordermodels.Ithastwo importantfactswhichare If^ na = na =1and^ nb nb 1,thenthereisonlyoneequilibriumpoint,whichis theglobalminimum. If x ( n )iswhitenoise( nf = ng =0),andtheordersoftheadaptivelterare strictlysucient(^ na = na and^ nb = nb ,and^ nb na +1 0),thenthereisonly oneequilibriumpoint,whichistheglobalminimum. Nayeri[69]furtherinvestigatedthispropertyandheobtainedalessrestrictive sucientconditiontoguaranteeunimodalityoftheadaptivealgorithm,whentheinput signalisawhitenoiseandtheorderoftheadaptivelterexactlymatchtheunknown system.Theresultisgivenas Property6 [69] If x ( n ) isawhitenoisesequence ( nf = ng =0) ,theordersofthe adaptivelterarestrictlysucient( ^ na = na and ^ nb = nb ,and ^ nb na +2 0 ),then thereisonlyoneequilibrium,whichistheglobalminimum. Thereisanotherimportantpropertywhichis


17 Property7 [67] Alldegeneratedequilibriumpointsaresaddlepointsandtheirexistence impliesmultimodality(existenceofstablelocalminimum)oftheperformancesurfaceif either ^ na> ^ nb =0 or ^ na =1 Thispropertyisalsovalidfortheinsucientordercases. In1981,Stearns[70]conjecturedthatif n 0andtheinputsignal x ( n )iswhite noise,thentheperformancesurfacedenedbyMSEobjectivefunctionisunimodal. ThisconjecturestayedvaliduntilFanoerednumericalcounterexamplesforitin1989 [71]. ThemostimportantcharacteristicofIIRadaptationisthepossibleexistence ofmultiplelocalminimawhichcanaecttheoverallconvergence.Moreover,global minimumsolutionisunbiasedbythepresenceofzero-meanperturbationnoiseinthe unknownsystemoutputsignal.AnotherimportantcharacteristicofIIRadaptation istherequirementforstabilitycheckingduringtheadaptiveprocess.Thisstability checkingrequirementcanbesimpliedbychoosinganappropriateadaptivelter realization. 2.3SystemIdenticationwithKautzFilter OneofthemajordrawbacksinadaptiveIIRlteringisthestabilityissue.Since thelterparametersarechangingduringadaptation,apracticalapproachistouse cascadesofrstandsecondorderARMAsections,wherestabilitycanstillbechecked simplyandlocally.AprincipledwaytoachievetheexpansionofgeneralARMA systemsisthroughorthogonallterstructures[72].HereweusesKautzlters,because theyareveryversatile(cascadesofsecondordersectionswithcomplexpolesbut stillwithareasonablenumberofparameters).TheKautzlter,whichcanbetraced backtotheoriginalworkofKautz[73],isbasedonthediscretetimeKautzbasis functions.TheKautzlterisageneralizedfeedforwardlterwhichproducesanoutput y ( n )= ( n; ) T ,where issetofweightsandtheentriesof ( n; )aretheoutputsof rstorderIIRlterswithacomplexpoleat [74].StabilityoftheKautzlteriseasily guaranteedifthepoleislocatedwithintheunitcircle(thatis j j < 1).Althoughthe


18 adaptationislinearin i ,itisnonlinearinthepoles,yieldinganonconvexoptimization problemwithlocalminima. ThecontinuoustimeKautzbasisfunctionsaretheLaplacetransformofcontinuous timeorthonormalexponentialfunctionswhichcanbetracedbacktotheoriginalworks ofKautz[73].ThediscretetimeKautzbasisfunctionsaretheZ-transformsofdiscrete timeorthonormalexponentialfunctions[74].ThediscretetimeKautzbasisfunctions aredescribedas 2 k ( z k ; k )= j 1+ k j r 1 k k 2 z 1 1 (1 k z 1 )(1 l z 1 ) k 1 Y l =0 ( z 1 l )( z 1 l ) (1 l z 1 )(1 l z 1 ) (2-28) 2 k +1 ( z k ; k )= j 1 k j r 1 k k 2 z 1 1 (1 k z 1 )(1 l z 1 ) k 1 Y l =0 ( z 1 l )( z 1 l ) (1 l z 1 )(1 l z 1 ) (2-29) where k = k + j k ,( k k )arethe k thpairofcomplexconjugatepoles,and j k j < 1 becauseofitsstability,and k isalwayseven. TheorthonormalityofthediscretetimeKautzbasisfunctionsisrepresentedas 1 2 j I p ( z; k ) q (1 =z; k ) dz z = p;q (2-30) wheretheintegralunitcircletourisanalyticintheexteriorofthecircle. Allpairsofcomplexconjugatepolescanbeintegratedinrealsecondordersections toreducethedegreesoffreedom.Theresultingbasisfunctionscanbedescribesas discrete-time2-poleKautzbasisfunctions.Thediscrete-timeKautzbasisfunctionscan besimpliedasFigure2-3,where ^ y ( n )= ( n ) T (2-31) ( n )=[ 0 ( n ) ; ;' d 1 ( n )] T (2-32) K 2 k ( z; )= K 2 k 2 ( z; ) A ( z; )(2-33) K 2 k +1 ( z; )= K 2 k 1 ( z; ) A ( z; )(2-34)


Figure2-3:Kautzltermodel.K0(z;)=0z11 (1z1)(1z1)(2-35)K1(z;)=1z1+1 (1z1)(1z1)(2-36) andA(z;)=(z1)(z1+) (1z1)(1z1)(2-37)0=j1+jr Hereisacomplexconjugatepole(thatis=+j).


CHAPTER3 STOCHASTICAPPROXIMATIONWITHCONVOLUTIONSMOOTHING 3.1Introduction Adaptivelteringhasbecomeamajorresearchareaindigitalsignalprocessing, communicationandcontrol,withmanyapplications,suchasadaptivenoisecancellation, echocancellation,andadaptiveequalizationandsystemidentication[1,2].For simplicity,niteimpulseresponse(FIR)structuresareusedforadaptivelteringand havemanymaturepracticalimplementations.However,inniteimpulseresponse structurescanreducecomputationalcomplexityandincreaseaccuracy.Unfortunately, IIRlteringhassomedrawbacks,suchasslowconvergence,possibleconvergencetoa biasorunacceptablesuboptimalsolutions,andtheneedforstabilitymonitoring.The majorissueisthattheobjectivefunctionoftheIIRlteringwithrespecttothelter coecientsisusuallymultimodal.Thetraditionalgradientsearchmethodmayconverge toalocalminimumdependingonitsinitialconditions.Theotherunresolvedproblems ofadaptiveIIRlteringarediscussedbyJohnson[11]andRegalia[15]. Severalmethodshavebeenproposedfortheglobaloptimizationoftheadaptive IIRltering[75,45,76].Srinivasanetal.[56]usedstochasticapproximationwith convolutionsmoothing(SAS)intheglobaloptimizationalgorithm[3,76,77]for adaptiveIIRltering.Theyshowedthatthesmoothingbehaviorcanbeachievedby appendingavariableperturbingnoisesourcetotheerrorsignal.Here,wemodifythis perturbingnoisebymultiplyingitwithitscostfunction.Themodiedalgorithm, whichisreferredtoastheLMS-SASalgorithminthisdissertation,resultsinbetter performanceinglobaloptimizationthantheoriginalalgorithmbySrinivasanetal. Wehavealsoanalyzedtheglobaloptimizationalgorithmbehaviorbylookingattheir transitionprobabilitydensityofescapingoutfromasteadystatepoint. 20


21 Sinceweusetheinstantaneous(stochastic)gradientinsteadoftheexpected valueofthegradient,errorinestimatingthegradientnaturallyoccurs.Thisgradient estimationerror,whenproperlynormalized,canbeusedtoactastheperturbingnoise. Consequently,anotherapproachinglobalIIRlteroptimizationisthenormalizedLMS (NLMS)algorithm.ThebehavioroftheNLMSalgorithmwithdecreasingstepsizeis similartothatoftheLMS-SASalgorithmfromaglobaloptimizationperspective. 3.2ConvolutionFunctionSmoothing AccordingtoStyblinski[3],amulti-optimalfunction f ( ) 2 R 1 ; 2 R n canbe representedasasuperpositionofaconvexfunction(i.e.,havingjustoneminimum) andothermulti-optimalfunctionsthataddsome\noise"totheconvexfunction.The objectiveofconvolutionsmoothingcanbeviewedas\lteringout"thenoiseand performingminimizationonthe\smoothed"convexfunction(oronafamilyofthese function),inordertoreachtheglobaloptimum.Sincetheoptimumofthesmoothed convexfunctiondoesnot,ingeneral,coincidewiththeglobalfunctionminimum,a sequenceofoptimizationstepsarerequiredwiththeamountofsmoothingeventually reducedtozerointheneighborhoodoftheglobaloptimum.Thesmoothingprocess isperformedbyaveraging f ( )oversomeregionoftheparameterspace R n usingthe properweighting(orsmoothing)function ^ h ( )denedbelow.Formally,letusintroduce avectorofrandomperturbation 2 R n ,andadd to ,thuscreatingtheconvolution function. ^ f ( ; )= Z R n ^ h ( ; ) f ( ) d = Z R n ^ h ( ; ) f ( ) d (3-1) Hence, ^ f ( ; )= E [ f ( )](3-2) where ^ f ( ; )isthesmoothedapproximationtotheoriginalmulti-optimalfunction f ( ),andthekernelfunction ^ h ( ; )isthepdfusedtosample .Notethat ^ f ( ; )can beregardedasanaveragedversionof f ( )weightedby ^ h ( ; ). Theparameter controlsthedispersionof ^ h ,i.e.,thedegreeof f ( )smoothing (e.g., cancontrolthestandarddeviationof 1 n ). E [ f ( )]istheexpectation


where \eta is sampled with the pdf \hat{h}(\eta, \beta).

The kernel function \hat{h}(\eta, \beta) should have the following properties: it is a pdf in \eta whose dispersion is controlled by \beta, it approaches the Dirac delta \delta(\eta) as \beta \to 0, and it is piecewise differentiable with respect to \eta. Under these conditions, \lim_{\beta \to 0} \hat{f}(\theta, \beta) = \int_{R^n} \delta(\eta) f(\theta - \eta) d\eta = f(\theta - 0) = f(\theta). Numerous pdfs satisfy the above conditions, e.g., the Gaussian, uniform, or Cauchy pdfs. Let us consider the function f(x) = x^4 - 16x^2 + 5x, which is continuous and differentiable and has two separated minima. Figure 3-1 shows the smoothed function, which is the convolution between f(x) and a Gaussian pdf.
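As a simple illustration of this smoothing operation, the smoothed functional can be approximated by Monte Carlo averaging over the perturbation \eta. The following Python sketch does this for the example function above; the sample size, the Gaussian choice of kernel, and the evaluation grid are assumptions made only for this illustration.

    import numpy as np

    def smoothed_cost(f, x, beta, n_samples=2000, rng=None):
        # Monte Carlo estimate of f_hat(x, beta) = E[f(x - beta*eta)],
        # with eta drawn from a standard Gaussian (cf. Eq. (3-2)).
        rng = np.random.default_rng(0) if rng is None else rng
        eta = rng.standard_normal(n_samples)
        return np.mean(f(x[:, None] - beta * eta[None, :]), axis=1)

    f = lambda x: x**4 - 16 * x**2 + 5 * x         # bimodal example from the text
    xs = np.linspace(-4.0, 4.0, 401)
    for beta in (3.0, 2.0, 1.0, 0.0):              # decreasing amount of smoothing
        fs = smoothed_cost(f, xs, beta) if beta > 0 else f(xs)
        print(beta, xs[np.argmin(fs)])             # minimizer of the smoothed cost

As beta decreases, the smoothed cost returns to the original function, so its minimizer approaches the global minimum of f(x); this is the behavior exploited by the SAS approach.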


Figure 3-1: Smoothed function using a Gaussian pdf.


wherethereectedvalueissubstitutedbytheempiricalaverage.Likewise.theunbiaseddouble-sidedgradientestimateofthesmoothedfunctional^f(;)canberepresentedasr^f(;)=1 2NNXi=1[rf(+i)+rf(i)](3-5) InordertoimplementeitherEquation(3-4)or(3-5)wewouldusedtoevaluatethegradientatmanypointsintheneighborhoodoftheoperatingpoint,yieldingeectivelyano-lineiterativeglobaloptimizationalgorithm.Wewillcombinethe


ThekeytoimplementingapracticalalgorithmforadaptiveIIRltersistodevelopanon-linegradientestimater^"(),where"()istheerrorbetweenthederivedsignalandtheoutputoftheadaptiveIIRlter.HereweusetheSASderivedsingle-sidedgradientestimatetogetherwiththeLMSalgorithm,wherethegradientestimateisr^"(;)=1 AmajorcharacteristicoftheLMSalgorithmisitssimplicity.WeholdtothisattributebysettingN=1inEquation(3-6)andsubstitutetheneighborhoodaveragingbythesequentialpresentationofdataasdoneintheLMSalgorithm.Hence,weobtaintheone-samplegradientestimateasr^"(;)=r"()(3-7) Thisequationisiteratedforeachinputsample.Theoretically,Equation(3-7)showsthattheon-lineversionoftheSASisgivenbythegradientvalueattherandomly-selectedneighborhoodofthepresentoperatingpoint.Thevarianceoftheneighborhoodiscontrolledby,whichdecreasesalongwiththeadaptationprocedure.ImplementingEquation(3-7)requirestwolters;oneforcomputingtheinput-outputrelationshipandtheotherforcomputingthegradientestimateattheperturbingpoint().Forlarge-ordersystems,thisrequirementisimpractical.Weinvestigatethefollowingsimplication,whichinvolvestherepresentationofthegradientestimateat()asaTaylorseriesaroundtheoperatingpoint.Thatisr"()=["0()+"00()+()2 Underthisequation,wecanusethesameltertocomputeboththeinput-outputrelationshipandthegradientestimate.Asarstapproximation,weonlykeepthersttwotermsandassumeadiagonalHessian.Thisresultsinthefollowinggradient


26 estimate r ( ) 0 ( ) (3-9) Thisextremeapproximationassumesthatthesecondderivativeofthegradientvector isindependentof sothatitsvarianceisconstantthroughouttheadaptationprocess. Thesecondterm oftherighthandsideoftheaboveequationcanbeinterpreted asaperturbingnoise,whichistheimportanttermtoavoidconvergencetothelocal minimum. RecallthattheGLMSalgorithmis ( n +1)= ( n ) ( n ) ( n ) r ( n; ) ( n ) (3-10) wheretheappendingperturbationnoisesourceis ( n ) 3.4LMS-SASAlgorithm SrinivasanusedEquation(3-9)toestimatethegradientintheGlobalLMS(GLMS) algorithmofEquation(3-10)[56].SimilartotheGLMSalgorithm,wederivenowthe novelLMS-SASalgorithm.TheadaptiveIIRlteringbasedonthegradientsearch essentiallyminimizesthemean-squaredierencebetweenadesiredsequence d ( n ) andtheoutputoftheadaptivelter y ( n ).ThedevelopmentofGLMSandLMS-SAS algorithmsinvolveevaluatingtheMSEobjectivefunction.TheMSEobjectivefunction canbedescribedas ( )= 1 2 E f 2 ( ) g = 1 2 E f [ d ( n ) y ( n )] 2 g (3-11) where E isthestatisticalexpectation.TheoutputsignaloftheadaptiveIIRlters, representedadirect-formrealizationofalinearsystem,is y ( n )= a 0 x ( n )+ + a n N +1 x ( n N +1) + b 1 y ( n 1)+ + b n M +1 y ( n M +1)(3-12) Whichcanberewrittenas y ( n )= T ( n )( n )(3-13)


where \Theta(n) is the parameter vector and \Phi(n) is the input vector,

\Theta(n) = [a_0(n), \ldots, a_{N-1}(n), b_1(n), \ldots, b_{M-1}(n)]^T    (3-14)

\Phi(n) = [x(n), \ldots, x(n-N+1), y(n-1), \ldots, y(n-M+1)]^T    (3-15)

The MSE objective function is

\xi(n, \Theta) = \frac{1}{2} E\{[d(n) - \Theta^T(n)\Phi(n)]^2\}    (3-16)

We now use the instantaneous value in place of the expectation, E\{\varepsilon^2(n)\} \approx \varepsilon^2(n), such that

\xi(n, \Theta) = \frac{1}{2}\varepsilon^2(n, \Theta) = \frac{1}{2}[d(n) - \Theta^T(n)\Phi(n)]^2    (3-17)

As in the LMS algorithm, we must estimate the gradient vector with respect to the parameters,

\nabla\xi(n, \Theta) = \nabla\frac{1}{2}[\varepsilon^2(n, \Theta)] = \varepsilon(n, \Theta)\nabla[\varepsilon(n, \Theta)] = -\varepsilon(n, \Theta)\nabla y(n) = \varepsilon(n, \Theta)\left[\frac{\partial\varepsilon(n,\Theta)}{\partial a_0}, \ldots, \frac{\partial\varepsilon(n,\Theta)}{\partial b_{M-1}}\right]^T    (3-18)

The partial derivative term \partial\varepsilon(n,\Theta)/\partial a_i is evaluated as

\frac{\partial\varepsilon(n,\Theta)}{\partial a_i} = -\left[x(n-i) + \sum_{j=1}^{M-1} b_j \frac{\partial y(n-j)}{\partial a_i}\right]    (3-19)

Similarly, the partial derivative term \partial\varepsilon(n,\Theta)/\partial b_i is evaluated as

\frac{\partial\varepsilon(n,\Theta)}{\partial b_i} = -\left[y(n-i) + \sum_{j=1}^{M-1} b_j \frac{\partial y(n-j)}{\partial b_i}\right]    (3-20)

From Equation (3-9), we obtain

\nabla\varepsilon_\beta(n, \Theta) \approx \nabla\varepsilon(n, \Theta) + \eta(n)    (3-21)

Using the above equation, we obtain the steepest-descent adaptive algorithm

\Theta(n+1) = \Theta(n) - \mu(n)\varepsilon(n)\nabla\varepsilon_\beta(n)    (3-22)

         = \Theta(n) - \mu(n)\varepsilon(n)\nabla\varepsilon(n, \Theta) - \mu(n)\varepsilon(n)\eta(n)    (3-23)

where the third term, -\mu(n)\varepsilon(n)\eta(n), on the right-hand side is the appended perturbation noise source; \eta(n) represents a single additive random source, \mu(n) is the step size, which decreases over the iterations, and \varepsilon(n) is the error between the desired output signal and the output signal of the adaptive IIR filter.
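A single LMS-SAS update of Equation (3-23) can be sketched as follows in Python. The Gaussian draw for the perturbation \eta(n) and the way its spread is annealed through beta_n are assumptions of this sketch, not a prescription from the derivation above.

    import numpy as np

    def lms_sas_step(theta, err, grad_err, mu_n, beta_n, rng):
        # One LMS-SAS update, Eq. (3-23):
        #   theta(n+1) = theta(n) - mu(n)*eps(n)*grad_eps(n) - mu(n)*eps(n)*eta(n)
        # grad_err is the gradient of the error w.r.t. the coefficients
        # (Eqs. (3-19) and (3-20)); eta(n) is the perturbing random source,
        # drawn here as zero-mean Gaussian with annealed spread beta_n.
        eta = beta_n * rng.standard_normal(theta.shape)
        return theta - mu_n * err * grad_err - mu_n * err * eta

Both mu_n and beta_n would be decreased over the iterations so that the perturbation vanishes as the algorithm settles near the global minimum.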


The difference between LMS-SAS and GLMS resides in the form of the appended perturbation noise source: we have modified the appended noise source by multiplying it with the error. This modification brings the error into the noise term, which is in principle a better approximation to the Taylor series expansion in Equation (3-8) than Equation (3-9). We can therefore foresee better results.

3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS

In this section, we obtain the transition probability of escaping out of a local minimum by solving a pair of partial differential equations, called the Fokker-Planck (diffusion) equations. We follow the lines of Wong [78]. Here we can write the LMS-SAS algorithm as an Ito integral,

\Theta_t = \Theta_a + \int_a^t m(\Theta_s, s)\, ds + \int_a^t \sigma(\Theta_s, s)\, dW_s    (3-24)

where

m(\Theta_t, t) = -\mu(t)\varepsilon(\Theta_t, t)\nabla\varepsilon(\Theta_t, t), \qquad \sigma(\Theta_t, t) = \mu(t)\varepsilon(\Theta_t, t)    (3-25)

Let \{\Theta_t, a \le t \le b\} be a Markov process and denote

P(\Theta, t \mid \Theta_0, t_0) = p(\Theta_t < \Theta \mid \Theta_{t_0} = \Theta_0)    (3-26)

We call P(\Theta, t \mid \Theta_0, t_0) the transition function of the process.

We first discuss the simple case of the scalar \Theta assumption and then the more involved case of the vector \Theta assumption.

\Theta is a scalar. If there is a function p(\Theta, t \mid \Theta_0, t_0) such that

P(\Theta, t \mid \Theta_0, t_0) = \int_{-\infty}^{\Theta} p(x, t \mid \Theta_0, t_0)\, dx    (3-27)


29 thenwecall p ( ;t j 0 ;t 0 )thetransitiondensityfunction.Since f t ;a t b g isa Markovprocess, P ( ;t j 0 ;t 0 )satisestheChapman-Kolmogorovequations. P ( ;t j 0 ;t 0 )= Z 1 P ( x;t j z;s ) dP ( z;s j 0 ;t 0 )(3-28) Wenowassumethecrucialconditionon f t ;a t b g ,whichmakesthederivationof thediusionequationpossible.Deneforapositive M k ( ;t ; ; )= Z j y j ( y ) k dP ( y;t + j ;t ) k =0 ; 1 ; 2(3-29) M 3 ( ;t ; ; )= Z j y j ( y ) 3 dP ( y;t + j ;t )(3-30) WeassumethattheMarkovprocess f t ;a t b g satisesthefollowingconditions: 1 [1 M 0 ( ;t ; ; )] # 0 0(3-31) 1 M 1 ( ;t ; ; ) # 0 m ( ;t )(3-32) 1 M 2 ( ;t ; ; ) # 0 2 ( ;t )(3-33) 1 M 3 ( ;t ; ; ) # 0 0(3-34) Itisclearthatif1 M 0 ( ;t ; ; ) # 0 0,thenbydominatedconvergence, p ( j t + t j > )= Z 1 [1 M 0 ( ;t ; ; )] dP ( ;t ) # 0 0(3-35) Inaddition,supposethatthetransitionfunction P ( ;t j 0 ;t 0 )satisesthefollowing condition: Assumption. Foreach( ;t ) ;P ( ;t j 0 ;t 0 )isoncedierentiablein t 0 and three-timesdierentiableat 0 ,andthederivativesarecontinuousandboundedat ( 0 ;t 0 ). Kolmogorov[79]hasderivedtheFokker-Planckequation @ @t p ( ;t j 0 ;t 0 )= 1 2 @ 2 @ 2 [ ( ;t ) p ( ;t j 0 ;t 0 )] @ @ [ m ( ;t ) p ( ;t j 0 ;t 0 )] b>t>t 0 >a (3-36)


TheinitialconditiontobeimposedisZ1f()p(;tj0;t0)d#0!f(0)8f2S(3-37) thatisp(;tj0;t0)=(0).SubstitutingEquation(3-24)intotheFokker-Planckequations,weget@ @tp(;t)=1 2@2 @[(t)r(t;t)p(;t)](3-38) Ifp(;t)isaproductp(;t)=g(t)W()'()reectingtheindependenceamongthequantities,thenwehaveW()'()dg(t) df1 2d d["()W()'()]r()W()'()g)(3-39) LetW()beanypositivesolutionoftheequation1 2d d["()W()]=r()W()(3-40) thenW()'()dg(t) 2(d d["()W()d'() Therefore1 2(d d["()W()d'() Thetwosides,beingfunctionsofdierentvariables,mustbeconstantinorderfortheequalitytohold.Setthisconstantas,then1 Where'()satisestheSturm-Liouvilleequations.1 2d d["()W()d'()


Underrathergeneralconditions,itcanbeshownthateverysolutionp(;t)canberepresentedasalinearcombinationofproducts.Sincep(;tj0;t0)isafunctionoft;t0;;0,itmusthavetheformofp(;tj0;t0)=W()ZeRtt0(s)ds'()'(0)d(3-47) where'(0)isconjugatecomplexof'(0).Herewewanttoknowthetransitionprobabilityoftheprocessescapingfromthesteady-statesolution,inwhichr()=0.FromEquation(3-40),weobtain"()W()=c(3-48) wherecisaconstant.TheSturm-Liouvilleequationbecomes1 2d d2'()+ "()'()=0(3-49) Let "()=1 22then'()=ejaretheboundedsolutions.AndweknowthatZ1ej1 22 22"()T(3-50) WhereT=Rtt0(s)ds,bytheinversionformulaoftheFourierintegral,weobtain1 22 2Z1e1 22"()Tejd(3-51) FromEquation(3-47),wegetthetransitionprobabilitiesoftheprocessescapingoutofthevalleyasp(;tj;t0)=1 2()2 whereG(;2)isaGaussianfunctionwithzeromeanandvariance2.


@tp(;t)=1 2r2[(t)"()p(;t)]r[(t)r(t;t)p(;t)](3-53) Similarly,wewanttoknowthetransitionprobabilityofescapingfromthesteady-statesolution,inwhichr()=0.Equation(3-53)willbecome@ @tp(;t)=1 2r2[(t)"()p(;t)](3-54) Imposingstrictconstraintthatp(;t)isaproductp(;t)=g(t)'()=g(t)'1(1)'2(2)'N+M1(N+M1)(3-55) thenwehave1 2'()r2'()(3-56) Thetwosides,beingfunctionofdierentvariables,mustbeconstant,setthisconstantas,then1 2'()r2'()=(3-58)


33 Similarly,Equation(3-58)canbepresentedas ( ) 2 1 ( 1 ) r 2 1 1 ( 1 )= 1 ( ) 2 2 ( 2 ) r 2 2 2 ( 2 )= 2 . ( ) 2 N + M 1 ( N + M 1 ) r 2 N + M 1 N + M 1 ( N + M 1 )= N + M 1 (3-59) where P N + M 1 i =1 i = Let i ( ) = 1 2 2 i then i ( i )= e j i i for i =1 ; 2 ; ;N + M 1arethebounded solutions.FromEquation(3-47),wegetthetransitionprobabilitiesoftheprocess escapingoutofthevalleyas p ( ;t j ;t 0 )= N + M 1 Y i =1 Z 1 e 1 2 2 i ( ) R t t 0 ( s ) ds e j i i d i N + M 1 Y i =1 G ( ;" ( ) Z t t 0 ( s ) ds )(3-60) Undertheconstraintoffactorizationof ( n ),thesameargumentsforthescalarcase willholdforthevectorcase.However i ( i )for i =1 ; 2 ; ;N + M 1arenot, ingeneral,independentofeachother, ( n )mustalsoincludethecorrelatedterms besidetheindependenttermofproduct.Thereforetheactualtransitionprobability p ( ;t j ;t 0 )islargerthanEquation(3-60).Inthemorerealisticcaseofdependence, theFokker-Planckwillbecomeverycomplicated.Thusitisnoteasytondoutthe transitionfunctionfromasteadystatepoint. 3.6NormalizedLMSAlgorithm Becauseinpracticeweusetheinstantaneousgradientinsteadofthetheoretical gradient,anestimationerrornaturallyoccurs.Thegradienterrorcanbeusedtoactas theappendingperturbingnoise.AfterreviewingtheNormalizedLMSalgorithm[2],we showthattheglobaloptimizationbehavioroftheNLMSalgorithmissimilartothatof theLMS-SASalgorithmbecauseofthenoisyestimategradient.Asaresult,theNLMS algorithmcanalsobeusedforglobaloptimization.


ConsidertheproblemofminimizingthesquaredEuclideannormof(n+1)=(n+1)(n);(3-61) subjecttotheconstraintT(n+1)ry(n)=d(n)(3-62) Tosolvethisconstrainedoptimizationproblem,weusethemethodofLagrangemultipliers.Thesquarenormof(n+1)isjj(n+1)jj2=T(n+1)(n+1)=[(n+1)(n)]T[(n+1)(n)]=NXk=0jk(n+1)k(n)j2(3-63) TheconstraintofEquation(3-62)canberepresentedasNXk=0k(n+1)ryk(n)=d(n)(3-64) ThecostfunctionJ(n)fortheoptimizationproblemisformulatedbycombiningEquation(3-63)and(3-64)asJ(n)=NXk=0jk(n+1)k(n)j2+[d(n)NXk=0k(n+1)ryk(n)](3-65) whereisaLagrangemultiplier.AfterwedierentiatethecostfunctionJ(n)withrespecttotheparametersandthensettheresultstozero,weobtain2[(n+1)(n)]=ryk(n);k=0;1;;N(3-66) Bymultiplyingbothsidesoftheaboveequationbyryk(n)andsummingoverfromk=0toN,weobtain=2


Substituting the constraint of Equation (3-62) back into Equation (3-67), we obtain

\lambda = \frac{2}{\|\nabla y(n)\|^2}\,[d(n) - \Theta^T(n)\nabla y(n)]

Define the error \varepsilon(n) = d(n) - \Theta^T(n)\nabla y(n). We can then simplify further to

\lambda = \frac{2\varepsilon(n)}{\|\nabla y(n)\|^2}

By substituting the above equation into Equation (3-66), we obtain

\Theta_k(n+1) = \Theta_k(n) + \frac{1}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y_k(n), \qquad k = 0, 1, \ldots, N

For adaptive IIR filtering, introducing a step size \mu(n), the above equation can be written equivalently as

\Theta(n+1) = \Theta(n) + \frac{\mu(n)}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y(n)

This is the so-called NLMS algorithm, summarized in Table 3-1, where the initial conditions are randomly chosen.
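The resulting update can be stated compactly; the following Python sketch performs one NLMS iteration. The function name and the small regularizer that guards against a vanishing gradient norm are assumptions added here for numerical safety and are not part of the derivation.

    import numpy as np

    def nlms_step(theta, grad_y, d_n, mu_n, reg=1e-8):
        # One NLMS update:
        #   theta(n+1) = theta(n) + mu(n)/||grad_y(n)||^2 * eps(n) * grad_y(n)
        # where eps(n) = d(n) - theta(n)^T grad_y(n), as defined above.
        err = d_n - theta @ grad_y
        norm2 = grad_y @ grad_y + reg
        return theta + (mu_n / norm2) * err * grad_y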


36 3.7RelationshipbetweenLMS-SASandNLMSAlgorithms Inthissection,weshowthatthebehavioroftheNLMSalgorithmissimilartothat oftheLMS-SASalgorithmfromaglobaloptimizationperspective.Herewefollowthe linesofWidrowetal.[1]andassumethatthealgorithmwillconvergetothevicinityof asteady-statepoint. FromEquation(3-18),weknowthattheestimatedgradientvectoris: ~ r ( ( n ))= ( n ) r y ( n )(3-73) DeneN(n)asavectorofthegradientestimationnoiseinthe n th iterationand r ( ( n ))asthetruegradientvector.Thus ~ r ( ( n ))= r ( ( n ))+N( n ) N( n )= ~ r ( ( n )) r ( ( n ))(3-74) IfweassumethattheNLMSalgorithmhasconvergedtothevicinityofalocal steady-statepoint ,then r ( ( n ))willbeclosetozero.Thereforethegradient estimationnoisewillbe N( n )= ~ r ( ( n ))= ( n ) r y ( n )(3-75) Thecovarianceofthenoiseisgivenby cov[N( n )]= E [N( n )N T ( n )]= E [ 2 ( n ) r y ( n ) r y T ( n )](3-76) Weassumethat 2 ( n )isapproximatelyuncorrelatedwith r y ( n )(thesameassumption as[1]),thusnearthelocalminimum cov[N( n )]= E [ 2 ( n )] E [ r y ( n ) r y T ( n )](3-77) WerewritetheNLMSalgorithmas ( n +1)= ( n )+ ( n ) jjr y ( n ) jj 2 ~ r ( ( n ))(3-78)


SubstitutingEquation(3-74)intotheaboveequation,weobtain(n+1)=(n)+(n) wherethelasttermistheappendingperturbingnoise.Itscovariance,fromEquation(3-77),iscov[N(n) whereisanunitnormmatrix.ThustheNLMSalgorithmnearanylocalorglobalminimahasthevarianceoftheperturbingrandomnoisedeterminedsolelybyboth(n)and"(n).ThisbehaviorisverydierentfromtheconventionalLMSalgorithmwithmonotonicdecreasingstepsizewheretheperturbationnoiseisdeterminedby(n),"(n)andry(n).Therefore,intheLMSalgorithmthevariancenearthesteadystatepointissmallbecauseofry(n)0.HencetheLMSalgorithmhassmallprobabilityofescapingoutofanylocalminimabecauseofthesmallvarianceofthenoisygradient. Ontheotherhand,noticethatthevarianceoftheperturbingrandomnoiseintheLMS-SASalgorithmis(n)"(n)whichisalsoindependentofthegradientandcontrolledbyboth(n)and"(n).Therefore,wecananticipatethattheglobaloptimizationbehavioroftheNLMSalgorithmnearlocalminimaissimilartothatoftheLMS-SASalgorithm.Farawayfromlocalminima,thebehaviorofLMS-SASandNLMSisexpectedtoberatherdierentfromeachother.3.8SimulationResults


Numberofhits GlobalminimumLocalminimumMethodf0:906;0:311gf0:519;0:114g willidentifythefollowingunknownsystemH(z)=0:050:4z1 byareducedorderIIRadaptivelteroftheformH(z)=b Themaingoalistodeterminethevaluesofthecoecientsfa;bgoftheaboveequation,suchthattheMSEisminimizedtotheglobalminimum.TheexcitationsignalischosentoberandomGaussiannoisewithzeromeanandunitvariance.ThereexisttwominimaoftheMSEcriterionperformancesurfacewiththelocalminimumatfa;bg=f0:519;0:114gandtheglobalminimumatfa;bg=f0:906;0:311g.Hereweusethreetypesofannealingscheduleforthestepsize(seeFigure3-2whichshowsthatoneislinear,oneissublinearandtheotheroneissupralinear),8>>>><>>>>:1(n)=0:1cos(n=2nmax)2(n)=0:10:1n=nmaxn

39 Figure3-2:Stepsize ( n )forSASalgorithm. initialconditionsof ateachrun.Theconvergencecharacteristicsof towardthe globalminimumfortheGLMS,LMS-SAS,andNLMSalgorithmareshowninFigure 3-3,3-4,and3-5,respectively.Theadaptationprocesswith approachingtowardthe localminimumfortheLMS,GLMS,andLMS-SAS,algorithmarealsodepictedin Figure3-6,3-7,and3-8,respectively,where isinitializedtothepointnearthelocal minimum.Basedonthesimulationresults,wecansummarizeperformanceasfollows: Figure3-6androw1,2inTable3-2showthattheLMSalgorithmislikelyto convergetothelocalminimum. Figure3-3,3-7androw3inTable3-2showthattheGLMSalgorithmmightjumpto theglobalminimumvalleyandconvergetotheglobalminimum,butitalsocanjump backtothelocalminimumvalleyandthenconvergetothelocalminimum.Srinivasan [56]claimsthattheGLMSalgorithmcouldconvergetotheglobalminimumw.p.1 bycarefullychoosingthecoolingschedule ( n ).Thecoolingscheduleisacrucial parameter,butitisdiculttobedeterminedsuchthatglobaloptimizationwillbe guarantee. Figure3-4,3-8androw4,5inTable3-2showthattheLMS-SASalgorithmarelikely toconvergetotheglobalminimumwithproperstepsize.EventhoughtheLMS-SAS


Figure 3-3: Global convergence of the weights in the GLMS algorithm. A) Weights; B) Contour of the weights.
Figure 3-4: Global convergence of the weights in the LMS-SAS algorithm. A) Weights; B) Contour of the weights.
algorithm stays most of its time near the global minimum, it still has a probability of converging to the local minimum.


Figure 3-5: Global convergence of the weights in the NLMS algorithm. A) Weights; B) Contour of the weights.
Figure 3-6: Local convergence of the weights in the LMS algorithm. A) Weights; B) Contour of the weights.
Figure 3-7: Local convergence of the weights in the GLMS algorithm. A) Weights; B) Contour of the weights.


Figure3-8:LocalconvergenceofintheLMS-SASalgorithm.A)Weight;B)Contourof. Ontheotherhand,theNLMSalgorithmis(n+1)=(n)(n) TheLMS-SASalgorithmaddsaperturbingnoisetoavoidconvergingtothelocalminima,whiletheNLMSalgorithmusestheinherentestimategradientnoisetoavoidconvergingtothelocalminima.Twodierenttypesofstepsize(n)and(n)=jry(n)j2areusedbyLMS-SASandNLMS,respectively.Therefore,weneedtofairlycomparetheperformanceofbothalgorithmsintermsofglobaloptimization,sowesetupthethreefollowingexperiments. Hereweusethesamesystemidenticationscheme,i.e.,weidentifythreeunknownsystemsExampleI:HI(z)=0:050:4z1 byareducedorderadaptivelteroftheformH(z)=b


Figure3-9:ContourofMSE.A)ExampleI;B)ExampleII;C)ExampleIII. Figure3-10:Weight(top)andkry(n)k(bottom)inA)ExampleI;B)ExampleII;C)ExampleIII. Themaingoalistodeterminethevaluesofthecoecientsfa;bgoftheaboveequation,suchthattheMSEisminimized(toglobalminimum).TheexcitationinputischosentoberandomGaussiannoisewithzeromeanandunitvariance.Figure3-9depictsthecontouroftheMSEcriterionperformancesurfaceinexampleI,IIandIII.Here,thestepsizefortheNLMSalgorithmischosentobealineardecreasingfunctionofNLMS(n)=0:1(12:5105n).StepsizesfortheLMS-SASalgorithmareafamilyoflineardecreasingfunctionsofLMSSAS=k(12:5105n)k=[0:01;0:02;0:03;0:04;0:05;0:06;0:07;0:08;0:09;0:1;0:2;0:3;0:4;0:5](3-91) wherewevarythestepsizek,butpreservethesameannealingrate.


Numberofhits GlobalminimumLocalminimumMethodf0:906;0:311gf0:519;0:114g Table3-3,3-4,3-5showsthesimulationresultsofglobalandlocalminimumhitsbyLMS,LMS-SAS,andNLMSalgorithms.Thevalueofkry(n)kisdepictedinFigure3-10.Thelargerkry(n)k,thesmallerincrementsareusedbythealgorithm,i.e.thelessprobabilityofthealgorithmescapingoutfromthesteadystatepoint.IncasesofexampleIandII,theglobalminimumvalleyhassharperslopethanthelocalvalley.Therefore,Table3-2and3-4showthatNLMSalgorithmhashigherprobabilityinobtainingtheglobalminimumthantheotheralgorithmsincasesofexampleIandII.InexampleIII,thelocalminimumvalleyhassharperslopethantheglobalvalley.Therefore,Table3-5showsthattheNLMSalgorithmhaslessprobabilityinobtainingtheglobalminimumthantheotheralgorithmsinexampleIIIcase.3.10Conclusion


Table 3-4: Example II for system identification

                                           Number of hits
  Method                                   Global minimum   Local minimum
  LMS with constant mu                           20               80
  NLMS with mu_NLMS(n)                           89               11
  LMS-SAS with mu_LMS-SAS(n) and k = 0.01        20               80
  LMS-SAS with mu_LMS-SAS(n) and k = 0.02         9               91
  LMS-SAS with mu_LMS-SAS(n) and k = 0.04         2               98
  LMS-SAS with mu_LMS-SAS(n) and k = 0.06         1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.08         1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.09         1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.1          1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.2          2               98
  LMS-SAS with mu_LMS-SAS(n) and k = 0.3          1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.4          1               99
  LMS-SAS with mu_LMS-SAS(n) and k = 0.5          2               98

Table 3-5: Example III for system identification

                                           Number of hits
  Method                                   Global minimum   Local minimum
  LMS with constant mu                           92                8
  NLMS with mu_NLMS(n)                           90               10
  LMS-SAS with mu_LMS-SAS(n), for every
    k = 0.01, 0.02, 0.04, 0.06, 0.08,
    0.09, 0.1, 0.2, 0.3, 0.4, 0.5               100                0


Fromthediusionequation,wehavederivedthetransitionprobabilityoftheLMS-SASalgorithmescapingfromasteadystatepoint.Sincetheglobalminimumisalwayssmallerthanthelocalminimum,thetransitionprobabilityofthealgorithmescapingoutfromthelocalminimumisalwayslargerthantheonefromtheglobalminimum.Hence,thealgorithmwillstaymostofthetimeneartheglobalminimumandeventuallyconvergetotheglobalminimum. Sinceweusetheinstantaneous(stochastic)gradientinsteadoftheexpectedvalueofthegradient,anestimationerrornaturallyoccurs.Thisgradientestimationerror,whenproperlynormalized,canbeusedtoactastheperturbingnoise.WehaveshownthatthebehavioroftheNLMSalgorithmwithdecreasingstepsizenearaminimaissimilartothatoftheLMS-SASalgorithmfromaglobaloptimizationperspective. TheglobaloptimizationperformanceofLMS-SASandNLMSalgorithmtotallydependontheshapeofthecostfunction.Thesharperthelocalminima,thelesslikelytheNLMSalgorithmisofescapingoutfromthissteadystatepoint.Ontheotherhand,thebroaderthevalleyaroundlocalminima,themoredicultitisforthealgorithmtoescapeoutfromthisvalley.


CHAPTER4 INFORMATIONTHEORETICLEARNING 4.1Introduction Themeansquareerrorcriterionhasbeenextensivelyusedintheeldofadaptive systems[80].Thatisbecauseofitsitsanalyticalsimplicityandtheassumption ofGaussiandistributionfortheerror.SincetheGaussiandistributionistotally characterizedbyitsrstandsecondorderstatistics,theMSEcriterioncanextract allinformationfromasetofdata.However,theassumptionofGausssiandistribution isnotalwaystrue.Therefore,acriterionwhichconsidershigher-orderstatisticsis necessaryforthetrainingofadaptivesystems.Shannon[81]rstintroducedaentropy ofagivenprobabilitydistributionfunctionwhichprovidesameasureoftheaverage informationinthedistribution.ByusingtheParzenwindowestimator[82],wecan estimatethepdfdirectlyfromasetofdata.Itisquitestraightforwardtoapplythe entropycriteriontothesystemidenticationframework[6,5].Thepdfoftheerror signalbetweenthedesiredsignalandtheoutputsignalofadaptiveltersmustbeas closeaspossibletoadeltadistribution, ( ).Hence,thesupervisedtrainingproblem becomesanentropyminimizationproblem,assuggestedbyErdogmusetal.[6]. ThekernelsizeoftheParzenwindowestimatorisanimportantparameterinthe globaloptimizationprocedure.ItwasconjecturedbyErdogmusetal.[6]thatfora sucientlylargekernelsize,thelocalminimaoftheerrorentropycriterioncanbe eliminated.Itwassuggestedthatstartingwithalargekernelsize,andthenslowly decreasingthisparametertoapredeterminedsuitablevalue,thetrainingalgorithm canconvergetotheglobalminimumofthecostfunction.Theerrorentropycriterion consideredbyErdogmusetal.[6],however,doesnotconsiderthemeanoftheerror signal,sinceentropyisinvarianttotranslation.Inthisdissertation,weproposea modicationtotheerrorentropycriterion,inordertotakethispointintoaccount. 47


48 Theproposedcriterionwithannealingofthekernelsizeisthenshowntoexhibitthe conjecturedglobaloptimizationbehaviorinthetrainingofIIRlters. 4.2EntropyandMutualInformation Shannon[81]denedtheentropyofaprobabilitydistribution P = f p 1 ;p 1 ; ;p N g as H s ( P )= N X k =1 p k log( 1 p k ) N X k =1 p k =1 ;p k 0(4-1) whichmeasurestheaverageamountofinformationcontainedinarandomvariable X withprobabilities p k = P ( x = x k ) ;k =1 ; 2 ; ;N atthevaluesof x 1 ;x 2 ; ;x N Amessagecontainsnoinformation,ifitiscompletelyknown.Thelargerinformation itcontains,thelesspredictableitis.Informationtheoryhasbroadapplicationinthe eldofcommunicationsystems[83].Butentropycanbedenedinamoregeneral form.AccordingtoRenyi[58],themeanoftherealnumber x 1 ;x 2 ; ;x N withpositive weighting p 1 ;p 2 ; ;p N hastheformas x = 1 ( N X k =1 p k ( x k ))(4-2) where ( x )isaKolmovov-Nagumofunction,whichisanarbitrarycontinuousand strictlymonotonicfunction. Anentropymeasure H generallyobeysthefollowingformula: H = 1 ( N X k =1 p k ( I ( p k )))(4-3) where I ( p k )= log( p k )istheHartley'sinformationmeasure[84]. Inordertosatisfytheadditivitycondition,the ( )canbeeither ( x )= x or ( x )=2 (1 ) x .When ( x )= x theentropymeasurebecomeasShannon'sentropy. When ( x )=2 (1 ) x ,theentropymeasurebecomeRenyi'sentropyoforder ,whichis denotedas H R = 1 1 log( N X k =1 p k ) ;> 0and 6=1(4-4)


49 ThewellknownrelationshipbetweenShannon'sandRenyi'sentropyis H R H s H R 1 >> 0and > 1(4-5) lim 1 H R = H s (4-6) InordertofurtherrelateRenyi'sandShannon'sentropy,thedistanceof P = ( p 1 ;p 2 ; ;p N )totheoriginalof P =(0 ; 0 ; ; 0)isdenedas V = N X k =1 p k = k P k (4-7) where V iscalledthe -normoftheprobabilitydistribution[85]. TheRenyi'sentropyinthetermof V isas H R = 1 1 log( V )(4-8) TheRenyi'sentropyoforder meansadierent -norm.Shannon'sentropycanbe viewedasthelimitingcase 1oftheprobabilitydistributionnorm.Renyi'sentropy isessentiallyamonotonicfunctionofthedistanceoftheprobabilitytotheoriginal.The H R 2 = log P N k =1 p 2 k iscalledthequadraticentropy,becauseofthequadraticformon theprobability. Wecanfurtherextendtheentropydenitiontoacontinuousrandomvariable Y withpdf f y ( y )as[58]: H R = 1 1 log( Z 1 f y ( z ) dz )(4-9) H R 2 = log( Z 1 f y ( z ) 2 dz )(4-10) ItisimportanttomentionthatRenyi'squadraticentropyinvolvestheuseofthesquare ofthepdf. BecausetheShannonentropyisdenedasweightedsumofthelogarithmof thepdf,itisdiculttodirectlyusetheinformationtheoreticcriterion.Sincewe cannotdirectlyusethepdf(unlessitsformispriorknown),weusethenonparametric estimators.Hence,theParzenwindowmethod[82]isusedinthisdissertation.The


50 Parzenwindowestimatorisakernel-basedestimatorwith ^ f Y ( z;y )= 1 N N X i =1 ( z y i )(4-11) where y i 2 R M aretheobservedsignal. ( )isakernelfunction.TheParzenwindow estimatorcanbeviewedasaconvolutionofthekernelfunctionwiththeobservedsignal. ThekernelfunctioninthisdissertationischosenofGaussianfunctionas ( z )= G ( z; 2 )= 1 (2 2 ) M= 2 exp( z T z 2 2 )(4-12) Here,wewillfurtherdevelopanITLcriteriontoestimatethemutualinformationamongrandomvariables.Mutualinformationisabletoquantifytheentropy betweenpairsofrandomvariables.Hencemutualinformationisalsoveryimportantto engineeringproblems. MutualinformationisdenedinShannon'sentropytermas I ( x;y )= H ( y ) H ( y j x ),whichisnoteasilyestimatedfromsamples.Analternativeestimated mutualinformationbetweentwoprobabilitydensityfunction(pdf) f ( x )and g ( x )is Kullback-Leibler(KL)divergence[86],whichisdenedas K ( f;g )= Z f ( x )log f ( x ) g ( x ) dx (4-13) SimilarlyRenyi'sdivergencemeasurewithorder fortwopdf f ( x )and g ( x )is H R ( f;g )= 1 ( 1) log Z f ( x ) 2 g ( x ) 1 dx (4-14) TherelationbetweenKLdivergenceandRenyi'sdivergencemeasuresisas lim 1 H R ( f;g )= ( f;g )(4-15) TheKLdivergencemeasurebetweentworandomvariables Y 1 and Y 2 essentially estimatesthedivergencebetweenthejointpdfandthemarginalpdfs.Thatis I s ( Y 1 ;Y 2 )= KL ( f Y 1 Y 2 ( z 1 ;z 2 ) ;f Y 1 ( z 1 ) f Y 2 ( z 2 )) = ZZ f Y 1 Y 2 ( z 1 ;z 2 )log f Y 1 Y 2 ( z 1 ;z 2 ) f Y 1 ( z 1 ) f Y 2 ( z 2 ) dz 1 dz 2 (4-16)


wherefY1Y2(z1;z2)isthejointpdf,fY1(z1)andfY2(z2)aremarginalpdfs.Becausethosedivergencemeasuresmentionedabovearenon-quadraticinthepdfterm,theycannoteasilybeestimatedwiththeinformationpotential.Thefollowingdistancemeasuresbetweentwopdfs,whichcontainsonlyquadratictermsofpdf,aremorepractical. UsingtheCauchySchwartzinequality,thedistancemeasurebetweentwopdfsf(x)andg(x)isasICS(f;g)=log(Rf(x)2dx)(Rg(x)2dx) (Rf(x)g(x)dx)2(4-19) ItisobviousthatICS(f;g)0andtheequalityholdstrueifandonlyiff(x)=g(x). Similarly,usingtheEuclideandistance,thedistancemeasurebetweentwopdfsf(x)andg(x)isasIED(f;g)=Z(f(x)g(x))2dx=Zf(x)2dx+Zg(x)2dx2Zf(x)g(x)dx(4-20) ItisalsoobviousthatIED(f;g)0andtheequalityholdstrueifandonlyiff(x)=g(x)4.3AdaptiveIIRFilterwithEuclideanDistanceCriterion Theerrorsignale(n)isthedierencebetweendesiredsignald(n)andtheoutputsignaly(n)oftheadaptiveIIRlter,whichise(n)=d(n)y(n)(4-22)


It is obvious that the goal of the algorithm is to adjust the weights such that the error pdf f_e is as close as possible to the delta distribution \delta(\varepsilon). Hence, the Euclidean distance criterion for the adaptive IIR filters is defined as

I_{ED}(f_e) = \int_{-\infty}^{\infty} (f_e(\varepsilon) - \delta(\varepsilon))^2\, d\varepsilon = \int_{-\infty}^{\infty} f_e(\varepsilon)^2\, d\varepsilon - 2 f_e(0) + c    (4-23)

where c stands for the portion of this Euclidean distance measure that does not depend on the weights of the adaptive system. Notice that the integral of the square of the error pdf appears exactly as in the definition of Renyi's quadratic entropy. Therefore, it can be estimated directly from N samples by a Parzen window estimator with a Gaussian kernel of variance \sigma^2, exactly as described in [6, 5],

\hat{f}_e(\varepsilon) = \frac{1}{N}\sum_{i=1}^{N} \kappa(\varepsilon - e_i; \sigma^2)    (4-24)

If N \to \infty, then \hat{f}_e(\varepsilon) = f_e(\varepsilon) * \kappa(\varepsilon; \sigma^2), where * denotes the convolution operator. Thus, using a Parzen window estimator for the error pdf is equivalent to adding an independent random noise with pdf \kappa(\varepsilon; \sigma^2) to the error. The error with the additive noise becomes d - y + n = (d + n) - y. This is similar to injecting a random noise into the desired signal, as suggested by Wang et al. [87]. The advantage of our approach is that we do not explicitly generate noise samples; we simply take advantage of the estimation noise produced by the Parzen estimator, which, as demonstrated above, acts as an additive, independent noise source. The kernel size, which controls the variance of the hypothetical noise term, should be annealed during the adaptation, just like the variance of the noise injected by Wang et al. [87]. From the injected-noise point of view, the algorithm behaves similarly to the well-known stochastic annealing algorithm: the noise added to the desired signal backpropagates through the error gradient, resulting in perturbations of the weight updates proportional to the weight sensitivity. However, since our algorithm does not explicitly use a noise signal, its operation is more similar to convolution smoothing. For a sufficiently large kernel size, the local minima of the cost function can be eliminated.
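The weight-dependent part of this criterion can be computed directly from the error samples, since both terms of Equation (4-23) reduce to sums of Gaussian kernel evaluations once the Parzen estimate (4-24) with a Gaussian kernel is substituted. The following Python sketch assumes a one-dimensional error, drops the weight-independent constant c, and uses the Gaussian-convolution identity mentioned above; the function names are illustrative and not from the original formulation.

    import numpy as np

    def gauss(x, var):
        # One-dimensional Gaussian kernel G(x; var), cf. Eq. (4-12).
        return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    def euclidean_distance_cost(e, sigma2):
        # Sample estimate of I_ED = int f_e^2 d(eps) - 2*f_e(0) (+ const),
        # with f_e replaced by the Gaussian Parzen estimate of Eq. (4-24).
        e = np.asarray(e, dtype=float)
        n = len(e)
        pair = e[:, None] - e[None, :]
        info_potential = gauss(pair, 2.0 * sigma2).sum() / n**2   # int f_e^2
        f_at_zero = gauss(e, sigma2).mean()                       # f_e(0)
        return info_potential - 2.0 * f_at_zero

During adaptation, sigma2 would start large and be annealed toward a small value, which is the mechanism that smooths away local minima early in training.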


BysubstitutingtheParzenwindowestimatorfortheerrorpdfintheintegralofEquation(4-23),andrecognizingthattheconvolutionoftwoGaussianfunctionsisalsoaGaussian,weobtaintheITLcriterionas(afterdroppingallthetermsthatareindependentoftheweights):IED(fe)=1 Thegradientvector@IED(fez)=@tobeusedinthesteepestdescentalgorithmisobtainedas@IED(fe) 2N22NXi=1NXj=1[(eiej)(eiej;22)(@y(ni) wherethegradient@y=@isgivenby@y(n) and(n)=[y(i1);y(i2);;y(iN);x(i);x(i1);;x(iM)]T.4.4ParzenWindowEstimatorandConvolutionSmoothingFunction4.4.1Similarity


)ispiecewisedierentiablewithrespecttox. ThekernelfunctioninthisthesisischosenofGaussianfunctionas(x)=G(x;2)=1 (22)n=2exp(xTx Itisobviousthat(x)=1 ),lim!0(x)=(x),and(x)isaGaussianpdf.Hence(x)satisesthepropertiesofsmoothingfunction. Theobjectiveoftheconvolutionsmoothingfunctionistosmooththenonconvexcostfunction.Theparametercontrolsthedispersionofh(x),whichcontrolsthedegreeofcostfunctionsmoothing.Inthebeginningstageoftheoptimization,theissettobelargesuchthath(x)cansmoothoutallthelocalminimumofthecostfunction.Sincetheglobalminimumofthesmoothedcostfunctiondoesnotcoincidewiththeglobalminimumoftheactualoriginalcostfunction.Theisslowlydecreasedtozero.Asaresult,thesmoothedcostfunctioncangraduallyreturntotheoriginalcostfunctionandthealgorithmcanconvergetotheglobalminimumoftheactualcostfunction. Thereforethe(x)hasthesameroleofh(x)insmoothingthenonconvexcostfunction.Theparametercontrolsthedispersionof(x),whichcancontrolthedegreeofthecostfunctionsmoothing.Similarly,theparameterissettobelargeandthenslowlydecreasestozero.ThereforetheITLalgorithmwiththeproperparametercanconvergetotheglobalminimum.


whichistheexpectationwithrespecttotherandomvariable".Thestandarddeviationof"iscontrolledby.Hence,thesmoothedcostfunctioncanberegardedasanaverageversionofactualcostfunction. FortheITLalgorithm,wechangetheshapeofthepdfbyParzenwindowestimatorateachparticularpointof.Thuswechangethecostfunctionateachpointof.Theestimatedcostfunctionisas^V;=Zfe+(e;)de(4-30) whereisaGaussiannoisewithzeromeanandvariance.WeconcludethattheSASmethodaddsanadditionalnoisetotheweightinordertoforcethealgorithmtoconvergetotheglobalminimum,whiletheITLalgorithmaddsanadditivenoisetotheerrorinordertoforcethealgorithmtoconvergetotheglobalminimum.Theadditivenoiseaddedtoerroraectsthevarianceoftheweightupdatesproportionallytothesensitivityofeachweight,@e=@i.Thismeansthatasinglenoisesourceistranslatedinternallyontodierentnoisestrengthsforeachweight.4.5AnalysisofWeakConvergencetotheGlobalOptimumforITL


whereNistheadditivenoise.Herethegradientofthecostfunctionusedinthesteepestalgorithmis@J @"@^" @=@J @"(@" @+N())=@J @"@" @+N()@J @"(4-32) whereJisthecostfunction.FortheITLalgorithm,thecostfunctionisJ=IED(fe)=Z1(fe(")("))2d"(4-33) Therefore@J @"=(fe(")("))2(4-34) HerewewritetheITLalgorithmasIto'sintegralast=a+Ztam(s;s)ds+Zta(s;s)dWs(4-35) Where8><>:m(t;t)=(t)@IED WiththesimilarderivationofEquation(3-52)fortheLMS-SASalgorithm,weobtainthetransitionprobabilityoftheITLalgorithmescapingoutalocalminimumforthescalarcaseasp(;tj;t0)=1 2()2


IftheinputsignalissettohaveaGaussiandistribution,N(x;2x),thenthedesiredsignalwillalsobeGaussian,N(d;2d).TheoutputsignaloftheadaptivelterwillbeaGaussianaswell,N(y;2y).HerewewanttocalculatetheanalyticalexpressionoftheEuclideandistanceinthesimulationexampleofthesystemidenticationframeworkfortheunknownsystemofH(z)=b1+b2z1 byreducedorderadaptivelterofHa(z)=b Herethedesiredoutputsignalisrealizedasd(i)=b1x(i)+b2x(i1)+a1d(i1)+a2d(i2)(4-41) Thend=b1+b2 TakingvarianceonbothsidesofEquation(4-41),weobtainthatRd(0)=(b21+b22+2b1b2a1)Rx(0)+(a21+a22)Rd(0)+2a1a2Rd(1)(4-43) whereRd(t)andRx(t)arethevarianceofdesiredoutputsignalandinputsignal,respectively.RightshiftingoneunitofEquation(4-41),weobtainthatd(i+1)a1d(i)=a2d(i1)+b1x(i+1)+b2x(i)(4-44)


TakingvarianceofEquation(4-41)and(4-44),weobtainthatRd(1)a1Rd(0)=(b1b2+b1b2a2)Rx(0)+a1a2Rd(0)+a22Rd(1)(4-45) FromEquation(4-43)and(4-45),wecanobtainthatRd(0)=(b21+b22+2b1b2a1)(1a2)+2b1b2a1a2 Similarly,wecancalculatethe(y;2y)oftheoutputsignaloftheadaptivelterasy=b Takingvarianceofaboveequation,weobtainthatRy(0)=b2Rx(0)+a2Ry(0)(4-49) SothatRy(0)=b2 Wealsocancalculatethecovarianceofdesiredoutputsignalandtheoutputsignaloftheadaptivelterasfollowing.TakingcovarianceofEquation(4-41)and(4-49),weobtainthatRdy(0)=(b1b+b2ab)Rx(0)+a1aRdy(0)+a2aRdy(1)(4-51) TakingcovarianceofEquation(4-41)andy(i1)=bx(i1)+ay(i2),weobtainthatRdy(1)=(b2b+a1b1b)Rx(0)+a1aRdy(1)+a2aRdy(0)(4-52) FromEquation(4-51)and(4-52),weobtainthatRdy(0)=(b1+b2a)(1a1a)a2a(b2+b1a1) (1a1a)2+(a2a)2bRx(0)(4-53)


Finally,wecanobtainthate=dy(4-54)2e=R2d(0)+R2y(0)2Rdy(0)+2(4-55) where2eincreasesby2,whichiscorrespondingtotheGaussiankernelfunctionoftheParzenwindowestimator.TheEuclideandistanceiscalculatedasIED(fe)=1 Figure4-2showsthecontoursoftheanalyticalexpressionfortheITLcriterion(forcomparisonFigure4-3showsthecontoursoftheanalyticalexpressionfortheEntropyR1f2(")d"criterion).TheconvergencecharacteristicsoftheadaptationprocessfortheltercoecientstowardstheglobaloptimumisshowninFigure4-1.Inthebeginningoftheadaptationprocess,theestimatederrorvariance2eislargebecauseofthesignicantlylargevalueofthekernelsize,2,intheGaussiankernelfunctionoftheParzenwindowestimator.Therefore,thersttermoftherighthandsideofEquation(4-56)isconsiderablysmallerthanthesecondterm.Thuscanbeneglectedinthebeginningstageoftheadaptationprocess.Weobservethatthesecondtermconcentratesmoretightlyarounde=dy=0associatedwiththeincreasing2e,i.e.,theincreasing2.ThestraightlineinFigure4-1b.isthelineofe=dy=0.Itisclearfromthisgure,Figure4-1,thattheweight-trackoftheITLalgorithmconvergestowardsthelineofe=dy=0aswepredictedinthetheoreticalanalysisgivenabove.Whenthesize,2,oftheGaussiankernelfunctionslowlydecreasesduringadaptation,theITLcostfunctionwillgraduallyconvergebacktotheoriginalone,whichmightexhibitlocalminima.4.7SimulationResults


Figure 4-1: Convergence characteristics of the weights for Example I by ITL. A) Weights; B) Contour of the weights.
Figure 4-2: Euclidean distance of Example I with A) sigma^2 = 0; B) sigma^2 = 1; C) sigma^2 = 2; D) sigma^2 = 3.


Figure 4-3: Entropy criterion \int_{-\infty}^{\infty} f^2(\varepsilon)\, d\varepsilon of Example I with A) sigma^2 = 1; B) sigma^2 = 2; C) sigma^2 = 3; D) sigma^2 = 4; E) sigma^2 = 5; F) sigma^2 = 6; G) sigma^2 = 7; H) sigma^2 = 8; I) sigma^2 = 9.


bythefollowingreducedorderadaptiveIIRlterHa(z)=b Themaingoalistodeterminethevaluesofthecoecientsfa;bg,suchthattheEuclideandistancecriterionisminimized.IfweassumetheerrorpdfoffeisGaussianasfe(e;w)=1 22e(4-60) Then,wecanderivetheestimatedEuclideandistanceas^IED(fe)=1 Thusweplot,experimentally,thecontouroftheEuclideandistancecriterionperformancesurfacesindierentforExampleIandIIinFigure4-2and4-4,respectively.ItshowsthatthelocalminimaoftheEuclideandistancecriterionperformancesurfacehavedisappearedwithlargekernelsize.Thus,bycarefullycontrollingthekernelsize,thealgorithmcanconvergetotheglobalminimum. TheinputsignalisarandomGaussiannoisewithzeromeanandunitvariance.ThereexistseveralminimumsontheEuclideandistancecriterionperformancesurfacewithsmallkernelsizeonbothexamples.However,thereexistasoleglobalminimumofEuclideandistancecriterionsurfacewithasucientlargekernelsize.Inthissimulation,thekernelsizeischosentobesucientlargeinthestartstage,andthenslowlydecreasedtoapredeterminedsmallvalue,whichisthetrade-obetweenlowbiasandlowvariance.Inthisway,thealgorithmcanconvergetotheglobalminimum.Thestepsizeforthealgorithmisaconstantvalueof0.002.Thesimulationresultsarebasedon100MonteCarlorunsalongwithrandomlyinitialconditionofweightateachMonteCarlorun.Itshowsfromthesimulationresultsthatthealgorithmconvergestotheglobalminimumwith100%ofthetimeforbothexamples.Theconvergence


Figure 4-4: Euclidean distance of Example II with A) sigma^2 = 0; B) sigma^2 = 1; C) sigma^2 = 2; D) sigma^2 = 3.


Figure 4-5: Convergence characteristics of the weights for Example II by ITL. A) Weights; B) Contour of the weights.
characteristics of the adaptation process, with the weights approaching the global minimum, are shown in Figures 4-1 and 4-5, respectively, where the initial weights are chosen at a point near the local minimum.

4.8 Comparison of NLMS and ITL Algorithms

Here we use the same system identification scheme as in the previous chapter, i.e., we identify the unknown systems of Examples I, II and III (for Example I, H_I(z) has numerator 0.05 - 0.4z^{-1}) by a reduced-order adaptive filter of the form H(z) = b/(1 - a z^{-1}).


Table 4-1: Number of hits (global/local)

  Method     Example I   Example II   Example III
  LMS          36/64       20/80        92/8
  LMS-SAS      96/4         1/99       100/0
  NLMS        100/0        89/11        90/10
  ITL         100/0       100/0        100/0

The main goal is to determine the values of the coefficients {a, b} of the above equation such that the MSE is minimized (global minimum). The input signal is chosen to be random Gaussian noise with zero mean and unit variance. The step size of the LMS-SAS and NLMS algorithms is chosen to be a linearly decreasing function \mu(n) = 0.1(1 - 5\times10^{-5} n), and a constant step size \mu = 0.001 is used for the LMS and ITL algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function \sigma^2 = 0.3(1 - 5\times10^{-5} n) + 0.5.

Table 4-1 shows the comparison of the number of global and local minimum hits for the LMS, LMS-SAS, NLMS and ITL algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of the weights at each run. It is clear from Table 4-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms.

In order to understand the behavior of the ITL solution, we investigate the L_p norms of the impulse response error vectors between the optimal solutions obtained with the MSE and the ITL criteria. Assuming that the infinite impulse response of the unknown system, given by h_i, i = 0, ..., \infty, and the infinite impulse response of the trained adaptive filter, given by h_{a,i}, i = 0, ..., \infty, can both be truncated at M yet preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion:  L_p = \left(\sum_{i=0}^{M} |h_i - h_{a,i}|^p\right)^{1/p}

Table 4-2 shows the impulse response L_p error norms for the adaptive IIR filters trained with the MSE and ITL criteria. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller L_\infty norm for the impulse response error compared to MSE, which yields an L_2 norm error minimization.
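The impulse response criterion defined above is straightforward to evaluate once the two truncated impulse responses are available; the following Python sketch is one way to do it. The function name and the use of numpy's norm are implementation choices, not part of the text.

    import numpy as np

    def impulse_response_error(h, h_a, p):
        # L_p = ( sum_i |h_i - h_a,i|^p )^(1/p) over the truncated responses;
        # p = np.inf gives the minimax (L_infinity) norm discussed above.
        diff = np.asarray(h, dtype=float) - np.asarray(h_a, dtype=float)
        return np.linalg.norm(diff, ord=p)

    # e.g. impulse_response_error(h_unknown, h_adaptive, 2)        # L_2 norm
    #      impulse_response_error(h_unknown, h_adaptive, np.inf)   # L_infinity norm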


Table 4-2: Impulse response L_p error norms (p = 1, 2, 3, 4, 5, 10, 100, 1000, infinity) for the adaptive IIR filters trained with the MSE and ITL criteria.

If the MSE solution is desired, either the NLMS can be chosen or, if a more robust search is desired, the ITL can be used. However, after ITL has converged, the LMS algorithm should be used, starting from the ITL solution, to seek the global optimum of the MSE. As demonstrated, the ITL and MSE global minima are close to each other.

4.9 Conclusion

The solution of the ITL is different from that of the MSE optimization; however, their minima are in the same region of weight space. Therefore, for a more robust global search, we recommend using ITL and, when it converges, switching to the MSE cost using as initial conditions the weight values found with ITL.


Inordertodemonstratetheeectivenessofproposedglobaloptimizations,proposedglobaloptimizationareappliedtotwopracticalexamples;systemidenticationwithKautzlterandnonlinearequalization.5.1SystemIdenticationwithKautzFilter @k=e(n)'k(n)(5-1)4=@E @=e(n)dXk=0k@'k(n) @=e(n)dXk=0k@'k(n) whereisastepsize.Thegradientvector@'k=@and@'k=@aregivenby@'0(n)


68 @' 1 ( n ) @ = 1 p 2 ( p 1 ( 2 + 2 ) p (1 ) 2 + 2 p (1 ) 2 + 2 p 1 ( 2 + 2 ) )( u ( n 1)+ u ( n )) +2 @' 1 ( n 1) @ ( 2 + 2 ) @' 1 ( n 2) @ 2 1 ( n 2)(5-7) and @' k ( n ) @ =2 @' k ( n 1) @ ( 2 + 2 ) @' k ( n 2) @ +( 2 + 2 ) @' k 2 ( n ) @ 2 @' k 2 ( n 1) @ + @' k 2 ( n 2) @ +2 k ( n 1) 2 k ( n 2)+2 k 2 ( n ) 2 k 2 ( n 1)(5-8) @' k ( n ) @ =2 @' k ( n 1) @ ( 2 + 2 ) @' k ( n 2) @ +( 2 + 2 ) @' k 2 ( n ) @ 2 @' k 2 ( n 1) @ + @' k 2 ( n 2) @ 2 k ( n 2)+2 k 2 ( n )(5-9) Here r y ( n )=[ T ( n ) ; d X k =0 k @' k ( n ) @ + j d X k =0 k @' k ( n ) @ ](5-10) Hence,theNLMSalgorithmbecomes 4 d = jr y ( n ) j 2 e ( n ) k ( n )(5-11) 4 = jr y ( n ) j 2 e ( n ) d X k =0 k @' k ( n ) @ (5-12) 4 = jr y ( n ) j 2 e ( n ) d X k =0 k @' k ( n ) @ (5-13) Here,considerthesystemidenticationexamplebySilva[88],whichusesthe referencetransferfunctiondescribedas H ( z )= 0 : 0890 0 : 2199 z 1 +0 : 2866 z 2 0 : 2199 z 3 +0 : 0890 z 4 1 2 : 6918 z 1 +3 : 5992 z 2 2 : 4466 z 3 +0 : 8288 z 4 (5-14) Theinputsignalisacolorednoisewhichisgeneratedbypassingawhitenoise,with mean0andvariance1,througharst-orderlterwithadecayfactorof0.8.We


Table 5-1: System identification of the Kautz filter model

                  Number of hits
  Method      Global minimum   Local minimum
  ITL              100               0
  NLMS              99               1
  LMS-SAS           58              42
  LMS               48              52

consider the normalized least-error criterion (NMSE),

NMSE = 10\log_{10}\frac{\sum_{n=1}^{N}(y(n) - \hat{y}(n,\Theta))^2}{\sum_{n=1}^{N} y(n)^2}    (5-15)

where \hat{y} is the estimated output of the Kautz filter. The global optimum of the objective function is at 0.6212 + j0.5790, which has a normalized criterion 12.5 dB lower than that of the FIR filter (pole at 0). This agrees with the result by Silva [88].

The step size is chosen to be a linearly decreasing function \mu(n) = 0.4(1 - 5\times10^{-5} n) for both the LMS-SAS and NLMS algorithms, and constant at 0.002 for both the ITL and LMS algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function of the iterations, \sigma^2 = 3(1 - 2.5\times10^{-5} n) + 0.5. Table 5-1 shows the comparison of the number of global and local minimum hits for the ITL, NLMS, LMS-SAS and LMS algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of the weights and the pole at each run. It is clear from Table 5-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms. Single characteristic weight tracks representative of each algorithm (LMS, LMS-SAS, NLMS, and ITL) are shown in Figures 5-1, 5-2, 5-3, and 5-4, respectively. Figure 5-5 depicts the closeness between the impulse response of the unknown system and the impulse response of the optimized Kautz filter determined with the MSE and ITL criteria.

In order to better understand the meaning of the ITL solution, we investigate the L_p norms of the impulse response error vectors between the optimal solutions obtained with the MSE and the ITL criteria. Assuming that the infinite impulse response of the unknown system, given by h_i, i = 0, ..., \infty, and the infinite impulse response of the trained adaptive filter, given by h_{a,i}, i = 0, ..., \infty, can both be truncated at M, yet


Figure 5-1: Convergence characteristics of the weights for the Kautz filter by the LMS algorithm. A) Weights; B) Pole weight (alpha + j*gamma).
Figure 5-2: Convergence characteristics of the weights for the Kautz filter by the LMS-SAS algorithm. A) Weights; B) Pole weight (alpha + j*gamma).
Figure 5-3: Convergence characteristics of the weights for the Kautz filter by the NLMS algorithm. A) Weights; B) Pole weight (alpha + j*gamma).


Figure 5-4: Convergence characteristics of the weights for the Kautz filter by the ITL algorithm. A) Weights; B) Pole weight (alpha + j*gamma).
Figure 5-5: Impulse response.


Table 5-2: Impulse response L_p error norms (p = 1, 2, 3, 4, 10, 100, 1000, infinity) for the Kautz filters trained with the MSE and ITL criteria.

Figure 5-6: Channel equalization system.

preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion:  L_p = \left(\sum_{i=0}^{M} |h_i - h_{a,i}|^p\right)^{1/p}

Table 5-2 shows the impulse response L_p error norms for the Kautz filters trained with the MSE and ITL criteria after successful convergence. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller L_\infty norm for the impulse response error compared to MSE, which yields an L_2 norm error minimization.

5.2 Nonlinear Equalization

The channel equalization system is shown in Figure 5-6, where the observed channel output is


described as

x_i = \sum_{k=0}^{n_c} h_k s_{i-k} + e_i    (5-17)

where the transmitted symbol sequence s_i is an equiprobable binary sequence \{\pm 1\}, h_i are the channel coefficients, and e_i is Gaussian noise with zero mean and variance \sigma_n^2. The equalizer estimates the value of a transmitted symbol as

\hat{s}_{i-d} = \mathrm{sgn}(y_i) = \mathrm{sgn}(w^T x_i)    (5-18)

where y_i = w^T x_i is the output of the equalizer, w = [w_0, \ldots, w_{m-1}]^T are the equalizer coefficients, and x_i = [x_i, \ldots, x_{i-m+1}]^T is the vector of observations. The output of an equalizer using a multilayer perceptron (MLP) with one hidden layer of n neurons is given by

y_i = w_2^T \tanh(W_1 x_i + b_1) + b_2    (5-19)

where W_1 is the n-by-m matrix connecting the input layer with the hidden layer, b_1 is the n-by-1 vector of biases for the hidden neurons, w_2 is the n-by-1 vector of weights connecting the hidden layer to the output neuron, and b_2 is the bias of the output neuron.

Consider the example by Santamaria et al. [89], where the nonlinear channel is composed of a linear channel followed by a memoryless nonlinearity. The linear channel considered is H(z) = 0.3482 + 0.8704z^{-1} + 0.3482z^{-2}, and the static nonlinear function is z = x + 0.2x^2 - 0.1x^3, where x is the linear channel output. The nonlinear equalizer is an MLP with 7 neurons in the input layer and 3 neurons in the hidden layer [MLP(7,3,1)], and the equalization delay is d = 4. A short window of N_w = 5 error samples is used to minimize the error criterion.

The gradient \partial J/\partial\Theta = (\partial J/\partial\varepsilon)(\partial\varepsilon/\partial\Theta) is used for the backpropagation algorithm of the nonlinear equalizer training, where the term \partial\varepsilon/\partial\Theta is determined by the topology and the term \partial J/\partial\varepsilon is determined by the error signal. Therefore, the proposed global optimization techniques can be used in this nonlinear equalization; they are referred to as the stochastic gradient (SG), stochastic gradient with SAS (SG-SAS), normalized stochastic gradient (NSG), and ITL algorithms, respectively.
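For reference, the forward pass of this equalizer (Equations (5-18) and (5-19)) is summarized in the short Python sketch below; the shapes follow the description above, while the function name and the explicit sign decision are merely illustrative.

    import numpy as np

    def mlp_equalizer(x_vec, W1, b1, w2, b2):
        # Eq. (5-19): y = w2^T tanh(W1 x + b1) + b2, followed by the
        # decision of Eq. (5-18), s_hat = sgn(y).
        # W1: (n, m) input-to-hidden weights, b1: (n,) hidden biases,
        # w2: (n,) hidden-to-output weights, b2: scalar output bias.
        y = w2 @ np.tanh(W1 @ x_vec + b1) + b2
        return y, np.sign(y)

    # For the MLP(7,3,1) case in the text: m = 7 observations per decision,
    # n = 3 hidden neurons, and the equalization delay is d = 4.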


Figure 5-7: Convergence characteristics of the adaptive algorithms for a nonlinear equalizer.

The step size is chosen to be a constant of 0.2 for the SG, SG-SAS and ITL algorithms, and a linearly decreasing function \mu(n) = 0.2(1 - n/n_max) for the NSG algorithm, where n_max is the maximum number of iterations. A linearly decreasing function \sigma^2 = 3(1 - n/n_max) + 0.1 is chosen for the kernel size of the ITL algorithm.

Figure 5-7 depicts the convergence of the MSE evaluated over the sliding window for the algorithms; we conclude that the ITL algorithm provides the fastest convergence. Figure 5-8 depicts the performance comparison of the SG, SG-SAS, NSG, and ITL algorithms for the nonlinear equalizer over 100 Monte Carlo runs for the final solutions. This figure shows that both the NSG and ITL algorithms have succeeded in obtaining the global minimum. Figure 5-9 shows the average bit error rate (BER) curves. The BER was evaluated by counting errors versus several signal-to-noise ratios (SNR) after transmitting symbols. This figure shows that all algorithms provide the same result for the adequate solutions; however, the NSG algorithm provides the best results for the worst solutions.


Figure 5-8: Performance comparison of the global optimization algorithms for the nonlinear equalizer.

5.3 Conclusion

The performance of the ITL algorithm was compared with the more traditional LMS variants, which were shown in the previous chapter to exhibit an improved probability of avoiding local minima. Nevertheless, none of them was as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller L_\infty error norm between the impulse responses of the adaptive and the reference filters, whereas MSE minimizes the L_2 error norm.
Figure 5-9: Average BER for a nonlinear equalizer: A) over the whole 100 Monte Carlo runs; B) over the 10 best solutions of MSE; C) over the 10 median solutions of MSE; D) over the 10 worst solutions of MSE.
The proposed global optimization algorithms have also been successfully applied to another practical example, nonlinear equalization. The simulation results show that the ITL algorithm achieves better performance than the others.
Srinivasan et al. have used a stochastic approximation of the convolution smoothing technique in order to obtain a global optimization algorithm for adaptive IIR filtering. They showed that smoothing can be achieved by the addition of a variable perturbing noise source to the LMS algorithm. We have modified this perturbing noise by multiplying it with the cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm, results in better global optimization performance than the original algorithm.

From the diffusion equation, we have derived the transition probability of the LMS-SAS algorithm escaping a local minimum, for the single-parameter case. Since the global minimum is always smaller than the other local minima, the transition probability of the algorithm escaping from a local minimum is always larger than that of escaping from the global minimum. Thus, the algorithm spends most of its time near the global minimum and eventually converges to the global minimum.

Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, an error in estimating the gradient naturally occurs. This gradient estimation error can be made to act as the perturbing noise. We have shown that the behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends entirely on the shape of the cost function surface.
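As a rough illustration only, a minimal sketch of the kind of update described above, in which the perturbing noise of the smoothed LMS recursion is scaled by the cost; the scalar parameter, the instantaneous squared-error cost estimate, and the noise gain schedule beta_n are assumptions of this sketch and not the dissertation's exact equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_sas_step(theta, grad, error, beta_n, mu=0.01):
    """One LMS-SAS-style update: the usual LMS correction plus a perturbing
    noise whose amplitude is modulated by the (instantaneous) cost.

    theta  : current parameter estimate (scalar here for simplicity)
    grad   : instantaneous gradient of the filter output w.r.t. theta
    error  : current output error e(n)
    beta_n : decreasing gain of the perturbing noise at iteration n
    mu     : LMS step size
    """
    cost = 0.5 * error**2                       # instantaneous MSE estimate
    perturbation = beta_n * cost * rng.standard_normal()
    return theta + mu * error * grad + perturbation
```

Because the perturbation shrinks with the cost, the parameter is shaken hard in poor (high-cost) regions and only weakly near good solutions, which is the mechanism that favors escaping local minima over leaving the global one.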
The sharper the local minima, the less likely the NLMS algorithm is to escape from such a steady-state point. On the other hand, the larger the range covered by a steady-state point's valley, the more difficult it is for the algorithm to escape from that valley.

We have investigated another cost function, based on entropy, to find the global optimum of IIR filters. Based on a previous conjecture that annealing the kernel size in the non-parametric estimator of Renyi's entropy achieves global optimization, we have designed the proposed information theoretic learning algorithm, which is shown to converge to the global minimum of the performance surface for various adaptive filter topologies. The proposed algorithm successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability. This behavior has been found in many examples.

The performance of this ITL algorithm was compared with the more traditional LMS variants, which are known to exhibit an improved probability of avoiding local minima. Nevertheless, none of them were as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller $L_1$ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the $L_2$ error norm. If the designer requires a minimum $L_2$ error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve $L_2$ error norm minimization.

One of the major drawbacks in adaptive IIR filtering is the stability issue. We use Kautz filters because their stability is easily guaranteed if the poles of the Kautz filters are located within the unit circle. In this dissertation, we proposed the combination of Kautz filters and an alternative information theoretic adaptation criterion based on Renyi's quadratic entropy. Kautz filters have been used in the past for system identification of ARMA models [90], but the poles have been kept fixed during adaptation. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their globally optimal values.
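For reference, a minimal sketch of the non-parametric (Parzen) estimator of Renyi's quadratic entropy of the error, whose Gaussian kernel size is the quantity annealed by the ITL algorithm; using a single kernel size sigma in the pairwise sum is a simplifying convention of this sketch:

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma):
    """Parzen-window estimate of Renyi's quadratic entropy of the error
    samples: H2 = -log V, where V is the pairwise information potential.
    `sigma` is the (annealed) Gaussian kernel size."""
    e = np.asarray(errors).reshape(-1, 1)
    d = e - e.T                                   # all pairwise error differences
    kernel = np.exp(-d**2 / (2.0 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    V = kernel.mean()                             # information potential estimate
    return -np.log(V)

# Minimizing H2 (equivalently maximizing V) over a short error window,
# while linearly annealing sigma, is the adaptation principle used above.
window_errors = np.array([0.3, -0.1, 0.05, 0.2, -0.25])
print(renyi_quadratic_entropy(window_errors, sigma=1.0))
```

A large sigma makes the estimate nearly quadratic in the errors and smooths the performance surface; shrinking sigma during adaptation gradually restores sensitivity to the fine structure around the global minimum.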
6.2 Future Research

In this dissertation, we have analyzed the weak global optimal convergence of algorithms with the MSE criterion by looking at the transition function of the process, assuming that the weight, $\theta$, is a scalar. More work is needed on the transition function of the process in the general case, assuming that $\theta$ is a vector, in order to complete the analysis of the weak global optimal convergence of algorithms with the MSE criterion.

We have observed that the ITL criterion yields a smaller $L_1$ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the $L_2$ error norm. This "minimax" property of the proposed ITL criterion deserves further research.

Another observation is that linear scheduling of the kernel size helps achieve global minima. In annealing-based global optimization algorithms, scheduling of the parameters to be annealed is a major issue. In stochastic annealing, it is known that exponential annealing (at a sufficiently slow rate) guarantees global convergence. In IIR filter adaptation using ITL, we used linear annealing of the kernel size, and in all examples successful global optimization results were obtained. More work is required in the ITL algorithm to select appropriately the smallest kernel size, which was set here with the rule-of-thumb properties [91].

The ITL adaptation used a batch approach, but we believe that the online versions discussed by Erdogmus et al. [92] could also display the same global optimization properties. The online versions of ITL adaptation need further study.

In addition, a general analytical proof that explains the 100% global optimization capability of the proposed algorithm is necessary in order to complete the theoretical work. This, however, stands as a challenging future research project.
REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.
[2] S. S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[3] M. A. Styblinski and T. S. Tang, "Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing," Neural Networks, vol. 3, pp. 467-483, 1990.
[4] J. C. Principe and D. Erdogmus, "From adaptive linear to information filtering," in Proceedings of Symposium 2000 on Adaptive Systems for Signal Processing, Communications, and Control, Lake Louise, Alberta, Canada, Oct. 2000, pp. 99-104.
[5] D. Erdogmus, K. Hild, and J. C. Principe, "Blind source separation using Renyi's mutual information," IEEE Signal Processing Letters, vol. 8, no. 6, pp. 174-176, June 2001.
[6] D. Erdogmus and J. C. Principe, "Generalized information potential criterion for adaptive system training," IEEE Transactions on Neural Networks, (to appear) September 2002.
[7] K. J. Astrom and P. Eykhoff, "System identification - A survey," Automatica, vol. AC-27, no. 4, pp. 123-162, Aug. 1971.
[8] B. Friedlander, "System identification techniques for adaptive signal processing," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-30, no. 2, pp. 240-246, Apr. 1982.
[9] L. Ljung, System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[10] T. Soderstrom, L. Ljung, and I. Gustavsson, "A theoretical analysis of recursive identification methods," Automatica, vol. 14, no. 3, pp. 193-197, May 1978.
[11] C. R. Johnson, "Adaptive IIR filtering: Current results and open issues," IEEE Transactions on Information Theory, vol. IT-30, no. 2, pp. 237-250, Mar. 1984.
[12] S. S. Shynk, "Adaptive IIR filtering," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 6, no. 2, pp. 4-21, Apr. 1989.
[13] S. Gee and M. Rupp, "A comparison of adaptive IIR echo canceller hybrids," Proceedings International Conference on Acoustics, Speech, and Signal Processing, 1991.
[14] S. L. Netto, P. S. Diniz, and P. Agathoklis, "Adaptive IIR filter algorithms for system identification: A general framework," IEEE Transactions on Education, vol. 38, pp. 54-66, Feb. 1995.
[15] P. A. Regalia, Adaptive IIR Filtering in Signal Processing and Control, Marcel Dekker, New York, NY, 1995.
[16] M. Dentino, J. M. McCool, and B. Widrow, "Adaptive filtering in the frequency domain," Proceedings IEEE, vol. 66, no. 12, pp. 1658-1659, Dec. 1978.
[17] E. R. Ferrara, "Fast implementation of LMS adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-28, no. 4, pp. 474-475, Aug. 1980.
[18] T. K. Woo, "HRLS: a more efficient RLS algorithm for adaptive FIR," IEEE Communication Letters, vol. 5, no. 3, pp. 81-84, March 2001.
[19] D. F. Marshall, W. K. Jenkins, and J. J. Murphy, "The use of orthogonal transforms for improving performance of adaptive filters," IEEE Transactions on Circuits and Systems, vol. 36, no. 4, pp. 474-484, Apr. 1989.
[20] S. S. Narayan and A. M. Peterson, "Frequency domain least-mean-square algorithm," Proceedings IEEE, vol. 69, no. 1, pp. 124-126, Jan. 1981.
[21] S. A. White, "An adaptive recursive digital filter," in Proceedings 9th Asilomar Conference on Circuits, Systems and Computers, pp. 21-25, 1975.
[22] R. A. David, "An adaptive recursive digital filter," in Proceedings 15th Asilomar Conference on Circuits, Systems and Computers, pp. 175-179, 1981.
[23] B. D. Rao, "Adaptive IIR filtering using cascade structures," in Proceedings 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 185-188, 1993.
[24] J. K. Juan, J. G. Harris, and J. C. Principe, "Locally recurrent network with multiple time-scales," IEEE Proceedings on Neural Networks for Signal Processing, vol. VII, pp. 645-653, 1997.
[25] P. A. Regalia, "Stable and efficient lattice algorithms for adaptive IIR filtering," IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 375-388, Feb. 1992.
[26] R. L. Valcarce and F. P. Gonzalez, "Adaptive lattice filtering revisited: convergence issues and new algorithms with improved stability properties," IEEE Transactions on Signal Processing, vol. 49, no. 4, pp. 811-821, April 2001.
[27] J. J. Shynk, "Adaptive IIR filtering using parallel-form realization," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 4, pp. 519-533, Apr. 1989.
[28] J. E. Cousseau and P. S. R. Diniz, "Alternative parallel realization for adaptive IIR filters," in Proceedings International Symposium on Circuits and Systems, pp. 1927-1930, 1990.
[29] J. J. Shynk and R. P. Gouch, "Frequency domain adaptive pole-zero filters," Proceedings IEEE, vol. 73, no. 10, pp. 1526-1528, Oct. 1985.
[30] B. E. Usevitch and W. K. Jenkins, "A cascade implementation of a new IIR adaptive digital filter with global convergence and improved convergence rates," in Proceedings International Symposium on Circuits and Systems, pp. 2140-2143, 1989.
[31] D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, MA, 1973.
[32] J. Lin and R. Unbehauen, "Bias-remedy least mean square equation error algorithm for IIR parameters recursive estimation," IEEE Transactions on Signal Processing, vol. 40, pp. 62-69, Jan. 1992.
[33] H. Fan and W. K. Jenkins, "A new adaptive IIR filter," IEEE Transactions on Circuits and Systems, vol. CAS-33, no. 10, pp. 939-947, Oct. 1986.
[34] H. Fan and M. Doroslovacki, "On global convergence of Steiglitz-McBride adaptive algorithm," IEEE Transactions on Circuits and Systems, vol. 40, no. 2, pp. 73-87, Feb. 1993.
[35] K. Steiglitz and L. E. McBride, "A technique for the identification of linear systems," IEEE Transactions on Automatic Control, vol. AC-10, pp. 461-464, 1965.
[36] S. L. Netto and P. Agathoklis, "A new composite adaptive IIR algorithm," in Proceedings 28th Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1506-1510, 1994.
[37] J. E. Cousseau, L. Salama, L. Donale, and S. L. Netto, "Orthonormal adaptive IIR filter with polyphase realization," in Proceedings of ICIES'99 Electronics, Circuits and Systems, vol. 2, pp. 835-838, 1999.
[38] M. Radenkovic and T. Bose, "Global stability of adaptive IIR filters based on the output error method," in Proceedings of ICIES'99 Electronics, Circuits and Systems, vol. 1, pp. 663-667, 1999.
[39] P. L. Hsu, T. Y. Tsai, and F. C. Lee, "Applications of a variable step size algorithm to QCEE adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 46, no. 6, pp. 1685-1688, Jun. 1999.
[40] W. J. Song and H. C. Shin, "Bias-free adaptive IIR filtering," in Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 109-112, 2000.
[41] K. C. Ho and Y. T. Chan, "Bias removal in equation-error adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 43, pp. 51-62, Jan. 1995.
[42] M. C. Hall and P. M. Hughes, "The master-slave IIR filter adaptation algorithm," in Proceedings IEEE International Symposium on Circuits and Systems, vol. 3, pp. 2145-2148, 1988.
[43] J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive Filters, Wiley, New York, 1987.
[44] I. O. Bohachevsky, M. E. Johnson, and M. L. Stein, "Generalized simulated annealing for function optimization," American Statistical Association and the American Society for Quality Control, vol. 28, pp. 209-217, Aug. 1986.
[45] S. C. Ng, S. H. Leung, C. Y. Chung, A. Luk, and W. H. Lau, "The genetic search approach - A new learning algorithm for adaptive IIR filtering," IEEE Signal Processing Magazine, pp. 39-46, Nov. 1996.
[46] J. A. Nelder and R. Mead, "Controlled random search algorithm," Computer Journal, vol. 7, pp. 308-313, 1965.
[47] P. P. Khargonekar and A. Yoon, "Random search based optimization algorithm in control analysis and design," in Proceedings of the American Control Conference, Jun. 1999, pp. 383-387.
[48] Q. Duan, S. Sorooshian, and V. Gupta, "Shuffled complex evolution algorithm," Water Resources Research, vol. 28, pp. 1015-1031, 1992.
[49] Z. B. Tang, "Adaptive partitioned random search to global optimization," IEEE Transactions on Automatic Control, vol. 39, pp. 2235-2244, Nov. 1994.
[50] K. H. Yim, J. B. Kim, T. P. Lee, and D. S. Ahn, "Genetic adaptive IIR filtering algorithm for active noise control," in IEEE International Fuzzy Systems Conference Proceedings, Aug. 1999, pp. III-1723-1728.
[51] B. W. Wah and T. Wang, "Constrained simulated annealing with applications in nonlinear continuous constrained global optimization," in Proceedings 11th IEEE International Conference on Tools with Artificial Intelligence, Nov. 1999, pp. 381-388.
[52] J. L. Maryak and D. C. Chin, "A conjecture on global optimization using gradient-free stochastic approximation," in Proceedings of the 1998 IEEE ISIC/CIRA/ISAS Joint Conference, Sep. 1998, pp. 441-445.
[53] N. K. Treadgold and T. D. Gedeon, "Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm," IEEE Transactions on Neural Networks, vol. 9, pp. 662-668, July 1998.
[54] G. H. Staus, L. T. Biegler, and B. E. Ydstie, "Global optimization for identification," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 3010-3015.
[55] T. Fujita, T. Watanabe, K. Yasuda, and R. Yokoyama, "Global optimization method using intermittency chaos," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 1508-1509.
[56] W. Edmonson, J. Principe, K. Srinivasan, and C. Wang, "A global least square algorithm for adaptive IIR filtering," IEEE Transactions on Circuits and Systems, vol. 45, pp. 379-383, Mar. 1998.
[57] J. M. Thomas, J. P. Reilly, and Q. Wu, "Real time analog global optimization with constraints: Application to the direction of arrival estimation problem," IEEE Transactions on Circuits and Systems, vol. 42, pp. 233-243, Mar. 1995.
[58] A. Renyi, "Some fundamental questions of information theory - selected papers of Alfred Renyi," Akademia Kiado, Budapest, vol. 2, pp. 565-580, 1976.
[59] A. Renyi, A Diary on Information Theory, Wiley, New York, 1987.
[60] C. F. Cowan and P. M. Grant, Adaptive Filters, Prentice-Hall, 1985.
[61] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proceedings IEEE, vol. 64, pp. 1151-1162, Aug. 1976.
[62] J. M. Mendel, Lessons in Digital Estimation Theory, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[63] E. I. Jury, Theory and Applications of the Z-Transform Method, Wiley, New York, 1964.
[64] T. C. Hsia, "A simplified adaptive recursive filter design," Proceedings IEEE, vol. 69, no. 9, pp. 1153-1155, Sept. 1981.
[65] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[66] T. Soderstrom, "On the uniqueness of maximum likelihood identification," Automatica, vol. 14, no. 3, pp. 231-244, Mar. 1975.
[67] M. Nayeri, H. Fan, and W. K. Jenkins, "Some characteristics of error surfaces for insufficient order adaptive IIR filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 7, pp. 1222-1227, July 1990.
[68] T. Soderstrom and P. Stoica, "Some properties of the output error method," Automatica, vol. 18, pp. 1692-1716, Dec. 1982.
[69] M. Nayeri, "Uniqueness of MSOE estimates in IIR adaptive filtering: a search for necessary conditions," in International Conference on Acoustics, Speech, and Signal Processing, 1989, pp. 1047-1050.
[70] S. D. Stearns, "Error surfaces of recursive adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-29, no. 4, pp. 763-766, June 1981.
[71] F. Hong and M. Nayeri, "On the error surface of sufficient order adaptive IIR filters: Proofs and counterexamples to a unimodality conjecture," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, pp. 1436-1442, Sep. 1989.
[72] R. Roberts and C. Mullis, Digital Signal Processing, Addison-Wesley, 1987.
[73] W. H. Kautz, "Transient synthesis in the time domain," IRE Transactions on Circuit Theory, vol. 1, pp. 22-39, Sept. 1954.
[74] P. W. Broome, "Discrete orthonormal sequences," Journal of the Association for Computing Machinery, vol. 12, no. 2, pp. 151-168, Dec. 1965.
[75] G. A. Williamson and S. Zimmermann, "Globally convergent adaptive IIR filters based on fixed pole locations," IEEE Transactions on Signal Processing, vol. 44, pp. 1418-1427, Jun. 1996.
[76] P. M. Pardalos and R. Horst, Introduction to Global Optimization, Kluwer, Norwood, MA, 1989.
[77] H. Robbins and S. Monro, "A stochastic approximation method," Annals of Mathematical Statistics, vol. 22, pp. 400-407, 1951.
[78] E. Wong and B. Hajek, Stochastic Processes in Engineering Systems, Springer, 1985.
[79] A. N. Kolmogorov, "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung," Annals of Mathematical Statistics, vol. 104, pp. 415-458, 1931.
[80] S. Haykin, Introduction to Adaptive Filters, MacMillan, New York, 1984.
[81] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-653, 1948.
[82] E. Parzen, "On the estimation of a probability density function and the mode," Annals of Mathematical Statistics, vol. 33, p. 1065, 1962.
[83] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 1991.
[84] R. V. Hartley, "Transmission of information," Bell System Technical Journal, vol. 7, 1928.
[85] G. Golub and F. Van Loan, Matrix Computation, Johns Hopkins Press, 1989.
[86] S. Kullback, Information Theory and Statistics, Dover Publications Inc., New York, 1968.
[87] C. Wang and J. C. Principe, "Training neural networks with additive noise in the desired signal," IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1511-1517, Nov. 1999.
[88] T. O. Silva, "Optimality conditions for truncated Kautz networks with two periodically repeating complex conjugate poles," IEEE Transactions on Automatic Control, vol. 40, pp. 342-346, Feb. 1995.
[89] I. Santamaria, D. Erdogmus, and J. C. Principe, "Entropy minimization for supervised digital communication channel equalization," IEEE Transactions on Signal Processing, vol. 50, no. 5, pp. 1184-1192, May 2002.
[90] B. Wahlberg, "System identification using Kautz models," IEEE Transactions on Automatic Control, vol. 39, no. 6, pp. 1276-1282, Jun. 1994.
[91] D. Erdogmus and J. C. Principe, "An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems," IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1780-1786, July 2002.
[92] D. Erdogmus and J. C. Principe, "An on-line adaptation algorithm for adaptive system training with minimum error entropy: stochastic information gradient," in International Conference on ICA and Signal Separation, San Diego, CA, Dec. 2001, pp. 7-12.


PAGE89 89
PAGE90 90
PAGE91 91
PAGE92 92
PAGE93 93
PAGE94 94
PAGE95 95
PAGE96 96
PAGE97 97
STRUCT2 other
ODIV1
FILES1
FILES2