Citation
Dynamical analysis, applications, and analog implementation of a biologically realistic olfactory system model

Material Information

Title:
Dynamical analysis, applications, and analog implementation of a biologically realistic olfactory system model
Creator:
Xu, Dongming
Publication Date:
Language:
English
Physical Description:
xii, 109 leaves : ill. ; 29 cm.

Subjects

Subjects / Keywords:
Coupling coefficients ( jstor )
Dynamical systems ( jstor )
Eigenvalues ( jstor )
Limit cycles ( jstor )
Modeling ( jstor )
Neural networks ( jstor )
Oscillators ( jstor )
Signals ( jstor )
Simulations ( jstor )
Sufficient conditions ( jstor )
Dissertations, Academic -- Electrical and Computer Engineering -- UF
Electrical and Computer Engineering thesis, Ph. D
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 2005.
Bibliography:
Includes bibliographical references.
General Note:
Printout.
General Note:
Vita.
Statement of Responsibility:
by Dongming Xu.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
Resource Identifier:
028079374 ( ALEPH )
880637483 ( OCLC )

Full Text

DYNAMICAL ANALYSIS, APPLICATIONS, AND ANALOG
IMPLEMENTATION OF A BIOLOGICALLY REALISTIC OLFACTORY SYSTEM MODEL

By

DONGMING XU


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2005

Copyright 2005

by

Dongming Xu

To my parents.

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my advisor Dr. Jose C.

Principe. It is a privilege for me to work with him in my entire graduate career. He provided me with a free, creative and friendly atmosphere for an invaluable research experience. Without his knowledge, experience and vision, and his encouraging attitude and enthusiasm, this work would be impossible. I also sincerely appreciate the unconditional help offered by the members in my supervisory committee. I want to thank Dr. John G. Harris for his guidance through the work of hardware design from both his teaching and our research meetings. I would like to thank Dr. Jianbo Gao for his theoretic support. I truly benefited from his knowledge in dynamical systems. My deep appreciation goes to Dr. James Keesling for his interest and participation in my supervisory committee.

I would also like to thank my friends and colleagues (especially Mustafa Can Ozturk, Yuan Li and Dr. Mark D. Skowronski) in the Computational NeuroEngineering Laboratory, for all the wonderful discussions and hard work together. I am dearly thankful to Dr. Vitor Tavares for his early work on Freeman's model and analog design.

My deepest love goes to my parents who gave me the freedom and support to explore my life in the way I want. I would like to thank Xiuge Yang for her love and support through all my graduate years.


TABLE OF CONTENTS
page

ACKNOWLEDGMENTS ............................. iv

LIST OF TABLES ....................................... vii

LIST OF FIGURES ................................ viii

ABSTRACT ...................................... xi

CHAPTERS

1 INTRODUCTION .............................. 1

1.1 Olfactory System Model as Information Processor ............ 1
1.2 Structure of the Freeman Model ................... 5
1.3 Study Objectives .................. ......... 8

2 HOPF BIFURCATION IN A REDUCED KII(RKII) SET .......... 11

2.1 Dynamics of an RKII Set ...................... 11
2.2 Bifurcations in Nonlinear Systems .................. 14
2.3 Center Manifold Theorem ...... ...................... 16
2.4 Hopf Bifurcation in an RKII Set ........................ 18
2.4.1 Properties of the Equilibrium ..................... 18
2.4.2 Hopf Bifurcation .............................. 20
2.5 Parameter Design in an RKII Set ....................... 25
2.5.1 Determine Proper Values of Kmg and Kgm ............. 28
2.5.2 Dynamics Controlled by an External Input ............ 31
2.6 Experiments ..................................... 32
2.6.1 Control of RKII Dynamics by the Coupling Coefficients .... 32
2.6.2 Control of RKII Dynamics by the External Input ......... 36
2.7 Discussions ........ .............................. 37

3 SYNCHRONIZATION OF COUPLED RKII SETS ................ 39

3.1 Sufficient Condition on Synchronization of Coupled Chaotic
Systems ..................................... 41
3.2 Analytical Solutions of Couplings for Synchronized RKII Sets . . 43
3.3 Simulations .................................. 47
3.3.1 Example 1: Kgg = -0.8 .......................... 50
3.3.2 Example 2: Kgg = -0.4 .......................... 52
3.3.3 Nonlinear Coupling ...... ...................... 54

3.3.4 Quasiperiodic Motion in Coupled RKII Sets ........... 56
3.4 Discussions ........ .............................. 57

4 COMPUTATION BASED ON SYNCHRONIZATION ............. 59

4.1 Logic Computation ....... .......................... 60
4.2 Associative Memory Using an RKII Network ............... 64
4.3 Discussions ........ .............................. 70

5 HARDWARE IMPLEMENTATION USING ANALOG VLSI .......... 72

5.1 Summing Node ....... ............................ 72
5.2 Linear Dynamics ........................... 77
5.3 Nonlinear Function .......................... 78
5.4 Efficient Design of an RKII System ................. 80
5.5 Digital Simulation and Chip Measurement ................ 82
5.6 Discussions ........ .............................. 86

6 CONCLUSIONS ........ ............................... 91

APPENDICES

A BASIC TYPES OF LOCAL BIFURCATIONS .................. 94

A.1 Saddle-Node Bifurcation ............................. 94
A.2 Cusp Bifurcation ....... ........................... 94
A.3 Pitchfork Bifurcation .............................. 95
A.4 Hopf Bifurcation ................................ 96

B DERIVATION OF SUFFICIENT CONDITIONS ON SYNCHRONIZATION OF COUPLED RKII SETS .......................... 98

C LINEARIZATION OF NONLINEAR FUNCTION ............... 101

C.1 Example I ..................................... 101
C.2 Example II ..................................... 102

REFERENCES ................................... 104

BIOGRAPHICAL SKETCH ............................ 109

LIST OF TABLES
Table page

3-1 Comparison between analytic solution and numerical simulation ..... 55
4-1 Set of parameters that realize the complete boolean primitives ...... 63


LIST OF FIGURES
Figure page

1-1 Two different types of KO sets. (a) Excitatory. (b) Inhibitory ....... 6
1-2 KI network consists of interconnected KOs with common signs ...... 7
1-3 Full KII set consists of four KOs ........................... 7

1-4 KIII network represents the computational model of the olfactory
system ........ .................................. 9

2-1 RKII set consists of one mitral cell and one granule cell coupled through
Kmng(> 0) and Kgm(< 0) ...... ........................ 11

2-2 Plot of the nonlinear function Q(x) when Qm = 5 .............. 13

2-3 Equilibrium is determined by the intersection of two nullclines ...... 19
2-4 Frequency of oscillation as a function of coupling strength ......... 24
2-5 Derivative of Q(x) when Qm = 5 .......................... 28

2-6 Fixed point solution in an RKII set when P=0. (a) Temporal plot.
(b) Phase plot ....... .............................. 32

2-7 Limit cycle solution in an RKII set when P=O. (a) Temporal plot.
(b) Phase plot ....... .............................. 33

2-8 Fixed point solution in an RKII set when P=1. (a) Temporal plot.
(b) Phase plot ....... .............................. 34

2-9 Limit cycle solution in an RKII set when P=1. (a) Temporal plot.
(b) Phase plot ....... .............................. 35

2-10 Fixed point solution in an RKII set with positive input. (a) External
input P=0.35. (b) External input P=26.1 .................. 36

2-11 Scanned parameter space of Kmg and Kgm ................... 37

3-1 Two representations of coupled RKII sets. (a) Linearly coupled RKII
sets (b) Under synchronization, coupled RKII sets could also be
described as an individual RKII set with auto-feedback ....... ...43

3-2 Parameter space defined by Kmm and Kgg. The plot shows combined
results of synchronization and bifurcation boundaries when input
P = 0, Kmg = 1 and Kgm = -6 ........................... 48

3-3 Values of |Kgg| and Kmm are limited by 1. Three regions are determined where the coupled system shows synchronized oscillation,
desynchronized oscillation or fixed point solutions .............. 49

3-4 Simulations of coupled RKII sets when Kgg=-0.8. (a) Kmm=0.6. (b)
Kmm=0.96. (c) Kmm=0.73. (d) Kmm=0.74. (e) Kmm=0.91. (f) Kmm
= 0.93 ........ .................................. 51

3-5 Simulations of coupled RKII sets when Kgg = -0.4. The coupled
system desynchronizes and synchronizes at (a) Kmm = 0.33. and
(b) Kmm = 0.4 ....... ............................. 53

3-6 Comparison between analytic solution and numerical simulation . . . 54

3-7 Example of quasiperiodic motion in two coupled RKII sets. (a) Temporal signal. (b) Phase plot. (c) Power spectrum ............. 56

3-8 Simulated regions of quasiperiodic oscillations ................ 57

4-1 Time domain response when implementing coupled RKII sets as an
XNOR gate. State outputs are shaped by the nonlinear function.
Solid line shows Q(m(')) while dashed line shows Q(m(2)) ...... ..62

4-2 Phase plot in the state space of M1 and M2 when implementing coupled RKII sets as an XNOR gate .......................... 63

4-3 RKII network as an associative memory. (a) Stored pattern "0". (b)
Network response to input pattern "0". (c) Stored pattern "1". (d) Network response to input pattern "1". (e) Stored pattern "2". (f)
Network response to input pattern "2" ...................... 68

4-4 Noise is added to an inactive channel and an active channel in "0" . . 69

4-5 Readout from an RKII network. (a) Readout pattern that effectively
improves the discrimination between desired channels and colored
channels. (b) Network response to colored input pattern "1" . . . . 70

5-1 Summing node with N inputs ............................. 73

5-2 Differential transconductance amplifier ..................... 74

5-3 Wide linear range transconductance amplifier with source degeneration using double diffusors ............................ 75

5-4 Because the coupling strength is always greater than 1, input current
range can easily exceed the limit of the base transamp ......... 76

5-5 The 2nd order linear filter. Filter and hold technique is used to increase the time constant ............................... 77

5-6 The nonlinear block ....... ............................ 79

5-7 Asymmetric current mirror .............................. 81

5-8 New implementation of the asymmetric nonlinear function ......... 81
5-9 Offset cancellation using two transamps ...................... 82

5-10 New implementation of the RKII system that only includes an adder
and a linear system ................................. 82

5-11 Comparison between impulse-invariant filter and circuit simulation . . 84
5-12 Chip layout of a single RKII set .......................... 85

5-13 Measured excitatory and inhibitory nonlinear functions .......... 86

5-14 Transient behavior measured from the chip. Channel 1 is excitatory
output and Channel 3 is inhibitory output .................. 87

5-15 Phase plot measured from the chip ..... ................... 88

5-16 Measured effective range of external input .... ............... 89

5-17 Chip layout of a mixed-signal design that includes 8 interconnected
RKII sets ........ ................................ 90

C-1 Nullclines obtained with three-section nonlinear function ......... 102

C-2 Four-section nonlinear function ..... ..................... 103

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

DYNAMICAL ANALYSIS, APPLICATIONS, AND ANALOG
IMPLEMENTATION OF A BIOLOGICALLY REALISTIC OLFACTORY SYSTEM MODEL

By

Dongming Xu

August 2005

Chair: Jose C. Principe
Major Department: Electrical and Computer Engineering

The Freeman model is a biologically plausible dynamical neural network that mimics the complicated wave patterns collected from the electroencephalogram (EEG). Freeman has studied the model to explain how biological systems interact with and extract information from the environment. The Freeman model has hundreds of free parameters that must be fine-tuned, either manually (through trial-and-error), by adaptation algorithms, or by numerical simulations, to match experimental data.

This work investigates the Freeman model and its dynamical building blocks as the basis for novel information processing systems grounded in nonlinear dynamics. We considered three issues. First, we performed a theoretical analysis of the reduced KII (RKII) basic building block, where simpler dynamics are observed. We showed, using the center manifold theorem, how the RKII dynamics are controlled by a Hopf bifurcation, and designed the parameters analytically. Second, the nonlinear dynamics of coupled RKII sets were studied analytically, using the synchronization of output channels to define computational primitives based on
specific dynamics of the network. The design of the coupling coefficients in the network for obtaining a desired output response is based on our theoretical analysis of the RKII set's dynamical behavior. The computational power of the RKII network is demonstrated by two applications that address logic computation and associative memory. Third, we designed analog VLSI chips to implement this continuous dynamical system. Analog design faces the challenges of power consumption and limited chip area. To build a more efficient integrated circuit, the nonlinear function in the KII architecture is eliminated by redesigning the summing node. In the new design, the summation block serves simultaneously as a current adder and a nonlinear activation function, preserving the original functionality. Chip tests demonstrate the good performance of the new design.

CHAPTER 1
INTRODUCTION

1.1 Olfactory System Model as Information Processor

The olfactory system is one of the oldest parts of the brain. It is a well-developed distributed system that achieves odor identification and segmentation. The olfactory bulb receives inputs from specialized odor receptors and sends nonconvergent signals to the olfactory cortex. The olfactory cortex, in cooperation with other parts of the brain, sends feedback signals to the olfactory bulb to accomplish recognition tasks. Through physiological and experimental studies, many researchers have built computational models that describe the architecture and functions of the olfactory bulb. In this thesis we will focus on the K-sets model developed through the seminal work of Walter J. Freeman.

The realistic computational model of the olfactory system proposed by Freeman [1] describes brain function as a spatio-temporal lattice of groups of neurons (neural assemblies) with dense interconnectivity. It is a hierarchical model that quantifies the function of one of the oldest sensory cortices, where there is an established causal relation between stimulus and response. It also presents the function as an association between stimulus and stored information, in line with the content addressable memory (CAM) framework studied in artificial neural networks [2]. Freeman uses the language of dynamics to model neural assemblies, which seems a natural solution because of the known spatio-temporal characteristics of brain function [3]. Although we believe that the full dynamical description of the entire model (a KIII network) is beyond our present analytical ability, one may still be able to understand the dynamics of lower level structures from first principles. One advantage of a dynamic framework to quantify mesoscopic interactions is the
possibility of creating analog VLSI circuits that implement similar dynamics [4, 5]. In this respect, dynamics is also independent of hardware (mimicking the well-known hardware independence of formal systems). However, the dynamical approach to information processing is much less developed than the statistical reasoning used in pattern recognition. Only recently did nonlinear dynamics start being used to describe computation [6], and much work remains to achieve a nonlinear dynamical theory of information processing. Hence we are simultaneously developing the science and understanding the tool's capability, a situation that is far from ideal. The challenge is particularly important in the case of Freeman's model, where the distributed system is locally stable but globally unstable, creating nonconvergent (eventually chaotic) dynamics. Nonconvergent dynamics are very different from the simple dynamical systems with point attractors studied by Hopfield [7], because they have positive Lyapunov exponents [8]. Freeman's computational model of the olfactory system has already been applied to several information-processing applications [4, 5, 9, 10]. Ultimately, we plan to use Freeman's model as a signal-to-symbol translator, quantify its performance, and use it in analog VLSI circuits for low-power, real-time processing in intelligent sensory processing applications.

Deterministic systems with complex dynamics have been studied in the past as information processors and feature extractors. In addition to Freeman's model, other approaches using oscillatory or chaotic networks [11-14] have shown great computational potential. These computational systems are often built upon their corresponding dynamics, which can be systematically described and controlled by energy functions or convergence properties. To accomplish such a task, we should construct a well-defined set of computational primitives to represent the state space. This of course depends on the system's fundamental structures and its specific dynamics. Different dynamics will provide a different set
of primitives. Among examples of information processing using dynamical systems, the approach is usually to create attractors (either convergent or nonconvergent) in the state space. Dynamical systems with point attractors (as in the Hopfield network) generate local minima in the energy function, which leads to the convergence of the system dynamics. More interestingly, the widely used recurrent neural networks (RNNs) integrate information dynamics with system dynamics, which produces a better representation of time signals and a more efficient projection onto the output space [15, 16]. Yet biological systems are still much more intriguing in both structure and observable behaviors. Brain functions deal with properties emerging from different functional layers, where nonconvergent dynamics are the carrier for both encoded and processed information. However, the case of nonconvergent dynamical systems (as in the Freeman model) is much harder to quantify. This is especially significant for the Freeman model, where detailed resemblance to the real biological system results in both practical implementation issues and a lack of fundamental understanding of its dynamical structures. Existing methods used to quantify global behaviors of the network, which are mostly based on energy clustering, are still far from satisfactory for unveiling the advantages brought by the complex dynamics. Fundamental understanding of information storage (which depends on system controllability and invariance) and information retrieval (which depends on the understanding of the projection space) both remain unsolved.

One possible advantage of Freeman's system that has not yet been fully exploited in the analysis is the uniformity of the dynamics of its individual components. In Freeman's model, the dynamical behaviors of the individual sub-systems, such as fixed points, multiple equilibria and limit cycles, are very simple. From an information-processing viewpoint, this is important because the global dynamics are built from these primitive dynamics. One fundamental step is to quantify the dynamics of the individual components in the parameter space.


Another step is to learn the rules that compose the complex dynamics of the spatio-temporal system from the individual dynamics. This is challenging because the dimensionality of the overall system can become so high that controllability and invariance are lost. In this work, we used a step-by-step approach with the goal of unraveling some fundamental mechanisms that naturally emerge from the interconnectivity of the components. In this regard we used synchronization analysis among the processing elements (PEs) as a first step to categorize different regions of dynamics in the parameter space and to elucidate the possible composition of the global dynamics. In fact, the global behaviors generated by interactions among lower-level components are very rich. There is a wide variety of collective behaviors in a coupled oscillatory system, including synchronization, partial synchronization, desynchronization, dephasing, phase trapping, and bursting in neural oscillators coupled through diffusive coupling. These are all possible tools to refine the state space of our system to provide different representations of embedded signals. If all of them can be used, computational ability and efficiency will be greatly improved. However, we should realize and emphasize that we are still far from building computational systems upon these fully explored rich dynamics, where parameter design, controllability and system invariance need to be thoroughly investigated. While we cannot yet provide a full interpretation of the entire system dynamics of our model, as a first step we tried in this work to analytically study synchronization, which has relative simplicity and an invariance property, to construct an initial division of dynamical regimes in the parameter space. By controlling one of the emerging properties, the projection space can be built so that, in these complicated and highly interconnected networks, the dynamics collapse to a lower-dimensional space that facilitates information retrieval.

Analytical studies have the added advantage of appreciably decreasing the need for the parameter scans that may otherwise be needed to systematically illustrate the

global dynamics. To achieve the ultimate goal of designing a desirable computational system in the framework of a complex dynamical system, it is of great value to understand the basic yet rich dynamical behaviors of the target system, starting from the most basic elements. Moreover, in hardware designs, variations in the design process and fabrication, as well as noise, may degrade the performance of the system. Theoretical studies help identify the range of the system's working parameters to make sure that such variations do not significantly change the system's qualitative behavior.

1.2 Structure of the Freeman Model

Generally, an Nth-order system in Freeman's model is defined by

    (1/ab)·(d²xi(t)/dt² + (a + b)·dxi(t)/dt + ab·xi(t))
        = Σ_{j≠i} [Wij·Q(xj(t), Qm) + wij·fij(Q(xj(t), Qm), t)] + Pi(t),    (1.1)
    i, j = 1, ..., N,

where a and b are experimental constants that determine the cutoff frequencies of the 2nd-order dynamics, Wij is a coupling coefficient, fij models a dispersive delay, and Pi(t) is the external input. Each PE in Equation 1.1 models the independent dynamics of the wave density for the action of dendrites and the pulse density for the parallel action of axons. Note that there is no auto-feedback in the model. Q(x) is the asymmetric nonlinear function (at the output stage) in each PE. It describes the pulse-to-wave transformation [1]. The mathematical model and properties of Q(x) are discussed later. Freeman's model is a locally stable and globally unstable dynamical system in a very high dimensional space. Although it has great complexity, the model is built from a hierarchical embedding of simpler and similar structures. Based on the seminal work of Katchalsky [1], four different levels (KO, KI, KII and KIII) are included in the model, defined next [1].
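Although Q(x) is only referenced qualitatively at this point, the form commonly quoted in the Freeman-model literature is Q(x) = Qm·(1 − exp(−(eˣ − 1)/Qm)) above a lower cutoff x0, clipped to −1 below it. A minimal sketch under that assumption, with Qm = 5 as used later in the text:

```python
import numpy as np

def Q(x, Qm=5.0):
    """Freeman-style asymmetric sigmoid: Qm*(1 - exp(-(e^x - 1)/Qm))
    above the cutoff x0, clipped to -1 below it (assumed standard form)."""
    x = np.asarray(x, dtype=float)
    # x0 is where the smooth branch reaches the floor value -1
    x0 = np.log(1.0 - Qm * np.log(1.0 + 1.0 / Qm))
    return np.where(x > x0, Qm * (1.0 - np.exp(-(np.exp(x) - 1.0) / Qm)), -1.0)
```

Note the asymmetry: the excitatory saturation level Qm = 5 is much larger in magnitude than the inhibitory floor of −1, which is what makes each output one-sided.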

Figure 1-1: Two different types of KO sets. (a) Excitatory. (b) Inhibitory. Each set is a cascade of a summing node, 2nd-order dynamics F(s), and the asymmetric nonlinear function Q(x)

A KO set represents a population of up to 10^8 neurons that receive the same input and have the same sign of output. Each KO set consists of three blocks: a summing node, 2nd-order linear dynamics, and a nonlinearity. It is the simplest and most basic building block in the hierarchy; all higher-level structures are made of interconnected KO sets. Spatial inputs to a KO set are weighted and summed. The resulting signal is passed through a linear time-invariant system with 2nd-order dynamics. The output of the linear system is shaped by the asymmetric nonlinear function Q(x). Two categories of KO sets (excitatory and inhibitory) are defined by the sign of the nonlinear function (Figure 1-1). There is no coupling among the KO sets when forming a KO network.
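The KO signal path just described (weighted sum, then a 2nd-order linear block, then Q(x)) can be sketched numerically. The constants below are the values often quoted for Freeman's model (a = 0.220/ms, b = 0.720/ms); treat them, and the forward-Euler integration, as illustrative assumptions rather than the dissertation's own implementation:

```python
import numpy as np

# Illustrative time constants; a = 0.220/ms and b = 0.720/ms are the
# values commonly quoted for Freeman's model (assumption here).
a, b = 0.220, 0.720           # 1/ms
dt, T = 0.01, 200.0           # step and duration in ms
n = int(T / dt)

def ko_step_response(u=1.0):
    """Integrate the 2nd-order KO block (1/ab)(x'' + (a+b)x' + ab*x) = u
    with forward Euler; returns the state trajectory x(t)."""
    x, v = 0.0, 0.0           # state and its derivative
    traj = np.empty(n)
    for i in range(n):
        acc = a * b * (u - x) - (a + b) * v   # x'' solved from the ODE
        x += dt * v
        v += dt * acc
        traj[i] = x
    return traj

x = ko_step_response(1.0)
# the linear block has unit DC gain, so x(t) settles to the input amplitude
```

In a full KO set this trajectory would then be passed through Q(x) to produce the pulse-density output.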
A KI set also describes a set of neurons that have a common input source and sign of output. However, there are dense interconnections within the KI set. KO sets with a common sign (either excitatory or inhibitory) are connected through forward lateral feedback to construct each KI set (Figure 1-2). No auto-feedback is allowed in the network.

A KII set is defined to realize the dense functional interconnections between two KI sets. Therefore, there are three possible topologies, formed by two excitatory sets, by two inhibitory sets, or by an excitatory-inhibitory pair, respectively. The last one is specifically denoted as a KII set. It is a coupled oscillator consisting of two KI sets (or four KO sets) of different signs (Figure 1-3). Mitral cells M1 and M2 are excitatory cells, while granule cells G1 and G2 are inhibitory cells. Each KII set has fixed coupling coefficients obtained from biological experiments. A KII set is the basic computational element in Freeman's olfactory system. The measured output from any nonlinear function has two stable states controlled by the external stimulus: the resting state occurs when the external input is in the zero state, and an oscillation occurs when the external input is present.

Figure 1-2: KI network consists of interconnected KOs with common signs

Figure 1-3: Full KII set consists of four KOs

Therefore, a KII set is an input-controlled oscillator. The KII network is built from KII sets that are interconnected through both the excitatory cells (denoted by Mi) and the inhibitory cells (denoted by Gi). This interconnected structure represents a key stage of learning and memorization in the olfactory system. External input patterns P(t) and feedback signals through the Mi cells are mapped into spatially distributed outputs. Excitatory and inhibitory interconnections enable cooperative and competitive behaviors, respectively. The KII network functions as an encoder of input signals or as an auto-associative memory [1,4].

The KIII network (Figure 1-4) embodies the computational model of the olfactory system. It has different layers representing anatomical regions of a mammalian brain. In a KIII network, basic KII sets and a KII network are tightly coupled through dispersive connections (mimicking the different lengths and thicknesses of nerve bundles). Since the intrinsic oscillation frequencies of the KII sets in the different layers are incommensurate, this network of coupled oscillators behaves chaotically.

1.3 Study Objectives

We studied the Freeman model (an oscillatory network) because it is a computational unit that mimics biological information processing with sufficient mathematical detail to enable engineering applications. Freeman wrote extensively about this model, but kept the focus on explaining biology, not on helping build man-made information-processing artifacts. Our goal is to theoretically analyze the model to gain a basic understanding of how this network works; how to control it; and, most importantly, how to use its advantages to define a system of computational primitives. We investigated the basic levels in the hierarchy, emphasizing dynamical behaviors through simulations and structural analysis that explains the experimental results. In the simulations, our main task was to identify the dominant dynamics of interest among the different behaviors at different levels. We conducted detailed and meaningful


Figure 1-4: KIII network represents the computational model of the olfactory system

theoretical analysis for the lower-level structures, from the KO to the KII networks. Based on the theoretical analysis and the computational primitives defined from the categorized dynamics, potential applications were investigated. We also examined hardware design of the model using analog VLSI for real-time implementation.

Chapter 2 begins with the analysis of the Hopf bifurcation in the reduced KII (RKII) set. The Hopf bifurcation characterizes the most fundamental dynamical behavior of the basic building block in the whole hierarchy. Analytical solutions are given to quantify the desirable regions of the control parameters in the model. The results are then used in the synchronization analysis of higher-level structures such as coupled RKII sets, presented in Chapter 3. Chapter 4 discusses the computational potential of RKII networks based on synchronization. The analog VLSI design is presented in Chapter 5, where a new architecture that reduces the complexity of the circuit design is proposed. Finally, Chapter 6 presents the conclusions of this work.

CHAPTER 2
HOPF BIFURCATION IN A REDUCED KII (RKII) SET

A clear understanding of the dynamics of Freeman's model is needed to control the network and to define its characteristics, so that we have insight into how and why to use the model in different applications. Basic observations of the dynamical behaviors of the olfactory bulb can be achieved either experimentally or from simulations of the model. However, it is difficult to understand the network's controllability considering the massive number of free parameters in such a complicated network. With its hierarchical structure, Freeman's model is built from very basic components such as the KII set. As a general approach, the study of the basic component helps us understand both local and global behaviors of the network. This chapter presents a theoretical solution to categorize different behaviors in the lower-level structures of the model.

2.1 Dynamics of an RKII Set

Figure 2-1: RKII set consists of one mitral cell and one granule cell coupled through Kmg (> 0) and Kgm (< 0)








An RKII set is a simplified version of the full KII set [5] shown in Figure 1-3. Unlike a full KII set, it has only a single excitatory (mitral) cell and a single inhibitory (granule) cell (Figure 2-1). The M-cell and G-cell are coupled by two internal coefficients Kmg and Kgm. P(t) is a time-varying external stimulus, but typically it takes only binary values when used in static pattern recognition [4, 5, 9, 10]. In such cases, P(t) is assigned either an off state (zero input) or an on state (positive input). Although an RKII is structurally different from a full KII, a large assembly of RKII sets generates global dynamics similar to those of a full KII network [5]. Moreover, the individual dynamics of the RKII set are qualitatively similar to those of the full KII set, i.e., it is an input-controlled oscillator. For our analytical studies, the RKII has the great advantage of fewer free parameters, which simplifies the analysis. In addition, the most significant property of the olfactory bulb is the interaction between mitral and granule cells (as an excitatory-inhibitory network). The number of cells is normally not balanced and their connections are not uniformly distributed. However, starting with this simplest yet general feedback architecture, higher-level models can always be achieved by well-defined intercellular connections. From a modeling perspective, the most important task here is to mimic the behavior rather than to mathematically match the target biological system. Therefore, we focus our work on the study of an RKII set and expect to use the analysis for understanding higher-level structures in the system.

An RKII set is described by a system of 2nd-order ordinary differential equations (ODEs) as

(1/(ab)) d^2 m(t)/dt^2 + ((a + b)/(ab)) dm(t)/dt + m(t) = Kgm Q(g(t)) + P(t)

(1/(ab)) d^2 g(t)/dt^2 + ((a + b)/(ab)) dg(t)/dt + g(t) = Kmg Q(m(t)),    (2.1)








where m(t) and g(t) are the state variables of the mitral and granule cells, and Kmg > 0 and Kgm < 0 are the internal couplings. The values of a and b determine the cutoff frequencies of the 2nd-order dynamics. They are given experimentally as a = 220 Hz and b = 720 Hz [1]. Q(x) is the nonlinear function that models the spatio-temporal integration of spikes into the mesoscopic waves measured in the olfactory bulb, and it serves as the key function block in constructing this population model [1]. Q(x) is defined by Equation 2.2 as

Q(x) = Qm (1 - e^(-(e^x - 1)/Qm)),    x > x0    (2.2a)

Q(x) = -1,    otherwise,    (2.2b)

where x0 = ln(1 - Qm ln(1 + 1/Qm)). Qm is a free parameter that determines the ratio between the positive and negative saturation values of Q(x). In our study, for simplicity, we used Equation 2.2a to describe both the positive and negative saturations to avoid the discontinuity. This variation should not affect our results.
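For reference, the nonlinearity is straightforward to implement. A minimal Python sketch (the function name Q and the default Qm = 5 are our choices; the piecewise form of Equation 2.2 is used here):

```python
import math

def Q(x, Qm=5.0):
    """Asymmetric sigmoid of Equation 2.2: smooth for x > x0, saturated at -1 below."""
    x0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))
    if x > x0:
        return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm))
    return -1.0  # negative saturation
```

By construction Q(0) = 0, Q approaches Qm for large x, and Q is continuous at x0 since the smooth branch equals -1 there.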


Figure 2-2: Plot of the nonlinear function Q(x) when Qm = 5








The olfactory bulb is a homogeneous excitatory-inhibitory network constructed from basic building blocks (RKII sets). A typical activity of an excitatory-inhibitory pair is the birth of oscillations induced by a stimulus. Actually, in such cases, the desired dynamics of RKII sets are expected to be exactly the same as those of the full KII sets. The system stays at its equilibrium (the origin) when the external stimulus P(t) is zero or negative, which indicates that no sensory inputs are received. The transition into oscillatory states (limit cycles) is induced by a positive and large-enough input. A natural approach to investigate how the system undergoes such structural changes is based on bifurcation theory. Many well-studied categories of bifurcations lead to the birth of limit cycles. The RKII differs from conventional excitatory-inhibitory structures because the oscillator itself is a 4th-order system. In the following, we give an overview of dynamical system analysis and show that the transition into a limit cycle is actually through a Hopf bifurcation. This allows a reduction of the current model, because the Hopf bifurcation resides in a two-dimensional space around the local neighborhood of the fixed point, while the RKII set remains biologically plausible.

2.2 Bifurcations in Nonlinear Systems

A complicated nonlinear system is usually studied by linearizing the system equations at its operating points. In this way, its local behavior can be described by its linear counterpart. Moreover, the dynamics of a linear system are rather simple and depend only on the eigenvalues of the linear equations. A linear time-invariant system is defined as

dx/dt = Ax(t),    (2.3)

where x is in R^N and A is a constant N x N matrix. The behavior of Equation 2.3 is determined by the eigenvalues of A. The solution x(t) = x(0) e^(At) has the following possible dynamics [17]:








" The system has a stable fixed point solution if the real parts of all the

eigenvalues of A are negative.

" The system is unstable if at least one of the eigenvalues of A has a positive

real part.

" If every eigenvalue of A has a real part that is less than or equal to zero, and

if all eigenvectors corresponding to the eigenvalues with zero real part are

independent, than the system is stable; otherwise it is unstable.

If none of the eigenvalues of A has zero real part, the fixed point solution of Equation 2.3 is called a hyperbolic fixed point. Otherwise, it is called a nonhyperbolic fixed point.
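These cases can be checked numerically from the spectrum of A. A small sketch (the function name is ours; the degenerate sub-case of the third bullet is not resolved and is simply reported as nonhyperbolic):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the origin of dx/dt = A x by the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(np.asarray(A, dtype=float)).real
    if np.all(re < -tol):
        return "stable"          # all real parts negative
    if np.any(re > tol):
        return "unstable"        # at least one positive real part
    return "nonhyperbolic"       # some eigenvalue on the imaginary axis

# a center such as [[0, -1], [1, 0]] (eigenvalues +-i) is reported as nonhyperbolic
```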

The analysis of a linear system is rather simple and straightforward since all possible dynamics are clearly determined by the eigenvalues and eigenvectors of A. In a nonlinear system, more complicated behaviors may exist and may even be unsolvable. In most cases, a nonlinear system is linearized around its equilibrium so that an explicit solution and a qualitative analysis can be achieved in the neighborhood of the fixed point [18]. Conditions that guarantee a qualitatively similar phase portrait between a nonlinear system and its linearized version are described by the Hartman-Grobman Theorem [19].

Theorem 1 (Hartman-Grobman Theorem). If a nonlinear system dx/dt = f(x) has a hyperbolic fixed point at x = x0 and a Jacobian A, then this system is locally topologically equivalent to the linearized system (Equation 2.3).

Based on this theorem, if a nonlinear system (Equation 2.4) has no purely imaginary eigenvalues, the orbit of the flow generated by this system can be mapped to that of its linearization as defined in Equation 2.3 by a homeomorphism (a one-to-one correspondence of points in the two flows).

dx/dt = f(x)    (2.4)








Clearly, the stability analysis of a nonlinear system can be greatly simplified while the qualitative property obtained remains the same. However, this holds only when the system is hyperbolic, and not at a bifurcation point [19]. A bifurcation occurs when a system becomes structurally different (nonhyperbolic) with respect to the variation of its parameter set. A parameter-dependent system may present different behaviors in the phase space when the parameter passes through the bifurcation point [19]. While a nonlinear system can be reduced to its linearized counterpart, its behavior (or trajectory) around the bifurcation point cannot simply be described by a single linearized system. In fact, the various bifurcations observable in biological systems are usually interesting but make the analysis difficult. Some basic types of bifurcations are shown in Appendix A. The question is, when a nonlinear system has nonhyperbolic fixed points, how do we simplify it? The most important method is the center manifold theory.

2.3 Center Manifold Theorem

Consider a linear system

dx/dt = Ax,    (2.5)

where A is a constant matrix and x is in R^N. The entire space R^N can be described as the direct sum of three subspaces determined by the eigenvalues of A. The three subspaces are the stable, unstable, and center subspaces.

" if {el, ..., e,} are the generalized eigenvectors of A whose corresponding

eigenvalues all have negative real parts, the stable subspace is defined as


ES = span{el,..., e}.


" if {es+l, ..., e,+,} are the generalized eigenvectors of A whose corresponding

eigenvalues all have positive real parts, the unstable subspace is defined as


E =








if {es+u+i, ..., es+u+c} are the generalized eigenvectors of A whose corresponding eigenvalues are all purely imaginary, the center subspace is defined

as

E span{es+u+l,..., es+u+c}.

The three subspaces are all invariant manifolds, which means that solutions of the linear system with initial conditions in any of the subspaces will stay in that subspace forever.
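As an illustration, the dimensions of the three subspaces can be read directly from the eigenvalues of A (a sketch with our own helper name and tolerance):

```python
import numpy as np

def subspace_dims(A, tol=1e-9):
    """Return (dim E^s, dim E^u, dim E^c) for the linear system dx/dt = A x."""
    re = np.linalg.eigvals(np.asarray(A, dtype=float)).real
    return (int((re < -tol).sum()),        # stable directions
            int((re > tol).sum()),         # unstable directions
            int((np.abs(re) <= tol).sum()))  # center directions

# Example: eigenvalues -1, 2, +-i give one stable, one unstable, two center directions
A = np.array([[-1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, -1.0],
              [0.0, 0.0, 1.0, 0.0]])
```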

Given a system without any solution in the unstable subspace (that is, E^u = 0), it is clear that any orbit or solution will converge to the center manifold. Depending on the dimension of the center manifold, the asymptotic behavior of the bifurcating system under analysis can be approximated by the reduced system locally around the fixed point. This is the basic idea of the center manifold theorem [17].

Theorem 2 (Center Manifold Theorem). The nonlinear system defined by Equation 2.4 can be parameterized by states in the center subspace (i.e., x in E^c) as

C = {x + y | y = φ(x)}.    (2.6)

Here y = φ(x) lies in E^s ⊕ E^u, and C is called the center manifold. φ is a smooth function satisfying φ(0) = 0 and Dφ(0) = 0.

Furthermore, the center manifold is locally invariant with respect to the original system, i.e., every solution of the nonlinear system (Equation 2.4) that originates from the center manifold will stay in the center manifold. The center manifold can also be attractive: when the unstable subspace does not exist (i.e., E^u = 0), all solutions of the original system that start close enough to the center manifold will approach the center manifold.
Next, we characterize the dynamical properties of the RKII set and use the center manifold theorem to reduce its dimension of interesting dynamics.








2.4 Hopf Bifurcation in an RKII Set
By a change of variables, Equation 2.1 can be described as a system of first-order ODEs by

dm(t)/dt = m_v

dm_v(t)/dt = -ab m - (a + b) m_v + ab (Kgm Q(g) + P)

dg(t)/dt = g_v

dg_v(t)/dt = -ab g - (a + b) g_v + ab Kmg Q(m),    (2.7)

where m and g represent the state variables of the mitral and granule cells, respectively, and m_v and g_v are the first-order derivatives of m and g. The internal couplings Kmg and Kgm and the input P(t) are the possible bifurcation parameters in this system. Before looking into the actual causes and properties of the bifurcation, we first discuss some of the RKII's properties.
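For reference, Equation 2.7 can be integrated directly. The sketch below uses our own fixed-step RK4 integrator with the parameter values a = 220, b = 720, and Qm = 5 from the text; the step size and simulation time are our choices:

```python
import math

a, b, Qm = 220.0, 720.0, 5.0
X0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))

def Q(x):
    """Asymmetric sigmoid of Equation 2.2."""
    return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm)) if x > X0 else -1.0

def deriv(s, P, Kmg, Kgm):
    """Right-hand side of the four first-order equations in (2.7)."""
    m, mv, g, gv = s
    return (mv,
            -a * b * m - (a + b) * mv + a * b * (Kgm * Q(g) + P),
            gv,
            -a * b * g - (a + b) * gv + a * b * Kmg * Q(m))

def simulate(P, Kmg, Kgm, s0=(0.1, 0.0, 0.1, 0.0), dt=1e-4, T=1.0):
    """Fixed-step RK4 integration; returns the state (m, m_v, g, g_v) at time T."""
    s = list(s0)
    for _ in range(int(round(T / dt))):
        k1 = deriv(s, P, Kmg, Kgm)
        k2 = deriv([s[i] + 0.5 * dt * k1[i] for i in range(4)], P, Kmg, Kgm)
        k3 = deriv([s[i] + 0.5 * dt * k2[i] for i in range(4)], P, Kmg, Kgm)
        k4 = deriv([s[i] + dt * k3[i] for i in range(4)], P, Kmg, Kgm)
        s = [s[i] + dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
             for i in range(4)]
    return s
```

With zero input, Kmg = 1 and Kgm = -4 decays to the origin, while Kgm = -6 sustains an oscillation, consistent with the bifurcation boundary derived later in this chapter.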

2.4.1 Properties of the Equilibrium

The equilibrium point cannot be solved for analytically, but we will investigate its existence and uniqueness with the help of numerical simulations. Let us assume m* and g* are the fixed-point solutions. Based on Equation 2.7, Equation 2.8 can be used to solve numerically for the actual value of the equilibrium. The equilibrium lies at the intersections of the two nullclines defined by the two equations in Equation 2.8:

m - Kgm Q(g) - P = 0

g - Kmg Q(m) = 0    (2.8)

First, we discuss the existence and uniqueness of the equilibrium of an RKII.







Figure 2-3: Equilibrium is determined by the intersection of two nullclines

Property 1. When Kmg > 0, Kgm < 0, and P >= 0, there is one and only one finite fixed-point solution in an RKII set defined by Equation 2.7. The equilibrium exists only at the origin or in the first quadrant, i.e., m* >= 0 and g* >= 0.

Proof. Recall that the equilibrium {m*, g*} of Equation 2.7 is determined by the intersections of the two nullclines m(g) and g(m) defined by Equation 2.8. The system has a fixed point at the origin when P = 0. When P > 0, m and g have the same sign only in the first quadrant. Hence, the intersection is always in the first quadrant, which means m* > 0 and g* > 0. When P > 0, it is straightforward that

Q'(x) = e^x e^(-(e^x - 1)/Qm) > 0

m'(g) = Kgm Q'(g) < 0

g'(m) = Kmg Q'(m) > 0.    (2.9)








Therefore, m(g) is monotonically decreasing and g(m) is monotonically increasing, so there can be at most one intersection between the two nullclines. The conclusion can also be verified by checking the intersection between the two nullclines in Figure 2-3. □
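Numerically, the unique intersection is straightforward to locate. Since g* = Kmg Q(m*), the equilibrium reduces to a scalar root of f(m) = m - Kgm Q(Kmg Q(m)) - P, and f is strictly increasing (its derivative is 1 + |Kmg Kgm| Q'(m) Q'(g)), so bisection applies. A sketch (the bracket values are our choices, sized for the examples in this chapter):

```python
import math

Qm = 5.0
X0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))

def Q(x):
    """Asymmetric sigmoid of Equation 2.2."""
    return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm)) if x > X0 else -1.0

def equilibrium(P, Kmg, Kgm, lo=-1.0, hi=50.0):
    """Solve m - Kgm*Q(Kmg*Q(m)) - P = 0 by bisection; return (m*, g*)."""
    f = lambda m: m - Kgm * Q(Kmg * Q(m)) - P
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    return m, Kmg * Q(m)
```

For P = 1, Kmg = 1, and Kgm = -4 this returns approximately (0.1776, 0.1906), the first-quadrant equilibrium quoted later in Section 2.6; for P = 0 it returns the origin.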

Some other properties of the fixed-point solution are presented below. They will be of interest when the actual values of the internal coefficients need to be determined.

Property 2. Assume that m* and g* both have finite values. When P is a constant, m* is a decreasing function of both Kmg and |Kgm|; g* is an increasing function of Kmg and a decreasing function of |Kgm|.

Proof. According to Property 1, m* >= 0 and g* >= 0, so we have Q(m*) >= 0 and Q(g*) >= 0. Recall that Q'(x) = e^x e^(-(e^x - 1)/Qm) > 0. The proof is straightforward based on the signs of the first-order derivatives computed from Equation 2.8:

dm*/dKmg = -|Kgm| Q(m*) Q'(g*) / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) < 0

dm*/d|Kgm| = -Q(g*) / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) < 0

dg*/dKmg = Q(m*) / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) > 0

dg*/d|Kgm| = -Kmg Q'(m*) Q(g*) / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) < 0    (2.10)

□



2.4.2 Hopf Bifurcation
Intuitively, the behaviors of an RKII set resemble those of a Hopf bifurcation during the onset of a periodic orbit. Specifically, by smoothly adjusting some of the free parameters, an RKII can bifurcate from a stable fixed point to a stable limit cycle. This matches a supercritical Hopf bifurcation (Appendix A) [17, 20]. Two steps will be taken to analyze this model. First, we will show that a Hopf bifurcation exists. There are three free parameters, namely Kmg, Kgm, and P; we will show that they can be considered together as a single parameter for this particular bifurcation. Second, properties such as the frequency and amplitude of oscillation of the periodic solution can be estimated in the neighborhood of the unique equilibrium based on the center manifold theorem. We expect the center manifold to have a dimension of two.

Conceptually, a Hopf bifurcation exists in an N-dimensional system dx/dt = F(x, μ), where x is in R^N and F is C^r with r >= 3, when

* the Jacobian of F, DF(0, μ), has a pair of complex conjugate eigenvalues λ and its conjugate such that

λ(μ) = α(μ) + iω(μ),    (2.11)

where ω(μ0) = ω0 > 0, α(μ0) = 0, and α'(μ0) ≠ 0; μ0 is called the critical value;

* the remaining N - 2 eigenvalues all have strictly negative real parts.

To examine whether a Hopf bifurcation exists in our system, we first compute the eigenvalues of Equation 2.7. The corresponding Jacobian matrix is

        [ 0               1          0               0        ]
    A = [ -ab          -(a + b)    ab Kgm Q'(g*)     0        ]    (2.12)
        [ 0               0          0               1        ]
        [ ab Kmg Q'(m*)   0         -ab           -(a + b)    ]

where m* and g* are the fixed-point solutions determined by Kmg, Kgm, and P, and Q'(x) is the derivative of Q(x).








To calculate the eigenvalues, we have

             | λ                -1            0              0          |
|λI - A| =   | ab             λ + (a + b)   -ab Kgm Q'(g*)   0          |  = 0.    (2.13)
             | 0                0            λ              -1          |
             | -ab Kmg Q'(m*)   0            ab            λ + (a + b)  |

Thus, we obtain

[λ(λ + a + b) + ab]^2 - (ab)^2 Kmg Kgm Q'(m*) Q'(g*) = 0.    (2.14)

Solving the equation, and noting that Kmg Kgm < 0, we have

λ(λ + a + b) = -ab ± i ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*)).    (2.15)

Therefore, we obtain the eigenvalues as

λ_{1,2,3,4} = [-(a + b) ± sqrt((a - b)^2 ± i 4ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*)))] / 2.    (2.16)

Note that Q'(m*) Q'(g*) > 0.

Clearly, the eigenvalues of A are controlled by all three free parameters in the form of μ = |Kmg Kgm| Q'(m*) Q'(g*). Without loss of generality, we will consider these parameters in terms of the single representation μ. Because μ is not always a monotonic function of any one of the free parameters, each of them may lead to multiple bifurcation points. Later we will show how this affects the system dynamics. For now, we only consider the change of structural stability by μ in a neighborhood of the critical value μ0. From our previous discussion, it is easy to check that one pair of the system's complex eigenvalues always has negative real parts. The conditions for a Hopf bifurcation are all satisfied at the point where a pair of purely imaginary eigenvalues is generated. The critical values are








μ0 = (a + b)^2/(ab) and ω0 = sqrt(ab). In a neighborhood of the equilibrium, the frequency of the periodic solution can thus be approximated from the system's time constants (1/a and 1/b).
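These critical values can be confirmed numerically. In the sketch below (zero input, so Q'(m*) = Q'(g*) = 1 at the origin), the Jacobian of Equation 2.12 is evaluated at the critical product |Kmg Kgm| = (a + b)^2/(ab); the spectrum should then contain a purely imaginary pair ±i sqrt(ab) and a damped pair -(a + b) ± i sqrt(ab):

```python
import math
import numpy as np

a, b = 220.0, 720.0

def jacobian(Kmg, Kgm, Qpm=1.0, Qpg=1.0):
    """Jacobian of Equation 2.7; Qpm and Qpg stand for Q'(m*) and Q'(g*)."""
    return np.array([
        [0.0, 1.0, 0.0, 0.0],
        [-a * b, -(a + b), a * b * Kgm * Qpg, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [a * b * Kmg * Qpm, 0.0, -a * b, -(a + b)],
    ])

mu0 = (a + b) ** 2 / (a * b)          # critical coupling product, ~5.578
lam = np.linalg.eigvals(jacobian(1.0, -mu0))
w0 = math.sqrt(a * b)                 # ~398 rad/s: frequency at the onset
```

At μ0 the largest real part is (numerically) zero and every eigenvalue has imaginary part ±sqrt(ab), confirming ω0.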

To determine the local dynamics of this bifurcation, we need to perform a center manifold reduction to find the normal form in the neighborhood of the equilibrium and of the bifurcation parameter μ0 [17]. The first step is to construct the real Jordan canonical form of A with the transformation matrix T = [Re{v1} Im{v1} Re{v3} Im{v3}], where v1 and v3 are the eigenvectors corresponding to the purely imaginary eigenvalue and the eigenvalue with negative real part, respectively. At the critical value μ0, we have

              [ 0          -sqrt(ab)    0           0          ]
  T^(-1)AT =  [ sqrt(ab)    0           0           0          ]    (2.17)
              [ 0           0        -(a + b)    -sqrt(ab)     ]
              [ 0           0         sqrt(ab)   -(a + b)      ]

The first block represents the invariant center manifold while the second block represents the invariant stable manifold. Denote y = T^(-1) x. The new system dy/dt = G(y) can be used to construct the normal form of the original system. By solving the center manifold, we can obtain information about the frequency, amplitude, and stability of the periodic solution in the neighborhood of the equilibrium. An analytical form of the center manifold is generally not solvable. In the next section, we will directly analyze the bifurcations as a function of each of the free parameters to obtain accurate solutions that can control the dynamics of an RKII. Here, through numerical calculation, important information such as the frequency, amplitude, and stability of the periodic solutions can be estimated from the reduced center manifold [20]. We should note that the analysis is local; thus, the estimation is accurate only in a neighborhood of the equilibrium. From simulations, a reasonable estimation can be obtained in a rather large range beyond the critical value.








In particular, the periods of oscillation in a neighborhood of the equilibrium can be estimated using ω0 as

T = (2π/ω0) [1 + k(α - α0) + O((α - α0)^2)],    (2.18)

where ω0 = sqrt(ab) and α0 = (a + b)^2/(ab). The values of k and α need to be evaluated after the normal form transformation for any specific set of system parameters [20].

Figure 2-4: Frequency of oscillation as a function of coupling strength

Figure 2-4 shows an example in which the frequency of oscillation was recorded as a function of μ. The input is P = 0 and Kmg is fixed at 1 so that μ = |Kgm|. The bifurcation point is at (a + b)^2/(ab), which approximately equals 5.6. The value of |Kgm| is increased beyond the critical value. The frequency estimated from the reduced system approximates the real system very well up to two times the coupling strength at the bifurcation point.
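The roughly 63 Hz value at the start of the curve in Figure 2-4 is ω0 expressed in Hz; a one-line check (assuming, as in the eigenvalue analysis, that a and b act as rates in rad/s):

```python
import math

a, b = 220.0, 720.0
w0 = math.sqrt(a * b)          # imaginary part of the critical eigenvalues, rad/s
f0 = w0 / (2.0 * math.pi)      # ~63.3 Hz, the onset frequency seen in Figure 2-4
```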








2.5 Parameter Design in an RKII Set

Bifurcation occurs when variations of parameters result in structural changes of the system. In other words, the free parameters can effectively control the system behavior. In this section, by studying the real part of the eigenvalues of an RKII set as a function of the coupling coefficients and the external input, we find ways to systematically achieve the desired dynamics of RKII sets.

Recall that the corresponding Jacobian matrix is

        [ 0               1          0               0        ]
        [ -ab          -(a + b)    ab Kgm Q'(g*)     0        ]    (2.19)
        [ 0               0          0               1        ]
        [ ab Kmg Q'(m*)   0         -ab           -(a + b)    ]

where m* and g* are the fixed-point solutions determined by Kmg, Kgm, and P, and Q'(x) is the derivative of Q(x). The eigenvalues are

λ_{1,2,3,4} = [-(a + b) ± sqrt((a - b)^2 ± i 4ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*)))] / 2.    (2.20)

Let

R = Re{ sqrt((a - b)^2 ± i 4ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*))) }

I = Im{ sqrt((a - b)^2 ± i 4ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*))) }.    (2.21)

It is clear that R > 0 and (a + b) > 0. Remember that the system behavior switches when Re{λ} passes zero. In this case, -(a + b) ± R determines the sign of Re{λ}, so we have the following sufficient and necessary conditions for the two distinguished states of the equilibrium that we are interested in.

1. An RKII set (Equation 2.7) has a unique stable equilibrium iff

R_max^2 < (a + b)^2.    (2.22)

2. An RKII set (Equation 2.7) has a unique unstable equilibrium iff

R_max^2 > (a + b)^2.    (2.23)

R_max = (a + b) is the bifurcation point of the system and represents the boundary for the purpose of controlling the behavior of an RKII set.

Let R_r = (a - b)^2 and R_i = 4ab sqrt(|Kmg Kgm| Q'(m*) Q'(g*)). Note that

R_max^2 = Re^2{ sqrt(R_r ± i R_i) }
        = sqrt(R_r^2 + R_i^2) cos^2(α/2)
        = sqrt(R_r^2 + R_i^2) (1 + cos α)/2
        = (sqrt(R_r^2 + R_i^2) + R_r)/2,    (2.24)

where α is the argument of R_r ± i R_i. By solving R_max^2 = (a + b)^2, we obtain the bifurcation point in terms of the coupling coefficients as

|Kmg Kgm| = (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab).    (2.25)

If the external input P is fixed, Equation 2.25 sets the boundary between a stable fixed point and a limit cycle in terms of Kmg and Kgm. Generally, the right side of Equation 2.25 cannot be solved explicitly. However, the boundary is still much easier to find this way, since only the equilibrium needs to be solved; the whole parameter space does not need to be scanned and checked. In the case of zero input (for which the system has a fixed point at the origin), Q'(m*) Q'(g*) equals 1. The values of a and b are predetermined, so we have an explicit upper bound as |Kmg Kgm| = (a + b)^2/(ab) ≈ 5.58, which agrees with the critical value obtained in the previous section.
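The zero-input bound is a one-line computation (the helper name is ours):

```python
def rest_condition(Kmg, Kgm, a=220.0, b=720.0):
    """True iff the zero-input equilibrium at the origin is stable (Equation 2.25 with Q' = 1)."""
    return abs(Kmg * Kgm) < (a + b) ** 2 / (a * b)

bound = (220.0 + 720.0) ** 2 / (220.0 * 720.0)   # ~5.5783
```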








In general, with a fixed external input, |Kmg Kgm| determines the dynamical behavior of an RKII set by

|Kmg Kgm| < (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab)   when R_max^2 < (a + b)^2,    (2.26a)

|Kmg Kgm| > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab)   when R_max^2 > (a + b)^2.    (2.26b)

Therefore, for the purpose of controlling the dynamics (i.e., a fixed-point solution with zero input and a limit cycle with positive input) of the system defined by Equation 2.7, the sufficient and necessary conditions on the coupling coefficients are

* Condition 1: if and only if

|Kmg Kgm| < (a + b)^2/(ab),    (2.27)

an RKII is at rest when there is no input.

* Condition 2: if and only if

|Kmg Kgm| > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab),    (2.28)

an RKII oscillates when the input is positive.

Note that the right side of Equation 2.28 is also a function of Kmg and Kgm, which is nonlinearly dependent on |Kmg Kgm|.

Condition 2 also sets restrictions on the value of Qm through Q'(x). Q'(x) is defined by

Q'(x) = e^x e^(-(e^x - 1)/Qm),    x > x0    (2.29)

Q'(x) = 0,    otherwise,    (2.30)

where x0 = ln(1 - Qm ln(1 + 1/Qm)). Figure 2-5 shows a plot of Q'(x). Q'(x) >= 0 and has a maximum value of Qm e^(-(Qm - 1)/Qm) at x = ln(Qm). To satisfy Condition 2, Q'(x) must be greater than 1 for some x. Solving for the maximum value, we obtain Qm > 1. This is in accordance with the requirement in biological models that the asymmetry of Q(x) is a necessary source of instability in the neural network [4].
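The restriction on Qm can be checked directly from the derivative (a sketch with our own function name; Qm = 5 by default):

```python
import math

def Qprime(x, Qm=5.0):
    """Derivative of Q(x) for x > x0; zero below the cutoff."""
    x0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Qm) if x > x0 else 0.0

# The peak sits at x = ln(Qm) with value Qm * e^(-(Qm - 1)/Qm):
# ~2.247 > 1 for Qm = 5, while for Qm = 1 the peak is exactly 1,
# so Q' exceeds 1 somewhere only when Qm > 1.
```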

















Figure 2-5: Derivative of Q(x) when Qm = 5

2.5.1 Determine Proper Values of Kmg and Kgm

We investigated the conditions on Kmg and Kgm above. In this section, we provide specific procedures to choose suitable Kmg and Kgm that satisfy both conditions.

In the case when |Kmg Kgm| < (a + b)^2/(ab), the boundaries of Kmg and Kgm are well defined by

Kmg > 0

Kgm < 0

|Kmg Kgm| < (a + b)^2/(ab).    (2.31)

The second case is more complicated because the second condition is a nonlinear function of Kmg and Kgm. Although we could not calculate an analytical solution of the region where valid Kmg and Kgm exist, we will show how to approximate








the desirable areas. Again, the equilibrium is determined by

m* - Kgm Q(g*) - P = 0

g* - Kmg Q(m*) = 0,    (2.32)

where m* and g* are the fixed-point solutions. Solving Equation 2.32 and writing Q'(x) in terms of Q(x), the boundary of Condition 2, |Kmg Kgm| = (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab), can also be defined as

|Kmg Kgm| Q'(m*) Q'(g*) = |Kmg Kgm| e^(m* + g*) (1 - g*/(Kmg Qm)) (1 + (m* - P)/(|Kgm| Qm)) = (a + b)^2/(ab),    (2.33)

where m* (>= 0) and g* (>= 0) are also functions of Kmg and Kgm. It is hard to find an explicit solution for the lower boundary of the desired Kmg and Kgm, especially in the form of |Kmg Kgm|. However, we can still find ways to determine useful properties of the desired regions.

Property 3. Given m* > 0 and g* > 0, if

Q'(m*) Q'(g*) = Q'(m*) Q'(Kmg Q(m*)) = 1,    (2.34)

then

Q'(m) Q'(Kmg Q(m)) > 1    (2.35)

for all 0 < m < m*.

Proof. Q'(m) Q'(Kmg Q(m)) has a similar shape to Q'(x), so it has a lower bound of 1 as m decreases. □








Property 4. Suppose that Kmg* and Kgm* is a set of values that satisfies both conditions. If we fix the value of Kmg = Kmg* and increase |Kgm| until |Kmg* Kgm| = (a + b)^2/(ab), then |Kmg* Kgm| > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab) remains automatically satisfied.

Proof. At the point where |Kmg* Kgm*| > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab), we have Q'(m*) Q'(g*) > 1. With increased |Kgm|, both m* and g* decrease (Property 2). According to Property 3, Q'(m*) Q'(g*) > 1 remains true when |Kmg* Kgm| is increased to (a + b)^2/(ab). Thus,

|Kmg* Kgm| = (a + b)^2/(ab) > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab)    (2.36)

is automatically satisfied. □

Because Equation 2.33 is a continuous function, based on the above properties, we can start from |Kmg Kgm| = (a + b)^2/(ab), then fix the value of Kmg and decrease the value of |Kgm| to satisfy both Condition 1 and Condition 2. The first exponential term in Equation 2.33 is monotonically decreasing while the other two terms are monotonically increasing with respect to a reduction of m* and g*. Two cases can occur: |Kmg Kgm| may drop until it reaches the bifurcation point that we started with, or it may reach another bifurcation point before that. Similarly, starting from |Kmg Kgm| = (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab) and increasing only the value of |Kgm| will also give us a range of valid values. However, there is no guarantee that these two areas overlap, so that starting from any valid Kmg and Kgm, all values with a larger |Kgm| would satisfy the conditions. Nevertheless, experiments show that for reasonable and normally used parameters (such as the a and b given previously, and Qm = 5), we can use the above procedures as a rule of thumb to easily determine in which direction we should change invalid values to obtain working ones.
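The procedure above can be condensed into a simple validity test (helper names are ours; Qm = 5, and P = 1 as in the experiments of the next section): a candidate pair passes if it satisfies Condition 1 at zero input and Condition 2 at the working input.

```python
import math

a, b, Qm = 220.0, 720.0, 5.0
X0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))
MU0 = (a + b) ** 2 / (a * b)   # ~5.578

def Q(x):
    return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm)) if x > X0 else -1.0

def Qp(x):
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Qm) if x > X0 else 0.0

def equilibrium(P, Kmg, Kgm, lo=-1.0, hi=60.0):
    """Bisection on the strictly increasing map m -> m - Kgm*Q(Kmg*Q(m)) - P."""
    for _ in range(100):
        m = 0.5 * (lo + hi)
        if m - Kgm * Q(Kmg * Q(m)) - P > 0.0:
            hi = m
        else:
            lo = m
    m = 0.5 * (lo + hi)
    return m, Kmg * Q(m)

def valid(Kmg, Kgm, P=1.0):
    """True iff Condition 1 (rest at P=0) and Condition 2 (oscillation at input P) hold."""
    if abs(Kmg * Kgm) >= MU0:                       # Condition 1 violated
        return False
    m, g = equilibrium(P, Kmg, Kgm)
    return abs(Kmg * Kgm) * Qp(m) * Qp(g) > MU0     # Condition 2
```

For Kmg = 1 and P = 1 this reproduces the window found experimentally in Section 2.6: valid(1, -5) holds, while valid(1, -4) fails Condition 2 and valid(1, -6) fails Condition 1.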







2.5.2 Dynamics Controlled by an External Input

The state of an RKII is controlled by an external stimulus P. Here, P is a time-invariant input with two states (off and on). The off state (P = 0) is guaranteed by Condition 1 to be stable. However, not every positive (on) value can induce the oscillatory state in an RKII set. We will see in the following that P creates oscillatory behavior only in a bounded region.

Property 5. In the system defined by Equation 2.7, the fixed-point solutions m* and g* are both monotonically increasing functions of the external input P.

Proof. The first-order derivatives of m* and g* with respect to P are

dm*/dP = 1 / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) > 0

dg*/dP = Kmg Q'(m*) / (1 + |Kmg Kgm| Q'(m*) Q'(g*)) > 0.    (2.37)

Recall that Kmg > 0 and Q'(x) > 0 for finite x, so both derivatives in Equation 2.37 are positive, and m* and g* are both monotonically increasing functions of the external input P. □

Theorem 3. Given a set of fixed coupling coefficients Kmg and Kgm, the values of the external input P that enable an oscillatory state in an RKII set exist only in a bounded region.

Proof. The interval of x where Q'(x) > 1 (Figure 2-5) is bounded. The conditions obtained previously require Q'(m*(P)) Q'(g*(P)) > 1. Therefore, m* and g* can only stay in a unique finite region. According to Property 5, P must be bounded. □
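Theorem 3 can be illustrated by scanning P with the couplings fixed (Kmg = 1, Kgm = -5, the values used in the next section); Condition 2 then holds only on a bounded window of inputs:

```python
import math

a, b, Qm = 220.0, 720.0, 5.0
X0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))
MU0 = (a + b) ** 2 / (a * b)

def Q(x):
    return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm)) if x > X0 else -1.0

def Qp(x):
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Qm) if x > X0 else 0.0

def oscillates(P, Kmg=1.0, Kgm=-5.0):
    """Condition 2 as a function of the external input P (equilibrium by bisection)."""
    lo, hi = -1.0, 60.0
    for _ in range(100):
        m = 0.5 * (lo + hi)
        if m - Kgm * Q(Kmg * Q(m)) - P > 0.0:
            hi = m
        else:
            lo = m
    m = 0.5 * (lo + hi)
    g = Kmg * Q(m)
    return abs(Kmg * Kgm) * Qp(m) * Qp(g) > MU0
```

The window found this way is roughly (0.42, 26.06), matching the scan reported in Section 2.6.2: P = 1 excites the set, while P = 0.35 and P = 26.1 do not.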
















Figure 2-6: Fixed point solution in an RKII set when P=0. (a) Temporal plot. (b) Phase plot



2.6 Experiments

2.6.1 Control of RKII Dynamics by the Coupling Coefficients


We give two examples in which the external input is fixed to either zero or a positive value. In the case of zero input, an exact boundary is shown.

When P(t) = 0, Equation 2.26a gives a fixed bifurcation boundary that divides the parameter space into areas of oscillation and fixed-point solutions. As given in [1], a = 220 Hz and b = 720 Hz. So when

|Kmg Kgm| < 5.5783,    (2.38)







an RKII set is stable and has a fixed-point solution at (0, 0). When

|Kmg Kgm| > 5.5783,    (2.39)

an RKII set oscillates.

Figure 2-7: Limit cycle solution in an RKII set when P=0. (a) Temporal plot. (b) Phase plot

Note that in this case, the actual values of Kmg and Kgm do not change the qualitative behavior as long as |Kmg Kgm| is unchanged. In the examples, P = 0, Kmg is fixed to one, and Kgm is varied to set different values of |Kmg Kgm|. The initial condition of m and g is set at (0.1, 0.1). In Figure 2-6, when Kmg = 1 and Kgm = -5.5, the output converges to a fixed point at (0, 0) after a





































Figure 2-8: Fixed point solution in an RKII set when P=1. (a) Temporal plot. (b) Phase plot

very long transient time. When we increase |Kgm| to 5.6, the RKII set converges to a stable limit cycle (Figure 2-7).

In the second example, we set the input to P = 1. As discussed in the previous section, there is no explicit expression for a lower boundary in the form of |Kmg Kgm|, but once we determine one set of valid values, all other values with a larger |Kgm| are possible choices to keep the system oscillating with a positive external input. Again, we set Kmg to 1. Figure 2-8 shows the transient output of the M-cell and G-cell with a test value of Kgm = -4. Apparently, the coupling strength is not large enough to sustain an oscillation. In this case, the equilibrium is at (0.1776, 0.1906)

















Figure 2-9: Limit cycle solution in an RKII set when P=1. (a) Temporal plot. (b) Phase plot


and

|Kmg Kgm| = 4 < (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab) ≈ 4.1852,    (2.40)

so Condition 2 is not satisfied. Trying another value, Kgm = -5, the RKII set enters the oscillatory state, as shown in Figure 2-9. Here, the values of m* and g* decrease and the equilibrium is at (0.1502, 0.1595). Condition 2 is satisfied with

|Kmg Kgm| = 5 > (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab) ≈ 4.3762.    (2.41)

After scanning different values of Kgm, we obtained an approximate minimum value of 4.237 as the lower bound. So, in this case, Kmg = 1 and -5.5783 < Kgm <








Figure 2-10: Fixed point solution in an RKII set with positive input. (a) External input P=0.35. (b) External input P=26.1



-4.237 are all possible solutions to achieve the desired dynamical behavior in an RKII set. Note that, with a relatively small |Kgm|, the amplitude of the oscillation will be very small and the transient time to reach a stable limit cycle is rather long. So, normally, |Kgm| should have a relatively large value.

2.6.2 Control of RKII Dynamics by the External Input

To show that P achieves the desired system dynamics only in a bounded region, we fix Kmg = 1 and |Kgm| = 5 and scan different values of P. Approximately, with this set of coupling coefficients, P must be kept within (0.42, 26.06) to effectively excite the RKII set. Figure 2-10 shows the transient signal of the system








Figure 2-11: Scanned parameter space of |Kmg Kgm|

with two different P values (0.35 and 26.1). We see that not all positive inputs can force an RKII into the oscillatory state. In fact, when P = 0.35,

|Kmg Kgm| = 5 < (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab) ≈ 5.0974,    (2.42)

and when P = 26.1,

|Kmg Kgm| = 5 < (1/(Q'(m*) Q'(g*))) (a + b)^2/(ab).    (2.43)


2.7 Discussions

The Hopf bifurcation occurring in an RKII set can be controlled in terms of Kmg, Kgm, and P. The condition on the fixed-point state is entirely determined by the system's time constants and can be solved analytically. Although the second condition, on input-induced oscillation, generally does not have a closed-form solution,







simple rules can be used to adjust the parameters. As long as the first condition is still satisfied, the coupling strength can always be increased to achieve an oscillatory state under stimulation. We also scanned the parameter space of Kmg and Kgm to give a sense of the shape of the boundaries (Figure 2-11). The simulation is performed with P = 1 and Qm = 5. We see in the plot that the lower bound follows the properties we discussed before. If arbitrarily chosen coupling coefficients cannot make the RKII set controllable as expected, we need to increase the value of either Kmg or |Kgm|.














CHAPTER 3
SYNCHRONIZATION OF COUPLED RKII SETS

A biological system, which usually presents very complicated behaviors, is often a highly interconnected network built from dynamically similar sub-systems. The behaviors of such sub-systems can be very simple, such as a stable fixed point or multiple equilibria [21], or complicated, such as oscillations [22] and even chaos [23]. An interesting question is how the complicated behaviors of the overall system can be generated by the interactions among its sub-systems. This leads us to study the behaviors of coupled systems. There is a wide variety of collective behaviors in coupled oscillatory systems, including synchronization, partial synchronization, desynchronization, dephasing, phase trapping, and bursting in neural oscillators with diffusive coupling. A broad range of analyses has been undertaken, from physical systems to biological systems, such as those presented in [24-30].

An important aspect of this research work is to understand the global behavior of Freeman's KII network so that we can achieve an efficient and robust way to use the network. We investigate here the synchronization of the basic building block of the computational model of the olfactory system. Basically, Freeman's model presents chaotic behaviors through the outputs of the olfactory bulb layer, whether in the basal state or under stimulation. While the network as a whole creates signals that are similar to EEG [31,32], the basic building blocks (KO, KI and KII sets) are either single or coupled nonlinear dynamical systems that have simple dynamics, such as fixed-point solutions and limit cycles. Although understanding how the chaotic behavior is generated in the KIII network is a very challenging question, by using synchronization analysis we expect to understand one of the








global behaviors by only using the analysis obtained in previous chapters to avoid complicated analysis when dealing with higher level structures. It can be a helpful first step to start understanding higher level system dynamics.

Recall that an RKII set is an input-controlled oscillator. Given the external input P, a bifurcation boundary is defined by

|Kmg·Kgm| = [1/(Q'(m*)Q'(g*))]·(a+b)²/(ab),    (3.1)

where Q'(x) is the derivative of Q(x) and (m*, g*) is the unique equilibrium of Equation 2.1. In particular, when P = 0,

|Kmg·Kgm| = (a+b)²/(ab).    (3.2)

In this case, when |Kmg·Kgm| < (a+b)²/(ab), Equation 2.1 is stable at the origin, while above this bound it oscillates. Although the dynamical behaviors of an individual RKII set are clearly understood, we still do not know how to properly control coupled sets. Analysis of the synchronization of these coupled oscillatory elements is very important toward our goal of understanding collective behaviors of higher level structures in the Freeman model. Besides, learning and memorization in the olfactory system are still a challenging problem in that the coding of information is still unclear. The OB layer, as a coupled RKII network, is the key element for information encoding and computation. It has been used as a dynamical associative memory, just like other oscillatory networks that are being used for learning and memorization [12-14]. The analysis of synchronization of coupled RKII sets may also give hints on possible implementations of the OB layer (RKII networks) for the purpose of information processing.
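As a quick sanity check, Equation 3.2 ties back to the scanned bound from Chapter 2. Assuming, as before, the time constants a = 220 s⁻¹ and b = 720 s⁻¹ (our assumption, not restated here):

```python
a, b = 220.0, 720.0                 # assumed time constants (s^-1)
bound = (a + b) ** 2 / (a * b)      # Equation 3.2 with P = 0
print(round(bound, 4))              # 5.5783: with Kmg = 1, the set stays at
                                    # rest for P = 0 only if |Kgm| < 5.5783,
                                    # the upper limit found in Chapter 2
```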

Synchronization of coupled chaotic systems has been widely studied [33, 34] due to many challenging applications such as data communications, chemical reactions [35-37] and collective behaviors in biological systems. Pecora and Carroll








[33] studied the synchronization of a subsystem that is a replica of the original system from which the driving input comes. Later, Ding and Ott [38] extended this to cases where the synchronized systems are not necessarily replicas of the original system. Brown and Rulkov [39] studied sufficient conditions for designing coupling strengths of two chaotic systems that guarantee synchronization between them. Although these analyses are for coupled chaotic systems, most of them also apply to systems with simple dynamics such as a limit cycle. This analysis is of particular interest in our study of neural systems. In the following sections, we discuss how to use Brown and Rulkov's method to determine the coupling strength at which coupled RKII sets synchronize.

Section 3.1 presents the methodology of synchronization analysis. Analytical solution of coupling strength to achieve synchronization in coupled RKII sets is discussed in Section 3.2. Simulation results are presented in Section 3.3. The derivation of the analytical solutions of the sufficient conditions is given in Appendix B.

3.1 Sufficient Condition on Synchronization of Coupled Chaotic Systems

The synchronization of two dynamical systems has different definitions depending on the purpose and application. Generalized synchronization (GS) between a driving system and a response system is discussed in [40]. In this chapter, we consider the case that follows the definition of identical synchronization (IS), in which the transformation from the driving trajectory to the response trajectory is the identity function. Different approaches to synchronization analysis of coupled chaotic systems have been discussed by many researchers. Normally, the method is based on Lyapunov functions, for which the Lyapunov exponents are difficult to evaluate. Brown and Rulkov [39] proposed a sufficient condition on the linear stability of trajectories in the synchronization invariant manifolds of two identical chaotic








systems. In their method, the Lyapunov exponents do not need to be explicitly calculated. Also, based on rigorous analysis, they gave an approximate and simple result that is easy to calculate. Although derived for chaotic systems, the results are also applicable to general nonlinear systems.

Given driving and response systems as

dx/dt = F(x; t)    (3.3)

and

dy/dt = F(y; t) + E(x - y),    (3.4)

where x, y ∈ R^N; x represents the driving trajectory and y the response system. E is a system of functions describing the coupling between the driving and response systems. It can be either a linear matrix or a set of nonlinear functions. Defining the deviation of system y from system x as w = y - x, we have

dw/dt = [DF(x; t) - DE(0)]w,    (3.5)


where DF(x; t) is the Jacobian matrix of F, and DE(0) is the Jacobian of the coupling function E at the origin. This equation describes the motion transverse to the synchronization manifold. The synchronization manifold is linearly stable if w(t) converges linearly to zero for all possible driving trajectories x(t) within the chaotic attractor of the driving system. DF(x; t) - DE(0) can be further decomposed into a time-independent part A = ⟨DF⟩ - DE(0) and an explicitly time-dependent part B = DF(x; t) - ⟨DF⟩.
Assume that matrix A has eigenvalues λ1, λ2, ..., λN, ordered by their real parts as R{λ1} ≥ R{λ2} ≥ ... ≥ R{λN}. Denote the matrix of the corresponding eigenvectors as T = [e1, e2, ..., eN]. Brown and Rulkov [39] solved the condition for linear stability of the invariant trajectory in








the synchronization manifold as

-R[λ1] > ⟨‖T⁻¹B(x; t)T‖⟩,    (3.6)

where ⟨·⟩ is the time-average operator. In addition to the rigorous result, a quick and simple rule is given as

R[λ1] < 0,    (3.7)

where R[λ1] is the largest real part of the eigenvalues of A. That is, conditions on synchronized behaviors can be studied by evaluating the eigenvalues of the deviation system. Next, we use the simple rule to study the synchronized behaviors of coupled RKII sets.
3.2 Analytical Solutions of Couplings for Synchronized RKII Sets

Figure 3-1: Two representations of coupled RKII sets. (a) Linearly coupled RKII sets. (b) Under synchronization, coupled RKII sets can also be described as an individual RKII set with auto-feedback

By the original definition, coupling among different KII sets is through the same nonlinear function Q(x) defined in the previous chapter. Linear coupling between oscillatory components in the olfactory system [41] and many other neural system








models has also been widely used. While different couplings all address the interconnection among different sets of neurons, intuitively, linear coupling has much simpler dynamics and is easier to analyze. Here, we shall first consider the case of linear coupling through coefficients Kmm and Kgg and derive an analytic solution. In the next section, we discuss the effects of nonlinear coupling and compare the differences and similarities between the two coupling schemes in terms of theoretical results and simulations.

Assume we have two identical systems defined by Equation 2.1. In linear coupling mode (Figure 3-1(a)), the coupled pair is described by the system of ODEs

dm^(1)/dt   = m_v^(1)
dm_v^(1)/dt = -ab·m^(1) - (a+b)·m_v^(1) + ab(Kgm·Q(g^(1)) + P + Kmm·m^(2))
dg^(1)/dt   = g_v^(1)                                                        (3.8)
dg_v^(1)/dt = -ab·g^(1) - (a+b)·g_v^(1) + ab(Kmg·Q(m^(1)) + Kgg·g^(2))

where Kmm > 0 and Kgg < 0 are the excitatory and inhibitory couplings, respectively. The superscripts (1) and (2) denote the two systems. In this case, the coupling Jacobian DE(0) is

        [ 0       0    0       0 ]
DE(0) = [ abKmm   0    0       0 ]    (3.9)
        [ 0       0    0       0 ]
        [ 0       0    abKgg   0 ]

and

A = ⟨DF⟩ - DE(0)

      [ 0                   1        0                   0      ]
    = [ -ab(1+Kmm)       -(a+b)      abKgm·⟨Q'(g^(1))⟩   0      ]    (3.10)
      [ 0                   0        0                   1      ]
      [ abKmg·⟨Q'(m^(1))⟩   0       -ab(1+Kgg)        -(a+b)    ]








where ⟨Q'(x)⟩ is the measure (time average) of the derivative of Q(x). The outputs are simple limit cycles, so we use time averages to evaluate them.
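The simple rule of Equation 3.7 is easy to test numerically once A is assembled. The sketch below builds the matrix of Equation 3.10 and checks the sign of the largest eigenvalue real part; the time constants (a = 220 s⁻¹, b = 720 s⁻¹) and the time-averaged slopes ⟨Q'⟩ ≈ 1.1 are illustrative assumptions, since in practice those averages are estimated from a simulated limit cycle.

```python
import numpy as np

a, b = 220.0, 720.0          # assumed Freeman time constants (s^-1)
Kmg, Kgm = 1.0, -6.0         # couplings inside each RKII set
q_m, q_g = 1.1, 1.1          # assumed time averages <Q'(m)>, <Q'(g)>

def A_matrix(Kmm, Kgg):
    """Matrix A = <DF> - DE(0) of Equation 3.10 for two linearly coupled
    RKII sets (state order: w_m, w_mv, w_g, w_gv)."""
    return np.array([
        [0.0,                  1.0,       0.0,                 0.0],
        [-a*b*(1.0 + Kmm),  -(a + b),     a*b*Kgm*q_g,         0.0],
        [0.0,                  0.0,       0.0,                 1.0],
        [a*b*Kmg*q_m,          0.0,      -a*b*(1.0 + Kgg),  -(a + b)],
    ])

def max_real_eig(Kmm, Kgg):
    # Simple rule (Eq. 3.7): synchronization is guaranteed only if this is < 0.
    return np.linalg.eigvals(A_matrix(Kmm, Kgg)).real.max()

# With zero coupling the transverse dynamics inherit the instability of a
# single oscillatory RKII set, so the simple rule predicts no synchronization.
print(max_real_eig(0.0, 0.0))   # positive -> synchronization not guaranteed
```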

To find the sufficient condition that guarantees synchronization between two RKII sets, we should solve for the real parts of the eigenvalues of A. A detailed analysis of how to solve for the conditions is presented in Appendix B. The results are given here. The sufficient condition on the coupling coefficients Kmm and Kgg is the union of Equations 3.11-3.16.

When

Kmm + |Kgg| > 2√KB,    (3.11)

either of the following three sets of inequalities (Equations 3.12, 3.13 and 3.14) needs to be satisfied:

(Kmm - KA1/2)(|Kgg| + KA1/2) < KB
Kmm - |Kgg| > KA1/2    (3.12)

-2 < Kmm - |Kgg| < KA1/2
(1 + Kmm)(|Kgg| - 1) < KB    (3.13)

Kmm - |Kgg| > KA1/2
(1 + Kmm)(|Kgg| - 1) < KB
(Kmm - KA1/2)(|Kgg| + KA1/2) > KB    (3.14)

When

0 < Kmm + |Kgg| < 2√KB,    (3.15)

Equation 3.16 needs to be satisfied:

-2 < Kmm - |Kgg|
(Kmm + |Kgg|)² > 4KB - 4KA2 - 2KA2(Kmm - |Kgg|)    (3.16)

The values of KA1, KA2 and KB are calculated as

KA1 = (a - b)²/(ab),
KA2 = (a + b)²/(ab),
KB  = |Kmg·Kgm|·⟨Q'(m^(1))⟩·⟨Q'(g^(1))⟩.    (3.17)

These conditions can be further simplified under certain assumptions. Synchronized RKII sets could have different kinds of behaviors. For our purpose, we want the coupled sets to keep the same dynamics as those of a single RKII set when they are synchronized. Under identical synchronization, the dynamics of two coupled sets can be described as a single system with auto-feedback (Figure 3-1(b)) as

dm/dt   = m_v
dm_v/dt = -ab(1 - Kmm)·m - (a+b)·m_v + ab(Kgm·Q(g) + P)
dg/dt   = g_v                                                        (3.18)
dg_v/dt = -ab(1 - Kgg)·g - (a+b)·g_v + ab·Kmg·Q(m).

Kmm and Kgg enter the couplings from the other system as auto-feedback coefficients. Only when Kmm < 1 does the above equation define the same kind of system as a single RKII. Thus, we limit Kmm to be less than 1. Moreover, in experiments, the absolute value of Kgg is normally much smaller than Kmm. Thus the two coupling coefficients are limited to the unit square. Note that the first set of conditions requires Kmm + |Kgg| > 2√KB. From experiments, KB is always greater than 1. Thus, when Kmm < 1 and |Kgg| < 1, the sufficient condition on synchronization can








be simplified to

(Kmm + |Kgg|)² > 4KB - 2KA2(2 + Kmm - |Kgg|).    (3.19)


3.3 Simulations
We consider synchronization when an uncoupled RKII set is in the oscillatory state. Therefore, if a single RKII oscillates, the synchronized systems should also stay in the oscillatory state. In order to keep the qualitative behavior of coupled RKII sets the same as the behavior of an individual one (oscillation in this case), the Jacobian matrix of Equation 3.18 (i.e., DF(x*) + DE(x*), where x* is the equilibrium) should have at least one eigenvalue with positive real part. In this case, solutions for Kmm and Kgg can be obtained similarly as before, but with a different KB given by

KB* = |Kmg·Kgm|·Q'(m*)·Q'(g*).    (3.20)

Based on Equation 3.1, an oscillatory RKII set must satisfy

KB* = |Kmg·Kgm|·Q'(m*)·Q'(g*) > (a + b)²/(ab) > 1,    (3.21)

where (m*, g*) is the equilibrium. Thus, similar to the discussion in the previous section, we obtain a simplified bifurcation boundary that enables oscillation in synchronized RKII systems as

(Kmm + |Kgg|)² < 4KB* - 4KA2 + 2KA2(Kmm - |Kgg|).    (3.22)

We denote Equation 3.22 as the bifurcation boundary. Note that Equation 3.22 is only valid when synchronization is achieved. If the synchronization condition is satisfied before the bifurcation happens, we expect the coupled sets to converge to a fixed point. The bifurcation boundary is more accurate because it uses the exact values of the equilibrium instead of the estimates over limit cycles used in the conditions for




















Figure 3-2: Parameter space defined by Kmm and Kgg. The plot shows combined results of synchronization and bifurcation boundaries when input P = 0, Kmg = 1 and Kgm = -6.

synchronization. The two boundaries are actually two parabolas (Figure 3-2) that divide the parameter space into three possible regions, where two coupled RKII sets behave differently: synchronized oscillation, desynchronized oscillation, and fixed point. Figure 3-3 gives a detailed view of the actual boundaries inside the unit square. The lower bound of the fixed-point region is defined by the synchronization conditions. However, we used the time average over limit cycles to estimate KB, but in the fixed-point region the trajectory is not oscillatory anymore. Therefore, we can replace KB in Equation 3.19 with KB* to fine-tune the lower bound of the fixed-point region. The section of the bifurcation boundary that lies in the desynchronized area is removed. There are three other parameters (Kmg, Kgm and P) that can change the position and shape of the boundaries. In most cases, these three parameters are predetermined, so we can decide the boundaries


















Figure 3-3: Values of |Kgg| and Kmm are limited by 1. Three regions are determined where the coupled system shows synchronized oscillation, desynchronized oscillation, or fixed-point solutions.

accordingly. In the case when their values need to be changed, it is still convenient to know how the boundaries move. In fact, only KB (or KB*) will be affected by the three parameters. Increasing or decreasing KB (or KB*) will shift the parabolas along the direction (1, -1) (Figure 3-2).
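The region partition above can be made concrete by classifying a point (Kmm, |Kgg|) with Equation 3.19 for the synchronization boundary and Equation 3.22 for the bifurcation boundary. In the sketch below, the numeric values of KB and KB* are illustrative assumptions; in practice they come from time averages over a simulated RKII trajectory and from the equilibrium, respectively.

```python
# Illustrative constants; KB and KB_star must be estimated from simulation.
a, b = 220.0, 720.0                 # assumed time constants (s^-1)
KA2 = (a + b) ** 2 / (a * b)        # ~5.578
KB, KB_star = 7.26, 7.62            # assumed trajectory/equilibrium estimates

def classify(Kmm, Kgg_abs):
    """Place (Kmm, |Kgg|) into one of the three regions of Figure 3-2."""
    s, d = Kmm + Kgg_abs, Kmm - Kgg_abs
    synchronized = s ** 2 > 4 * KB - 2 * KA2 * (2 + d)          # Eq. 3.19
    if not synchronized:
        return "desynchronized oscillation"
    oscillatory = s ** 2 < 4 * KB_star - 4 * KA2 + 2 * KA2 * d  # Eq. 3.22
    return "synchronized oscillation" if oscillatory else "fixed point"

print(classify(1.0, 0.0))   # synchronized oscillation
print(classify(0.9, 0.9))   # desynchronized oscillation
```

Note how a larger excitatory-inhibitory difference Kmm − |Kgg| makes both conditions easier to satisfy, consistent with the shift of the parabolas along (1, -1).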

We should also note that Equation 3.10 defines a mutually coupled system. The driving trajectory is not determined until the system reaches synchronization. This affects the position of the synchronization boundary because the value of KB changes with different values of the coupling strengths (hence, the synchronization boundary may not be strictly a parabola). If KB does not have large variations, it can simply be estimated from a single oscillatory RKII set without assuming any coupling strength in advance. To achieve a more accurate result, for every fixed Kgg (Kmm), different values of Kmm (Kgg) can be evaluated by








computing corresponding values of KB using the single RKII model with auto-feedback (Equation 3.18). This seems to lose the advantage of a simple analytic solution. However, a single RKII set usually reaches a steady state suitable for a good estimate of KB much faster than two coupled RKII sets reach synchronized oscillation. A single RKII set is also simulated more efficiently because it has fewer dimensions. It should also be pointed out that, by adaptively evaluating the coupling coefficients, only the synchronization boundary needs to be calculated to decide the sufficient condition on synchronization. This is because, when computing the time averages, we assume that the RKII set has an oscillatory state, which automatically satisfies the bifurcation boundary. Moreover, when the external input is zero, the other two boundaries can still be easily calculated because the driving trajectory is fixed at the origin. Apparently, the bifurcation boundary should always lie below or overlap the accurately calculated synchronization boundary.

Next, two simulations are presented, where we chose Kmg = 1, Kgm = -6 and P = 0 as an example. First, we consider two cases with linear coupling where |Kgg| = 0.8 and |Kgg| = 0.4, respectively. For each example, we show that only when both synchronization and bifurcation conditions are satisfied do the coupled systems behave as expected. Second, the effect of nonlinear coupling is discussed. Numerically simulated boundaries and calculated boundaries are compared for both linear and nonlinear coupling. We will also show that, in addition to simple oscillations, quasiperiodic motion is observed in a small region around the synchronization boundary.

3.3.1 Example 1: |Kgg| = 0.8

When increasing Kmm from a very small value, the system first meets the boundary that generates synchronized behavior. However, in the synchronized system, both of the reduced KII sets converge to a fixed point instead of







































Figure 3-4: Simulations of coupled RKII sets when K99=-0.8. (a) Kmm=0.6. (b) Kmm=0.96. (c) Kmm=0.73. (d) Kmm=0.74. (e) Kmm=0.91. (f) Kmm = 0.93










oscillating, because Kmm has not yet passed the bifurcation point. Theoretically, the bifurcation point is located at Kmm = 0.92 (Figure 3-3). Figure 3-4 shows the steady-state behavior of the coupled systems at Kmm = 0.6, 0.96, 0.73, 0.74, 0.91 and 0.93, respectively. When Kgg = -0.8 and Kmm = 0.6, the coupled RKII sets are desynchronized. When Kmm = 0.96, the coupled RKII sets are synchronized. When Kmm = 0.73, desynchronization occurs below the synchronization boundary. When Kmm = 0.74, the coupled system converges to a fixed point above the synchronization boundary. When Kmm = 0.91, the coupled system converges to a fixed point below the bifurcation boundary. When Kmm = 0.93, the coupled system synchronizes and oscillates above both the synchronization and bifurcation boundaries. The results are as expected. We see that the theoretically obtained bifurcation boundary predicts the experimental results very well. The solution serves well as a sufficient condition for synchronized dynamics. Generally, the bifurcation boundary and the lower bound of the fixed-point region are more accurate because they are calculated exactly.

3.3.2 Example 2: |Kgg| = 0.4

In this case, the bifurcation condition is satisfied first when increasing Kmm from a small value (Figure 3-3). However, without satisfying the synchronization conditions, the bifurcation point is invalid. In this case, synchronized behavior should be governed solely by the conditions on synchronization. Figure 3-5 shows the difference between the two systems at Kmm = 0.33 and Kmm = 0.4. Synchronized behavior is observed when Kmm passes 0.37, while theoretically the sufficient condition is about 0.42. Again, although there are errors between the calculated and actual values of the boundary, the analytical solution is still a sufficient condition on synchronization, as expected.

Figure 3-6 illustrates the differences between the analytical solution and the numerical simulation. We see that there are differences between the two synchronization










































Figure 3-5: Simulations of coupled RKII sets when Kgg = -0.4. The coupled system desynchronizes and synchronizes at (a) Kmm = 0.33 and (b) Kmm = 0.4





















Figure 3-6: Comparison between analytic solution and numerical simulation

boundaries, but the analytical solution guarantees synchronized oscillations. As discussed before, the analytical solution gives almost the same values for the other two boundaries as those in the numerical simulation.

3.3.3 Nonlinear Coupling

In the case of nonlinear coupling, Q(m) and Q(g) are coupled through Kmm and Kgg. Therefore, the weights Kmm and Kgg can be considered time-varying according to a specific trajectory. Using the simple criterion similarly results in estimating the time averages of Kmm·Q'(m) and Kgg·Q'(g). Therefore, denoting Kmm* = Kmm·⟨Q'(m)⟩ and Kgg* = Kgg·⟨Q'(g)⟩, the three boundaries (synchronization, bifurcation, and fixed-point) can be rewritten, respectively, as


(Kmm* + |Kgg*|)² > 4KB - 4KA2 - 2KA2(Kmm* - |Kgg*|),    (3.23)

(Kmm* + |Kgg*|)² < 4KB* - 4KA2 + 2KA2(Kmm* - |Kgg*|),    (3.24)

and

(Kmm* + |Kgg*|)² > 4KB* - 4KA2 - 2KA2(Kmm* - |Kgg*|).    (3.25)

When the external input is zero, the bifurcation boundary and the lower bound of the fixed-point region should be exactly the same as those in the case of linear coupling. However, simulations show that the simple criterion (Equation 3.7) fails to compute the synchronization boundary when the input is zero, and all the boundaries when the input is positive (where, in a nonlinearly coupled system, multiple equilibria may exist). Table 3-1 compares the simulated results with the analytic solutions when Kmg = 1, Kgm = -6 and P = 0. The data shown are the boundaries (synchronization and bifurcation boundaries) for synchronized behaviors. When Kgg = -0.7 (-0.8), the solution for the lower bound of the fixed-point region for both linear and nonlinear coupling is Kmm = 0.681 (0.739), which matches the numerical simulation. In summary, for the case of linear coupling, the boundary for synchronized oscillations can be calculated using the synchronization condition, where, with a fixed Kgg, Kmm can be evaluated using the time average estimated from a single RKII with auto-feedback. If the external input is zero, part of the synchronization boundary can be easily calculated using the bifurcation condition. While the simple criterion generally fails in the case of nonlinear coupling, it still applies when the input is zero. In such cases, the three different dynamics of the coupled system can still be accurately determined in part of the parameter space, similar to the case of linear coupling.
Table 3-1: Comparison between analytic solution and numerical simulation

                          |Kgg|:  0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8
Linear     Kmm (Simulation):      0.053  0.156  0.261  0.369  0.479  0.593  0.734  0.912
           Kmm (Analytic):        0.149  0.238  0.328  0.419  0.510  0.604  0.734  0.912
Nonlinear  Kmm (Simulation):      0.74   0.94   1.08   1.19   1.27   1.27   0.734  0.912
           Kmm (Analytic):        0.18   0.29   0.38   0.47   0.55   0.62   0.734  0.912








3.3.4 Quasiperiodic Motion in Coupled RKII Sets

Through simulations, it is found that the coupled system shows dynamic behaviors consistent with the parameter partition shown in Figure 3-2, except that quasiperiodic motion occurs in a narrow region below the actual synchronization boundary. In simulations of a linearly coupled system, the second frequency is normally slow. So, for illustration purposes, a nonlinearly coupled system is used, where Kmg = 3, Kgm = -3, Kmm = -0.4, Kgg = 0.1885, and P = 0. Figure 3-7 shows the time-domain signal, the phase plot, and the power spectral density, respectively. In fact, the transition between synchronized and desynchronized oscillations does




Figure 3-7: Example of quasiperiodic motion in two coupled RKII sets. (a) Temporal signal. (b) Phase plot. (c) Power spectrum


not always occur with a sharp boundary. Figure 3-8 shows an example of the region where quasiperiodic motion occurs (between the actual synchronization boundary and the lower bound of quasiperiodic motion). Note that this behavior is









only observed under the actual synchronization boundary. An effort was made through numerical computation to find out whether chaos may also occur for linearly coupled RKII sets in the region where quasiperiodic oscillations are observed. Kgg and Kmm were scanned inside the region where quasiperiodic motion occurs by fixing Kgg at different values and varying Kmm in steps of 10⁻⁴. The power spectrum was examined, and no chaotic motion was observed.
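A simple way to flag quasiperiodic motion in such scans is to count well-separated peaks in the power spectrum: a limit cycle concentrates power at one fundamental, while quasiperiodic motion shows at least two incommensurate frequencies. The sketch below is a generic illustration on synthetic signals, not the actual RKII trajectories; the sampling rate, duration, and the 0.5 relative threshold are arbitrary choices.

```python
import numpy as np
from scipy.signal import find_peaks

def count_dominant_peaks(x, rel_height=0.5):
    """Count spectral peaks whose magnitude exceeds rel_height times the
    largest peak; a count >= 2 suggests quasiperiodic (incommensurate) motion."""
    mag = np.abs(np.fft.rfft(x))
    peaks, _ = find_peaks(mag, height=rel_height * mag.max())
    return len(peaks)

t = np.arange(0.0, 50.0, 0.01)                  # 50 s sampled at 100 Hz
periodic = np.sin(2 * np.pi * 3.0 * t)          # single frequency
quasi = periodic + 0.8 * np.sin(2 * np.pi * 3.0 * np.sqrt(2.0) * t)

print(count_dominant_peaks(periodic))   # 1
print(count_dominant_peaks(quasi))      # 2
```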





Figure 3-8: Simulated regions of quasiperiodic oscillations


3.4 Discussions

Many biological systems are modeled as interconnected networks of neural

oscillators. Synchronized oscillations are believed to play a key role in such models, especially for the purpose of information processing. In applications such as associative memory, coupling strengths between different RKIIs are set to achieve








desired spatio-temporal responses, where different channels are clustered based on energy and/or phase information. Usually, these weights are set empirically or through Hebbian-like adaptive learning. In adaptive learning, proper selection of initial conditions and normalization of weights are normally needed to avoid instability in training. To better understand the dynamics of this model and to help with network design in applications, we studied the synchronization of two coupled RKII sets based on a simple criterion, in order to gain insight into how to properly select the coupling coefficients.

Linearly coupled RKIIs can be solved analytically with respect to Kmm and Kgg. In general, with fixed Kgg, the sufficient condition for synchronization can be solved by calculating Kmm based on the oscillatory trajectory estimated from a single RKII with auto-feedback. This is a simple procedure to determine the regions of different dynamics. It effectively avoids simulating the coupled system through the whole parameter space. In addition, when external input is absent, part of the synchronization boundary can be easily obtained due to the stable fixed point at the origin. The criterion does not apply to general nonlinearly coupled systems. However, when the external input is zero, part of the parameter space can be solved similarly to the linear case. A more sophisticated method is needed to fully solve the problem of nonlinear coupling. However, the complexity of finding the boundaries should always be weighed against the effort of trial-and-error experiments. The analytic solutions and observations were validated by numerical simulations. It was further observed that, near the synchronization boundaries, the dynamics of the coupled oscillators may become quasiperiodic.














CHAPTER 4
COMPUTATION BASED ON SYNCHRONIZATION

In this chapter, we propose to use synchronization in the output space of the Freeman model as the readout. In many other oscillatory networks [13,14, 42], synchronization among groups of oscillators has been used in various applications such as image segmentation, audio processing, and pattern recognition. More interestingly, phase coding has also been studied for Freeman's model, which shows advantages over conventional pattern recognition techniques based on amplitude [43]. In this chapter, more specifically, we will not only use synchronization to construct the output space but also use it directly in designing the structure of the network based on the desired functionalities we want to achieve. In this complicated and highly interconnected network, in response to input stimulus, the dynamics collapses to a lower dimensional space, and different groups of oscillators can be categorized by their synchronized behaviors.

Based on the discussions in previous chapters on analyzing dynamical behaviors of the basic components in the Freeman model, we will show how to design critical parameters in the network to obtain desired output responses, especially synchronized and desynchronized behaviors. Two applications will be presented. The first application provides a different view of computation in Freeman's model. With fixed coupling coefficients, a combination of external inputs will lead the coupled oscillators in a reduced KII network to two different states as synchronization and desynchronization. It turns out that with only two oscillators, by determining the proper values of the coupling coefficients, we can build functional blocks to implement two-input boolean logic gates including AND, OR, XOR, NAND, NOR, XNOR, and NOT. In the second application, an associative memory is constructed








by using synchronization among output channels in a reduced KII network as the readout. The network combines both amplitude and phase relations among RKIIs for information retrieval.

4.1 Logic Computation

From previous discussions, we know that there are two basic behaviors for two coupled RKII sets. If we denote synchronization as state 1 and desynchronization as state 0, the coupled system is a logic function when defined as


f(I) = S(m^(1), m^(2), θ),    (4.1)

where I ∈ {0, 1} × {0, 1} represents the input space and S ∈ {0, 1} is any function that determines whether or not the outputs of the two systems, m^(1) and m^(2), are synchronized. θ is the threshold used in S. Note that I is not limited to the binary values 0 and 1. Basically, any two values that represent higher- and lower-level inputs can form the input space. The function S is just a symbolic system where one can assign either 0 or 1 to either of the two states. This means that, if f can implement one set of the basic logic computations (for example, AND, OR, XOR), it should also be able to realize their complements with a different assignment of states in the actual measure of synchronization. The NOT function can also be implemented in many ways. One example is to short the inputs of a NAND gate or to fix one of its inputs to 1. A two-input logic gate has a total of 16 boolean functions. Here, we only consider the case where the coupled oscillators have a symmetric structure, so they cannot distinguish between the inputs (0,1) and (1,0). However, this may be solved by using weighted inputs. The weights then become parameters that need to be designed in addition to Kmm and Kgg. Considering only the symmetric structure, without combining different gates, the coupled oscillators can implement 8 of the 16 logic functions, including all the boolean primitives. They cover most of the universal sets (for








example, {AND, OR, NOT} and {NAND}) that can express any logic function. Note that in this application, an analytic solution for the couplings between the two RKII sets is not necessary; any parameter values can be used as long as synchronized and desynchronized behaviors can be achieved.
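The state-relabeling argument above can be sketched in a few lines of code. This is a conceptual sketch only: the sync outcomes below are the XNOR-configuration results quoted later in this section, not simulated dynamics.

```python
# Hypothetical sync outcomes of the coupled RKII pair for each input pair,
# matching the XNOR configuration described later in this section:
# [0 0] and [1 1] synchronize, [0 1] and [1 0] desynchronize.
sync_outcome = {(0, 0): True, (0, 1): False, (1, 0): False, (1, 1): True}

def gate(inputs, label_for_sync):
    """Map the physical sync/desync outcome to a logic value under a
    chosen labeling (S assigns label_for_sync to synchronization)."""
    s = sync_outcome[inputs]
    return label_for_sync if s else 1 - label_for_sync

# Labeling synchronization as 1 yields XNOR; the complementary labeling
# of the same physical outcomes yields XOR.
xnor = {i: gate(i, 1) for i in sync_outcome}
xor = {i: gate(i, 0) for i in sync_outcome}
```

The same relabeling applied to the AND and NOR configurations of Table 4-1 yields NAND and OR, which is how all six basic gates follow from three coupling settings.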

Synchronization can be measured using either correlations between the outputs of the two sets or direct detection of the phase difference. Equation 4.2 computes the correlation coefficient from time averages of the two state variables.

C(m1, m2) = (<m1 m2> − <m1><m2>) / sqrt((<m1^2> − <m1>^2)(<m2^2> − <m2>^2))    (4.2)


Notice that when the two sets are synchronized, the phase plot is a straight line. When they are desynchronized, there may be either a limit cycle or quasiperiodic motion in the phase plane. Another approach is to plot the bifurcation diagram, in which multiple periods can be detected in the case of desynchronization.
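As a minimal sketch of the synchronization measure (illustrative signals, not the model's actual outputs), Eq. 4.2 and a thresholded sync decision can be written as:

```python
import numpy as np

def sync_correlation(m1, m2):
    """Correlation coefficient of Eq. 4.2 computed from time averages."""
    num = np.mean(m1 * m2) - np.mean(m1) * np.mean(m2)
    den = np.sqrt((np.mean(m1**2) - np.mean(m1)**2)
                  * (np.mean(m2**2) - np.mean(m2)**2))
    return num / den

t = np.linspace(0, 1, 2000)
a = np.sin(2 * np.pi * 40 * t)           # reference oscillation
b = np.sin(2 * np.pi * 40 * t)           # synchronized copy
c = np.sin(2 * np.pi * 47 * t + 0.8)     # desynchronized channel

# threshold theta plays the role of the threshold in S of Eq. 4.1
is_sync = lambda x, y, theta=0.9: sync_correlation(x, y) > theta
```

The threshold value 0.9 here is illustrative; the associative memory in Section 4.2 uses 0.8.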

In simulations, we use nonlinearly coupled RKII sets and fix Kmg = |Kgm| = 3, i.e., Kmg = 3 and Kgm = −3. The input space is formed by the binary values {0, 1}. Different values of Kmm and Kgg are searched to see whether all six basic logic gates can be realized. We relax the limitations on the two parameters and allow them to have absolute values greater than one. Given the simple structure of just two coupled oscillators, the results are very promising. Using a synchronization criterion that assigns state 1 to synchronization and state 0 to desynchronization, the coupled RKII sets can implement the AND, NOR, and XNOR functions. By instead assigning 1 to desynchronization and 0 to synchronization, we obtain the complementary functions, i.e., all six logic functions. Figures 4-1 and 4-2 show the implemented XNOR function in the time domain and phase plane, respectively. Transient behaviors are discarded to guarantee a steady state of the outputs. The outputs of the nonlinear function, which saturate at −1 and 5, are plotted. When the inputs are [1 1] and [0 0], the two reduced KII sets are perfectly synchronized. When










[Figure 4-1 panels: amplitude vs. time (s) for inputs [0 0], [0 1], [1 0], and [1 1]]


Figure 4-1: Time domain response when implementing coupled RKII sets as an XNOR gate. State outputs are shaped by the nonlinear function. The solid line shows Q(m^(1)); the dashed line shows Q(m^(2)).


inputs are [1 0] and [0 1], the two oscillators are desynchronized and even exhibit chaotic behaviors. Note that, in this application, zero inputs to the RKII sets still induce limit cycles. In fact, when both channels are synchronized, there are no chaotic behaviors locally or globally, no matter what the inputs are. This differs from the conventional Freeman model, where chaotic behaviors are always present, because in the logic computation system the output space is specifically designed to be either synchronized or desynchronized to construct binary functions. As discussed previously, when two channels are synchronized, we expect the system to behave as a single RKII set, which only has the limit cycle attractor. Table 4-1 gives the parameter values of two coupled RKIIs that build the logic gates. The second column of the table indicates the label (0 or 1) assigned to synchronized behavior in S.









[Figure 4-2 panels: trajectories in the M1-M2 plane for inputs [0 0], [0 1], [1 0], and [1 1]]

Figure 4-2: Phase plot in the state space of M1 and M2 when implementing coupled RKII sets as an XNOR gate.

Table 4-1: Set of parameters that realize the complete boolean primitives

Function      Sync.   Kmm    Kgg    Kmg   Kgm
AND (NAND)    1 (0)   1.75   -2.0   3     -3
NOR (OR)      1 (0)   1.2    -0.4   3     -3
XNOR (XOR)    1 (0)   1.5    -0.4   3     -3


One important question is how to extend this result to a network of oscillators. There are basically two structures. The first is to implement the logic gates just as in digital design: outputs from one set of coupled oscillators become the inputs of the next, and combinations of different gates can implement more complicated logic functions. The other is to fully connect N RKII sets. In this network, if we consider the synchronization of every pair out of the N oscillators, there will be a total of N(N − 1)/2 status readouts. Therefore, the size of the binary output space could be as large as 2^(N(N−1)/2) (although in practice, it may not










be possible to design a network that can access all the possible outputs). Although this shows another aspect of computational implementation using Freeman's model, we should also note that this form of computation still falls into that of a conventional system due to the much simplified binary state space.
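Such a pairwise readout can be sketched as follows (hypothetical oscillator outputs; the correlation measure of Eq. 4.2 is reused as the synchronization detector):

```python
import numpy as np
from itertools import combinations

def pairwise_readout(outputs, theta=0.9):
    """One sync/desync bit per unordered oscillator pair: N(N-1)/2 bits."""
    def corr(x, y):
        # correlation coefficient of Eq. 4.2
        num = np.mean(x * y) - np.mean(x) * np.mean(y)
        den = np.sqrt((np.mean(x**2) - np.mean(x)**2)
                      * (np.mean(y**2) - np.mean(y)**2))
        return num / den
    n = len(outputs)
    return [int(corr(outputs[i], outputs[j]) > theta)
            for i, j in combinations(range(n), 2)]

t = np.linspace(0, 1, 2000)
outs = [np.sin(2 * np.pi * 40 * t),      # oscillator 0
        np.sin(2 * np.pi * 40 * t),      # oscillator 1, synced with 0
        np.sin(2 * np.pi * 53 * t)]      # oscillator 2, desynchronized
bits = pairwise_readout(outs)            # 3 oscillators -> 3 readout bits
```

For N = 3 the readout is 3 bits; in general the bit vector has length N(N − 1)/2, which is where the 2^(N(N−1)/2) bound on the output space comes from.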

4.2 Associative Memory Using an RKII Network

An auto-associative memory is a content-addressable memory that stores a set of patterns during learning and recalls the learned pattern that is most similar to the present input. There are basically two types of associative memories: static feed-forward structures and the recurrent dynamic structures proposed by [11]. Hopfield networks have stable point attractors. Here we will show that the reduced KII network can also be used as a dynamic associative memory, but one that differs from the Hopfield type because it has non-convergent dynamics (limit cycles or chaos).

Let us consider the case where input patterns are static and only have binary values (positive and zero). The channels (RKIIs) that receive positive (zero) stimulus will be called active (inactive) channels. Ideally, oscillatory states of m's that are receiving the same inputs should be grouped together. That is, we expect zero phase lag among active channels. Under a strong constraint, we should expect only two kinds of different oscillations that are corresponding to high and low input values, respectively. However, in real applications, noise in the background or pattern distortions would make identical synchronization unrealistic. Here, we will still require active channels to be synchronized in the sense that phase differences between active channels only deviate from one another in a small neighborhood around zero. Behaviors of inactive channels and those caused by errors in the input space will be relaxed. Therefore, the only criterion for recognizing a pattern is to detect the active group that has strong synchronization. All other output states that are not strongly correlated to this group will be classified as null-state.








Many adaptive methods have been investigated for training the interconnection weights (Kmm and Kgg) in Freeman's model. Most of them are based on Hebbian learning [4, 44, 45], which can also be simplified to assign binary values, strong or weak, to the coupling coefficients among active or inactive channels. From Chapter 3, we know exactly how to control the behaviors of coupled RKII sets through different values of the connections. Thus, coupling strengths will be determined by targeting specific desired behaviors. We consider four scenarios in the input space that specify the values of the couplings between different channels.

To construct such a network as an associative memory, based on the excitation coming from input patterns, the following cases determine the coupling strength among different channels:

Active channels: If two channels are both excited by at least one of the stored patterns, then, when positive stimulus is applied, we expect the outputs from both channels to be synchronized. When receiving incomplete patterns, they should also be synchronized.

Inactive channels: If two channels are never excited by any of the stored patterns, their output states should be synchronized with weaker oscillations, to be distinguished from the active channels. At the same time, if one or both of them are excited, they should be able to desynchronize from one another so that they are not considered active channels.

Active and inactive channels: Without overlap between different channels, two channels that belong to the active and inactive groups, respectively, within the same pattern should always be desynchronized.

Overlapped channels: If two patterns share the same channel, connections between this mutual channel and other non-overlapped channels will conflict across patterns. That is, for one pattern the two channels are both active, while for another pattern the situation becomes case three. In such situations, intuitively, we will reduce the inhibitory connections between this mutual channel and other possible active channels. Therefore, the activation of the mutual channel will depend heavily on the activations of other active channels in the incoming pattern, but will not be much affected by active channels from another pattern that is not currently presented (inactive for the incoming pattern).

Based on the above conceptual design, we use the following steps to set the coupling strengths for an RKII network. The network is initialized assuming case 2. For every pattern, the coupling strength between two active channels is set according to case 1. Couplings between active and inactive channels are set according to case 3, except for the overlapped channels, which are set according to case 4. The four cases are similar to the process of Hebbian learning. However, although the couplings are designed according to the properties of input-output correlations in both cases, the crucial difference lies in the structure of the readouts. The outer-product rule or adaptive algorithms such as Hebbian learning produce similarities among mutual channels in terms of energy, but they cannot guarantee a solution that specifies correlations, such as phase information, among the output channels. Moreover, the values of the couplings are calculated directly in our case instead of through learning, where issues such as convergence rate and stability may be a concern.

A 64-channel RKII network is implemented as an associative memory that is trained to memorize and recall 8 × 8 patterns of the digits "0", "1", and "2". To differentiate different groups of synchronized channels and their levels of activity, we use the following quantitative measure. First, cross correlations C_ij between the ith and jth channels are calculated. Highly correlated channels are grouped together. Denote the different groups as G_i, i ≤ N, where N is the total number of channels. The evaluation value for each group is simply the total amount of input received by all channels in the group (θA = Σ_{k∈G_i} A_k,









where A_k is the input value received by the kth channel). The same evaluation value is then assigned to all channels in the same group. Note that the network is designed such that only stored patterns will be recalled as synchronized output channels. Noise arrives with higher input amplitude instead of zero, but the output channels related to noise or false input will mostly not be synchronized and will not be grouped together. Of course, we assume that noise or false input has an amplitude smaller than, or at most comparable with, the ideal input values. In most cases, this is a simple but very effective criterion.
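The grouping and readout step can be sketched as follows (a simplified greedy grouping with illustrative signals; the 64-channel simulation itself is not reproduced here):

```python
import numpy as np

def group_channels(outputs, threshold=0.8):
    """Greedy grouping: a channel joins the first group whose seed
    channel it correlates with above the threshold."""
    def corr(x, y):
        # correlation coefficient of Eq. 4.2
        num = np.mean(x * y) - np.mean(x) * np.mean(y)
        den = np.sqrt((np.mean(x**2) - np.mean(x)**2)
                      * (np.mean(y**2) - np.mean(y)**2))
        return num / den
    groups = []
    for i, ch in enumerate(outputs):
        for g in groups:
            if corr(outputs[g[0]], ch) > threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def group_readout(groups, inputs):
    """Evaluation value theta_A = sum of inputs over channels in a group."""
    return [sum(inputs[k] for k in g) for g in groups]

t = np.linspace(0, 1, 2000)
active = np.sin(2 * np.pi * 40 * t)
outputs = [active, active, 0.2 * np.sin(2 * np.pi * 61 * t)]  # 2 active, 1 inactive
inputs = [3.0, 3.0, 0.0]
groups = group_channels(outputs)
scores = group_readout(groups, inputs)
```

The group with the maximum score is taken as the recalled pattern; all other channels fall into the null-state.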

Each individual RKII set is configured with Kmg = 1, Kgm = −6, and positive input P = 3. For two coupled RKII sets, case 1 couplings are set as Kmm = 0.2 and Kgg = −0.4; case 2 couplings are Kmm = 0.2 and Kgg = −0.1; case 3 couplings are Kmm = 0.1 and Kgg = −0.8; case 4 couplings are Kmm = 0.2 and Kgg = −0.3. All couplings are further normalized according to the number of channels in the network. The threshold on the correlation coefficient between two channels is set to 0.8; two channels with a correlation value larger than the threshold are assigned to the same group. In the presence of clean patterns, the readout successfully recalls the patterns stored in the network (Figure 4-3(a)-(f)). In the time domain, we see that all channels that receive positive stimuli are synchronized and have much higher amplitude. In Figure 4-3(b), the desired active channels are almost identically synchronized (small phase differences exist between the active channels for "0" and those overlapped with "1") and have larger amplitude of oscillation. The desired inactive channels have small amplitude of oscillation and are desynchronized from the active channels. Using the criterion given above, pattern "0" is easily recovered to the stored pattern shown in Figure 4-3(a). In Figure 4-3(d), the phase differences among the desired active channels are slightly larger than those in the case of pattern "0". However, using the given threshold, pattern "1" can be perfectly recovered. In these cases, a















Figure 4-3: RKII network as an associative memory. (a) Stored pattern "0". (b) Network response to input pattern "0". (c) Stored pattern "1". (d) Network response to input pattern "1". (e) Stored pattern "2". (f) Network response to input pattern "2".










simple energy readout would also work perfectly. However, when input distortion (incompleteness or noise) is present, an energy- or distance-based readout may fail.






Figure 4-4: Noise is added to an inactive channel and an active channel in "0"


In Figure 4-4, positive inputs (P = 3) acting as noise are fed into a desired inactive channel and an active channel for "0" only. Figure 4-5(a) shows the total amount of input received in each synchronized group (θA = Σ_{k∈G} A_k) as the readout. Although noise increases the received input values for undesired channels, those channels tend to be grouped separately, so the readout value is kept low. When more channels are distorted by noise, as long as they are desynchronized, the final readout will always be much lower than the value of the large number of active channels that are grouped together. From the time domain (Figure 4-5(b)), we see that undesired channels interfere with the desired active channels. In this case, a simple amplitude- or energy-based readout fails the test. However, the correlation between those colored channels and the desired channels still divides them into two different groups. Pattern "1" can still be perfectly recovered by choosing the group of channels that has the maximum readout in Figure 4-5.

In most cases, output level from the undesired channels can be high enough to interfere with the desired active channels. Energy and distance based method in this case will have difficulty distinguishing false activities (noise) while the synchrony readout still extracts the phase information between channels efficiently.








[Figure 4-5 panels: (a) group readout values; (b) output traces around 3.52-3.56 s, with annotations "desired inactive channel with noise" and "active channel in '0' that is presented in distorted '1'"]

Figure 4-5: Readout from an RKII network. (a) Readout pattern that effectively improves the discrimination between desired channels and colored channels. (b) Network response to colored input pattern "1".


However, instantaneous detection of phase distributions may also fail, because the frequency of oscillations is also affected by the external inputs. From experiments, the oscillation frequency of the false channel is close to, but not exactly the same as, that of the true active channels, and the phase difference between the two is time varying. That is, if the timing is not properly selected, instantaneous phase information will be misleading. From simulations, however, long-term correlations between two channels still work, because the correlation between active channels remains strong. The window length used to track correlations depends on the specific problem. Nevertheless, extra memory that stores previous estimates of the average values of the output states and state powers enables online updates of the cross correlations and saves the computational time needed for long time series.
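The online update idea can be sketched as follows (an assumed implementation: running sums of states, state powers, and cross products are kept so the correlation never requires rescanning the time series):

```python
class RunningCorrelation:
    """Incrementally maintained cross-correlation between two channels.
    Stores running sums of the states, state powers, and cross products,
    so Eq. 4.2 can be evaluated at any time in O(1)."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = 0.0        # sums of states
        self.sxx = self.syy = 0.0      # sums of state powers
        self.sxy = 0.0                 # sum of cross products

    def update(self, x, y):
        """Fold one new sample pair into the running sums."""
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.syy += y * y
        self.sxy += x * y

    def value(self):
        """Correlation coefficient of Eq. 4.2 from the stored sums."""
        n = self.n
        num = self.sxy / n - (self.sx / n) * (self.sy / n)
        den = ((self.sxx / n - (self.sx / n) ** 2)
               * (self.syy / n - (self.sy / n) ** 2)) ** 0.5
        return num / den
```

A windowed variant (subtracting the sums of samples leaving the window) would implement the problem-dependent window length mentioned above.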

4.3 Discussions

Synchronized oscillations are believed to play a key role in many biological models, especially for the purpose of information processing. Based on our previous work on how to control an RKII network to achieve desired behaviors, we proposed using synchronization as the computational primitive in an RKII network. Information about collective behaviors in such networks is used to design the readout. Network parameters (coupling strengths among different RKII








sets) are designed specifically to achieve the desired responses. Two applications were presented. By utilizing two coupled RKII sets as a logic computation unit, the intrinsic structure of the coupled sets provides very flexible configurations for implementing logic gates, and the system can realize universal sets of logic operations. The potential capacity of this network is very large; however, the controllability of all the attractors in the output space is not guaranteed and needs to be further investigated.

In the second application, we demonstrated how to design an RKII network as an associative memory by considering synchronization of groups of oscillators in the output space. Finding parameter values in this case is straightforward compared to conventional learning algorithms. Synchronization as a readout uses the correlation of states to represent input patterns in the output space. In particular, the synchrony readout performs well in the presence of noisy patterns, where energy-based methods may fail. There are still many problems to be explored. The capacity and robustness of the network in terms of stored patterns and signal-to-noise ratio need to be investigated. In the applications pursued so far, only fixed point solutions and synchronized oscillations have been used for encoding information. This strategy may severely limit the capacity of neural network based models. It would be very interesting to develop schemes where desynchronized oscillations, quasiperiodic oscillations, and possibly chaos, together with fixed point solutions and synchronized oscillations, can all be actively used to encode information.














CHAPTER 5
HARDWARE IMPLEMENTATION USING ANALOG VLSI

The olfactory system model is a continuous dynamical system described by ODEs. Analog circuits are the most natural way to duplicate continuous computation for real-time applications. In this chapter, the RKII set is designed using analog VLSI operating in the subthreshold region. The subthreshold region is chosen with size, power consumption, and functionality in mind. However, we will also see that it brings other problems, such as offset and interconnection issues. Based on the architecture of an RKII, a straightforward design is divided into three major tasks: the summing node, the nonlinear function, and the linear dynamics. Design issues are discussed individually. A new architecture that combines the summing node and the nonlinear function is also proposed to reduce the complexity and power consumption.

5.1 Summing Node
The summing node receives external and feedback inputs. The output of this block is the weighted sum of all incoming signals. Specifically, in an RKII, we need to implement a unity gain for the external input and user-configurable coupling coefficients Kmg and Kgm. In [5], all these weights were implemented using operational amplifiers outside the chip. External implementation has the advantage of high precision, but it does not scale well and requires large power consumption. Here, transconductance amplifiers (transamps) working in the subthreshold region are integrated on chip to realize current additions. The coupling coefficients are normally greater than one, so a follower-aggregation circuit cannot be used. Instead, a current adder followed by a current-to-voltage converter is implemented (Figure 5-1). Each of the transamps (Gm1, Gm2, ..., GmN) converts an input voltage to its



























Figure 5-1: Summing node with N inputs.


corresponding current using a different transconductance. The sum of all currents is fed into another transamp GmB (the base transamp) with negative feedback and is converted into a voltage output. Therefore, the output voltage is

V_out = Σ_{i=1}^{N} (G_mi / G_mB) V_in,i.    (5.1)

The transamp can be designed simply using the five-transistor differential pair shown in Figure 5-2. When working in the subthreshold region, the circuit has the transfer function

I_out = I_1 − I_2 = I_b (e^{κV_in+/V_T} − e^{κV_in−/V_T}) / (e^{κV_in+/V_T} + e^{κV_in−/V_T}) = I_b tanh(κ(V_in+ − V_in−)/(2V_T)),    (5.2)

where κ is the capacitive coupling ratio from gate to channel in NMOS transistors and V_T = kT/q ≈ 25.7 mV is the thermal voltage. κ is typically about 0.7. The shape of the I-V curve is sigmoidal. When the differential input (V_in+ − V_in−) is







Figure 5-2: Differential transconductance amplifier

small, (5.2) can be approximated as

I_out ≈ G_m (V_in+ − V_in−),

where G_m = κI_b/(2V_T). Clearly, this approximate linear relationship only holds when the differential input is rather small. In our case, linearity in the summation block is very important because the weighting of every input is expected to be a constant. However, this input-output relation makes it hard to keep constant weight values over a relatively large range of input values. There are many methods to improve linearity, such as source degeneration, bump circuits, and floating gates. Figure 5-3 depicts a transamp with source degeneration using double diffusors [46]. The output current is

I_out = I_b tanh( κ(V_in+ − V_in−) / (2V_T (2m + 1)) ),


























Figure 5-3: Wide-linear-range transconductance amplifier with source degeneration using double diffusors

where m is the ratio between the W/L of the diffusors (M5-M8) and that of the input transistors (M1 and M2). Assume that V_T = 25.7 mV and κ = 0.7. If the linear range is defined by 1% distortion from the maximum transconductance, obtained at V_in+ − V_in− = 0, then the circuit shown in Figure 5-3 has an approximate linear range of 30 mV [46]. This value is not large enough to let the circuit accommodate the large number of inputs from a network. Based on the functionality of the transamp, assume that I_B is the bias current of the base transamp G_mB and V_in,MAX is the maximum value of the input signal before the input transamp enters saturation. We then require

Σ_{i=1}^{N} G_mi V_in,MAX ≤ I_B.    (5.3)
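The limited linear range of the plain differential pair can be checked numerically (a sketch with an illustrative bias current; the constants follow Eq. 5.2):

```python
import math

VT = 0.0257    # thermal voltage kT/q at room temperature (V)
KAPPA = 0.7    # gate-to-channel capacitive coupling ratio

def transamp_iout(dv, ib=1e-8):
    """Subthreshold differential-pair I-V of Eq. 5.2 (ib is illustrative)."""
    return ib * math.tanh(KAPPA * dv / (2 * VT))

def transamp_linear(dv, ib=1e-8):
    """Small-signal approximation I = Gm*dv with Gm = kappa*Ib/(2*VT)."""
    return (KAPPA * ib / (2 * VT)) * dv

# the linear model is accurate only for small differential inputs
err_small = abs(transamp_iout(0.002) - transamp_linear(0.002))
err_large = abs(transamp_iout(0.2) - transamp_linear(0.2))
```

At a 200 mV differential input the tanh characteristic has already saturated near I_b, while the linear model keeps growing, which is why the weighting becomes input-dependent.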









Moreover, to obtain coupling strengths greater than 1, G_mi must be greater than G_mB (Figure 5-4). A large output from the current adder could


[Figure 5-4 panel: I-V curves of the input transamp and the base transamp vs. input voltage (V)]

Figure 5-4: Because the coupling strength is always greater than 1, the input current range can easily exceed the limit of the base transamp


easily exceed the linear range of the base transamp in the current-to-voltage converter, which drives the base transamp out of its operating range and makes the summation inaccurate. Hence, the largest weight value and the input range of this circuit are limited. One way to address this problem is to utilize the theoretical analysis obtained in previous chapters. Because it is the product of the internal couplings that determines the dynamic behavior of an RKII, we can average down the individual values of Kmg and Kgm while keeping the product satisfying the necessary operating conditions. Later in this chapter, a new design will be presented that combines the summation block and the nonlinearity so that a wider dynamic range and less dependence on linearity can be achieved.
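The product-preserving rescaling can be sketched as follows (a hypothetical helper, not part of the actual chip design flow; it equalizes the two magnitudes at the smallest value compatible with the product):

```python
import math

def equalize_couplings(kmg, kgm):
    """Equalize |Kmg| and |Kgm| at sqrt(|Kmg*Kgm|), the smallest common
    magnitude that preserves the product Kmg*Kgm, which is what sets the
    RKII bifurcation condition."""
    product = kmg * kgm
    mag = abs(product) ** 0.5
    kmg_eq = math.copysign(mag, kmg)    # keep the original sign of Kmg
    kgm_eq = product / kmg_eq           # sign of Kgm follows the product
    return kmg_eq, kgm_eq

# e.g. Kmg = 1, Kgm = -6 (the values used in Section 4.2) can be traded
# for two weights of magnitude sqrt(6), easing the transamp range limit
kmg_eq, kgm_eq = equalize_couplings(1.0, -6.0)
```

This only works because the bifurcation boundary depends on Kmg and Kgm through their product, as established in the earlier chapters.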









5.2 Linear Dynamics

The 2nd-order linear time-invariant system can be implemented using the Filter-and-Hold (FH) technique [5]. Figure 5-5 shows the diagram of this implementation. The two stages of Gm-C filters realize the functionality of a 2nd-order low-pass











Figure 5-5: The 2nd-order linear filter. The filter-and-hold technique is used to increase the time constant.

filter. Switches S1 and S2 may suffer from charge injection and clock feedthrough. Dummy switches S1d and S2d are used to neutralize the charges that may be injected into the capacitors when the clock switches off. The W/L of both S1d and S2d is one half of that of S1 and S2 because, every time the clock switches off, half of the channel charge from S1 and S2 is injected into the capacitors.

Analog output of each stage in the 2nd-order system is held for a certain period by using the switches. When using proper timings of the clocks D1 and D2, the actual time constant of the system increases. The transfer function can be shown as [5]

H(z) = [(1 − e^{−(G_m1/C_1)kT}) z^{−1} / (1 − e^{−(G_m1/C_1)kT} z^{−1})] · [(1 − e^{−(G_m2/C_2)kT}) z^{−1} / (1 − e^{−(G_m2/C_2)kT} z^{−1})],

where kT is the "on" time of the clock within each period T (k is the duty cycle). Thus, by setting proper duty cycles of the clocks, the capacitance (time constant) can be effectively increased. With a duty cycle of 1%, the integrated capacitors are effectively scaled by 100, which greatly reduces the chip area needed.
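The time-constant scaling can be illustrated numerically (the Gm and C values below are illustrative, not the fabricated ones):

```python
def effective_tau(gm, c, duty):
    """Filter-and-hold: the Gm-C stage integrates only for a fraction
    `duty` of each clock period, so the continuous time constant C/Gm
    is effectively stretched by 1/duty."""
    tau = c / gm          # time constant of the plain Gm-C stage
    return tau / duty

tau_plain = effective_tau(gm=1e-8, c=1e-12, duty=1.0)   # always-on filter
tau_fh = effective_tau(gm=1e-8, c=1e-12, duty=0.01)     # 1% duty cycle
```

With a 1% duty cycle the effective time constant is 100 times larger, i.e., the same behavior as a capacitor 100 times bigger, without the corresponding chip area.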








The purpose of using a very large time constant is to mimic the exact frequencies of the measured data from animal experiments [1]. In practice, however, if all parameters are adjusted accordingly, a scaled time constant will only affect the speed of temporal development in the network. Recall that the boundary of the interesting dynamics in an RKII is solely determined by a nonlinear function that involves a, b, Kmg, and Kgm. Therefore, as long as the bifurcation condition is satisfied, the overall dynamics of the network is expected to be unchanged. In fact, scaling both a and b equally does not change the boundary set by (a + b)^2/(ab).
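This invariance is easy to verify numerically (the a, b values below are illustrative rate constants in 1/s):

```python
def boundary_factor(a, b):
    """The RKII bifurcation boundary depends on the rate constants a and b
    only through the combination (a + b)**2 / (a * b)."""
    return (a + b) ** 2 / (a * b)

# jointly scaling a and b (i.e., rescaling time) leaves the boundary
# factor unchanged: here the second pair is 100x slower than the first
base = boundary_factor(220.0, 720.0)
scaled = boundary_factor(2.2, 7.2)
```

Since (sa + sb)^2/((sa)(sb)) = s^2(a + b)^2/(s^2 ab) = (a + b)^2/(ab), the scale factor s cancels exactly, which is why a slowed-down chip preserves the network's qualitative dynamics.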

5.3 Nonlinear Function

Recall that the nonlinear function is given as

Q(x) = Q_m (1 − e^{−(e^x − 1)/Q_m}),   x > x_0,    (5.4a)
Q(x) = −1,                             otherwise.  (5.4b)

In [5], the sigmoidal shape is well approximated by a differential pair working in the subthreshold region. The asymmetric shape, with different positive and negative saturations, is obtained by setting different sizes for the transistors of the current-mirror load in the transamp. In this way, Q_m is predefined and can have relatively high precision if the differences in transistor sizes are properly designed (for example, by using parallel transistors). However, the ability to control the actual shape of the nonlinear function is also appealing, as it brings more freedom to generate the desired dynamics. In particular, different Q functions result in different values of the couplings necessary for oscillations. Therefore, instead of building a fixed asymmetric ratio into the chip, we use an externally controlled current multiplier to adjust the current going through the load of the differential pair. The circuit is shown in Figure 5-6.

We have

N = I_1 / I_2,    (5.5)


















Figure 5-6: The nonlinear block.

where I_1 and I_2 are controlled by V_b1 and V_b2, respectively. The relation between the differential input and the output current can be expressed as

Q(V_in) = I_b ((I_1/I_2) e^{κ(V_in − V_ref)/V_T} − 1) / (e^{κ(V_in − V_ref)/V_T} + 1).    (5.6)

Equation 5.6 is similar to the implementation given in [5]. Both approximate the sigmoidal shape required by the nonlinear block in a K0. However, here the actual asymmetric ratio and shape of the nonlinearity in use can be controlled by bias voltages set outside the chip and are not fixed at the time of fabrication.

An interesting observation is that both the summation block and the nonlinear block use similar functional circuits, i.e., transamps. The output from the intermediate linear system is shaped and then added. Intuitively, it seems feasible to combine the shaping and summation procedures into a single block. We will show in the next









section that this is true and can be utilized to reduce the complexity and power consumption of the whole system.

5.4 Efficient Design of an RKII System

Recall that the general network architecture is described by

(1/(ab)) d²x_i(t)/dt² + ((a + b)/(ab)) dx_i(t)/dt + x_i(t)
    = Σ_j [W_ij Q(x_j(t), Q_m) + W'_ij f_i(Q(x_i(t), Q_m), t)] + P_i(t),    (5.7)
    i, j = 1, ..., N,

The current circuit design separates the shaping and weighting processes before adding the different input and feedback signals together. In this way, the weighting circuits must satisfy high linearity, even though the signals being added are all shaped by a nonlinear function. Moreover, the current adder and the nonlinear function are implemented by the same circuitry: transamps. Instead of separately building the coupling weights and Q(x), a unified KmgQ(x) (KgmQ(x)) function can be designed using transamps that realize weighting and nonlinear shaping at the same time. This has the clear advantage of reducing system complexity. In addition, linearity is no longer required for the input transamps; the natural exponential behavior of the differential pairs can be used without any approximation. The question now is how to design a controllable nonlinear function that retains a desired asymmetric ratio and at the same time generates different power levels.

At least two free parameters are needed to control the slope and the saturation level of the nonlinear function individually. The bias current can adjust both values, but not separately. Figure 5-7 shows a simple way to control the ratio of the current mirror in the subthreshold region.


I_out = e^{κ(V_b2 − V_b1)/V_T} I_in.    (5.8)


















Figure 5-7: Asymmetric current mirror. If V_b2 is set to the power supply, the output current only depends on V_b1.

Consider the circuit shown in Figure 5-8. The output current can be expressed as

Q(V_in) = I_b (N e^{κ(V_in − V_ref)/V_T} − 1) / (e^{κ(V_in − V_ref)/V_T} + 1).    (5.9)

The value of N is determined by the bias V_b1. This nonlinear function has a












Figure 5-8: New implementation of the asymmetric nonlinear function


built-in offset at zero input. To compensate for the offset, two such transamps can be








used [5] (Figure 5-9). We should note that this scheme can only cancel the systematic offset. The bias current determines the negative saturation, while V_b1 can be used to


Figure 5-9: Offset cancelation using two transamps


control the asymmetric ratio. Without the need for shaping the output at the end of the system, the RKII can be redesigned by removing the nonlinear function and relaxing the requirement on the linearity of the adder (Figure 5-10).









Figure 5-10: New implementation of the RKII system that only includes an adder and a linear system


5.5 Digital Simulation and Chip Measurement

One rather important intermediate medium between theory and analog implementation is simulation. In particular, when dealing with a distributed nonlinear dynamical system for which our analytical ability is still rather limited, simulations play a central role because they allow us to verify the theoretical analysis and check the behavior of the analog VLSI chips we build.








An equivalent digital model of the olfactory model is developed [5, 47] using a digital-signal-processing (DSP) approach. The linear dynamics of the basic system equation of a K0 set is described by

(1/(ab)) x''(t) + ((a + b)/(ab)) x'(t) + x(t) = P(t).    (5.10)

The olfactory system is built from this basic linear system followed by a static nonlinearity. DSP techniques can be used to transform this system into a difference equation. The transfer function of a K0 set is

H(s) = ab / ((s + a)(s + b)) = (ab/(b − a)) · 1/(s + a) + (ab/(a − b)) · 1/(s + b).    (5.11)

The time-domain solution can be expressed as

h(t) = ab (e^{−at}/(b − a) + e^{−bt}/(a − b)).    (5.12)

Using the impulse-invariant transformation with sampling period T_s, the discrete approximation filter has the impulse response

h[n] = T_s h(nT_s) = T_s ab (e^{−anT_s}/(b − a) + e^{−bnT_s}/(a − b)).    (5.13)

Denoting a_1 = e^{−aT_s}, a_2 = e^{−bT_s}, c_1 = T_s ab/(b − a), and c_2 = T_s ab/(a − b), the transfer function of the equivalent digital system is obtained as

H(z) = c_1 / (1 − a_1 z^{−1}) + c_2 / (1 − a_2 z^{−1}).    (5.14)

Generally, the discretization of a continuous-time system using the impulse-invariant transformation is subject to aliasing. However, the poles of a K0 set are real and negative (a, b > 0), and the system behaves as a typical low-pass filter. Hence, aliasing can be effectively reduced by choosing a small sampling period T_s. Practical implementation of the discretized system can be achieved using conventional architectures such as the parallel realization. When modeling a K0 set specifically based









[Figure 5-11 plot: bifurcation boundary in the |Kmg| versus |Kgm| plane, comparing the impulse-invariant system with circuit simulation; the plane divides into a limit cycle region and a fixed point region.]

Figure 5-11: Comparison between impulse-invariant filter and circuit simulation

on the actual nonlinearity designed in the circuit, we can use digital simulations to verify the circuit design. In Figure 5-11, the bifurcation boundary simulated from the impulse-invariant system is compared to that obtained from circuit simulation. The result indicates a good match between the digital simulations and the circuit design.
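The coefficient computation and parallel realization described above can be written in a few lines. This is a minimal Python sketch: a = 220 and b = 720 are the model constants from the text, while the sampling period, the function names, and the sanity check are illustrative choices of ours, not part of the original design.

```python
import math

def ko_discrete_coeffs(a=220.0, b=720.0, Ts=1e-4):
    # Impulse-invariant discretization of H(s) = ab/((s+a)(s+b)),
    # giving H(z) = c1/(1 - a1*z^-1) + c2/(1 - a2*z^-1) as in Equation 5.14.
    a1 = math.exp(-a * Ts)
    a2 = math.exp(-b * Ts)
    c1 = Ts * a * b / (b - a)
    c2 = Ts * a * b / (a - b)
    return c1, c2, a1, a2

def ko_filter(p, coeffs):
    # Parallel realization: two first-order sections driven by the same input.
    c1, c2, a1, a2 = coeffs
    y, s1, s2 = [], 0.0, 0.0
    for u in p:
        s1 = c1 * u + a1 * s1
        s2 = c2 * u + a2 * s2
        y.append(s1 + s2)
    return y
```

As a sanity check, the DC gain c1/(1 - a1) + c2/(1 - a2) of the discrete filter is close to the unity DC gain of H(s) when Ts is small, which is the low-pass argument made above.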

The analog chip (Figure 5-12) was designed in AMI 0.5 µm technology and includes an RKII set. Each KO set has an area of 400 µm x 150 µm and a power consumption of 20 µW excluding the buffers. Individual components were also fabricated for testing purposes.

Figure 5-13 shows the measured nonlinear functions for the excitatory and inhibitory cells. The negative saturation is fixed at -100 mV. Three different values of Qm are set by the bias that controls the ratio between the positive and negative saturations. The measured nonlinear function has a relatively small offset of around 5 mV. Figure 5-14 shows the response from the excitatory cell














































Figure 5-12: Chip layout of a single RKII set


















[Figure 5-13 plot: measured output voltage versus Vin (V) for the excitatory and inhibitory nonlinearities at several values of Qm, e.g., Qm = 4 and Qm = 6.]

Figure 5-13: Measured excitatory and inhibitory nonlinear functions

and the inhibitory cell. Figure 5-15 shows the phase plot of the measurement. A square wave is connected to the input of the excitatory cell. As expected, the qualitative behavior of the KII set follows the theoretical conclusions and exhibits two dynamical states controlled by the external stimulus.

As discussed in Chapter 2, the range of effective inputs that induce oscillatory behavior is limited. Taking a triangular signal as the input, the measurement (Figure 5-16) verifies that not all positive inputs can induce oscillation in an RKII set. Note that, in Figure 5-16, the amplitude of oscillation at a given time is not the steady-state amplitude corresponding to that input level, because of the sweep speed of the triangular input. Nevertheless, the result demonstrates the limitation on the external input.
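For reference, the input dependence of the RKII dynamics can also be explored numerically by direct integration of the RKII equations of Chapter 2. The following is a sketch only: forward-Euler integration with an illustrative step size, with Kmg = 1 and Kgm = -6 as used in the synchronization analysis of Chapter 3; it is not the chip's discretization, and the function names are ours.

```python
import math

def Q(x, Qm=5.0):
    # Freeman's asymmetric sigmoid, Equation 2.2.
    x0 = math.log(1.0 - Qm * math.log(1.0 + 1.0 / Qm))
    return Qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / Qm)) if x > x0 else -1.0

def simulate_rkii(P, Kmg=1.0, Kgm=-6.0, a=220.0, b=720.0, dt=1e-5, steps=50000):
    # Forward-Euler integration of the coupled 2nd-order ODEs (Equation 2.1):
    #   (1/ab)(m'' + (a+b) m') + m = Kgm*Q(g) + P
    #   (1/ab)(g'' + (a+b) g') + g = Kmg*Q(m)
    m = dm = g = dg = 0.0
    trace = []
    for _ in range(steps):
        ddm = a * b * (Kgm * Q(g) + P - m) - (a + b) * dm
        ddg = a * b * (Kmg * Q(m) - g) - (a + b) * dg
        m, dm = m + dt * dm, dm + dt * ddm
        g, dg = g + dt * dg, dg + dt * ddg
        trace.append(m)
    return trace
```

With P = 0 the origin is an equilibrium (Q(0) = 0), so the trajectory stays at rest; sweeping P over positive values and inspecting the trace reproduces qualitatively the limited input range observed in the measurement.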

5.6 Discussions

We have designed and fabricated all the building blocks necessary for building the KII set. The dynamical behavior measured from the chip is qualitatively the same as that obtained from analysis and simulations. A chip that includes 8































Figure 5-14: Transient behavior measured from the chip. Channel 1 is the excitatory output and Channel 3 is the inhibitory output

interconnected RKII sets was also designed and fabricated (Figure 5-17) to experiment with possible implementations when building an RKII network. Because the number of output pins is limited to 40, this circuit consists of an analog part (8 RKII sets) and a digital part (a multiplexer). Despite the large number of coupling weights (biases) that need to be set, only four different bias voltages (weights obtained from the outer-product rule in associative memories) are provided externally. The purpose of the digital circuit is to multiplex the reference voltage so that multiple weighting circuits can share the same one. However, the chip was not tested successfully. In the designed chips, no dedicated circuits are used to cancel offsets or to provide on-chip bias. Offsets in a fabricated chip normally arise from sources such as systematic mismatch, fabrication mismatch, and the chip package. In a 0.6 µm technology, a 1 mV mismatch of the threshold voltages































Figure 5-15: Phase plot measured from the chip

is approximately equivalent to a 300 fF channel capacitance at the input [48]. In a highly interconnected network, offsets affect the performance in terms of speed and power consumption. For our purposes, a deviation of a functional block from its mathematical model may directly affect the behavior of the dynamical system. In a network of KII sets, which presents a rich structure and probably sophisticated boundaries between different attractors, the offsets may change the desired behaviors completely. To reduce this negative effect, electronic cancelation of offsets should be considered in the design. The same idea applies to the on-chip bias circuitry. Every part of an oscillator needs to be biased, and most blocks, such as the linear dynamics and the nonlinear functions, are supposed to have exactly the same functionality across different channels. A biasing circuit may therefore be incorporated into the chip so that accurate and stable bias voltages or currents can be provided.






DYNAMICAL ANALYSIS, APPLICATIONS, AND ANALOG IMPLEMENTATION OF A BIOLOGICALLY REALISTIC OLFACTORY SYSTEM MODEL By DONGMING XU A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2005


Copyright 2005 by Dongming Xu


ACKNOWLEDGMENTS I would like to express my sincere gratitude to my advisor Dr. Jose C. Principe. It is a privilege for me to work with him in my entire graduate career. He provided me with a free, creative and friendly atmosphere for an invaluable research experience. Without his knowledge, experience and vision, and his encouraging attitude and enthusiasm, this work would be impossible. I also sincerely appreciate the unconditional help offered by the members in my supervisory committee. I want to thank Dr. John G. Harris for his guidance through the work of hardware design from both his teaching and our research meetings. I would like to thank Dr. Jianbo Gao for his theoretic support. I truly benefited from his knowledge in dynamical systems. My deep appreciation goes to Dr. James Keesling for his interest and participation in my supervisory committee. I would also like to thank my friends and colleagues (especially Mustafa Can Ozturk, Yuan Li and Dr. Mark D. Skowronski) in the Computational NeuroEngineering Laboratory, for all the wonderful discussions and hard work together. I am dearly thankful to Dr. Vitor Tavares for his early work on Freeman's model and analog design. My deepest love goes to my parents who gave me the freedom and support to explore my life in the way I want. I would like to thank Xiuge Yang for her love and support through all my graduate years.


TABLE OF CONTENTS page ACKNOWLEDGMENTS iv LIST OF TABLES vii LIST OF FIGURES viii ABSTRACT xi CHAPTERS 1 INTRODUCTION 1 1.1 Olfactory System Model as Information Processor 1 1.2 Structure of the Freeman Model 5 1.3 Study Objectives 8 2 HOPF BIFURCATION IN A REDUCED KII (RKII) SET 11 2.1 Dynamics of an RKII Set 11 2.2 Bifurcations in Nonlinear Systems 14 2.3 Center Manifold Theorem 16 2.4 Hopf Bifurcation in an RKII Set 18 2.4.1 Properties of the Equilibrium 18 2.4.2 Hopf Bifurcation 20 2.5 Parameter Design in an RKII Set 25 2.5.1 Determine Proper Values of Kmg and Kgm 28 2.5.2 Dynamics Controlled by an External Input 31 2.6 Experiments 32 2.6.1 Control of RKII Dynamics by the Coupling Coefficients 32 2.6.2 Control of RKII Dynamics by the External Input 36 2.7 Discussions 37 3 SYNCHRONIZATION OF COUPLED RKII SETS 39 3.1 Sufficient Condition on Synchronization of Coupled Chaotic Systems 41 3.2 Analytical Solutions of Couplings for Synchronized RKII Sets 43 3.3 Simulations 47 3.3.1 Example 1: Kgg = -0.8 50 3.3.2 Example 2: Kgg = -0.4 52 3.3.3 Nonlinear Coupling 54


3.3.4 Quasiperiodic Motion in Coupled RKII Sets 56 3.4 Discussions 57 4 COMPUTATION BASED ON SYNCHRONIZATION 59 4.1 Logic Computation 60 4.2 Associative Memory Using an RKII Network 64 4.3 Discussions 70 5 HARDWARE IMPLEMENTATION USING ANALOG VLSI 72 5.1 Summing Node 72 5.2 Linear Dynamics 77 5.3 Nonlinear Function 78 5.4 Efficient Design of an RKII System 80 5.5 Digital Simulation and Chip Measurement 82 5.6 Discussions 86 6 CONCLUSIONS 91 APPENDICES A BASIC TYPES OF LOCAL BIFURCATIONS 94 A.1 Saddle-Node Bifurcation 94 A.2 Cusp Bifurcation 94 A.3 Pitchfork Bifurcation 95 A.4 Hopf Bifurcation 96 B DERIVATION OF SUFFICIENT CONDITIONS ON SYNCHRONIZATION OF COUPLED RKII SETS 98 C LINEARIZATION OF NONLINEAR FUNCTION 101 C.1 Example I 101 C.2 Example II 102 REFERENCES 104 BIOGRAPHICAL SKETCH 109


LIST OF TABLES Table page 3-1 Comparison between analytic solution and numerical simulation 55 4-1 Set of parameters that realize the complete boolean primitives 63


LIST OF FIGURES Figure page 1-1 Two different types of KO sets. (a) Excitatory. (b) Inhibitory 6 1-2 KI network consists of interconnected KOs with common signs 7 1-3 Full KII set consists of four KOs 7 1-4 KIII network represents the computational model of the olfactory system 9 2-1 RKII set consists of one mitral cell and one granule cell coupled through Kmg (> 0) and Kgm (< 0) 11 2-2 Plot of the nonlinear function Q(x) when Qm = 5 13 2-3 Equilibrium is determined by the intersection of two nullclines 19 2-4 Frequency of oscillation as a function of coupling strength 24 2-5 Derivative of Q(x) when Qm = 5 28 2-6 Fixed point solution in an RKII set when P = 0. (a) Temporal plot. (b) Phase plot 32 2-7 Limit cycle solution in an RKII set when P = 0. (a) Temporal plot. (b) Phase plot 33 2-8 Fixed point solution in an RKII set when P = 1. (a) Temporal plot. (b) Phase plot 34 2-9 Limit cycle solution in an RKII set when P = 1. (a) Temporal plot. (b) Phase plot 35 2-10 Fixed point solution in an RKII set with positive input. (a) External input P = 0.35. (b) External input P = 26.1 36 2-11 Scanned parameter space of |Kmg Kgm| 37 3-1 Two representations of coupled RKII sets. (a) Linearly coupled RKII sets. (b) Under synchronization, coupled RKII sets could also be described as an individual RKII set with auto-feedback 43


3-2 Parameter space defined by Kmm and Kgg. The plot shows combined results of synchronization and bifurcation boundaries when input P = 0, Kmg = 1 and Kgm = -6 48 3-3 Values of |Kgg| and Kmm are limited by 1. Three regions are determined where the coupled system shows synchronized oscillation, desynchronized oscillation or fixed point solutions 49 3-4 Simulations of coupled RKII sets when Kgg = -0.8. (a) Kmm = 0.6. (b) Kmm = 0.96. (c) Kmm = 0.73. (d) Kmm = 0.74. (e) Kmm = 0.91. (f) Kmm = 0.93 51 3-5 Simulations of coupled RKII sets when Kgg = -0.4. The coupled system desynchronizes and synchronizes at (a) Kmm = 0.33 and (b) Kmm = 0.4 53 3-6 Comparison between analytic solution and numerical simulation 54 3-7 Example of quasiperiodic motion in two coupled RKII sets. (a) Temporal signal. (b) Phase plot. (c) Power spectrum 56 3-8 Simulated regions of quasiperiodic oscillations 57 4-1 Time domain response when implementing coupled RKII sets as an XNOR gate. State outputs are shaped by the nonlinear function. Solid line shows Q(m1) while dashed line shows Q(m2) 62 4-2 Phase plot in the state space of M1 and M2 when implementing coupled RKII sets as an XNOR gate 63 4-3 RKII network as an associative memory. (a) Stored pattern “0”. (b) Network response to input pattern “0”. (c) Stored pattern “1”. (d) Network response to input pattern “1”. (e) Stored pattern “2”. (f) Network response to input pattern “2” 68 4-4 Noise is added to an inactive channel and an active channel in “0” 69 4-5 Readout from an RKII network. (a) Readout pattern that effectively improves the discrimination between desired channels and colored channels. (b) Network response to colored input pattern “1” 70 5-1 Summing node with N inputs 73 5-2 Differential transconductance amplifier 74 5-3 Wide linear range transconductance amplifier with source degeneration using double diffusors 75 5-4 Because the coupling strength is always greater than 1, the input current range can easily exceed the limit of the base transamp 76


5-5 The 2nd-order linear filter. Filter-and-hold technique is used to increase the time constant 77 5-6 The nonlinear block 79 5-7 Asymmetric current mirror 81 5-8 New implementation of the asymmetric nonlinear function 81 5-9 Offset cancelation using two transamps 82 5-10 New implementation of the RKII system that only includes an adder and a linear system 82 5-11 Comparison between impulse-invariant filter and circuit simulation 84 5-12 Chip layout of a single RKII set 85 5-13 Measured excitatory and inhibitory nonlinear functions 86 5-14 Transient behavior measured from the chip. Channel 1 is excitatory output and Channel 3 is inhibitory output 87 5-15 Phase plot measured from the chip 88 5-16 Measured effective range of external input 89 5-17 Chip layout of a mixed-signal design that includes 8 interconnected RKII sets 90 C-1 Nullclines obtained with three-section nonlinear function 102 C-2 Four-section nonlinear function 103


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy DYNAMICAL ANALYSIS, APPLICATIONS, AND ANALOG IMPLEMENTATION OF A BIOLOGICALLY REALISTIC OLFACTORY SYSTEM MODEL By Dongming Xu August 2005 Chair: Jose C. Principe Major Department: Electrical and Computer Engineering The Freeman model is a biologically plausible dynamical neural network that mimics the complicated wave patterns collected from the electroencephalogram (EEG). Freeman has studied the model to explain how biological systems interact and extract information from the environment. The Freeman model has hundreds of free parameters that need to be fine-tuned manually (through trial-and-error), by adaptation algorithms, or by numerical simulations to match experimental data. This work investigates the Freeman model and its dynamical building blocks as the basis for novel information processing systems based on nonlinear dynamics. We considered three issues. First, we performed a theoretical analysis of the reduced KII (RKII) basic building block, where simpler dynamics are observed. We showed, using the center manifold theorem, how the RKII dynamics are controlled by a Hopf bifurcation, and designed the parameters analytically. Second, the nonlinear dynamics of coupled RKII sets were studied analytically using the synchronization of output channels to define computational primitives based on


specific dynamics of the network. The design of the coupling coefficients in the network for obtaining a desired output response is based on our theoretical analysis of the RKII set's dynamical behavior. The computational power of the RKII network is demonstrated by two applications that address logic computation and associative memory. Third, we designed analog VLSI chips to implement this continuous dynamical system. Analog design faces the challenges of power consumption and limited chip area. To build a more efficient integrated circuit, the nonlinear function in the KII architecture is eliminated by redesigning the summing node. In the new design the summation block serves simultaneously as a current adder and a nonlinear activation function, preserving the original functionality. Chip tests show the good performance of the new design.


CHAPTER 1 INTRODUCTION 1.1 Olfactory System Model as Information Processor The olfactory system is one of the oldest parts of the brain. It is a well-developed distributed system that achieves odor identification and segmentation. The olfactory bulb receives inputs from specialized odor receptors and sends nonconvergent signals to the olfactory cortex. The olfactory cortex, in cooperation with other parts of the brain, sends feedback signals to the olfactory bulb to accomplish recognition tasks. Through physiological and experimental studies, many researchers have built computational models that describe the architecture and functions of the olfactory bulb. In this thesis we will focus on the K-sets model developed through the seminal work of Walter J. Freeman. The realistic computational model of the olfactory system proposed by Freeman [1] describes brain function as a spatio-temporal lattice of groups of neurons (neural assemblies) with dense interconnectivity. It is a hierarchical model that quantifies the function of one of the oldest sensory cortices, where there is an established causal relation between stimulus and response. It also presents the function as an association between stimulus and stored information, in line with the content addressable memory (CAM) framework studied in artificial neural networks [2]. Freeman uses the language of dynamics to model neural assemblies, which seems a natural solution because of the known spatio-temporal characteristics of brain function [3]. Although we believe that the full dynamical description of the entire model (a KIII network) is beyond our present analytical ability, one may still be able to understand the dynamics of lower level structures from first principles. One advantage of a dynamic framework to quantify mesoscopic interactions is the


possibility of creating analog VLSI circuits that implement similar dynamics [4, 5]. In this respect, dynamics is also independent of hardware (mimicking the well-known hardware independence of formal systems). However, the dynamical approach to information processing is much less developed than the statistical reasoning used in pattern recognition. Only recently did nonlinear dynamics start being used to describe computation [6], and much work remains to achieve a nonlinear dynamical theory of information processing. Hence we are simultaneously developing the science and understanding the tool's capability; a situation that is far from ideal. The challenge is particularly important in the case of Freeman's model, where the distributed system is locally stable but globally unstable, creating nonconvergent (eventually chaotic) dynamics. Nonconvergent dynamics are very different from the simple dynamical systems with point attractors studied by Hopfield [7], because they have positive Lyapunov exponents [8]. Freeman's computational model of the olfactory system has already been applied to several information-processing applications [4, 5, 9, 10]. Ultimately, we plan to use Freeman's model as a signal-to-symbol translator, quantify its performance and use it in analog VLSI circuits for low-power, real-time processing in intelligent sensory processing applications. Deterministic systems with complex dynamics have been studied in the past as information processors and feature extractors. In addition to Freeman's model, other approaches using oscillatory or chaotic networks [11-14] have shown great computational potential. These computational systems are often built upon their corresponding dynamics that can be systematically described and controlled by energy functions or convergence properties. To accomplish such a task, we should construct a well-defined set of computational primitives to represent the state space. This is of course dependent on the system's fundamental structures and its specific dynamics. Different dynamics will provide a different set


of primitives. Among examples of information processing using dynamical systems, the approach is usually to create attractors (either convergent or nonconvergent) in the state space. Dynamical systems with point attractors (as in the Hopfield network) generate local minima in the energy function, which leads to the convergence of the system dynamics. More interestingly, the widely used recurrent neural networks (RNN) integrate information dynamics with system dynamics, which produces a better representation of time signals and a more efficient projection onto the output space [15, 16]. Yet, biological systems are still much more intriguing in both structure and observable behaviors. Brain functions deal with emerging properties from different functional layers, where nonconvergent dynamics are the carrier for both encoded and processed information. However, the case of nonconvergent dynamical systems (as in the Freeman model) is much harder to quantify. This is especially significant for the Freeman model, where detailed resemblances to the real biological system result in both practical implementation issues and a lack of fundamental understanding of its dynamical structures. Existing methods used to quantify global behaviors of the network, which are mostly based on energy clustering, are still far from satisfactory for unveiling the advantages brought by the complex dynamics. Fundamental understanding of information storage (which depends on system controllability and invariance) and information retrieval (which depends on the understanding of the projection space) both remain unsolved. One possible advantage of Freeman's system, one that still has not been fully exploited for the analysis, is the uniformity of the dynamics of its individual components. In Freeman's model, the dynamical behaviors of the individual sub-systems, such as fixed points, multiple equilibria and limit cycles, are very simple. From an information-processing viewpoint, this is important because the global dynamics are built from these primitive dynamics. One fundamental step is to quantify the dynamics of the individual components in the parameter space.


Another step is to learn the rules that compose the complex dynamics of the spatio-temporal system from the individual dynamics. This is challenging because the dimensionality of the overall system can become so high that controllability and invariance are lost. In this work, we used a step-by-step approach with the goal of unraveling some fundamental mechanisms that naturally emerge due to the interconnectivity of the components. In this regard we used synchronization analysis among the processing elements (PE) as the first step to categorize different regions of dynamics in the parameter space and to elucidate the possible composition of global dynamics. In fact, global behaviors generated by interactions among lower-level components are very rich. There is a wide variety of collective behaviors in a coupled oscillatory system, including synchronization, partial synchronization, desynchronization, dephasing, phase trapping, and bursting in neural oscillators coupled through diffusive coupling. These are all possible tools to refine the state space of our system to provide different representations of embedded signals. If all of them can be used, computational ability and efficiency will be greatly improved. However, we should realize and emphasize that we are still far from building computational systems upon fully explored rich dynamics, where parameter design, controllability and system invariance need to be thoroughly investigated. While we cannot yet provide a full interpretation of the entire system dynamics of our model, as a first step we tried in this work to analytically study synchronization, which has relative simplicity and an invariance property, to construct an initial division of dynamical regimes in the parameter space. By controlling one of the emerging properties, the projection space can be built so that, in these complicated and highly interconnected networks, the dynamics collapse to a lower-dimensional space to facilitate information retrieval. Analytical studies have the added advantage of appreciably decreasing the need to scan parameters, which may still be needed to systematically illustrate the


global dynamics. To achieve the ultimate goal of designing a desirable computational system in the framework of a complex dynamical system, it is of great value to understand these basic yet rich dynamical behaviors of the target system, starting from the most basic elements. Moreover, in hardware designs, variations in design processes and fabrication, as well as noise, may degrade the performance of the system. Theoretical studies help recognize the range of the system's working parameters to make sure that variations do not significantly change the system's qualitative behavior. 1.2 Structure of the Freeman Model Generally, an N-th order system in Freeman's model is defined by

(1/ab)·(d²x_i(t)/dt² + (a+b)·dx_i(t)/dt + ab·x_i(t)) = Σ_j [W_ij·Q(x_j(t), Qm) + W'_ij·f_j(Q(x_j(t), Qm), t)] + P_i(t),  i, j = 1, ..., N,   (1.1)

where a and b are experimental constants that determine the cutoff frequencies of the 2nd-order dynamics, W_ij is the coupling coefficient, f_j models a dispersive delay, and P_i(t) is the external input. Each PE in Equation 1.1 models the independent dynamics of the wave density for the action of dendrites and the pulse density for the parallel action of axons. Note that there is no auto-feedback in the model. Q(x) is the asymmetric nonlinear function (at the output stage) in each PE. It describes the pulse-to-wave transformation [1]. The mathematical model and properties of Q(x) are discussed later. Freeman's model is a locally stable and globally unstable dynamical system in a very high dimensional space. Although it has great complexity, the model is built from a hierarchical embedding of simpler and similar structures. Based on the seminal work of Katchalsky [1], four different levels (KO, KI, KII and KIII) are included in the model, defined next [1].


Figure 1-1: Two different types of KO sets. (a) Excitatory. (b) Inhibitory

A KO set represents about 10^3 to 10^8 neurons that receive the same input and have the same sign of output. Each KO set consists of three blocks: a summation node, 2nd-order linear dynamics, and a nonlinearity. It is the simplest and most basic building block in the hierarchy. All higher-level structures are made of interconnected KO sets. Spatial inputs to a KO set are weighted and summed. The resulting signal is passed through a linear time-invariant system with 2nd-order dynamics. The output of the linear system is shaped by the asymmetric nonlinear function Q(x). Two categories of KO sets (excitatory and inhibitory) are defined by the sign of the nonlinear function (Figure 1-1). There is no coupling among the KO sets when forming a KO network. A KI set also describes a set of neurons that have a common input source and sign of output. However, there are dense interconnections within the KI set. KO sets with a common sign (either excitatory or inhibitory) are connected through forward lateral feedback to construct each KI set (Figure 1-2). No auto-feedback is allowed in the network. A KII set is defined to realize the dense functional interconnections between two KI sets. Therefore, there are three possible topologies, which are formed


Figure 1-2: KI network consists of interconnected KOs with common signs

by two excitatory sets, two inhibitory sets, or an excitatory and inhibitory pair, respectively. The last one is specifically denoted as a KII set. It is a coupled oscillator consisting of two KI sets (or four KO sets) of different signs (Figure 1-3). Mitral cells M1 and M2 are excitatory cells, while granule cells G1 and G2 are inhibitory cells. Each KII set has fixed coupling coefficients obtained from biological experiments. A KII set is the basic computational element in Freeman's olfactory system. The measured output from any nonlinear function has two stable states controlled by the external stimulus. The resting state occurs when the external input is in the zero state. An oscillation occurs when an external input is present.


Therefore a KII set is an input controlled oscillator. The KII network is built from KII sets that are interconnected through both the excitatory cells (denoted by M1) and the inhibitory cells (denoted by G1). This interconnected structure represents a key stage of learning and memorizing in the olfactory system. External input patterns (P(t)) and feedback signals through M1 cells are mapped into spatially distributed outputs. Excitatory and inhibitory interconnections enable cooperative and competitive behaviors, respectively. The KII network functions as an encoder of input signals or as an auto-associative memory [1, 4]. The KIII network (Figure 1-4) embodies the computational model of the olfactory system. It has different layers representing anatomical regions of a mammalian brain. In a KIII network, basic KII sets and a KII network are tightly coupled through dispersive connections (mimicking the different lengths and thicknesses of nerve bundles). Since the intrinsic oscillating frequencies of the KII sets in different layers are incommensurate among themselves, this network of coupled oscillators behaves chaotically. 1.3 Study Objectives We studied the Freeman model (an oscillatory network) because it is a computational unit that mimics biological information processing with sufficient mathematical detail to enable engineering applications. Freeman wrote extensively about this model, but kept the focus on explaining biology, not on building man-made information processing artifacts. Our goal is to theoretically analyze the model to gain a basic understanding of how this network works; how to control it; and most importantly, how to use its advantages to define a system of computational primitives. We investigated the basic levels in the hierarchy, emphasizing dynamical behaviors through simulations and structural analysis that explains the experimental results. In simulations, our main task was to identify the dominant dynamics of interest from the different behaviors at different levels. We conducted detailed and meaningful


Figure 1-4: KIII network represents the computational model of the olfactory system


theoretical analysis for the lower-level structures, from the KO set to KII networks. Based on the theoretical analysis and the computational primitives defined from the categorized dynamics, potential applications were investigated. We also examined hardware design of the model using analog VLSI for real-time implementations. Chapter 2 begins with the analysis of the Hopf bifurcation in the reduced KII (RKII) set. The Hopf bifurcation characterizes the most fundamental dynamical behavior of the basic building block in the whole hierarchy. Analytical solutions will be given to quantify the desirable regions of the control parameters in the model. The results will then be used for the synchronization analysis of higher level structures such as coupled RKII sets, which will be presented in Chapter 3. Chapter 4 discusses the computational potential of RKII networks based on synchronization. The analog VLSI design is presented in Chapter 5. A new architecture that reduces the complexity of the circuit design is proposed. Finally, Chapter 6 presents the conclusions of this work.


CHAPTER 2 HOPF BIFURCATION IN A REDUCED KII (RKII) SET A clear understanding of the dynamics of Freeman's model is needed to control the network and to define its characteristics, so that we have insight on how and why to use the model in different applications. Basic observations of the dynamical behaviors of the olfactory bulb can be achieved either experimentally or from simulations of the model. However, it is difficult to understand the network's controllability considering the massive number of free parameters in such a complicated network. With its hierarchical structure, Freeman's model is built from very basic components such as the KII set. As a general approach, the study of the basic component helps us understand both local and global behaviors of the network. This chapter presents a theoretical solution to categorize different behaviors in the lower-level structures of the model. 2.1 Dynamics of an RKII Set Figure 2-1: RKII set consists of one mitral cell and one granule cell coupled through Kmg (> 0) and Kgm (< 0)


An RKII set is a simplified version of the full KII set [5] shown in Figure 1-3. Unlike a full KII set, it has only a single excitatory (mitral) cell and a single inhibitory (granule) cell (Figure 2-1). The M-cell and G-cell are coupled by two internal coefficients Kmg and Kgm. P(t) is a time-varying external stimulus, but typically it only takes binary values that are used in static pattern recognition [4, 5, 9, 10]. In such cases, P(t) is assigned either an off state (zero input) or an on state (positive input). Although an RKII is structurally different from a full KII, the network dynamics of a large assembly of RKII sets generate global dynamics similar to those of a full KII [5]. Moreover, the individual dynamics of the RKII set are qualitatively similar to the full KII set, i.e., it is an input controlled oscillator. For our analytical studies, the RKII has the great advantage of fewer free parameters, which simplifies the analysis. In addition, the most significant property of the olfactory bulb is the interaction between mitral and granule cells (as an excitatory-inhibitory network). The number of cells is normally not balanced and their connections are not uniformly distributed. However, starting with the simplest yet general feedback architecture, higher level models can always be achieved by well-defined intercellular connections. From a modeling perspective, the most important task here is to mimic the behavior rather than to mathematically match the target biological system. Therefore, we focus our work on the study of an RKII set and hope to use the analysis for understanding higher level structures in the system. An RKII set is described by a system of 2nd-order ordinary differential equations (ODEs) as

(1/ab)·(m''(t) + (a+b)·m'(t)) + m(t) = Kgm·Q(g(t)) + P(t)
(1/ab)·(g''(t) + (a+b)·g'(t)) + g(t) = Kmg·Q(m(t))                  (2.1)


where m(t) and g(t) are the state variables of the mitral and granule cells, and K_mg > 0 and K_gm < 0 are internal couplings. The values of a and b determine the cutoff frequencies of the 2nd-order dynamics. They are given experimentally as a = 220 Hz and b = 720 Hz [1]. Q(x) is the nonlinear function that models the spatio-temporal integration of spikes into the mesoscopic waves measured in the olfactory bulb, and it serves as the key functional block in the construction of this population model [1]. Q(x) is defined by Equation 2.2 as

\[
Q(x) =
\begin{cases}
Q_m\left(1 - e^{-(e^x - 1)/Q_m}\right) & x > x_0 \quad (2.2a)\\[4pt]
-1 & \text{else}, \quad (2.2b)
\end{cases}
\]

where x_0 = ln(1 - Q_m ln(1 + 1/Q_m)). Q_m is a free parameter that determines the ratio between the positive and negative saturation values of Q(x). In our study, for simplicity, we used the exponential branch (Equation 2.2a) to describe both the positive and negative saturations, avoiding the discontinuity introduced by the hard limit of Equation 2.2b. This variation should not affect our results.

Figure 2-2: Plot of the nonlinear function Q(x) when Q_m = 5
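The nonlinearity above is easy to verify numerically. The following is a minimal sketch (our own illustration, not the dissertation's code) of Equation 2.2 with Q_m = 5; it confirms that Q(0) = 0, that Q saturates near Q_m for large x, and that the two branches join continuously at x_0.

```python
import math

Q_M = 5.0  # asymmetry parameter Q_m used throughout this chapter

def x0(q_m=Q_M):
    # x_0 = ln(1 - Q_m * ln(1 + 1/Q_m)): the point where Q(x) reaches -1
    return math.log(1.0 - q_m * math.log(1.0 + 1.0 / q_m))

def Q(x, q_m=Q_M):
    """Freeman's asymmetric sigmoid, Equation 2.2."""
    if x > x0(q_m):
        return q_m * (1.0 - math.exp(-(math.exp(x) - 1.0) / q_m))
    return -1.0  # hard negative saturation, Equation 2.2b
```

With Q_m = 5 the positive saturation is 5 while the negative saturation is -1, which is the asymmetry the model relies on.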


The olfactory bulb is a homogeneous excitatory-inhibitory network constructed from basic building blocks (RKII sets). A typical activity of an excitatory-inhibitory pair is the birth of oscillations induced by a stimulus. Indeed, in such cases the desired dynamics of RKII sets are expected to be exactly the same as those of the full KII sets. The system stays at its equilibrium (the origin) when the external stimulus P(t) is zero or negative, which indicates that no sensory inputs are received. The transition into oscillatory states (limit cycles) is induced by a positive and large-enough input. A natural approach to investigating how the system undergoes such structural changes is based on bifurcation theory. Many well-studied categories of bifurcations lead to the birth of limit cycles. The RKII differs from conventional excitatory-inhibitory structures because the oscillator itself is a 4th-order system. In the following, we give an overview of dynamical system analysis and show that the transition into a limit cycle occurs through a Hopf bifurcation. This reduces our current model, since the Hopf bifurcation resides in a two-dimensional space in the local neighborhood of the fixed point, while the RKII set remains biologically plausible.

2.2 Bifurcations in Nonlinear Systems

A complicated nonlinear system is usually studied by linearizing the system equations at its operating points. In this way, its local behavior can be described by its linear counterpart. Moreover, the dynamics of a linear system are rather simple and depend only on the eigenvalues of the linear equations. A linear time-invariant system is defined as

\[
\frac{dx(t)}{dt} = A\,x(t)
\tag{2.3}
\]

where x ∈ R^N and A ∈ R^{N×N} is a constant matrix. The behavior of Equation 2.3 is determined by the eigenvalues of A. The solution x(t) = x(0)e^{At} has the following possible dynamics [17]:


• The system has a stable fixed-point solution if the real parts of all the eigenvalues of A are negative.
• The system is unstable if at least one of the eigenvalues of A has a positive real part.
• If every eigenvalue of A has a real part that is less than or equal to zero, and if all eigenvectors corresponding to the eigenvalues with zero real part are independent, then the system is stable; otherwise it is unstable.

If none of the eigenvalues of A has zero real part, the fixed-point solution of Equation 2.3 is called a hyperbolic fixed point; otherwise it is called a nonhyperbolic fixed point. The analysis of a linear system is simple and straightforward, since all possible dynamics are clearly determined by the eigenvalues and eigenvectors of A. In a nonlinear system, more complicated behaviors may exist and may even be unsolvable. In most cases, a nonlinear system is linearized around its equilibrium so that an explicit solution and a qualitative analysis can be obtained in the neighborhood of the fixed point [18]. Conditions that guarantee a qualitatively similar phase portrait between a nonlinear system and its linearized version are described by the Hartman-Grobman theorem [19].

Theorem 1 (Hartman-Grobman Theorem). If a nonlinear system ẋ = f(x) has a hyperbolic fixed point at x = x_0 and a Jacobian A, then this system is locally topologically equivalent to the linearized system (Equation 2.3).

Based on this theorem, if a nonlinear system (Equation 2.4) has no purely imaginary eigenvalues, the orbit of the flow generated by this system can be mapped to that of its linearization as defined in Equation 2.3 by a homeomorphism (a one-to-one correspondence of points in the two flows).

\[
\frac{dx}{dt} = f(x)
\tag{2.4}
\]
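As a quick illustration of these stability criteria (a toy example of our own, not from the text), the damped oscillator ẋ = Ax with A = [[0, 1], [-ω², -2ζω]] has eigenvalues with real part -ζω, so for any ζ > 0 it falls under the first, stable case:

```python
import cmath

omega, zeta = 10.0, 0.3
# Characteristic polynomial of A = [[0, 1], [-omega**2, -2*zeta*omega]]:
#   s**2 + 2*zeta*omega*s + omega**2 = 0
disc = cmath.sqrt((2 * zeta * omega) ** 2 - 4 * omega ** 2)
s1 = (-2 * zeta * omega + disc) / 2
s2 = (-2 * zeta * omega - disc) / 2
# Both real parts equal -zeta*omega < 0, so the origin is a stable fixed point.
```

Setting ζ = 0 instead would put both eigenvalues on the imaginary axis, producing the nonhyperbolic case that motivates the center manifold machinery below.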


Clearly, the stability analysis of a nonlinear system can be greatly simplified while the qualitative properties obtained remain the same. However, this holds only when the system is hyperbolic and not at a bifurcation point [19]. A bifurcation occurs when a system becomes structurally different (nonhyperbolic) with respect to a variation of its parameter set. A parameter-dependent system may present different behaviors in the phase space when the parameter passes through the bifurcation point [19]. While a nonlinear system can be reduced to its linearized counterpart, its behavior (or trajectory) around the bifurcation point cannot simply be described by a single set of linear systems. In fact, the various bifurcations observable in biological systems are usually interesting but make the analysis difficult. Some basic types of bifurcations are shown in Appendix A. The question is: when a nonlinear system has nonhyperbolic fixed points, how do we simplify it? The most important method is the center manifold theory.

2.3 Center Manifold Theorem

Consider a linear system

\[
\frac{dx}{dt} = Ax,
\tag{2.5}
\]

where A is a constant matrix and x ∈ R^N. The entire space R^N can be described as the direct sum of three subspaces determined by the eigenvalues of A: the stable, unstable, and center subspaces.

• If {e_1, ..., e_s} are the generalized eigenvectors of A whose corresponding eigenvalues all have negative real parts, the stable subspace is defined as E^s = span{e_1, ..., e_s}.
• If {e_{s+1}, ..., e_{s+u}} are the generalized eigenvectors of A whose corresponding eigenvalues all have positive real parts, the unstable subspace is defined as E^u = span{e_{s+1}, ..., e_{s+u}}.


• If {e_{s+u+1}, ..., e_{s+u+c}} are the generalized eigenvectors of A whose corresponding eigenvalues are all purely imaginary, the center subspace is defined as E^c = span{e_{s+u+1}, ..., e_{s+u+c}}.

The three subspaces are all invariant manifolds, which means that solutions of the linear system with initial conditions in any of the subspaces stay in that subspace forever. Given a system without any solution in the unstable subspace (that is, E^u = ∅), it is clear that any orbit or solution will converge to the center manifold. Depending on the dimension of the center manifold, the asymptotic behavior of the bifurcating system under analysis can be approximated by the reduced system locally around the fixed point. This is the basic idea of the center manifold theorem [17].

Theorem 2 (Center Manifold Theorem). The nonlinear system defined by Equation 2.4 can be parameterized by states in the center subspace (i.e., x ∈ E^c) as

\[
C = \{x + y \mid y = \phi(x)\}.
\tag{2.6}
\]

y = φ(x) ∈ E^s ⊕ E^u is called the center manifold; φ is a smooth function and satisfies φ(0) = 0 and Dφ(0) = 0. Furthermore, the center manifold is locally invariant with respect to the original system, i.e., every solution of the nonlinear system (Equation 2.4) that originates on the center manifold stays on the center manifold. The center manifold can also be attractive: when the unstable subspace does not exist (i.e., E^u = ∅), all solutions of the original system that are close enough to the center manifold approach it. Next, we characterize the dynamical properties of the RKII set and use the center manifold theorem to reduce the dimension of its interesting dynamics.


2.4 Hopf Bifurcation in an RKII Set

By a change of variables, Equation 2.1 can be written as a system of first-order ODEs:

\[
\begin{aligned}
\frac{dm}{dt} &= m_v\\
\frac{dm_v}{dt} &= -ab\,m - (a+b)\,m_v + ab\left(K_{gm}Q(g) + P\right)\\
\frac{dg}{dt} &= g_v\\
\frac{dg_v}{dt} &= -ab\,g - (a+b)\,g_v + ab\,K_{mg}Q(m)
\end{aligned}
\tag{2.7}
\]

where m and g represent the state variables of the mitral and granule cells, respectively, and m_v and g_v are the 1st-order derivatives of m and g. The internal couplings K_mg and K_gm and the input P(t) are the possible bifurcation parameters in this system. Before looking into the actual causes and properties of the bifurcation, we first discuss some of the RKII's properties.

2.4.1 Properties of the Equilibrium

The equilibrium point cannot be solved for analytically, but we will investigate its existence and uniqueness with the help of numerical simulations. Let us assume m* and g* are the fixed-point solutions. Based on Equation 2.7, Equation 2.8 can be solved numerically for the actual value of the equilibrium. The equilibrium lies at the intersection of the two nullclines defined by the two equations in Equation 2.8:

\[
\begin{aligned}
m - K_{gm}Q(g) - P &= 0\\
g - K_{mg}Q(m) &= 0
\end{aligned}
\tag{2.8}
\]

First, we discuss the existence and uniqueness of the equilibrium of an RKII.
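To make the state-space form concrete, here is a minimal forward-Euler integration of Equation 2.7 (an illustrative sketch, not the original simulation code). With P = 0 and |K_mg K_gm| = 4, which lies below the zero-input stability bound (a+b)²/(ab) ≈ 5.578 derived later in this chapter, the trajectory decays back to the origin:

```python
import math

a, b = 220.0, 720.0              # rate constants from the model
K_MG, K_GM, P = 1.0, -4.0, 0.0   # |K_mg K_gm| = 4 < 5.578: stable regime
Q_M = 5.0

def Q(x):
    x0 = math.log(1.0 - Q_M * math.log(1.0 + 1.0 / Q_M))
    return Q_M * (1.0 - math.exp(-(math.exp(x) - 1.0) / Q_M)) if x > x0 else -1.0

def rkii_step(state, dt):
    """One Euler step of the first-order system in Equation 2.7."""
    m, mv, g, gv = state
    dmv = -a * b * m - (a + b) * mv + a * b * (K_GM * Q(g) + P)
    dgv = -a * b * g - (a + b) * gv + a * b * K_MG * Q(m)
    return (m + dt * mv, mv + dt * dmv, g + dt * gv, gv + dt * dgv)

state = (0.1, 0.0, 0.1, 0.0)     # perturbed initial condition
dt, steps = 1e-5, 50_000         # 0.5 s of simulated time
for _ in range(steps):
    state = rkii_step(state, dt)
# state has relaxed toward the origin, the unique equilibrium for P = 0
```

Increasing |K_gm| past the bound instead produces a sustained oscillation, as the bifurcation analysis below explains.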


Figure 2-3: The equilibrium is determined by the intersection of the two nullclines

Property 1. When K_mg > 0, K_gm < 0, and P ≥ 0, there is one and only one finite fixed-point solution of an RKII set defined by Equation 2.7. The equilibrium exists only at the origin or in the first quadrant, i.e., m* > 0 and g* > 0.

Proof. Recall that the equilibrium (m*, g*) of Equation 2.7 is determined by the intersections of the two nullclines defined by Equation 2.8. The system has a fixed point at the origin when P = 0. When P > 0, the two nullclines have the same sign only in the first quadrant; hence the intersection is always in the first quadrant, which means m* > 0 and g* > 0. It is straightforward that

\[
\begin{aligned}
Q'(x) &= e^{x}\,e^{-(e^{x}-1)/Q_m} > 0\\
m'(g) &= K_{gm}Q'(g) < 0\\
g'(m) &= K_{mg}Q'(m) > 0.
\end{aligned}
\tag{2.9}
\]


Therefore, m(g) is monotonically decreasing and g(m) is monotonically increasing, so there can be at most one intersection between the two nullclines. These conclusions can also be verified by inspecting the intersection of the two nullclines in Figure 2-3.

Some other properties of the fixed-point solution are presented below. They will be of interest when the actual values of the internal coefficients need to be determined.

Property 2. Assume that m* and g* have finite values. When P is a constant, m* is a decreasing function of both K_mg and |K_gm|; g* is an increasing function of K_mg and a decreasing function of |K_gm|.

Proof. According to Property 1, m* > 0 and g* > 0, so we have Q(m*) > 0 and Q(g*) > 0. Recall that Q'(x) = e^x e^{-(e^x - 1)/Q_m} > 0. The proof is straightforward based on the signs of the 1st-order derivatives computed from Equation 2.8:

\[
\begin{aligned}
\frac{\partial m^*}{\partial K_{mg}} &= \frac{-|K_{gm}|\,Q(m^*)Q'(g^*)}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}\\[4pt]
\frac{\partial m^*}{\partial |K_{gm}|} &= \frac{-Q(g^*)}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}\\[4pt]
\frac{\partial g^*}{\partial K_{mg}} &= \frac{Q(m^*)}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}\\[4pt]
\frac{\partial g^*}{\partial |K_{gm}|} &= \frac{-K_{mg}\,Q(g^*)Q'(m^*)}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}
\end{aligned}
\tag{2.10}
\]

2.4.2 Hopf Bifurcation

Intuitively, the behaviors of an RKII set resemble those of a Hopf bifurcation during the onset of a periodic orbit. Specifically, by smoothly adjusting some of the free parameters, an RKII can bifurcate from a stable fixed point to a stable limit cycle. This matches a supercritical Hopf bifurcation (Appendix A) [17, 20]. Two steps will be taken to analyze this model. First, we will show that a Hopf


bifurcation exists. There are three free parameters, namely K_mg, K_gm, and P. We will show that they can be considered together as a single parameter for this particular bifurcation. Second, properties such as the frequency and amplitude of the periodic solution can be estimated in the neighborhood of the unique equilibrium based on the center manifold theorem. Naturally, we expect the center manifold to have a dimension of two. Conceptually, a Hopf bifurcation exists in an N-dimensional system ẋ = F(x, μ), where x ∈ R^N and F ∈ C^r, r ≥ 3, when

• the Jacobian of F, DF(0, μ), has a pair of complex conjugate eigenvalues λ and λ̄ such that

\[
\lambda(\mu) = \alpha(\mu) + i\omega(\mu)
\tag{2.11}
\]

where ω(μ_0) = ω_0 > 0, α(μ_0) = 0, and α'(μ_0) ≠ 0. μ_0 is called the critical value.
• the remaining N - 2 eigenvalues all have strictly negative real parts.

To examine whether a Hopf bifurcation exists in our system, we first compute the eigenvalues of Equation 2.7. The corresponding Jacobian matrix is

\[
A =
\begin{bmatrix}
0 & 1 & 0 & 0\\
-ab & -(a+b) & ab\,K_{gm}Q'(g^*) & 0\\
0 & 0 & 0 & 1\\
ab\,K_{mg}Q'(m^*) & 0 & -ab & -(a+b)
\end{bmatrix}
\tag{2.12}
\]

where m* and g* are the fixed-point solutions determined by K_mg, K_gm, and P, and Q'(x) is the derivative of Q(x).


To calculate the eigenvalues, we solve

\[
|\lambda I - A| =
\begin{vmatrix}
\lambda & -1 & 0 & 0\\
ab & \lambda + (a+b) & -ab\,K_{gm}Q'(g^*) & 0\\
0 & 0 & \lambda & -1\\
-ab\,K_{mg}Q'(m^*) & 0 & ab & \lambda + (a+b)
\end{vmatrix}
= 0.
\tag{2.13}
\]

Thus we obtain

\[
\left[\lambda(\lambda + a + b) + ab\right]^2 - (ab)^2 K_{mg}K_{gm}\,Q'(m^*)Q'(g^*) = 0.
\tag{2.14}
\]

Solving this equation, and recalling that K_mg K_gm < 0, we have

\[
\lambda(\lambda + a + b) = -ab \pm i\,ab\sqrt{|K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}.
\tag{2.15}
\]

Therefore, the eigenvalues are

\[
\lambda_{1,2,3,4} = \frac{-(a+b) \pm \sqrt{(a-b)^2 \pm i\,4ab\sqrt{|K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}}}{2}.
\tag{2.16}
\]

Note that Q'(m*)Q'(g*) > 0. Clearly, the eigenvalues of A are controlled by all three free parameters through the single quantity μ = |K_mg K_gm| Q'(m*)Q'(g*). Without loss of generality, we will consider these parameters in terms of this single representation μ. Because μ is not always a monotonic function of any one of the free parameters, each of them may lead to multiple bifurcation points. Later we will show how this affects the system dynamics. For now, we only consider the change of structural stability induced by μ in a neighborhood of the critical value μ_0. From our previous discussion, it is easy to check that one pair of the system's complex eigenvalues always has a negative real part. The conditions for a Hopf bifurcation are all satisfied at the point where a pair of purely imaginary eigenvalues is generated. The critical values are


μ_0 = (a+b)²/(ab) and ω_0 = √(ab). In a neighborhood of the equilibrium, the frequency of the periodic solution can thus be approximated from the system's time constants (1/a and 1/b). To determine the local dynamics of this bifurcation, we need to perform a center manifold reduction to find the normal form in the neighborhood of the equilibrium and the bifurcation parameter μ_0 [17]. The first step is to construct the real Jordan canonical form of A. The transformation matrix is T = [Re{v_1} Im{v_1} Re{v_3} Im{v_3}], where v_1 and v_3 are the eigenvectors corresponding to the purely imaginary eigenvalue and the eigenvalue with negative real part, respectively. At the critical value μ_0, we have

\[
T^{-1}AT =
\begin{bmatrix}
0 & -\sqrt{ab} & 0 & 0\\
\sqrt{ab} & 0 & 0 & 0\\
0 & 0 & -(a+b) & -\sqrt{ab}\\
0 & 0 & \sqrt{ab} & -(a+b)
\end{bmatrix}
\tag{2.17}
\]

The first block represents the invariant center manifold, while the second block represents the invariant stable manifold. Denote y = T^{-1}x. The new system ẏ = G(y) can be used to construct the normal form of the original system. By solving for the center manifold, we can obtain information about the frequency, amplitude, and stability of the periodic solution in the neighborhood of the equilibrium. An analytical form of the center manifold is generally unsolvable. In the next section, we will directly analyze the bifurcations as a function of each of the free parameters to obtain accurate solutions that can control the dynamics of an RKII. Here, through numerical calculation, important information such as the frequency, amplitude, and stability of the periodic solutions can be estimated from the reduced center manifold [20]. We should note that the analysis is local; thus, the estimation is accurate only in a neighborhood of the equilibrium. From simulations, a reasonable estimate can be obtained over a rather large range beyond the critical value.
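The closed-form eigenvalues (Equation 2.16) and the critical value μ_0 = (a+b)²/(ab) can be checked numerically. The sketch below (our own verification code, not from the dissertation) substitutes the four closed-form roots into the characteristic polynomial of Equation 2.14 and confirms that the largest real part changes sign exactly as μ crosses μ_0:

```python
import cmath
import math

a, b = 220.0, 720.0
mu0 = (a + b) ** 2 / (a * b)       # critical value of mu, about 5.578

def eigenvalues(mu):
    """Closed-form roots from Equation 2.16, mu = |K_mg K_gm| Q'(m*) Q'(g*)."""
    lams = []
    for s1 in (1.0, -1.0):         # sign of the inner imaginary term
        inner = cmath.sqrt((a - b) ** 2 + s1 * 4j * a * b * math.sqrt(mu))
        for s2 in (1.0, -1.0):     # sign of the outer square root
            lams.append((-(a + b) + s2 * inner) / 2.0)
    return lams

def residual(mu):
    # Worst-case value of the characteristic polynomial (Equation 2.14),
    # with K_mg K_gm Q'Q' = -mu, evaluated at the closed-form roots.
    return max(abs((l * (l + a + b) + a * b) ** 2 + (a * b) ** 2 * mu)
               for l in eigenvalues(mu))

worst = max(residual(mu) for mu in (2.0, 5.0, 8.0))
growth_below = max(l.real for l in eigenvalues(0.99 * mu0))
growth_above = max(l.real for l in eigenvalues(1.01 * mu0))
```

`worst` stays at roundoff level, and the leading real part is negative just below μ_0 and positive just above it, which is the sign change the Hopf conditions require.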


In particular, the periods of oscillation in a neighborhood of the equilibrium can be estimated using ω_0 as

\[
T = \frac{2\pi}{\omega_0}\left[1 + k(\mu - \mu_0) + o\!\left((\mu - \mu_0)^2\right)\right],
\tag{2.18}
\]

where ω_0 = √(ab) and μ_0 = (a+b)²/(ab). The value of k needs to be evaluated after the normal form transformation for any specific set of system parameters [20]. Figure 2-4 shows an example in which the frequency of oscillation was recorded as a function of μ. The input is P = 0 and K_mg is fixed at 1, so that μ = |K_gm|. The bifurcation point is at (a+b)²/(ab), which approximately equals 5.6. The value of |K_gm| is increased beyond the critical value. The frequency estimated from the reduced system approximates the real system very well up to twice the coupling strength at the bifurcation point.

Figure 2-4: Frequency of oscillation as a function of coupling strength
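Plugging in the given rate constants shows what these formulas predict numerically (direct evaluation of the expressions above, nothing assumed beyond a = 220 and b = 720):

```python
import math

a, b = 220.0, 720.0
w0 = math.sqrt(a * b)            # imaginary part of the critical eigenvalues, rad/s
f0 = w0 / (2.0 * math.pi)        # oscillation frequency at onset, in Hz
mu0 = (a + b) ** 2 / (a * b)     # critical value of mu
```

f0 comes out near 63.3 Hz, and mu0 ≈ 5.578 matches the zero-input boundary value 5.5783 used later in Section 2.6.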


2.5 Parameter Design in an RKII Set

Bifurcation occurs when variations of parameters result in structural changes of the system. In other words, the free parameters can effectively control the system behavior. In this section, by studying the real parts of the eigenvalues of an RKII set as functions of the coupling coefficients and the external input, we find ways to achieve desired dynamics of RKII sets systematically. Recall that the corresponding Jacobian matrix is

\[
A =
\begin{bmatrix}
0 & 1 & 0 & 0\\
-ab & -(a+b) & ab\,K_{gm}Q'(g^*) & 0\\
0 & 0 & 0 & 1\\
ab\,K_{mg}Q'(m^*) & 0 & -ab & -(a+b)
\end{bmatrix}
\tag{2.19}
\]

where m* and g* are the fixed-point solutions determined by K_mg, K_gm, and P, and Q'(x) is the derivative of Q(x). The eigenvalues are

\[
\lambda_{1,2,3,4} = \frac{-(a+b) \pm \sqrt{(a-b)^2 \pm i\,4ab\sqrt{|K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}}}{2}.
\tag{2.20}
\]

Let

\[
\begin{aligned}
R &= \mathrm{Re}\left\{\sqrt{(a-b)^2 \pm i\,4ab\sqrt{|K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}}\right\}\\
I &= \mathrm{Im}\left\{\sqrt{(a-b)^2 \pm i\,4ab\sqrt{|K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)}}\right\}
\end{aligned}
\tag{2.21}
\]

It is clear that R > 0 and (a+b) > 0. Remember that the system behavior switches when Re{λ} passes zero. In this case, the sign of -(a+b) ± R determines the sign of Re{λ}, so we have the following sufficient and necessary conditions for the two distinguished states of the equilibrium that we are interested in.

1. An RKII set (Equation 2.7) has a unique stable equilibrium iff

\[
R_{\max}^2 < (a+b)^2.
\tag{2.22}
\]


2. An RKII set (Equation 2.7) has a unique unstable equilibrium iff

\[
R_{\max}^2 > (a+b)^2.
\tag{2.23}
\]

R = (a+b) is the bifurcation point of the system and represents the boundary for the purpose of controlling the behavior of an RKII set. Let R_r = (a-b)² and R_i = 4ab√(|K_mg K_gm| Q'(m*)Q'(g*)). Since the real part of the square root of a complex number z = R_r + iR_i satisfies Re{√z}² = (R_r + |z|)/2, we have R²_max = (R_r + √(R_r² + R_i²))/2, and the conditions above can be rewritten in terms of |K_mg K_gm|.

In general, with a fixed external input, |K_mg K_gm| determines the dynamical behavior of an RKII set by

\[
|K_{mg}K_{gm}| < \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} \quad \text{when } R_{\max}^2 < (a+b)^2
\tag{2.26a}
\]
\[
|K_{mg}K_{gm}| > \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} \quad \text{when } R_{\max}^2 > (a+b)^2.
\tag{2.26b}
\]

Therefore, for the purpose of controlling the dynamics (i.e., a fixed-point solution with zero input and a limit cycle with positive input) of the system defined by Equation 2.7, the sufficient and necessary conditions on the coupling coefficients are

• Condition 1: an RKII is at rest when there is no input if and only if

\[
|K_{mg}K_{gm}| < \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab}.
\tag{2.27}
\]

• Condition 2: an RKII oscillates when the input is positive if and only if

\[
|K_{mg}K_{gm}| > \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab}.
\tag{2.28}
\]

Note that the right side of Equation 2.28 is also a function of K_mg and K_gm, which depends nonlinearly on |K_mg K_gm|. Condition 2 also sets restrictions on the value of Q_m through Q'(x). Q'(x) is defined by

\[
Q'(x) =
\begin{cases}
e^{x}\,e^{-(e^{x}-1)/Q_m} & x > x_0\\
0 & \text{else}
\end{cases}
\tag{2.29}
\]

where x_0 = ln(1 - Q_m ln(1 + 1/Q_m)). Figure 2-5 shows a plot of Q'(x). Q'(x) ≥ 0 and has a maximum value of Q_m e^{-(Q_m - 1)/Q_m} at x = ln(Q_m). To satisfy Condition 2, Q'(x) must be greater than 1 for some x. Solving for the maximum value, we obtain Q_m > 1. This is in accordance with the requirement in biological models that the asymmetry of Q(x) is a necessary source of instability in the neural network [4].
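The stated location and value of the maximum of Q'(x) are easy to confirm with a direct scan (an illustrative check of our own, with Q_m = 5 as assumed throughout):

```python
import math

Q_M = 5.0

def Qp(x):
    """Derivative of Q(x) for x above x_0, Equation 2.29."""
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Q_M)

# Scan a fine grid and locate the maximum of Q'(x).
xs = [i * 1e-3 for i in range(-2000, 4000)]
x_best = max(xs, key=Qp)   # should sit near ln(Q_m)
peak = Qp(x_best)          # should be Q_m * exp(-(Q_m - 1) / Q_m)
```

With Q_m = 5 the peak is about 2.25 > 1, so Condition 2 can be met; as Q_m approaches 1 the peak falls to 1, consistent with the Q_m > 1 requirement derived above.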


Figure 2-5: Derivative of Q(x) when Q_m = 5

2.5.1 Determine Proper Values of K_mg and K_gm

We have investigated the conditions on K_mg and K_gm. In this section, we provide specific procedures for choosing suitable K_mg and K_gm that satisfy both conditions. In the first case, when |K_mg K_gm| < (a+b)²/(ab), the boundaries of K_mg and K_gm are well defined by

\[
\begin{aligned}
K_{mg} &> 0\\
K_{gm} &< 0\\
|K_{mg}K_{gm}| &< \frac{(a+b)^2}{ab}.
\end{aligned}
\tag{2.31}
\]

The second case is more complicated because the second condition is a nonlinear function of K_mg and K_gm. Although we could not calculate an analytical solution for the region where valid K_mg and K_gm exist, we will show how to approximate


the desired regions. Again, the equilibrium is determined by

\[
\begin{aligned}
m^* - K_{gm}Q(g^*) - P &= 0\\
g^* - K_{mg}Q(m^*) &= 0
\end{aligned}
\tag{2.32}
\]

where m* and g* are the fixed-point solutions. Solving Equation 2.32 and writing Q'(x) in terms of Q(x), the boundary of Condition 2,

\[
|K_{mg}K_{gm}| = \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab},
\tag{2.33}
\]

can also be expressed with g* eliminated through g* = K_mg Q(m*), where m* (> 0) and g* (> 0) are themselves functions of K_mg and K_gm. It is hard to find an explicit solution for the lower boundary of the desired K_mg and K_gm, especially in the form of |K_mg K_gm|. However, we can still determine useful properties of the desired regions.

Property 3. Given m* > 0 and g* > 0, if

\[
Q'(m^*)\,Q'(K_{mg}Q(m^*)) = 1,
\tag{2.34}
\]

then

\[
Q'(m)\,Q'(K_{mg}Q(m)) > 1
\tag{2.35}
\]

for all 0 < m < m*.

Proof. Q'(m) Q'(K_mg Q(m)) has a shape similar to that of Q'(x), so it stays above the lower bound of 1 as m decreases from m*.
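Property 3 can be sanity-checked numerically. The sketch below (our own illustration, with K_mg = 1 and Q_m = 5 assumed) locates the root m* of Q'(m)Q'(K_mg Q(m)) = 1 on the decaying side of the product by bisection and verifies that the product stays above 1 for 0 < m < m*:

```python
import math

Q_M, K_MG = 5.0, 1.0

def Q(x):
    x0 = math.log(1.0 - Q_M * math.log(1.0 + 1.0 / Q_M))
    return Q_M * (1.0 - math.exp(-(math.exp(x) - 1.0) / Q_M)) if x > x0 else -1.0

def Qp(x):
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Q_M)

def product(m):
    """The quantity in Property 3: Q'(m) * Q'(K_mg Q(m))."""
    return Qp(m) * Qp(K_MG * Q(m))

# Bisection for the root of product(m) = 1 on its decreasing branch.
lo, hi = 1.5, 1.8          # brackets chosen by inspection: product(1.5) > 1 > product(1.8)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if product(mid) > 1.0:
        lo = mid
    else:
        hi = mid
m_star = 0.5 * (lo + hi)
```

The bracket endpoints 1.5 and 1.8 are hypothetical test values for this parameter choice, not figures from the text.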


Property 4. Suppose that (K*_mg, K*_gm) is a set of values that satisfies both conditions. If we fix K_mg = K*_mg and increase |K_gm| until |K*_mg K_gm| = (a+b)²/(ab), then

\[
|K^*_{mg}K_{gm}| > \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab}
\tag{2.36}
\]

remains automatically satisfied.

Proof. At the starting point, where |K*_mg K*_gm| > (1/(Q'(m*)Q'(g*)))((a+b)²/(ab)), we have Q'(m*)Q'(g*) > 1. With increased |K_gm|, both m* and g* decrease (Property 2). According to Property 3, Q'(m*)Q'(g*) > 1 remains true as |K*_mg K_gm| is increased toward (a+b)²/(ab). Thus Equation 2.36 is automatically satisfied.

Because Equation 2.33 is a continuous function, based on the above properties we could start from |K_mg K_gm| = (a+b)²/(ab), then fix the value of K_mg and decrease the value of |K_gm| to satisfy both Condition 1 and Condition 2. The first exponential term in Equation 2.33 will be monotonically decreasing while the other two terms will be monotonically increasing with respect to a reduction of m* and g*. Two cases can occur: |K_mg K_gm| may keep dropping until it reaches the bifurcation point we started from, or it may reach another bifurcation point before that. Similarly, starting from |K_mg K_gm| = (1/(Q'(m*)Q'(g*)))((a+b)²/(ab)) and increasing only the value of |K_gm| will also give us a range of valid values. However, there is no guarantee that these two areas overlap, so that starting from any valid K_mg and K_gm, all values with larger |K_gm| would satisfy the conditions. Nevertheless, experiments show that for reasonable and normally used parameters (such as the a and b given previously, and Q_m = 5), we can use the above procedures as a rule of thumb to easily determine in which direction to change invalid values to obtain working ones.


2.5.2 Dynamics Controlled by an External Input

The state of an RKII is controlled by an external stimulus P. Here, P is a time-invariant input with two states (off and on). The off state (P = 0) is guaranteed by Condition 1 to be stable. However, not every positive value (on) can induce the oscillatory state in an RKII set. We will see in the following that P creates oscillatory behavior only in a bounded region.

Property 5. In the system defined by Equation 2.7, the fixed-point solutions m* and g* are both monotonically increasing functions of the external input P.

Proof. The 1st-order derivatives of m* and g* with respect to P are

\[
\begin{aligned}
\frac{\partial m^*}{\partial P} &= \frac{1}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)} > 0\\[4pt]
\frac{\partial g^*}{\partial P} &= \frac{K_{mg}\,Q'(m^*)}{1 + |K_{mg}K_{gm}|\,Q'(m^*)Q'(g^*)} > 0.
\end{aligned}
\tag{2.37}
\]

Recall that K_mg > 0 and Q'(x) > 0 for finite x. So both derivatives in Equation 2.37 are positive, and m* and g* are both monotonically increasing functions of P.

Theorem 3. Given a set of fixed coupling coefficients K_mg and K_gm, the values of the external input P that enable the oscillatory state in an RKII set exist only in a bounded region.

Proof. The interval of x where Q'(x) > 1 (Figure 2-5) is bounded. The conditions obtained previously require Q'(m*(P)) Q'(g*(P)) > 1. Therefore, m* and g* must stay in a unique finite region. According to Property 5, P must then be bounded.
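Theorem 3 can be illustrated numerically. The following sketch (our own code, not the author's) solves the equilibrium of Equation 2.8 by bisection for each P, with K_mg = 1, K_gm = -5, and Q_m = 5, and tests the oscillation condition |K_mg K_gm| Q'(m*)Q'(g*) > (a+b)²/(ab); the set of P values that pass is a bounded interval, roughly (0.42, 26.06) as reported in Section 2.6.2:

```python
import math

a, b = 220.0, 720.0
Q_M, K_MG, K_GM = 5.0, 1.0, -5.0
BOUND = (a + b) ** 2 / (a * b)          # about 5.578

def Q(x):
    x0 = math.log(1.0 - Q_M * math.log(1.0 + 1.0 / Q_M))
    return Q_M * (1.0 - math.exp(-(math.exp(x) - 1.0) / Q_M)) if x > x0 else -1.0

def Qp(x):
    return math.exp(x) * math.exp(-(math.exp(x) - 1.0) / Q_M)

def equilibrium(P):
    """Bisection on F(m) = m - K_gm Q(K_mg Q(m)) - P, which increases with m."""
    lo, hi = 0.0, P + 5.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid - K_GM * Q(K_MG * Q(mid)) - P < 0.0:
            lo = mid
        else:
            hi = mid
    m = 0.5 * (lo + hi)
    return m, K_MG * Q(m)

def oscillates(P):
    m, g = equilibrium(P)
    return abs(K_MG * K_GM) * Qp(m) * Qp(g) > BOUND

on = [P for P in [0.01 * i for i in range(1, 3000)] if oscillates(P)]
p_low, p_high = on[0], on[-1]           # ends of the oscillatory interval
```

For P = 1 the computed equilibrium agrees with the values (0.1502, 0.1595) quoted in the experiments of Section 2.6.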


Figure 2-6: Fixed-point solution in an RKII set when P = 0. (a) Temporal plot. (b) Phase plot of m(t) vs. g(t) with the initial value marked.

2.6 Experiments

2.6.1 Control of RKII Dynamics by the Coupling Coefficients

We give two examples in which the external input is fixed to either zero or a positive value. In the case of zero input, an exact boundary is shown. When P(t) = 0, Equation 2.26a gives a fixed bifurcation boundary that divides the space into regions of oscillatory and fixed-point solutions. As given in [1], a = 220 Hz and b = 720 Hz, so when

\[
|K_{mg}K_{gm}| < 5.5783,
\tag{2.38}
\]


Figure 2-7: Limit cycle solution in an RKII set when P = 0. (a) Temporal plot. (b) Phase plot.

an RKII set is stable and has a fixed-point solution at (0, 0). When

\[
|K_{mg}K_{gm}| > 5.5783,
\tag{2.39}
\]

an RKII set oscillates. Note that in this case the actual values of K_mg and K_gm do not change the qualitative behavior as long as |K_mg K_gm| is unchanged. In the examples, P = 0, K_mg is fixed to one, and K_gm is varied to set different values of |K_mg K_gm|. The initial condition of m and g is set at (0.1, 0.1). In Figure 2-6, when K_mg = 1 and K_gm = -5.5, the output converges to a fixed point at (0, 0) after a


Figure 2-8: Fixed-point solution in an RKII set when P = 1. (a) Temporal plot. (b) Phase plot.

very long transient time. When we increase |K_gm| to 5.6, the RKII set converges to a stable limit cycle (Figure 2-7). In the second example, we set the input to P = 1. As discussed in the previous section, there is no explicit expression for the lower boundary in the form of |K_mg K_gm|, but once we determine one set of valid values, all other values with a larger |K_gm| are possible choices to keep the system oscillating with a positive external input. Again, we set K_mg to 1. Figure 2-8 shows the transient output of the M-cell and G-cell with a test value of K_gm = -4. Apparently, the coupling strength is not large enough to sustain an oscillation. In this case, the equilibrium is at (0.1776, 0.1906)


Figure 2-9: Limit cycle solution in an RKII set when P = 1. (a) Temporal plot. (b) Phase plot.

and

\[
|K_{mg}K_{gm}| = 4 < \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} = 4.1852,
\tag{2.40}
\]

so Condition 2 is not satisfied. Trying another value, K_gm = -5, the RKII set enters the oscillatory state as shown in Figure 2-9. Here, the values of m* and g* decrease, and the equilibrium is at (0.1502, 0.1595). Condition 2 is satisfied with

\[
|K_{mg}K_{gm}| = 5 > \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} = 4.3762.
\tag{2.41}
\]

After scanning different values of K_gm, we obtained an approximate minimum value of 4.237 as the lower bound. So, in this case, K_mg = 1 and -5.5783 < K_gm <


Figure 2-10: Fixed-point solution in an RKII set with positive input. (a) External input P = 0.35. (b) External input P = 26.1.

-4.237 are all possible solutions to achieve the desired dynamical behavior in an RKII set. Note that with a relatively small |K_gm| the amplitude of the oscillation will be very small and the transient time to reach a stable limit cycle rather long, so normally |K_gm| should take a relatively large value.

2.6.2 Control of RKII Dynamics by the External Input

To show that P achieves the desired system dynamics only within a bounded region, we fix K_mg = 1 and |K_gm| = 5 and scan different values of P. Approximately, with this set of coupling coefficients, P must be kept within (0.42, 26.06) to effectively excite the RKII set. Figure 2-10 shows the transient signal of the system


Figure 2-11: Scanned parameter space of |K_mg K_gm| (boundaries scanned with P = 1)

with two different P values (0.35 and 26.1). We see that not all positive inputs can force an RKII into the oscillatory state. In fact, when P = 0.35,

\[
|K_{mg}K_{gm}| = 5 < \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} = 5.0974,
\tag{2.42}
\]

and when P = 26.1,

\[
|K_{mg}K_{gm}| = 5 < \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab} = 5.2395.
\tag{2.43}
\]

2.7 Discussions

The Hopf bifurcation occurring in an RKII set can be controlled in terms of K_mg, K_gm, and P. The condition on the fixed-point state is entirely determined by the system's time constants and can be solved analytically. Although the second condition, on input-induced oscillation, generally does not have a closed-form solution,


simple rules can be used to adjust the parameters. As long as the first condition is still satisfied, the coupling strength can always be increased to achieve the oscillatory state under stimulation. We also scanned the parameter space of K_mg and K_gm to give a sense of the shape of the boundaries (Figure 2-11). The simulation was performed with P = 1 and Q_m = 5. We see in the plot that the lower bound follows the properties discussed before. If arbitrarily chosen coupling coefficients do not make the RKII set controllable as expected, we need to increase the value of either K_mg or |K_gm|.
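As a closing sanity check, the two regimes described in this chapter can be reproduced with a few lines of simulation. This is an illustrative sketch of our own; the values K_gm = -4 (resting) and K_gm = -7 (oscillatory) are test values chosen safely on either side of the zero-input boundary 5.5783, not the example values used in the figures:

```python
import math

a, b, Q_M, P = 220.0, 720.0, 5.0, 0.0

def Q(x):
    x0 = math.log(1.0 - Q_M * math.log(1.0 + 1.0 / Q_M))
    return Q_M * (1.0 - math.exp(-(math.exp(x) - 1.0) / Q_M)) if x > x0 else -1.0

def simulate(k_gm, t_end=1.0, dt=1e-5):
    """Euler-integrate Equation 2.7 with K_mg = 1; return the m(t) samples."""
    m, mv, g, gv = 0.1, 0.0, 0.1, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        dmv = -a * b * m - (a + b) * mv + a * b * (k_gm * Q(g) + P)
        dgv = -a * b * g - (a + b) * gv + a * b * Q(m)
        m, mv, g, gv = m + dt * mv, mv + dt * dmv, g + dt * gv, gv + dt * dgv
        trace.append(m)
    return trace

rest = simulate(-4.0)        # |K_mg K_gm| = 4 < 5.5783: decays to the origin
osc = simulate(-7.0)         # |K_mg K_gm| = 7 > 5.5783: sustained oscillation
tail_rest = rest[-20000:]    # last 0.2 s of each run
tail_osc = osc[-20000:]
```

The resting run dies out while the oscillatory run keeps crossing zero at a rate near the √(ab)/(2π) ≈ 63 Hz predicted by the bifurcation analysis.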


CHAPTER 3
SYNCHRONIZATION OF COUPLED RKII SETS

A biological system, which usually presents very complicated behaviors, is often a highly interconnected network built from dynamically similar subsystems. The behaviors of such subsystems can be very simple, such as a stable fixed point or multiple equilibria [21], or complicated, such as oscillations [22] and even chaos [23]. An interesting question is how the complicated behaviors of the overall system can be generated by the interactions among its subsystems. This leads us to study the behaviors of coupled systems. There is a wide variety of collective behaviors in coupled oscillatory systems, including synchronization, partial synchronization, desynchronization, dephasing, phase trapping, and bursting in neural oscillators coupled through diffusive coupling. A broad range of analyses has been undertaken, from physical systems to biological systems, such as those presented in [24-30]. An important aspect of this research work is to understand the global behavior of Freeman's KII network so that we can achieve an efficient and robust way to use the network. We investigate here the synchronization of the basic building block in the computational model of the olfactory system. Basically, Freeman's model presents chaotic behaviors through the outputs of the olfactory bulb layer, whether it is in the basal state or stimulated. While the network as a whole creates signals that are similar to EEG [31, 32], the basic building blocks (K0, KI and KII sets) are either single or coupled nonlinear dynamical systems that have simple dynamics such as fixed-point solutions and limit cycles. Although understanding how the chaotic behavior is generated in the KIII network is a very challenging question, by using synchronization analysis we expect to understand one of the


global behaviors using only the analysis obtained in previous chapters, thereby avoiding the complicated analysis required for higher-level structures. It can be a helpful first step toward understanding higher-level system dynamics. Recall that an RKII set is an input-controlled oscillator. Given the external input P, a bifurcation boundary is defined by

\[
|K_{mg}K_{gm}| = \frac{1}{Q'(m^*)Q'(g^*)}\frac{(a+b)^2}{ab},
\tag{3.1}
\]

where Q'(x) is the derivative of Q(x) and (m*, g*) is the unique equilibrium of Equation 2.1. In particular, when P = 0,

\[
|K_{mg}K_{gm}| = \frac{(a+b)^2}{ab}.
\tag{3.2}
\]

In this case, when |K_mg K_gm| < (a+b)²/(ab), Equation 2.1 is stable at the origin, while it enters the oscillatory state when |K_mg K_gm| > (a+b)²/(ab). While the dynamical behaviors of an individual RKII set are clearly understood, we still do not know how to properly control coupled sets. Analysis of the synchronization of these coupled oscillatory elements is very important toward our goal of understanding the collective behaviors of higher-level structures in the Freeman model. Besides, learning and memorization in the olfactory system remain a challenging problem in that the coding of information is still unclear. The OB layer, as a coupled RKII network, is the key element for information encoding and computation. It has been used as a dynamical associative memory, just like other oscillatory networks used for learning and memorization [12-14]. The analysis of synchronization of coupled RKII sets may also give hints on possible implementations of the OB layer (RKII networks) for the purpose of information processing. Synchronization of coupled chaotic systems has been widely studied [33, 34] due to many challenging applications such as data communications, chemical reactions [35-37], and collective behaviors in biological systems. Pecora and Carroll


[33] studied the synchronization of a subsystem that is a replica of the original system from which the driving input comes. Later, Ding and Ott [38] extended it to cases where the synchronized systems are not necessarily replicas of the original system. Brown and Rulkov [39] studied sufficient conditions for designing proper coupling strengths of two chaotic systems that guarantee synchronization between them. Although these analyses are for coupled chaotic systems, most of them also apply to systems with simple dynamics such as a limit cycle. This analysis is of particular interest in our study of neural systems. In the following sections, we discuss how to use Brown and Rulkov's method to determine the coupling strengths at which coupled RKII sets synchronize. Section 3.1 presents the methodology of synchronization analysis. The analytical solution of coupling strengths that achieve synchronization in coupled RKII sets is discussed in Section 3.2. Simulation results are presented in Section 3.3. The derivation of the analytical solutions of the sufficient conditions is given in Appendix B.

3.1 Sufficient Condition on Synchronization of Coupled Chaotic Systems

The synchronization of two dynamical systems has different definitions based on different purposes and applications. Generalized synchronization (GS) between a driving system and a response system is discussed in [40]. In this chapter, we consider the case that follows the definition of identical synchronization (IS), in which the transformation from driving trajectory to response trajectory is the identity function. Different approaches to synchronization analysis of coupled chaotic systems have been discussed by many researchers. Normally, the method is based on Lyapunov functions, for which the Lyapunov exponents are difficult to evaluate. Brown and Rulkov [39] proposed a sufficient condition for linear stability of trajectories in the synchronization invariant manifolds of two identical chaotic


systems. In their method, the Lyapunov exponents do not need to be explicitly calculated. Based on rigorous analysis, they also gave an approximate and simple result that is easy to calculate. Although derived for chaotic systems, the results are also applicable to general nonlinear systems. Given the driving and response systems as

dx/dt = F(x; t)     (3.3)

and

dy/dt = F(y; t) + E(x − y),     (3.4)

where x, y ∈ R^N, x represents the driving trajectory and y the response trajectory. E describes the coupling between the driving and response systems; it can be either a linear matrix or a set of nonlinear functions. Defining the deviation of system y from system x as w = y − x, we have

dw/dt = [DF(x; t) − DE(0)] w,     (3.5)

where DF(x; t) is the Jacobian matrix of F, and DE(0) is the Jacobian of the coupling E at the origin. This equation describes the motion transverse to the synchronization manifold. The synchronization manifold is linearly stable if w(t) converges linearly to zero for all possible driving trajectories x(t) within the chaotic attractor of the driving system. DF(x; t) − DE(0) can be further decomposed into a time-independent part A = <DF> − DE(0) and an explicitly time-dependent part B = DF(x; t) − <DF>. Assume that matrix A has eigenvalues λ_1, λ_2, ..., λ_N, ordered by their real parts as Re{λ_1} ≥ Re{λ_2} ≥ ... ≥ Re{λ_N}. Denote the matrix of the corresponding eigenvectors by T = [e_1, e_2, ..., e_N]. Brown and Rulkov [39] solved the condition for linear stability of the invariant trajectory in


the synchronization manifold as

−Re[λ_1] > < ||T^{−1} B(x; t) T|| >,     (3.6)

where < · > is the time-average operator. In addition to the rigorous result, a quick and simple rule is given as

Re[λ_1] < 0,     (3.7)

where Re[λ_1] is the largest real part of the eigenvalues of A. That is, conditions for synchronized behaviors can be studied by evaluating the eigenvalues of the deviation system. Next, we use the simple rule to study the synchronized behaviors of coupled RKII sets.

3.2 Analytical Solutions of Couplings for Synchronized RKII Sets

Figure 3-1: Two representations of coupled RKII sets. (a) Linearly coupled RKII sets. (b) Under synchronization, coupled RKII sets can also be described as an individual RKII set with auto-feedback.

By the original definition, coupling among different KII sets is through the same nonlinear function Q(x) as defined in the previous chapter. Linear coupling between oscillatory components in olfactory [41] and many other neural system


models has also been widely used. While different couplings all address the interconnection among different sets of neurons, linear coupling intuitively has much simpler dynamics and is easier to analyze. Here, we first consider the case of linear coupling through coefficients K_mm and K_gg and derive an analytic solution. In the next section, we discuss the effects of nonlinear coupling and compare the differences and similarities between the two coupling schemes in terms of theoretical results and simulations.

Assume we have two identical systems defined by Equation 2.1. In linear coupling mode (Figure 3-1(a)), the system is described by the ODEs

dm^(1)/dt = m_v^(1)
dm_v^(1)/dt = −ab m^(1) − (a + b) m_v^(1) + ab (K_gm Q(g^(1)) + P + K_mm m^(2))
dg^(1)/dt = g_v^(1)
dg_v^(1)/dt = −ab g^(1) − (a + b) g_v^(1) + ab (K_mg Q(m^(1)) + K_gg g^(2)),     (3.8)

where K_mm > 0 and K_gg < 0 are the excitatory and inhibitory couplings, respectively. The superscripts (1) and (2) denote the two systems; the equations for system (2) are obtained by exchanging the superscripts. In this case, the coupling Jacobian is

DE(0) = [ 0        0  0        0
          ab K_mm  0  0        0
          0        0  0        0
          0        0  ab K_gg  0 ]     (3.9)

and

A = <DF> − DE(0) = [ 0                 1       0                0
                     −ab(1 + K_mm)    −(a+b)   ab K_gm <Q'(g)>  0
                     0                 0       0                1
                     ab K_mg <Q'(m)>   0      −ab(1 + K_gg)    −(a+b) ],     (3.10)


where <Q'(x)> is the time average of the derivative of Q(x). The outputs are simple limit cycles, so we use time averages to evaluate them. To find the sufficient condition that guarantees synchronization between two RKII sets, we solve for the real parts of the eigenvalues of A. Detailed analysis of how to solve for the conditions is presented in Appendix B; the results are given here. The sufficient condition on the coupling coefficients K_mm and K_gg is the union of Equations 3.11-3.16. When

K_mm + |K_gg| ≥ 2√(K_B),     (3.11)

either of the following three inequalities (Equations 3.12, 3.13 and 3.14) needs to be satisfied:

(K_mm − K_A2/2)(|K_gg| + K_A2/2) ≤ K_B  and  K_mm − |K_gg| ≥ K_A2,     (3.12)

−2 < K_mm − |K_gg| < K_A2  and  (1 + K_mm)(|K_gg| − 1) < K_B,     (3.13)

K_mm − |K_gg| ≥ K_A2,  (1 + K_mm)(|K_gg| − 1) < K_B,  and  (K_mm − K_A2/2)(|K_gg| + K_A2/2) > K_B.     (3.14)

When

0 ≤ K_mm + |K_gg| < 2√(K_B),     (3.15)


Equation 3.16 needs to be satisfied:

−2 < K_mm − |K_gg|  and  (K_mm + |K_gg|)^2 ≥ 4 K_B − 4 K_A2 − 2 K_A2 (K_mm − |K_gg|).     (3.16)

The values of K_A1, K_A2 and K_B are calculated as

K_A1 = (a − b)^2 / (ab),  K_A2 = (a + b)^2 / (ab),  K_B = |K_mg K_gm| <Q'(m)> <Q'(g)>.     (3.17)

These conditions can be further simplified under certain assumptions. Synchronized RKII sets could have different kinds of behaviors. For our purpose, we want these coupled sets to keep the same dynamics as those of a single RKII set when they are synchronized. Under identical synchronization, the dynamics of the two coupled sets can be described as a single system with auto-feedback (Figure 3-1(b)):

dm/dt = m_v
dm_v/dt = −ab(1 − K_mm) m − (a + b) m_v + ab (K_gm Q(g) + P)
dg/dt = g_v
dg_v/dt = −ab(1 − K_gg) g − (a + b) g_v + ab K_mg Q(m).     (3.18)

K_mm and K_gg reinterpret the couplings from the other system as auto-feedback coefficients. Only when K_mm < 1 does the above equation define the same kind of system as a single RKII; thus, we limit K_mm to be less than 1. Moreover, in experiments, the absolute value of K_gg is normally much smaller than K_mm. Thus the two coupling coefficients are limited to the unit square. Note that the first set of conditions requires K_mm + |K_gg| ≥ 2√(K_B). From experiments, K_B is always greater than 1. Thus, when K_mm < 1 and |K_gg| < 1, the sufficient condition on synchronization can


be simplified to

(K_mm + |K_gg|)^2 ≥ 4 K_B − 2 K_A2 (2 + K_mm − |K_gg|).     (3.19)

3.3 Simulations

We consider the case of synchronization when an uncoupled RKII set is in the oscillatory state. Therefore, if a single RKII oscillates, the synchronized systems should also stay in the oscillatory state. In order to keep the qualitative behavior of coupled RKII sets the same as the behavior of an individual one (oscillation in this case), the Jacobian matrix of Equation 3.18 (i.e., DF(x*) + DE(x*), where x* is the equilibrium) should have at least one eigenvalue with positive real part. In this case, solutions for K_mm and K_gg can be obtained as before but with a different K_B, namely

K_B* = |K_mg K_gm| Q'(m*) Q'(g*).     (3.20)

Based on Equation 3.1, an oscillatory RKII set must satisfy

|K_mg K_gm| Q'(m*) Q'(g*) = K_B* > (a + b)^2 / (ab),     (3.21)

where (m*, g*) is the equilibrium. Thus, similar to the discussion in the previous section, we obtain a simplified boundary that enables oscillation in synchronized RKII systems as

(K_mm + |K_gg|)^2 < 4 K_B* − 4 K_A2 + 2 K_A2 (K_mm − |K_gg|).     (3.22)

We denote Equation 3.22 as the bifurcation boundary. Note that Equation 3.22 is only valid when synchronization is achieved. If the synchronization condition is satisfied before the bifurcation happens, we expect the coupled sets to converge to a fixed point. The bifurcation boundary is more accurate because it uses the exact values of the equilibrium instead of the estimates over limit cycles used in the conditions for


Figure 3-2: Parameter space defined by K_mm and K_gg. The plot shows combined results of synchronization and bifurcation boundaries when input P = 0, K_mg = 1 and K_gm = −6.

synchronization. The two boundaries are actually two parabolas (Figure 3-2) that divide the parameter space into three regions, in which two coupled RKII sets behave differently: synchronized oscillation, desynchronized oscillation, and fixed point. Figure 3-3 gives a detailed view of the actual boundaries inside the unit square. The lower bound of the fixed-point region is defined by the synchronization conditions. However, we used the time average over limit cycles to estimate K_B, while in the fixed-point region the trajectory is no longer oscillatory. Therefore, we can replace K_B in Equation 3.19 with K_B* to fine-tune the lower bound of the fixed-point region. The section of the bifurcation boundary that lies in the desynchronized area is removed. There are three other parameters (K_mg, K_gm and P) that can change the position and shape of the boundaries. In most cases, these three parameters are predetermined, so we can decide the boundaries


Figure 3-3: Values of |K_gg| and K_mm are limited by 1. Three regions are determined where the coupled system shows synchronized oscillation, desynchronized oscillation or fixed-point solutions.

accordingly. In the case when their values need to be changed, it is still convenient to know how the boundaries move. In fact, only K_B (or K_B*) is affected by the three parameters. Increasing or decreasing K_B (or K_B*) shifts the parabolas along the direction (1, −1) (Figure 3-2). We should also note that Equation 3.10 defines a mutually coupled system. The driving trajectory is not determined until the system reaches synchronization. This affects the position of the synchronization boundary because the value of K_B changes with different values of the coupling strengths (hence, the synchronization boundary may not be strictly a parabola). If K_B does not have large variations, it can simply be estimated from a single oscillatory RKII set without assuming any coupling strength in advance. To achieve a more accurate result, for every fixed K_gg (K_mm), different values of K_mm (K_gg) can be evaluated by


computing corresponding values of K_B using the single RKII model with auto-feedback (Equation 3.18). This seems to lose the advantage of a simple analytic solution. However, a single RKII set usually reaches steady state, and hence a good estimate of K_B, in much less simulation time than two coupled RKII sets need to achieve synchronized oscillation. A single RKII set is also simulated more efficiently because it has fewer dimensions. It should also be pointed out that, by adaptively evaluating the coupling coefficients, only the synchronization boundary needs to be calculated to decide the sufficient condition on synchronization. This is because when computing the time averages, we assume that the RKII set has an oscillatory state, which automatically satisfies the bifurcation condition. Moreover, when the external input is zero, the other two boundaries can still be easily calculated because the driving trajectory is fixed at the origin. Apparently, the bifurcation boundary should always lie below, or overlap with, the accurately calculated synchronization boundary.

Next, two simulations are presented, where we chose K_mg = 1, K_gm = −6 and P = 0 as an example. First, we consider two cases with linear coupling where |K_gg| = 0.8 and |K_gg| = 0.4, respectively. For each example, we show that only when both the synchronization and bifurcation conditions are satisfied do the coupled systems behave as expected. Second, the effect of nonlinear coupling is discussed. Numerically simulated boundaries and calculated boundaries are compared for both linear and nonlinear coupling. We will also show that, in addition to simple oscillations, quasiperiodic motion is observed in a small region around the synchronization boundary.

3.3.1 Example 1: K_gg = −0.8

When increasing K_mm from a very small value, the system first meets the boundary that generates synchronized behaviors. However, in the synchronized system, both of the reduced KII sets converge to a fixed point instead of


Figure 3-4: Simulations of coupled RKII sets when K_gg = −0.8. (a) K_mm = 0.6. (b) K_mm = 0.96. (c) K_mm = 0.73. (d) K_mm = 0.74. (e) K_mm = 0.91. (f) K_mm = 0.93.


oscillating, because K_mm has not passed the bifurcation point yet. Theoretically, the bifurcation point is located at K_mm = 0.92 (Figure 3-3). Figure 3-4 shows the steady-state behavior of the coupled systems at K_mm = 0.6, 0.96, 0.73, 0.74, 0.91 and 0.93, respectively. When K_gg = −0.8 and K_mm = 0.6, the coupled RKII sets are desynchronized. When K_mm = 0.96, the coupled RKII sets are synchronized. When K_mm = 0.73, desynchronization occurs below the synchronization boundary. When K_mm = 0.74, the coupled system converges to a fixed point above the synchronization boundary. When K_mm = 0.91, the coupled system converges to a fixed point below the bifurcation boundary. When K_mm = 0.93, the coupled system synchronizes and oscillates above both the synchronization and bifurcation boundaries. The results are as expected. We see that the theoretically obtained bifurcation boundary predicts the experimental results very well, and the solution serves well as a sufficient condition for synchronized dynamics. Generally, the bifurcation boundary and the lower bound of the fixed-point region are more accurate because they are calculated exactly.

3.3.2 Example 2: K_gg = −0.4

In this case, the bifurcation condition is satisfied first when changing K_mm from a small value (Figure 3-3). However, without satisfying the synchronization conditions, the bifurcation point is invalid. In this case, synchronized behavior should be governed solely by the conditions on synchronization. Figure 3-5 shows the difference between the two systems at K_mm = 0.33 and K_mm = 0.4. Synchronized behavior is observed when K_mm passes 0.37, while theoretically the sufficient condition is about 0.42. Again, although there exist errors between the calculated and actual values of the boundary, the analytical solution is still a sufficient condition on synchronization, as expected. Figure 3-6 illustrates the differences between the analytical solution and the numerical simulation.
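Checks like the two examples above can be reproduced by integrating the coupled system of Equation 3.8 directly and monitoring the mismatch between the two m outputs. The following is a sketch, not the code used for the figures: the sigmoid Q, the rate constants a = 0.22 and b = 0.72, the integration settings, and the initial conditions are all illustrative assumptions.

```python
import numpy as np

A_RATE, B_RATE, QM = 0.22, 0.72, 5.0  # assumed rate constants and sigmoid ceiling

def Q(x):
    # assumed asymmetric sigmoid with Q(0) = 0, Q'(0) = 1, clamped at -1;
    # the argument is clipped to avoid overflow in exp
    return np.maximum(QM * (1.0 - np.exp(-(np.exp(np.minimum(x, 50.0)) - 1.0) / QM)), -1.0)

def sync_mismatch(Kmm, Kgg, Kmg=1.0, Kgm=-6.0, P=0.0, T=100.0, dt=0.01):
    """Integrate Equation 3.8 (two linearly coupled RKII sets) with RK4 and
    return the peak |m1 - m2| over the second half of the run."""
    ab, apb = A_RATE * B_RATE, A_RATE + B_RATE
    def f(s):
        m1, mv1, g1, gv1, m2, mv2, g2, gv2 = s
        return np.array([
            mv1, -ab*m1 - apb*mv1 + ab*(Kgm*Q(g1) + P + Kmm*m2),
            gv1, -ab*g1 - apb*gv1 + ab*(Kmg*Q(m1) + Kgg*g2),
            mv2, -ab*m2 - apb*mv2 + ab*(Kgm*Q(g2) + P + Kmm*m1),
            gv2, -ab*g2 - apb*gv2 + ab*(Kmg*Q(m2) + Kgg*g1)])
    s = np.array([0.5, 0.0, 0.0, 0.0, -0.3, 0.0, 0.1, 0.0])  # asymmetric start
    n = int(T / dt)
    worst = 0.0
    for i in range(n):
        # classical fourth-order Runge-Kutta step
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s = s + (dt / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
        if i > n // 2:
            worst = max(worst, abs(s[0] - s[4]))
    return worst
```

A small steady-state mismatch indicates synchronization; a mismatch on the order of the oscillation amplitude indicates desynchronization.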
We see that there are differences between the two synchronization


Figure 3-5: Simulations of coupled RKII sets when K_gg = −0.4. The coupled system desynchronizes and synchronizes at (a) K_mm = 0.33 and (b) K_mm = 0.4, respectively.


boundaries, but the analytical solution guarantees synchronized oscillations. As discussed before, the analytical solution gives almost the same values for the other two boundaries as those in the numerical simulation.

Figure 3-6: Comparison between analytic solution and numerical simulation.

3.3.3 Nonlinear Coupling

In the case of nonlinear coupling, Q(m) and Q(g) are coupled through K_mm and K_gg. Therefore, the weights K_mm and K_gg can be considered time-varying according to a specific trajectory. To use the simple criterion, this similarly results in estimating time averages of K_mm Q'(m) and K_gg Q'(g). Therefore, by denoting K_mm* = K_mm <Q'(m)> and K_gg* = K_gg <Q'(g)>, the three boundaries (synchronization, bifurcation, and fixed-point) can be rewritten, respectively, as

(K_mm* + |K_gg*|)^2 ≥ 4 K_B − 4 K_A2 − 2 K_A2 (K_mm* − |K_gg*|),     (3.23)


(K_mm* + |K_gg*|)^2 < 4 K_B* − 4 K_A2 + 2 K_A2 (K_mm* − |K_gg*|),     (3.24)

and

(K_mm* + |K_gg*|)^2 ≥ 4 K_B* − 4 K_A2 − 2 K_A2 (K_mm* − |K_gg*|).     (3.25)

When the external input is zero, the bifurcation boundary and the lower bound of the fixed-point region should be exactly the same as those in the case of linear coupling. However, simulations show that the simple criterion (Equation 3.7) fails to compute the synchronization boundary when the input is zero, and all the boundaries when the input is positive (where, in the nonlinearly coupled system, multiple equilibria may exist). Table 3-1 compares the simulated results with the analytic solutions when K_mg = 1, K_gm = −6 and P = 0. The data shown are the boundaries (synchronization and bifurcation) for synchronized behaviors. When K_gg = −0.7 (−0.8), the solution for the lower bound of the fixed-point region for both linear and nonlinear coupling is K_mm = 0.681 (0.739), which matches the numerical simulation.

In summary, for the case of linear coupling, the boundary for synchronized oscillations can be calculated using the synchronization condition, where, with a fixed K_gg, K_mm can be evaluated using the time average estimated from a single RKII with auto-feedback. If the external input is zero, part of the synchronization boundary can be easily calculated using the bifurcation condition. While the simple criterion generally fails in the case of nonlinear coupling, it still applies when the input is zero. In such cases, the three different dynamics of the coupled system can still be accurately decided in part of the parameter space, similar to the case of linear coupling.

Table 3-1: Comparison between analytic solution and numerical simulation

|K_gg|                          0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8
Linear     K_mm (Simulation)    0.053  0.156  0.261  0.369  0.479  0.593  0.734  0.912
Linear     K_mm (Analytic)      0.149  0.238  0.328  0.419  0.510  0.604  0.734  0.912
Nonlinear  K_mm (Simulation)    0.74   0.94   1.08   1.19   1.27   1.27   0.734  0.912
Nonlinear  K_mm (Analytic)      0.18   0.29   0.38   0.47   0.55   0.62   0.734  0.912
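The simple criterion used throughout this section is cheap to evaluate in the linear-coupling case: build the matrix A of Equation 3.10 and inspect the largest real part of its eigenvalues. The sketch below assumes the time averages <Q'(m)> and <Q'(g)> are both 1 (their value at the zero-input equilibrium), which is only an approximation on a limit cycle; the rate constants a = 0.22 and b = 0.72 are likewise illustrative.

```python
import numpy as np

def max_real_eig(Kmm, Kgg, Kmg=1.0, Kgm=-6.0, a=0.22, b=0.72, qm=1.0, qg=1.0):
    # A = <DF> - DE(0) of Equation 3.10; qm, qg stand for <Q'(m)>, <Q'(g)>
    ab, apb = a * b, a + b
    A = np.array([
        [0.0,              1.0,   0.0,              0.0],
        [-ab*(1.0 + Kmm), -apb,   ab*Kgm*qg,        0.0],
        [0.0,              0.0,   0.0,              1.0],
        [ab*Kmg*qm,        0.0,  -ab*(1.0 + Kgg),  -apb]])
    return np.linalg.eigvals(A).real.max()

# Re[lambda_1] < 0 (Equation 3.7): transverse perturbations decay and the
# two sets synchronize; a positive value leaves synchronization undecided.
print(max_real_eig(0.96, -0.8), max_real_eig(0.6, -0.8))
```

With these assumed values, K_mm = 0.96 at K_gg = −0.8 yields a negative leading real part while K_mm = 0.6 does not, in line with the observations in Example 1.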


3.3.4 Quasiperiodic Motion in Coupled RKII Sets

Through simulations, it is found that the coupled system shows dynamic behaviors consistent with the parameter partition shown in Figure 3-2, except that quasiperiodic motion occurs in a narrow region below the actual synchronization boundary. In simulations, the second frequency in a linearly coupled system is normally slow, so, for illustration purposes, a nonlinearly coupled system is used, where K_mg = 3, K_gm = −3, K_mm = 0.4, K_gg = −0.1885, and P = 0. Figure 3-7 shows the time-domain signal, the phase plot, and the power spectral density, respectively.

Figure 3-7: Example of quasiperiodic motion in two coupled RKII sets. (a) Temporal signal. (b) Phase plot. (c) Power spectrum.

In fact, the transition between synchronized and desynchronized oscillations does not always occur with a sharp boundary. Figure 3-8 shows an example of the region where quasiperiodic motion occurs (between the actual synchronization boundary and the lower bound of quasiperiodic motion). Note that this behavior is


only observed below the actual synchronization boundary. Effort was made through numerical computations to find out whether chaos may also occur for linearly coupled RKII sets in the region where quasiperiodic oscillations are observed. K_gg and K_mm were scanned inside the region where quasiperiodic motion occurs, fixing K_gg at different values and varying K_mm in steps of 10^{-4}. The power spectrum was examined and no chaotic motion was observed.

Figure 3-8: Simulated regions of quasiperiodic oscillations.

3.4 Discussions

Many biological systems are modeled as interconnected networks of neural oscillators. Synchronized oscillations are believed to play a key role in such models, especially for the purpose of information processing. In applications such as associative memory, coupling strengths between different RKIIs are set to achieve


the desired spatio-temporal response, where different channels are clustered based on energy and/or phase information. Usually, these weights are set empirically or through Hebbian-like adaptive learning. In adaptive learning, proper selection of initial conditions and normalization of weights are normally needed to avoid instability in training. To better understand the dynamics of this model and to help with network design in applications, we studied the synchronization of two coupled RKII sets based on a simple criterion, to gain insight on how to properly select the coupling coefficients. Linearly coupled RKIIs can be solved analytically with respect to K_mm and K_gg. In general, with fixed K_gg, the sufficient condition for synchronization can be solved by calculating K_mm based on the oscillatory trajectory estimated from a single RKII with auto-feedback. This is a simple procedure to determine the regions of different dynamics. It can effectively avoid simulating the coupled system through the whole parameter space. In addition, when the external input is absent, part of the synchronization boundary can be easily obtained due to the stable fixed point at the origin. The criterion does not apply to general nonlinearly coupled systems. However, when the external input is zero, part of the parameter space can be solved similarly to the linear case. A more sophisticated method is needed to fully solve the problem of nonlinear coupling; however, the complexity of finding the boundaries should always be weighed against the effort of trial-and-error experiments. The analytic solutions and observations were validated by numerical simulations. It was further observed that near the synchronization boundaries, the dynamics of the coupled oscillators may become quasiperiodic.
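The simplified condition can be turned into a boundary value of K_mm for each fixed |K_gg| by solving the quadratic in Equation 3.19 at equality. The sketch below treats K_B and K_A2 as given numbers; in practice they come from the model parameters and the time averages of Q', so the values in the usage line are placeholders, not measured ones.

```python
import math

def kmm_boundary(abs_kgg, KB, KA2):
    """Smallest Kmm satisfying (Kmm + |Kgg|)^2 >= 4*KB - 2*KA2*(2 + Kmm - |Kgg|)
    (Equation 3.19 at equality), or None if the quadratic has no real root."""
    # expand to Kmm^2 + 2*(|Kgg| + KA2)*Kmm + (|Kgg|^2 + 4*KA2 - 4*KB - 2*KA2*|Kgg|) = 0
    half_b = abs_kgg + KA2
    c = abs_kgg**2 + 4.0*KA2 - 4.0*KB - 2.0*KA2*abs_kgg
    disc = half_b*half_b - c
    if disc < 0.0:
        return None
    return -half_b + math.sqrt(disc)  # larger root of the quadratic

km = kmm_boundary(0.4, KB=6.0, KA2=5.578)  # placeholder KB, KA2 values
```

Because the quadratic opens upward, any K_mm above the returned root satisfies the inequality, so the root is the synchronization boundary at that |K_gg|.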


CHAPTER 4
COMPUTATION BASED ON SYNCHRONIZATION

In this chapter, we propose to use synchronization in the output space of the Freeman model as the readout. In many other oscillatory networks [13, 14, 42], synchronization among groups of oscillators has been used in various applications such as image segmentation, audio processing, and pattern recognition. More interestingly, phase coding has also been studied for Freeman's model, showing advantages over conventional pattern-recognition techniques based on amplitude [43]. In this chapter, more specifically, we will not only use synchronization to construct the output space but also use it directly in designing the structure of the network based on the desired functionality we want to achieve. In this complicated and highly interconnected network, in response to an input stimulus, the dynamics collapses to a lower-dimensional space, and different groups of oscillators can be categorized by their synchronized behaviors. Based on the discussions in previous chapters on the dynamical behaviors of the basic components in the Freeman model, we will show how to design critical parameters of the network to obtain desired output responses, especially synchronized and desynchronized behaviors.

Two applications will be presented. The first application provides a different view of computation in Freeman's model. With fixed coupling coefficients, a combination of external inputs leads the coupled oscillators in a reduced KII network to two different states: synchronization and desynchronization. It turns out that with only two oscillators, by determining the proper values of the coupling coefficients, we can build functional blocks that implement two-input boolean logic gates including AND, OR, XOR, NAND, NOR, XNOR, and NOT. In the second application, an associative memory is constructed


by using synchronization among output channels in a reduced KII network as the readout. The network combines both amplitude and phase relations among RKIIs for information retrieval.

4.1 Logic Computation

From previous discussions, we know that there are two basic behaviors for two coupled RKII sets. If we denote synchronization as state 1 and desynchronization as state 0, the coupled system is a logic function when defined as

f(I) = S(m^(1), m^(2), θ),     (4.1)

where I = {0, 1} × {0, 1} represents the input space and S → {0, 1} is any function that determines whether or not the outputs m^(1) and m^(2) of the two systems are synchronized. θ is the threshold used in S. Note that I is not limited to the binary values 0 and 1; basically, any two values that represent higher- and lower-level inputs can form the input space. The function S is just a symbolic system where one can assign either 0 or 1 to either of the two states. This means that, if f can implement one set of the basic logic computations (for example, AND, OR, XOR), it should also be able to realize their complements with a different assignment of states in the actual measure of synchronization. The NOT function can also be implemented in many ways; one example is to short the inputs of a NAND gate or to fix one of its inputs to 1. A two-input logic gate has a total of 16 boolean functions. Here, we only consider the case where the coupled oscillators have a symmetric structure, so they cannot distinguish between the inputs (0, 1) and (1, 0). However, this may be solved by using weighted inputs; the weights then become parameters that need to be designed in addition to K_mm and K_gg. Considering only the symmetric structure, without combining different gates, the coupled oscillators can implement 8 of the 16 logic functions, which include all the boolean primitives. They cover most of the universal sets (for


example, {AND, OR, NOT} and {NAND}) that can express any logic function. Note that in this application, the analytic solution of the couplings between two RKII sets is not necessary; any parameters can be used as long as synchronized and desynchronized behaviors can be achieved. Synchronization can be measured using either correlations between the outputs of the two sets or direct detection of the phase difference. Equation 4.2 calculates the correlation between the time averages of the two state variables:

C(m_1, m_2) = (<m_1 m_2> − <m_1><m_2>) / √((<m_1^2> − <m_1>^2)(<m_2^2> − <m_2>^2)).     (4.2)

Notice that when the two sets are synchronized, the phase plot is a straight line. When they are desynchronized, there may be either a limit cycle or quasiperiodic motion in the phase plane. So another way is to plot the bifurcation diagram, so that multiple periods can be detected in the case of desynchronization. In simulations, we use nonlinearly coupled RKII sets and fix K_mg = |K_gm| = 3. The input space is formed by the binary values {0, 1}. Different values of K_mm and K_gg are searched to see if all six basic logic gates can be realized. We relax the limitations on the two parameters and allow them to have absolute values greater than one. Given the simple structure of just two coupled oscillators, the results are very promising. Using a synchronization criterion that assigns state 1 to synchronization and state 0 to desynchronization, the coupled RKII sets can implement the AND, NOR and XNOR functions. Then, by assigning 1 to desynchronization and 0 to synchronization, we obtain the complementary functions, i.e., all six logic functions. Figures 4-1 and 4-2 show the implemented XNOR function in the time domain and the phase plane, respectively. Transient behaviors are discarded to guarantee a steady state of the outputs. The outputs of the nonlinear function, which saturate at −1 and 5, are plotted.
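The correlation readout of Equation 4.2 is straightforward to compute from two output traces, and thresholding it yields the binary state used by S. The following is a sketch; the threshold θ = 0.9 and the test signals are illustrative, not values from the experiments.

```python
import numpy as np

def correlation(m1, m2):
    # normalized correlation of Equation 4.2
    num = np.mean(m1 * m2) - np.mean(m1) * np.mean(m2)
    den = np.sqrt((np.mean(m1**2) - np.mean(m1)**2) *
                  (np.mean(m2**2) - np.mean(m2)**2))
    return num / den

def sync_state(m1, m2, theta=0.9):
    # assign state 1 to synchronization, 0 to desynchronization
    return 1 if correlation(m1, m2) > theta else 0

t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
print(sync_state(np.sin(t), np.sin(t)))       # identical (synchronized) traces
print(sync_state(np.sin(t), np.sin(2.0*t)))   # unrelated oscillations
```

Identical traces give a correlation of 1 and are reported as state 1; oscillations at unrelated frequencies give a correlation near 0 and are reported as state 0.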
In the case when inputs are [1 1] and [0 0], the two reduced KII sets are perfectly synchronized. When


inputs are [1 0] and [0 1], the two oscillators are desynchronized and even present chaotic behaviors.

Figure 4-1: Time-domain response when implementing coupled RKII sets as an XNOR gate (inputs [0 0], [0 1], [1 0] and [1 1]). State outputs are shaped by the nonlinear function. The solid line shows Q(m^(1)) and the dashed line shows Q(m^(2)).

Note that, in this application, zero inputs to the RKII sets still induce limit cycles. Actually, when both channels are synchronized, there are no chaotic behaviors locally or globally, no matter what the inputs are. This is different from the conventional Freeman model, where chaotic behaviors are always present. This is because, in the logic computation system, the output space is specifically designed to be either synchronized or desynchronized in order to construct binary functions. And, as discussed previously, when the two channels are synchronized, we expect the system to behave as a single RKII set that only has the limit-cycle attractor. Table 4-1 gives the parameter values of two coupled RKIIs that build the logic gates. In the second column of the table, the label (0 or 1) assigned to synchronized behavior in S is indicated.


Figure 4-2: Phase plot in the state space of M_1 and M_2 when implementing coupled RKII sets as an XNOR gate, for inputs [0 0], [0 1], [1 0] and [1 1].

Table 4-1: Set of parameters that realize the complete boolean primitives

Function      Sync.   K_mm   K_gg   K_mg   K_gm
AND (NAND)    1 (0)   1.75   -2.0   3      -3
NOR (OR)      1 (0)   1.2    -0.4   3      -3
XNOR (XOR)    1 (0)   1.5    -0.4   3      -3

One important question is how to extend this result to a network of oscillators. There are basically two structures. The first is to implement the logic gates just as in digital design: outputs from one set of coupled oscillators become the input of the next one, and combinations of different gates can implement more complicated logic functions. The other is to fully connect N RKII sets. In this network, if we consider the synchronization of every pair out of the N oscillators, there will be a total of N(N−1)/2 status readouts. Therefore, the size of the binary output space could be as large as 2^{N(N−1)/2} (although it may not

be possible to design a network that can access all the possible outputs). Although this shows another aspect of computational implementation using Freeman's model, we should also note that this form of computation still falls into that of a conventional system due to the much simplified binary state space.

4.2 Associative Memory Using an RKII Network

An auto-associative memory is a content-addressable memory that stores a set of patterns during learning and recalls the learned pattern that is most similar to the present input. There are basically two types of associative memories: static feed-forward structures and the recurrent dynamic structures proposed by [11]. Hopfield networks have a stable point attractor. Here we show that the reduced KII network can also be used as a dynamic associative memory, but it differs from the Hopfield type because it has non-convergent dynamics (limit cycles or chaos).

Let us consider the case where input patterns are static and only have binary values (positive and zero). The channels (RKIIs) that receive a positive (zero) stimulus will be called active (inactive) channels. Ideally, the oscillatory states of the m's that receive the same inputs should be grouped together; that is, we expect zero phase lag among active channels. Under a strong constraint, we should expect only two kinds of oscillations, corresponding to high and low input values, respectively. However, in real applications, noise in the background or pattern distortions would make identical synchronization unrealistic. Here, we will still require active channels to be synchronized in the sense that the phase differences between active channels only deviate from one another in a small neighborhood around zero. Requirements on the behaviors of inactive channels, and on behaviors caused by errors in the input space, will be relaxed. Therefore, the only criterion for recognizing a pattern is to detect the active group that has strong synchronization.
All other output states that are not strongly correlated to this group will be classified as null-state.


Many adaptive methods have been investigated for training the interconnection weights (Kmm and Kgg) in Freeman's model. Most of them are based on Hebbian learning [4, 44, 45], which can also be simplified to assign binary values, strong or weak, to the coupling coefficients among active or inactive channels. From Chapter 3, we know exactly how to control the behaviors of coupled RKII sets through different values of the connections. Thus, the coupling strengths can be determined directly by targeting specific desired behaviors. We consider four scenarios in the input space that specify the values of the couplings between different channels. To construct such a network as an associative memory, based on the excitation coming from the input patterns, the following cases determine the coupling strength among different channels:

Active channels. If two channels are excited by at least one of the stored patterns, then, when a positive stimulus is applied, we expect the outputs from both channels to be synchronized. At the same time, when receiving incomplete patterns, they should also be synchronized.

Inactive channels. If two channels are never excited by any of the stored patterns, their output states should be synchronized with weaker oscillations so as to be distinguished from the active channels. At the same time, if one or both of them are excited, they should be able to desynchronize from one another so that neither would be considered an active channel.

Active and inactive channels. Without overlap between different channels, two channels that belong to the active and inactive groups, respectively, within the same pattern should always be desynchronized.

Overlapped channels. If two patterns share the same channel, connections between this mutual channel and other non-overlapped channels will conflict for different patterns. That is, for one pattern two channels are both active, while for another it becomes case three. In such situations, intuitively, we


will reduce the inhibitory connections between this mutual channel and other possible active channels. The activation of the mutual channel will therefore depend heavily on the activations of the other active channels in the incoming pattern, but will not be affected much by active channels from another pattern that is not currently presented (inactive for the incoming pattern).

Based on the above conceptual design, we use the following steps to set the coupling strengths in an RKII network. The network is initialized assuming case 2. For every pattern, the coupling strength between two active channels is set according to case 1. Couplings between active and inactive channels are set according to case 3, except for the overlapped channels, which are set according to case 4.

The four cases are similar to the process of Hebbian learning. However, although the couplings are designed according to the properties of input-output correlations in both approaches, the crucial difference lies in the structure of the readouts. The outer-product rule or adaptive algorithms such as Hebbian learning result in similarities among mutual channels in terms of energy. However, they cannot guarantee a solution that specifies correlations such as phase information among the output channels. Moreover, the values of the couplings are calculated directly in our case instead of through learning, where issues such as convergence rate and stability may be a concern.

A 64-channel RKII network is implemented as an associative memory that is trained to memorize and recall patterns of the digits "0", "1", and "2" of size 8 x 8. To differentiate groups of synchronized channels and their levels of activity, we use the following quantitative measure. First, the cross correlations C_ij between the ith and jth channels are calculated. Highly correlated channels are grouped together. Denote the different groups by G_i, i <= N, where N is the total number of channels.
The evaluation value for each group is simply described by the total amount of input received by all channels in the group (Oj = YlkeG Ab


where A_k is the input value received by the kth channel). The same evaluation value is then assigned to all channels in the same group. Note that the network is designed such that only stored patterns will be recalled as synchronized output channels. Noise enters as a higher input amplitude instead of zero, but the output channels related to noise or false inputs will mostly be neither synchronized nor grouped together. Of course, we assume that noise or a false input has an amplitude smaller than, or at most comparable to, the ideal input values. For most cases, this is a simple but very effective criterion. Each individual RKII set is configured with Kmg = 1, Kgm = -6, and positive input P = 3. For two coupled RKII sets, case 1 couplings are set as Kmm = 0.2 and Kgg = -0.4; case 2 couplings are Kmm = 0.2 and Kgg = -0.1; case 3 couplings are Kmm = 0.1 and Kgg = -0.8; case 4 couplings are Kmm = 0.2 and Kgg = -0.3. All couplings are further normalized according to the number of channels in the network. The threshold on the correlation coefficient between two channels is set to 0.8: two channels with a correlation value larger than the threshold are assigned to the same group. In the presence of clean patterns, the readout successfully recalls the patterns stored in the network (Figure 4-3(a)-(f)). In the time domain, we see that all channels that receive positive stimuli are synchronized and have much higher amplitude. In Figure 4-3(b), the desired active channels are almost identically synchronized (small phase differences exist between the active channels for "0" and those overlapped with "1") and have a larger amplitude of oscillation. The desired inactive channels have a small amplitude of oscillation and are desynchronized from the active channels. Using the criterion given above, pattern "0" can easily be recovered to the stored pattern shown in Figure 4-3(a).
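As a sketch of how the four cases translate into a coupling matrix, the Python fragment below assigns the Kmm/Kgg values quoted above pairwise and normalizes by the number of channels. The helper name `build_couplings` is hypothetical, and the rule used to detect the case 4 conflict (a pair that is active together in one pattern but split in another) is our reading of the text, not the exact procedure used in the experiments.

```python
import numpy as np

# (Kmm, Kgg) pairs quoted in the text for the four cases
CASE1 = (0.2, -0.4)   # pair of channels active together in a stored pattern
CASE2 = (0.2, -0.1)   # pair never active in any stored pattern
CASE3 = (0.1, -0.8)   # active/inactive pair within a pattern
CASE4 = (0.2, -0.3)   # conflicting pair involving an overlapped channel

def build_couplings(patterns):
    """patterns: list of equal-length binary arrays (1 = active channel)."""
    P = np.array(patterns, dtype=bool)
    n = P.shape[1]
    Kmm = np.full((n, n), CASE2[0])          # initialize assuming case 2
    Kgg = np.full((n, n), CASE2[1])
    for i in range(n):
        for j in range(i + 1, n):
            both = bool(np.any(P[:, i] & P[:, j]))    # active together somewhere
            mixed = bool(np.any(P[:, i] != P[:, j]))  # split in some pattern
            if both and mixed:
                val = CASE4    # shared (overlapped) channel conflicts: case 4
            elif both:
                val = CASE1
            elif mixed:
                val = CASE3
            else:
                continue       # never active anywhere: keep case 2
            Kmm[i, j] = Kmm[j, i] = val[0]
            Kgg[i, j] = Kgg[j, i] = val[1]
    return Kmm / n, Kgg / n    # normalized by the number of channels
```

This classification is order-free: each pair is assigned from the whole pattern set at once, so the result does not depend on the order in which patterns are presented.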
In Figure 4-3(d), the phase differences among desired active channels are slightly larger than those presented in the case of pattern “0”. However, using the given threshold, pattern “1” can be perfectly recovered. In these cases, a


Figure 4-3: RKII network as an associative memory. (a) Stored pattern "0". (b) Network response to input pattern "0". (c) Stored pattern "1". (d) Network response to input pattern "1". (e) Stored pattern "2". (f) Network response to input pattern "2".
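The correlation-threshold grouping and readout described above can be sketched as follows. The helper `synchrony_readout` is hypothetical and uses a simple greedy first-come grouping at the quoted 0.8 threshold; the actual grouping procedure used in the experiments may differ.

```python
import numpy as np

def synchrony_readout(outputs, inputs, threshold=0.8):
    """outputs: (channels, samples) array of oscillation traces;
    inputs: per-channel input amplitudes A_k.
    Groups channels whose pairwise correlation exceeds the threshold and
    scores each group by the total input it receives (O_i = sum of A_k)."""
    C = np.corrcoef(outputs)
    n = outputs.shape[0]
    inputs = np.asarray(inputs, dtype=float)
    group = -np.ones(n, dtype=int)
    g = 0
    for i in range(n):
        if group[i] < 0:
            # every yet-ungrouped channel correlated with channel i joins its group
            members = (group < 0) & (C[i] > threshold)
            group[members] = g
            g += 1
    scores = np.array([inputs[group == k].sum() for k in range(g)])
    return group, scores   # recalled pattern = channels where group == argmax(scores)
```

The recalled pattern is then the set of channels belonging to the group with the maximum score, matching the criterion of detecting the strongly synchronized active group.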


simple energy readout would also work perfectly. However, when input distortion (incompleteness and noise) is present, an energy- or distance-based readout may fail.

Figure 4-4: Noise is added to an inactive channel and an active channel in "0".

In Figure 4-4, positive inputs (P = 3) acting as noise are fed into a desired inactive channel and an active channel for "0" only. Figure 4-5(a) shows the total amount of input received in each synchronized group (O_i = sum of A_k over k in G_i) as the readout. Although the noise increases the received input values for the undesired channels, those channels tend to be grouped separately, so their readout value is kept low. When more channels are distorted by noise, as long as they are desynchronized, the final readout will always be much lower than the value of the large number of active channels that are grouped together. From the time domain (Figure 4-5(b)), we see that the undesired channels interfere with the desired active channels. In this case, a simple amplitude- or energy-based readout will fail the test. However, the correlation between those colored channels and the desired channels will still divide them into two different groups. Pattern "1" can still be perfectly recovered by choosing the group of channels that has the maximum readout in Figure 4-5(b). In most cases, the output level from the undesired channels can be high enough to interfere with the desired active channels. Energy- and distance-based methods in this case will have difficulty distinguishing false activities (noise), while the synchrony readout still extracts the phase information between channels efficiently.
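The long-window correlation tracking motivated in the next paragraphs can be maintained incrementally from running sums, exactly as suggested there: only the average values of the output states, the state powers, and the cross product need to be stored. A minimal sketch (the class name `OnlineCorrelation` is hypothetical):

```python
import math

class OnlineCorrelation:
    """Running estimate of the cross correlation between two channels.
    Only running means, powers, and the cross product are stored, so the
    estimate can be updated sample by sample without keeping the series."""
    def __init__(self):
        self.n = 0
        self.sx = self.sy = 0.0      # sums of the two states (means)
        self.sxx = self.syy = 0.0    # sums of squares (state powers)
        self.sxy = 0.0               # sum of cross products
    def update(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y
        self.sxy += x * y
    def corr(self):
        n = float(self.n)
        cov = self.sxy / n - (self.sx / n) * (self.sy / n)
        vx = self.sxx / n - (self.sx / n) ** 2
        vy = self.syy / n - (self.sy / n) ** 2
        return cov / math.sqrt(vx * vy)
```

Each new sample costs a constant amount of work, which is what saves computational time for long time-series compared to recomputing the correlation over the full window.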


Figure 4-5: Readout from an RKII network. (a) Readout pattern that effectively improves the discrimination between desired channels and colored channels. (b) Network response to colored input pattern "1".

However, instantaneous detection of the phase distribution may also fail, because the frequency of oscillation is also affected by the external inputs. From experiments, the oscillation frequency of the false channel is close to, but not exactly the same as, that of the true active channels, and the phase difference between the two is time varying. That is, if the timing is not properly selected, instantaneous phase information will generate wrong results. However, from simulations, long-term correlations between two channels still work because the correlation between active channels remains strong. The window length used to track the correlations depends on the specific problem. Nevertheless, extra memory that stores previous estimates of the average values of the output states and the state powers enables online updating of the cross correlations and saves the computational time needed for long time-series.

4.3 Discussions

Synchronized oscillations are believed to play a key role in many biological models, especially for the purpose of information processing. Based on our previous work on analyzing how to control an RKII network to achieve desired behaviors, we proposed to use synchronization as the computational primitive in an RKII network. Information regarding collective behaviors in such networks is used to design the readout. Network parameters (coupling strengths among different RKII


sets) are designed specifically to achieve desired responses. Two applications are presented. By utilizing two coupled RKII sets as a logic computation unit, the intrinsic structure of the coupled sets provides very flexible configurations to implement any logic gate, and the system can realize universal sets of logic operations. The potential capacity of this network is very large; however, the controllability of all the attractors in the output space is not guaranteed and needs to be further investigated. In the second application, we demonstrated how to design an RKII network as an associative memory by considering the synchronization of groups of oscillators in the output space. Finding parameter values in this case is straightforward compared to conventional learning algorithms. Synchronization as a readout uses the correlation of states to represent input patterns in the output space. In particular, the synchrony readout performs well in the presence of noisy patterns, where energy-based methods may fail. There are still many problems that need to be explored. The capacity and robustness of the network in terms of stored patterns and signal-to-noise ratio need to be investigated. In the applications pursued so far, only fixed point solutions and synchronized oscillations have been used for encoding information. This strategy may severely limit the capacity of neural network based models. It would be very interesting to develop schemes where desynchronized oscillations, quasiperiodic oscillations, and possibly chaos, together with fixed point solutions and synchronized oscillations, can all be actively used to encode information.


CHAPTER 5
HARDWARE IMPLEMENTATION USING ANALOG VLSI

The olfactory system model is a continuous dynamical system described by ODEs. Analog circuits are the most natural way to duplicate continuous computation for real-time applications. In this chapter, the RKII set is designed using analog VLSI operating in the subthreshold region. The subthreshold region is chosen to balance size, power consumption, and functionality. However, we will also see that it brings other problems, such as offsets and interconnection issues. Based on the architecture of an RKII, a straightforward design is divided into three major blocks: the summing node, the nonlinear function, and the linear dynamics. Design issues are discussed individually. A new architecture that combines the summing node and the nonlinear function is also proposed to reduce complexity and power consumption.

5.1 Summing Node

The summing node receives external and feedback inputs. The output of this block is the weighted sum of all incoming signals. Specifically, in an RKII, we need to implement a unity gain for the external input and user-configurable coupling coefficients Kmg and Kgm. In [5], all these weights were implemented using operational amplifiers outside the chip. External implementation has the advantage of high precision, but it does not scale well and requires large power consumption. Here, transconductance amplifiers (transamps) working in the subthreshold region are integrated on chip to realize current addition. The coupling coefficients are normally greater than one, so a follower-aggregation circuit cannot be used. Instead, a current adder followed by a current-to-voltage converter is implemented (Figure 5-1). Each of the transamps (Gm1, Gm2, ..., GmN) converts an input voltage to its


Figure 5-1: Summing node with N inputs.

corresponding current using a different transconductance. The summation of all the currents is fed into another transamp GmB (the base transamp) with negative feedback and is converted back into a voltage output. Therefore, the output voltage is

Vout = sum_{i=1..N} (Gmi / GmB) Vin,i.   (5.1)

The transamp can be designed simply using the five-transistor differential pair shown in Figure 5-2. When working in the subthreshold region, the circuit has the transfer function

Io = Ib (e^{kn*Vin+/VT} - e^{kn*Vin-/VT}) / (e^{kn*Vin+/VT} + e^{kn*Vin-/VT}) = Ib tanh(kn (Vin+ - Vin-) / (2VT)),   (5.2)

where kn is the capacitive coupling ratio from gate to channel in NMOS transistors and VT = kT/q = 25.7 mV (approximately) is the thermal voltage. kn is typically about 0.7. The shape of the I-V curve is sigmoidal. When the differential input (Vin+ - Vin-) is


Figure 5-2: Differential transconductance amplifier.

small, (5.2) can be approximated as Io = Gm (Vin+ - Vin-), where Gm = Ib*kn/(2VT). It is clear that this approximate linear relationship only holds when the differential input is rather small. In our case, linearity in the summation block is very important because the weighting of every input is expected to be constant. However, this input-output relation makes it hard to keep constant weight values over a relatively large range of input values. There are many methods to improve linearity, such as source degeneration, the bump circuit, and floating gates. Figure 5-3 depicts a transamp with source degeneration using double diffusors [46]. The output current is

Io = Ib tanh(kn (Vin+ - Vin-) / (2(m + 1)VT)),


Figure 5-3: Wide-linear-range transconductance amplifier with source degeneration using double diffusors.

where m is the ratio between the W/L of the diffusors (M5-M8) and that of the input transistors (M1 and M2). Assuming VT = 25.7 mV and kn = 0.7, if the linear range is defined by 1% distortion from the maximum transconductance, obtained at Vin+ - Vin- = 0, then the circuit shown in Figure 5-3 has an approximate linear range of 30 mV [46]. This is not a promising value for making the circuit accommodate the large number of inputs from a network. Based on the functionality of the transamp, assuming that IB is the bias current of the base transamp GmB and Vin,MAX is the maximum value of the input signal before an input transamp enters saturation, we have

sum_{i=1..N} Gmi * Vin,MAX proportional to IB.   (5.3)


Moreover, to obtain coupling strengths greater than 1, Gmi must be greater than GmB (Figure 5-4).

Figure 5-4: Because the coupling strength is always greater than 1, the input current range can easily exceed the limit of the base transamp.

A large output from the series current adder can easily exceed the linear range of the base transamp in the current-to-voltage converter, which drives the base transamp out of range and makes the summation inaccurate. Hence, the largest weight value and the input range in this circuit are limited. One way to relieve this problem is to utilize the theoretical analysis obtained in previous chapters. Because it is the product of the internal couplings that determines the dynamic behavior of an RKII, we can scale down the individual values of Kmg and Kgm while keeping their product satisfying the necessary operating conditions. Later in this chapter, a new design is presented that combines the summation block and the nonlinearity so that a wider dynamic range and less dependence on linearity can be achieved.
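The saturation and linear-range limits discussed in this section can be checked numerically. The sketch below evaluates the tanh transfer of Equation 5.2, with an optional (m + 1) widening factor in the tanh argument for the source-degenerated transamp (our reading of the degenerated transfer function); `linear_range` is a hypothetical helper that applies the 1% distortion criterion numerically.

```python
import numpy as np

VT = 0.0257   # thermal voltage kT/q at room temperature (V)
KN = 0.7      # gate-to-channel capacitive coupling ratio

def transamp_current(dv, Ib, m=0):
    """Large-signal I-V of the subthreshold differential pair (Equation 5.2).
    m > 0 models source degeneration with double diffusors, which widens the
    linear range by roughly a factor (m + 1)."""
    return Ib * np.tanh(KN * np.asarray(dv, dtype=float) / (2 * (m + 1) * VT))

def linear_range(Ib, m=0, distortion=0.01):
    """Differential input where the incremental transconductance first drops
    `distortion` below its value at dv = 0 (numerical version of the 1% rule)."""
    dv = np.linspace(0.0, 0.2, 20001)
    gm = np.gradient(transamp_current(dv, Ib, m), dv)
    return dv[np.argmax(gm < gm[0] * (1 - distortion))]
```

With these parameter values, the undegenerated pair stays within 1% distortion only for a few millivolts of differential input, which is what motivates the design changes described next.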


5.2 Linear Dynamics

The 2nd-order linear time-invariant system can be implemented using the filter-and-hold (FH) technique [5]. Figure 5-5 shows the diagram of this implementation. The two stages of Gm-C filters realize the functionality of a 2nd-order low-pass filter.

Figure 5-5: The 2nd-order linear filter. The filter-and-hold technique is used to increase the time constant.

Switches S1 and S2 may suffer from charge injection and clock feedthrough. Dummy switches S1d and S2d are used to neutralize the charges that may be injected into the capacitors when the clock switches off. The W/L's of S1d and S2d are one half of those of S1 and S2 because, every time the clock switches off, half of the channel charge from S1 and S2 is injected into the C's. The analog output of each stage of the 2nd-order system is held for a certain period by the switches. With proper timing of the clocks Phi1 and Phi2, the effective time constant of the system increases. The transfer function can be shown to be [5]

H(z) = [(1 - e^{-(Gm1/C1)kT}) / (1 - e^{-(Gm1/C1)kT} z^{-1})] x [(1 - e^{-(Gm2/C2)kT}) / (1 - e^{-(Gm2/C2)kT} z^{-1})],

where k is the duty cycle of the clock (the fraction of each period T during which the switch is on). Thus, by setting proper duty cycles of the clocks, the capacitance (time constant) can be effectively increased. With a duty cycle of 1%, the integrated capacitors are effectively scaled by 100, which greatly reduces the chip area needed.
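Two quick numerical checks of the points discussed here, with hypothetical helper names: the filter-and-hold stage emulates a capacitor scaled by the inverse duty cycle, and a common scaling of a and b leaves the bifurcation boundary quantity (a + b)^2/(ab) unchanged.

```python
def effective_tau(Gm, C, duty):
    # Time constant emulated by one filter-and-hold stage: the held pole
    # exp(-(Gm/C)*k*T) matches a continuous stage whose capacitor is
    # scaled by 1/k, where k is the clock duty cycle.
    return C / (Gm * duty)

def boundary_metric(a, b):
    # (a + b)**2 / (a*b), the quantity that sets the oscillation boundary;
    # invariant under a common scaling of a and b.
    return (a + b) ** 2 / (a * b)
```

For example, a 1% duty cycle turns a 1 pF integrated capacitor into an effective 100 pF, while scaling a and b together leaves the boundary metric untouched, so the qualitative dynamics are preserved.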


The purpose of using a very large time constant is to mimic the exact frequencies of the data measured in animal experiments [1]. However, in practice, if all parameters are adjusted accordingly, a scaled time constant will only affect the speed of the temporal development in the network. Recall that the boundary of the interesting dynamics in an RKII is solely determined by a nonlinear function that involves a, b, Kmg, and Kgm. Therefore, as long as the bifurcation condition is satisfied, the overall dynamics of the network is expected to be unchanged. In fact, an equal scaling of both a and b does not change the boundary set by (a + b)^2/(ab).

5.3 Nonlinear Function

Recall that the nonlinear function is given as

Q(x) = Qm (1 - e^{-(e^x - 1)/Qm})   if x > x0,   (5.4a)
Q(x) = -1                           otherwise.   (5.4b)

In [5], the sigmoidal shape is well approximated by a differential pair working in the subthreshold region. The asymmetric shape, with different positive and negative saturations, is obtained by setting different sizes for the transistors of the current-mirror load in the transamp. In this way, Qm is predefined and can have a relatively high precision if the differences in transistor sizes are properly designed (for example, by using parallel transistors). However, the ability to control the actual shape of the nonlinear function is also appealing because it brings more freedom to generate the desired dynamics. In particular, different Q functions result in different values of the couplings necessary for oscillations. Therefore, instead of building a fixed asymmetric ratio into the chip, we use an externally controlled current multiplier to adjust the current going through the load of the differential pair. The circuit is shown in Figure 5-6. We have (5.5),


Figure 5-6: The nonlinear block.

where I1 and I2 are controlled by Vb1 and Vb2, respectively. The relation between the differential input and the output current can be expressed as

Iout(Vin) = (I1 e^{(Vin - Vref)/VT} - I2) / (e^{(Vin - Vref)/VT} + 1).   (5.6)

Equation 5.6 is similar to the implementation given in [5]; both approximate the sigmoidal shape required by the nonlinear block in a KO. However, here the actual asymmetric ratio and the shape of the nonlinearity in use can be controlled by bias voltages set outside the chip and are not fixed at fabrication time. An interesting observation is that both the summation block and the nonlinear block use a similar functional circuit, i.e., transamps. The output from the intermediate linear system is shaped and then added. Intuitively, it seems feasible to combine the shaping and summation procedures into a single block. We will show in the next


section that this is true and can be utilized to reduce the complexity and power consumption of the whole system.

5.4 Efficient Design of an RKII System

Recall that the general network architecture is described by

(1/ab) x_i''(t) + ((a + b)/ab) x_i'(t) + x_i(t) = sum_j [K_ij Q(x_j(t), Qm)] + P_i(t),   i, j = 1, ..., N.   (5.7)

The current circuit design separates the shaping and weighting processes before adding the different input and feedback signals together. In this way, the weighting circuits must satisfy high linearity even though the signals being added are all shaped by a nonlinear function. Moreover, the current adder and the nonlinear function are implemented by the same circuitry, transamps. Instead of separately building the coupling weights and Q(x), a unified KmgQ(x) (KgmQ(x)) function can be designed using transamps that realize weighting and nonlinear shaping at the same time. This has the clear advantage of reducing system complexity. In addition, linearity is no longer required for the input transamps: the natural exponential behavior of the differential pairs can be used without any approximation. Now, the question is how to design a controllable nonlinear function that retains a desired asymmetric ratio and at the same time generates different power levels. At least two free parameters are needed to control the slope and the saturation level of the nonlinear function individually. The bias current can adjust both values, but not separately. Figure 5-7 shows a simple way to control the ratio of the current mirror in the subthreshold region:

Io proportional to e^{(vb2 - vb1)/VT}.   (5.8)
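For reference in simulation, the nonlinear function of Equation 5.4 can be sketched directly. The cutoff x0 below is derived from the continuity condition Q(x0) = -1 (the text leaves x0 implicit, so this closed form is our assumption):

```python
import numpy as np

def Q(x, Qm):
    """Asymmetric sigmoid of Equation 5.4: saturates at Qm for large
    positive x and clips to -1 below the cutoff x0, where x0 solves
    Q(x0) = -1 (assumed here; the text does not give x0 explicitly)."""
    x = np.asarray(x, dtype=float)
    x0 = np.log(1.0 - Qm * np.log(1.0 + 1.0 / Qm))   # from Q(x0) = -1
    out = Qm * (1.0 - np.exp(-(np.exp(x) - 1.0) / Qm))
    return np.where(x > x0, out, -1.0)
```

The single parameter Qm sets both the positive saturation and (through x0) the cutoff, which is why a circuit realization needs an externally adjustable asymmetric ratio to explore different Q shapes.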


Figure 5-7: Asymmetric current mirror. If vb2 is set to the power supply, the output current depends only on vb1.

Consider the circuit shown in Figure 5-8. The output current can be expressed as

Q(Vin) = Ib (N e^{(Vin+ - Vin-)/VT} - 1) / (e^{(Vin+ - Vin-)/VT} + 1).   (5.9)

The value of N is determined by the bias Vb1.

Figure 5-8: New implementation of the asymmetric nonlinear function.

This nonlinear function has a built-in offset at zero input. To compensate for the offset, two such transamps can be


used [5] (Figure 5-9). We should note that this scheme can only cancel the systematic offset. The bias current determines the negative saturation, while vb1 can be used to control the asymmetric ratio. Without the need to shape the output at the end of the system, the RKII can be redesigned by removing the nonlinear function and loosening the linearity requirement on the adder (Figure 5-10).

Figure 5-10: New implementation of the RKII system that includes only an adder and a linear system.

5.5 Digital Simulation and Chip Measurement

One rather important intermediate medium between theory and analog implementation is simulation. In particular, when one is dealing with a distributed nonlinear dynamical system for which our analytical ability is still rather limited, simulations play a central role because they allow us to verify the theoretical analysis and check the behavior of the analog VLSI chips we build.
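As a concrete sketch of such an equivalent digital model, the impulse-invariant discretization of the KO linear dynamics derived in the following paragraphs can be written in a few lines (hypothetical helper names; parallel realization of the two first-order sections):

```python
import numpy as np

def ko_digital_filter(a, b, Ts):
    """Impulse-invariant discretization of the KO linear dynamics
    H(s) = ab/((s + a)(s + b)) with sampling period Ts: two parallel
    first-order sections with poles a1, a2 and gains c1, c2."""
    a1, a2 = np.exp(-a * Ts), np.exp(-b * Ts)
    c1, c2 = Ts * a * b / (b - a), Ts * a * b / (a - b)
    return (a1, c1), (a2, c2)

def impulse_response(a, b, Ts, n):
    """h[n] = Ts*ab*(e^{-a n Ts}/(b - a) + e^{-b n Ts}/(a - b))."""
    (a1, c1), (a2, c2) = ko_digital_filter(a, b, Ts)
    k = np.arange(n)
    return c1 * a1**k + c2 * a2**k
```

With a small Ts the discrete impulse response tracks the sampled continuous one and the DC gain stays close to H(0) = 1, illustrating why aliasing is not a practical concern for this low-pass system.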


An equivalent digital model of the olfactory model was developed [5, 47] using a digital-signal-processing (DSP) approach. The linear dynamics of the basic system equation of a KO set is described by

(1/ab) x''(t) + ((a + b)/ab) x'(t) + x(t) = P(t).   (5.10)

The olfactory system is built from this basic linear system followed by a static nonlinearity. DSP techniques can be used to transform this system into a difference equation. The transfer function of a KO set is

H(s) = ab / ((s + a)(s + b)) = ab (1/((b - a)(s + a)) + 1/((a - b)(s + b))).   (5.11)

The time-domain solution can be expressed as

h(t) = ab (e^{-at}/(b - a) + e^{-bt}/(a - b)).   (5.12)

Using the impulse-invariant transformation technique with sampling period Ts, the discrete approximation filter has the impulse response

h[n] = Ts h(nTs) = Ts ab (e^{-anTs}/(b - a) + e^{-bnTs}/(a - b)).   (5.13)

Denoting a1 = e^{-aTs}, a2 = e^{-bTs}, c1 = Ts ab/(b - a), and c2 = Ts ab/(a - b), the transfer function of the equivalent digital system is obtained as

H(z) = c1 / (1 - a1 z^{-1}) + c2 / (1 - a2 z^{-1}).   (5.14)

Generally, the discretization of a continuous-time system using the impulse-invariant transformation is subject to aliasing. However, the poles of a KO set are real and negative (a, b > 0) and the system has the typical behavior of a low-pass filter. Hence, aliasing can be effectively reduced by choosing a small sampling period Ts. Practical implementation of the discretized system can be achieved using conventional architectures such as the parallel realization. When modeling a KO set specifically based


Figure 5-11: Comparison between the impulse-invariant filter and circuit simulation (limit cycle and fixed point regions vs. |Kgm|).

on the actual nonlinearity designed in the circuit, we can use digital simulations to verify the circuit design. In Figure 5-11, the bifurcation boundary simulated from the impulse-invariant system is compared to that obtained from circuit simulation. The result indicates a good match between the digital simulations and the circuit design. The analog chip (Figure 5-12) was designed using the AMI 0.5 technology and includes an RKII set. Each KO set has an area of 400 x 150 um and a power consumption of 20 uW excluding the buffers. Individual components were also fabricated for testing purposes. Figure 5-13 shows the measurement of the nonlinear functions for the excitatory and inhibitory cells. The negative saturation is fixed at -100 mV. Three different values of Qm are determined by the bias that controls the ratio between the positive and negative saturations. The measured nonlinear function has a relatively small offset of around 5 mV. Figure 5-14 shows the response from the excitatory cell


Figure 5-12: Chip layout of a single RKII set.


Figure 5-13: Measured excitatory and inhibitory nonlinear functions.

and inhibitory cell. Figure 5-15 shows the phase plot of the measurement. A square wave is connected to the input of the excitatory cell. As expected, the qualitative behavior of the KII set follows the theoretical conclusions and exhibits two states of dynamical behavior controlled by an external stimulus. As discussed in Chapter 2, the range of effective inputs that bring oscillatory behaviors is limited. Taking a triangular signal as the input, the measurement (Figure 5-16) verifies that not all positive inputs can induce oscillation in an RKII set. Note that, in Figure 5-16, the amplitude of oscillation at a specific time is not the steady-state amplitude corresponding to its input level, due to the speed of the triangular input. Nevertheless, the result still demonstrates the limitation on the external input.

5.6 Discussions

We have designed and fabricated all the building blocks necessary for building the KII set. The dynamical behavior measured from the chip is qualitatively the same as that obtained from analysis and simulations. A chip that includes 8


Figure 5-14: Transient behavior measured from the chip. Channel 1 is the excitatory output and Channel 3 is the inhibitory output.

interconnected RKII sets was also designed and fabricated (Figure 5-17) to experiment with possible implementations when building an RKII network. Because the number of output pins is limited to 40, this circuit consists of an analog part (8 RKII sets) and a digital part (a multiplexer). Despite the huge number of coupling weights (biases) that need to be set, only four different bias voltages (weights obtained from the outer-product rule in associative memories) are provided externally. The purpose of the digital circuit is to multiplex the reference voltage so that multiple weighting circuits can share the same one. However, the chip was not tested successfully. In the designed chips, no dedicated circuits are used to cancel offsets or to provide on-chip bias. Offsets in a fabricated chip are normally generated by sources such as systematic mismatch, fabrication mismatch, and the chip package. In a 0.6 um technology, a 1 mV mismatch of the threshold voltages


is approximately equivalent to a 300 fF channel capacitance at the input [48]. In a highly interconnected network, offsets will affect the performance in terms of speed and power consumption. For our purposes, a deviation of a functional block from its mathematical model may directly affect the behavior of the dynamical system. In a network of KII sets, where rich structure is present, probably with sophisticated boundaries between different attractors, the offsets may change the desired behaviors completely. To reduce this negative effect, electronic cancelation of offsets should be considered in the design. The same idea applies to an on-chip bias circuit. Every part of an oscillator needs to be biased. Most parts, such as the linear dynamics and the nonlinear functions, are supposed to have exactly the same functionality even in different channels, so a biasing circuit may be incorporated into the chip to provide accurate and stable bias voltages or currents.


After these considerations, we should expect the circuit to better match the behaviors that the mathematical models and digital simulators achieve. Moreover, for every circuit simulation or observed offset in the chip, we could also provide a corresponding mathematical model to be used in the digital simulation. In this way, we gain the advantages of obtaining fast simulation results and verifying theoretical conclusions more efficiently. This could greatly reduce the effort needed for trial-and-error procedures in analog VLSI design and increase productivity. We should also point out that, based on the previous analysis, the qualitative behaviors of an RKII set are mostly determined by the product of the two coupling coefficients. Therefore, the qualitative behavior of the circuit of a single RKII set has a relatively high tolerance for variations, because the adjustment of the couplings is straightforward.


Figure 5-17: Chip layout of a mixed-signal design that includes 8 interconnected RKII sets.

When the ability of the circuit to duplicate the results of the theoretical models can be guaranteed, a KII or RKII network can be built to perform real-time applications. While we used large time constants (large capacitors) to mimic the biological system, when we actually utilize the hardware, a higher speed in the linear dynamics may be more desirable in terms of computational efficiency. At the same time, the chip area cost of the capacitors can be reduced. However, we should also note that the implementation of a KII network and the full KIII model in a chip raises practical problems. The big difficulty is the size of the interconnections and weighting circuits (Figure 5-17) required for full connectivity (the interconnection grows with the square of the number of processing elements). A multiplexing scheme [49] or digitized signal communication such as address-event representation (AER) [50-52] will be needed to resolve this problem.


CHAPTER 6
CONCLUSIONS

This dissertation aims at exploring the dynamical behaviors of a biologically inspired oscillatory neural network (Freeman's computational model of the olfactory cortex) and at implementing such a network as an information processor, through which we hope to understand more of the fundamental concepts of the human learning mechanism from the viewpoint of nonconvergent networks. We have obtained analytical solutions for parameter optimization in an RKII set and performed synchronization analysis of coupled RKII sets. Using the analysis of the basic building blocks of the system, we constructed a set of computational primitives based on synchronization. With this property, the network can deal with certain spatio-temporal patterns that include both amplitude and phase information. Different approaches such as PCA, energy-based methods, and feature extraction in the frequency domain have been studied in the literature. An important task is to determine which one best fits the particular dynamical properties of our network; this is still not clear at the current stage. Synchronization is only one of the many rich dynamics that may exist in the system. We should note that although it shows potential, it can greatly limit our view from a global perspective. At the same time, however, the complexity and heterogeneity of the brain discourage a global approach. Therefore, we chose to study this hierarchical model step by step, and we hope to proceed from these initial analyses so that different levels of complex behavior can be recognized and controlled one by one. More practically, from a system point of view, there is still a large extent of unexplored freedom in this network. Free parameters play the central role to


control system dynamics. At the same time, they form the bridge between the input and output spaces for obtaining causality. For the applications pursued so far, we have not efficiently utilized all their capabilities: inputs are not time varying, and weighting is limited by the input space representation (coupling strengths are specifically designed according to given input-output pairs). However, we see that all the parameters bring diversity in dynamics, even in the synchronization analysis alone, where four different behaviors are observed (2 bits of information in a 2-channel system). Different levels of input immediately change the synchronization boundaries obtained previously, so they can possibly evoke all the available rich dynamics in a single fixed-weight network. Time-varying inputs that traverse the space and pass through different attractors will certainly provide more freedom in building the output projection space. System capacity is maximized when the richness of dynamics is obtained with minimal system complexity. Certainly, the related fundamental question goes back to the problems of characterizing the available attractors and representing the output trajectories (for example, detection of trajectories and statistical modeling of transitions between attractors). That being said, a meaningful future work would investigate and characterize, through simulations, all possible dynamics in the system. From a general perspective, as the understanding of real biological systems advances, mathematical models and engineering applications should adjust accordingly and, at the same time, try to predict and help discover new evidence from biology.
That is, the process of building man-made biologically realistic systems should always be an integrative one, in which physiological studies provide biological evidence and practical implementations facilitate further understanding of real biological systems, as well as help design improved experiments to validate engineering predictions.


We have designed and fabricated all the building blocks necessary for building the KII set. The dynamical behavior measured from the chip is qualitatively the same as in simulation. However, fabricated chips show more deviations and offsets in every system block (due to the fabrication process, design mismatches, testing environments, etc.). Our analysis and chip measurements indicate that, in terms of qualitative behaviors, the fault tolerance of the chip is very good. In particular, the behavior of an individual RKII set is largely determined by the product of its internal couplings, which leaves much room for choosing and adjusting proper values of the control parameters. However, at a higher level, where a network of RKII sets needs to be implemented, consistency of performance is crucial. Advanced techniques that reduce systematic offsets and noise should be applied in the future so that variations can be limited. Moreover, analog design has a natural difficulty in scaling due to the massive connections that need to be built. A complete DSP design, or a mixed-signal design that uses digital protocols (such as AER) for communication among analog PEs, may be a promising approach.


APPENDIX A
BASIC TYPES OF LOCAL BIFURCATIONS

A bifurcation is a change in the qualitative behavior of a dynamical system. Here we introduce some basic types of local bifurcations [53].

A.1 Saddle-Node Bifurcation

Consider a one-dimensional dynamical system

    \dot{x} = f(x, \mu), \quad x \in \mathbb{R}, \; \mu \in \mathbb{R}^n, \; n > 0,    (A.1)

where \mu is a vector of bifurcation parameters and x_0 is an equilibrium at the critical value \mu_0. That is,

    f(x_0, \mu_0) = 0.    (A.2)

If x_0 is nonhyperbolic (Equation A.3), f has a nonzero quadratic term (Equation A.4), and f is nondegenerate with respect to \mu_0 (Equation A.5), then the system is at a saddle-node bifurcation:

    \frac{\partial f}{\partial x}(x_0, \mu_0) = 0    (A.3)

    \frac{\partial^2 f}{\partial x^2}(x_0, \mu_0) \neq 0    (A.4)

    \frac{\partial f}{\partial \mu}(x_0, \mu_0) \neq 0    (A.5)

The saddle-node bifurcation has the canonical form

    \dot{x} = a + x^2.    (A.6)

A.2 Cusp Bifurcation

Consider a one-dimensional dynamical system

    \dot{x} = f(x, \mu), \quad x \in \mathbb{R}, \; \mu \in \mathbb{R}^n, \; n > 1,    (A.7)

where \mu is a vector of bifurcation parameters and x_0 is an equilibrium at the critical value \mu_0. That is,

    f(x_0, \mu_0) = 0.    (A.8)

If x_0 is nonhyperbolic (Equation A.9), f has a nonzero cubic term (Equation A.10) but a zero quadratic term (Equation A.11), f is nondegenerate with respect to \mu_0 (Equation A.12), and f satisfies the transversality condition (Equations A.12 and A.13 are linearly independent), then the system is at a cusp bifurcation:

    \frac{\partial f}{\partial x}(x_0, \mu_0) = 0    (A.9)

    \frac{\partial^3 f}{\partial x^3}(x_0, \mu_0) \neq 0    (A.10)

    \frac{\partial^2 f}{\partial x^2}(x_0, \mu_0) = 0    (A.11)

    \frac{\partial f}{\partial \mu}(x_0, \mu_0) \neq 0    (A.12)

    \frac{\partial^2 f}{\partial \mu \, \partial x}(x_0, \mu_0) \neq 0    (A.13)

The cusp bifurcation has the canonical form

    \dot{x} = a + bx + cx^3.    (A.14)

A.3 Pitchfork Bifurcation

Consider a one-dimensional dynamical system

    \dot{x} = f(x, \mu), \quad x \in \mathbb{R}, \; \mu \in \mathbb{R}^n,    (A.15)

where \mu is a vector of bifurcation parameters and f is an odd function. x_0 is an equilibrium at the critical value \mu_0. That is,

    f(x_0, \mu_0) = 0.    (A.16)

If x_0 is nonhyperbolic (Equation A.17), f has a nonzero cubic term (Equation A.18) but a zero quadratic term (Equation A.19), f is nondegenerate with respect to \mu_0 (Equation A.20), and f satisfies the transversality condition (Equations A.20 and A.21 are linearly independent), then the system is at a pitchfork bifurcation:

    \frac{\partial f}{\partial x}(x_0, \mu_0) = 0    (A.17)

    \frac{\partial^3 f}{\partial x^3}(x_0, \mu_0) \neq 0    (A.18)

    \frac{\partial^2 f}{\partial x^2}(x_0, \mu_0) = 0    (A.19)

    \frac{\partial f}{\partial \mu}(x_0, \mu_0) \neq 0    (A.20)

    \frac{\partial^2 f}{\partial \mu \, \partial x}(x_0, \mu_0) \neq 0    (A.21)

The pitchfork bifurcation is a special case of the cusp bifurcation when a = 0. It has the canonical form

    \dot{x} = bx + cx^3.    (A.22)

A.4 Hopf Bifurcation

Consider a two-dimensional dynamical system

    \dot{x} = f(x, y, \mu)    (A.23)
    \dot{y} = g(x, y, \mu)    (A.24)

where x, y \in \mathbb{R} and \mu \in \mathbb{R}^n. If the system has a pair of purely imaginary eigenvalues at x_0 and \mu_0, and it satisfies the transversality condition, then the system is at a Hopf bifurcation. The most significant feature of a Hopf bifurcation is the birth of an oscillatory attractor as the control parameter is changed. A typical form in polar coordinates can be written as

    \dot{r} = \alpha r + \sigma r^3    (A.25)
    \dot{\theta} = \omega + \gamma r^2    (A.26)

Generally, if a stable limit-cycle attractor appears as the bifurcation parameter increases, the bifurcation is called supercritical. If the system has an unstable limit cycle as the critical parameter decreases, the bifurcation is called subcritical. A detailed analysis of the Hopf bifurcation can be found in [20].
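The supercritical case of Equations A.25 and A.26 can be illustrated numerically: the radial equation decouples from the phase, and a forward-Euler integration shows a limit-cycle radius of \sqrt{-\alpha/\sigma} appearing once \alpha crosses zero. This is only a sketch; the step size and coefficients below are arbitrary illustrative choices, not tied to the RKII model:

```python
import math

# Supercritical Hopf bifurcation in the polar normal form (Eqs. A.25-A.26):
#   r' = alpha*r + sigma*r**3,   theta' = omega + gamma*r**2
# With sigma < 0, a stable limit cycle of radius sqrt(-alpha/sigma)
# exists for alpha > 0, while for alpha < 0 the origin attracts.

def radial_limit(alpha, sigma, r0=0.1, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the radial equation; returns r(T)."""
    r = r0
    for _ in range(steps):
        r += dt * (alpha * r + sigma * r ** 3)
    return r

sigma = -1.0
# Below the bifurcation (alpha < 0) the trajectory decays to the origin.
assert radial_limit(-0.5, sigma) < 1e-6
# Above it (alpha > 0) the trajectory settles on r = sqrt(-alpha/sigma).
assert abs(radial_limit(0.5, sigma) - math.sqrt(0.5)) < 1e-3
```

The phase equation only rotates the trajectory and does not affect the radius, which is why the one-dimensional radial sweep captures the birth of the oscillatory attractor.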


APPENDIX B
DERIVATION OF SUFFICIENT CONDITIONS ON SYNCHRONIZATION OF COUPLED RKII SETS

The eigenvalues of matrix A can be computed analytically by solving

    |\lambda I - A| =
    \begin{vmatrix}
    \lambda & -1 & 0 & 0 \\
    ab(1 + K_{mm}) & \lambda + (a + b) & -abK_{gm}Q'(g^{(1)}) & 0 \\
    0 & 0 & \lambda & -1 \\
    -abK_{mg}Q'(m^{(1)}) & 0 & ab(1 + K_{gg}) & \lambda + (a + b)
    \end{vmatrix} = 0.    (B.1)

We have

    \det(\lambda I - A) = [\lambda(\lambda + a + b) + ab(1 + K_{mm})] \cdot [\lambda(\lambda + a + b) + ab(1 + K_{gg})]
                          - (ab)^2 K_{mg}K_{gm} Q'(m^{(1)}) Q'(g^{(1)})    (B.2)

    \lambda = \Big[ -(a + b) \pm \big[ (a - b)^2 - 2ab(K_{mm} + K_{gg})    (B.3)
              \pm 2ab\big( (K_{mm} - K_{gg})^2 + 4K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}) \big)^{1/2} \big]^{1/2} \Big] / 2    (B.4)

There are two cases regarding the sign of \Re[\lambda_1]:

1. (K_{mm} - K_{gg})^2 + 4K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}) > 0

In this case, the term under the outer square root is guaranteed to be either real or purely imaginary. Let

    K_{A1} = \frac{(a - b)^2}{ab}, \qquad K_B = -K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}).    (B.5)

Equations B.6 and B.7 are the possible cases in which A has all eigenvalues with negative real parts:

    (a - b)^2 - 2ab(K_{mm} + K_{gg}) + 2ab\sqrt{(K_{mm} - K_{gg})^2 - 4K_B} < 0    (B.6)

    (a - b)^2 - 2ab(K_{mm} + K_{gg}) + 2ab\sqrt{(K_{mm} - K_{gg})^2 - 4K_B} \geq 0 \quad \text{and}
    -(a + b) + \sqrt{(a - b)^2 - 2ab(K_{mm} + K_{gg}) + 2ab\sqrt{(K_{mm} - K_{gg})^2 - 4K_B}} < 0    (B.7)

By solving Equations B.6 and B.7, we obtain the first part of the results given in Section 3.2.

2. (K_{mm} - K_{gg})^2 + 4K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}) < 0

Let

    R = (a - b)^2 - 2ab(K_{mm} + K_{gg}),
    I = 2ab\sqrt{\big| (K_{mm} - K_{gg})^2 + 4K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}) \big|},
    \alpha = \arctan(I/R);    (B.8)

we need

    -(a + b) \pm \Re\{\sqrt{R + jI}\} < 0.    (B.9)

Thus,

    \Re^2\{\sqrt{R + jI}\} = \sqrt{R^2 + I^2}\cos^2(\alpha/2) = \frac{\sqrt{R^2 + I^2} + R}{2} < (a + b)^2.    (B.10)

When K_{mm} + |K_{gg}| > 0, solving Equation B.10 we obtain

    \big| (K_{mm} - K_{gg})^2 + 4K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)}) \big| < 2\frac{(a + b)^2}{ab}(2 + K_{mm} + K_{gg}).    (B.11)

Let

    K_{A2} = \frac{(a + b)^2}{ab}, \qquad K_B = -K_{mg}K_{gm}Q'(m^{(1)})Q'(g^{(1)});    (B.12)

we have

    (K_{mm} - K_{gg})^2 < 4K_B + 2K_{A2}(2 + K_{mm} + K_{gg}),
    (K_{mm} - K_{gg})^2 > 4K_B - 2K_{A2}(2 + K_{mm} + K_{gg}).    (B.13)

From Equation B.11, we have

    K_{mm} + K_{gg} > -2,    (B.14)

so the upper bound on (K_{mm} + |K_{gg}|)^2 in Equation B.13 is automatically satisfied. Thus, we obtain the second part of the results in Section 3.2 as

    0 < K_{mm} + |K_{gg}|, \qquad (K_{mm} + |K_{gg}|)^2 > 4K_B - 2K_{A2}(2 + K_{mm} + K_{gg}).    (B.15)
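The factored determinant of Equation B.2 can be checked against a direct expansion of the 4x4 matrix in Equation B.1. A pure-Python sketch; the parameter values are arbitrary placeholders chosen only to exercise the identity, not values from the RKII analysis:

```python
# Numerical check that the 4x4 determinant of Eq. B.1 expands to the
# factored form of Eq. B.2. Determinant by Laplace expansion (fine for 4x4).

def det(m):
    """Determinant via Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def lam_I_minus_A(lam, a, b, kmm, kgg, kmg, kgm, qm, qg):
    """The matrix lambda*I - A of Eq. B.1."""
    ab = a * b
    return [
        [lam,            -1.0,          0.0,            0.0],
        [ab * (1 + kmm), lam + a + b,   -ab * kgm * qg, 0.0],
        [0.0,            0.0,           lam,            -1.0],
        [-ab * kmg * qm, 0.0,           ab * (1 + kgg), lam + a + b],
    ]

def factored(lam, a, b, kmm, kgg, kmg, kgm, qm, qg):
    """The right-hand side of Eq. B.2."""
    ab = a * b
    p1 = lam * (lam + a + b) + ab * (1 + kmm)
    p2 = lam * (lam + a + b) + ab * (1 + kgg)
    return p1 * p2 - ab ** 2 * kmg * kgm * qm * qg

# Placeholder parameters: (a, b, K_mm, K_gg, K_mg, K_gm, Q'(m), Q'(g)).
args = (0.22, 0.72, 1.0, -1.5, 1.8, 1.2, 1.4, 1.4)
for lam in (-1.0, -0.3, 0.5, 2.0):
    assert abs(det(lam_I_minus_A(lam, *args)) - factored(lam, *args)) < 1e-9
```

Because the two polynomials agree at more than four points, they agree identically, confirming the expansion used in the derivation.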


APPENDIX C
LINEARIZATION OF THE NONLINEAR FUNCTION

Simple nonlinear functions can be used to achieve similar input-controlled oscillations in an RKII set. Such modified nonlinear functions may help in building digital circuits, where only a finite number of sections of operations need to be constructed. Here are two examples [54] of nonlinear functions that have only a few linear sections.

C.1 Example I

One of the key elements that enables input-controlled oscillations in an RKII set is the maximum slope Q'(x) of the nonlinear function. To achieve the desired dynamics, the nonlinear function should have at least two different slopes. The simplest linearization is a three-section nonlinear function, as shown in Figure C-1. The two slopes are K_s in the linear section and 0 in the saturation regions. Negative and large positive inputs result in an infinite bifurcation boundary. P_{th} sets the lower threshold of the input needed to induce oscillation. The upper bound of P is determined by

    (C.1)

where Q_m is the ratio between the positive and negative saturations. The equilibrium of this system is at

    m^* = \frac{P + |K_{mg}K_{gm}| K_s^2 P_{th}}{1 + |K_{mg}K_{gm}| K_s^2}, \qquad
    g^* = \frac{K_{mg} K_s (P - P_{th})}{1 + |K_{mg}K_{gm}| K_s^2}.    (C.2)

Figure C-1: Nullclines obtained with the three-section nonlinear function

C.2 Example II

Note that one of the nullclines in the previous example needs a lower saturation at zero so that a stable fixed point can be obtained with zero input. The approximate shape of the original nonlinear function can be preserved by adding another section to the linearized nonlinear function (Figure C-2). The modified nonlinear function Q(x) is defined by

    Q(x) =
    \begin{cases}
    -1, & x < x_1 \\
    K_{s1}\, x, & x_1 \leq x < x_2 \\
    K_{s2}\, x, & x_2 \leq x < x_3 \\
    Q_m, & x \geq x_3
    \end{cases}    (C.3)

To achieve input-controlled oscillations, K_{s1} must be less than K_{s2}. When

    |K_{mg}K_{gm}| < \frac{1}{K_{s1}^2} \frac{(a + b)^2}{ab},    (C.4)

any equilibrium around zero (an intersection between the smaller-slope sections of the two nullclines) is stable. When

    |K_{mg}K_{gm}| > \frac{1}{K_{s2}^2} \frac{(a + b)^2}{ab},    (C.5)

the RKII set enters the oscillatory state.

Figure C-2: Four-section nonlinear function
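The four-section nonlinearity and the slope-dependent oscillation test above can be sketched in a few lines of Python. The breakpoints, slopes, and rate constants (a, b) below are illustrative placeholders, not the values used elsewhere in this work:

```python
# Sketch of the four-section piecewise-linear nonlinearity Q(x) of
# Eq. C.3, plus the oscillation conditions of Eqs. C.4-C.5.

def make_q(x1, x2, x3, ks1, ks2, qm):
    """Build the piecewise-linear Q(x) with breakpoints x1 < x2 < x3."""
    def q(x):
        if x < x1:
            return -1.0       # lower saturation
        if x < x2:
            return ks1 * x    # shallow linear section
        if x < x3:
            return ks2 * x    # steep linear section
        return qm             # upper saturation
    return q

a, b = 0.22, 0.72
ks1, ks2 = 0.5, 2.0            # ks1 < ks2, as the construction requires
q = make_q(x1=-0.5, x2=0.4, x3=1.2, ks1=ks1, ks2=ks2, qm=5.0)

threshold = (a + b) ** 2 / (a * b)   # (a+b)^2 / ab, cf. Eqs. C.4-C.5

def oscillatory(kmg_kgm, slope):
    """Is the coupling product, at this local slope, past the boundary?"""
    return abs(kmg_kgm) * slope ** 2 > threshold

# Evaluate Q in each section.
assert q(-1.0) == -1.0 and abs(q(0.2) - 0.1) < 1e-12 and q(2.0) == 5.0
# An equilibrium sitting on the shallow section stays stable ...
assert not oscillatory(2.0, ks1)
# ... while the steep section can push the set into oscillation.
assert oscillatory(2.0, ks2)
```

A digital implementation would then only need comparators and two multiplier gains, which is the practical point of the linearization.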


REFERENCES

[1] W. Freeman, Mass Action in the Nervous System, Academic, New York, 1975.

[2] R. Llinas, I of the Vortex: From Neurons to Self, MIT Press, Cambridge, MA, 2001.

[3] S. Kelso, Dynamic Patterns: The Self-Organization of Brain and Behavior, MIT Press, Cambridge, MA, 1995.

[4] J. Principe, V. Tavares, J. Harris, and W. Freeman, "Design and implementation of a biologically realistic olfactory cortex in analog VLSI," Proceedings of the IEEE, vol. 89, no. 7, pp. 569-571, July 2001.

[5] V. Tavares, Design and Implementation of a Biologically Realistic Olfactory Cortex Model, Ph.D. dissertation, Univ. of Florida, Gainesville, May 2001.

[6] S. Grossberg, "Pattern learning by functional-differential neural networks with arbitrary path weights," in Delay and Functional Differential Equations and Their Applications, Schmitt, Ed., pp. 121-160, Academic, New York, 1972.

[7] J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci., vol. 81, pp. 3088-3092, 1984.

[8] D. Kaplan and L. Glass, Understanding Nonlinear Dynamics, Springer-Verlag, New York, 1995.

[9] Y. Yao, W. Freeman, B. Burke, and Q. Yang, "Pattern recognition by a distributed neural network: An industrial application," Neural Networks, vol. 4, no. 1, pp. 103-121, Jan. 1991.

[10] R. Kozma and W. Freeman, "Chaotic resonance - methods and applications for robust classification of noisy and variable patterns," International Journal of Bifurcation and Chaos, vol. 11, no. 6, pp. 1607-1629, 2001.

[11] J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci., vol. 79, pp. 2554-2558, 1982.

[12] L. Chua, CNN: A Paradigm for Complexity, World Scientific, New York, 1998.


[13] D. Wang, "Relaxation oscillators and networks," in Wiley Encyclopedia of Electrical and Electronics Engineering, J. G. Webster, Ed., pp. 396-405, Wiley & Sons, New York, 1999.

[14] K. Chen and D. Wang, "A dynamically coupled neural oscillator network for image segmentation," Neural Networks, vol. 14, no. 3, pp. 423-439, Apr. 2002.

[15] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, NJ, 1999.

[16] J. C. Principe, N. R. Euliano, and W. C. Lefebvre, Neural and Adaptive Systems: Fundamentals through Simulation, John Wiley and Sons, NY, 1999.

[17] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, first edition, 1990.

[18] F. Hoppensteadt, Analysis and Simulation of Chaotic Systems, Springer-Verlag, New York, second edition, 2000.

[19] Y. Kuznetsov, L. Kuznetsov, and J. Marsden, Elements of Applied Bifurcation Theory, Springer-Verlag, New York, second edition, 1998.

[20] B. D. Hassard, N. D. Kazarinoff, and Y.-H. Wan, Theory and Applications of Hopf Bifurcation, Cambridge University Press, Cambridge, UK, 1981.

[21] J. Hasty, J. Pradines, M. Dolnik, and J. Collins, "Noise-based switches and amplifiers for gene expression," Proc. Natl. Acad. Sci., vol. 97, pp. 2075-2080, 2000.

[22] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, Springer-Verlag, New York, 1984.

[23] A. Goldbeter and M. J. Berridge, Biochemical Oscillations and Cellular Rhythms: The Molecular Bases of Periodic and Chaotic Behaviour, Cambridge University Press, Cambridge, UK, 1996.

[24] S. Han, C. Kurrer, and Y. Kuramoto, "Dephasing and bursting in coupled neural oscillators," Phys. Rev. Lett., vol. 75, no. 17, pp. 3190-3193, Oct. 1995.

[25] P. Bressloff and S. Coombes, "Desynchronization, mode locking, and bursting in strongly coupled integrate-and-fire oscillators," Phys. Rev. Lett., vol. 81, no. 10, pp. 2168-2171, Sept. 1998.

[26] I. Pastor-Diaz and A. Lopez-Fraguas, "Dynamics of two coupled van der Pol oscillators," Phys. Rev. E, vol. 52, no. 2, pp. 1480-1489, Aug. 1995.

[27] P. Bressloff and S. Coombes, "Synchrony in an array of integrate-and-fire neurons with dendritic structure," Phys. Rev. Lett., vol. 78, no. 24, pp. 4665-4668, June 1997.


[28] D. Reddy, A. Sen, and G. Johnston, "Time delay induced death in coupled limit cycle oscillators," Phys. Rev. Lett., vol. 80, no. 23, pp. 5109-5112, June 1998.

[29] R. Herrero, M. Figueras, J. Rius, F. Pi, et al., "Experimental observation of the amplitude death effect in two coupled nonlinear oscillators," Phys. Rev. Lett., vol. 84, no. 23, pp. 5312-5315, June 2000.

[30] D. Reddy, A. Sen, and G. Johnston, "Experimental evidence of time-delay-induced death in coupled limit-cycle oscillators," Phys. Rev. Lett., vol. 85, no. 16, pp. 3381-3384, Oct. 2000.

[31] K. Shimoide and W. Freeman, "Dynamic neural network derived from the olfactory system with examples of applications," IEICE Trans. Fundament., vol. E78-A, pp. 869-884, 1995.

[32] W. Freeman, "Simulation of chaotic EEG patterns with a dynamic model of the olfactory system," Bio. Cybern., vol. 56, pp. 139-150, 1987.

[33] L. Pecora and T. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett., vol. 64, no. 8, pp. 821-824, Feb. 1990.

[34] H. Fujisaka and T. Yamada, "Stability theory of synchronized motion in coupled-oscillator systems," Progr. Theor. Phys., vol. 69, pp. 32-47, 1983.

[35] N. Nishiyama and K. Eto, "Experimental study on three chemical oscillators coupled with time delay," Journal of Chemical Physics, vol. 100, no. 9, pp. 6977-6978, May 1994.

[36] M. Yoshimoto, "Phase-death mode in two-coupled chemical oscillators studied with reactors of different volume and by simulation," Chemical Physics Letters, vol. 280, no. 5-6, pp. 539-543, Dec. 1997.

[37] K. P. Zeyer, M. Mangold, and E. Gilles, "Experimentally coupled thermokinetic oscillators: Phase death and rhythmogenesis," Journal of Physical Chemistry A, vol. 105, no. 30, pp. 7216-7224, Aug. 2001.

[38] M. Ding and E. Ott, "Enhancing synchronism of chaotic systems," Phys. Rev. E, vol. 49, no. 1, pp. 945-948, Jan. 1994.

[39] R. Brown and N. F. Rulkov, "Designing a coupling that guarantees synchronization between identical chaotic systems," Phys. Rev. Lett., vol. 78, no. 22, pp. 4189-4192, June 1997.

[40] L. Kocarev and U. Parlitz, "Generalized synchronization, predictability, and equivalence of unidirectionally coupled dynamical systems," Phys. Rev. Lett., vol. 76, no. 11, pp. 1816-1819, Mar. 1996.


[41] Z. Li and J. J. Hopfield, "Modeling the olfactory bulb and its neural oscillatory processings," Bio. Cybern., vol. 61, pp. 379-392, 1989.

[42] V. Nekorkin and M. Velarde, Synergetic Phenomena in Active Lattices, Springer-Verlag, New York, 2002.

[43] A. Gutierrez-Galvez and R. Gutierrez-Osuna, "Pattern completion through phase coding in population neurodynamics," Neural Networks, vol. 16, pp. 649-656, 2003.

[44] Y. Yao, W. Freeman, B. Burke, and Q. Yang, "Pattern recognition by a distributed neural network: An industrial application," Neural Networks, vol. 4, no. 1, pp. 103-121, Jan. 1991.

[45] R. Kozma and W. Freeman, "Chaotic resonance - methods and applications for robust classification of noisy and variable patterns," International Journal of Bifurcation and Chaos, vol. 11, no. 6, pp. 1607-1629, 2001.

[46] P. M. Furth and H. A. Ommani, "Low-voltage highly-linear transconductor design in subthreshold CMOS," in Proceedings of the 40th Midwest Symposium on Circuits and Systems, 1997, vol. 1, pp. 156-159.

[47] M. Skowronski, Biologically Inspired Noise-robust Speech Recognition for both Man and Machine, Ph.D. dissertation, Univ. of Florida, Gainesville, May 2004.

[48] B. Razavi, Design of Analog CMOS Integrated Circuits, McGraw-Hill, New York, 2001.

[49] D. Xu, L. Deng, J. Harris, and J. Principe, "Design of a reduced KII set and network in analog VLSI," in Proc. of Intl. Symp. on Circuits and Systems, 2003, vol. 5, pp. 837-840.

[50] K. Boahen, "A burst-mode word-serial address-event channel-I: Transmitter design," IEEE Trans. on Circuits and Systems I, vol. 51, no. 7, pp. 1269-1280, 2004.

[51] K. Boahen, "A burst-mode word-serial address-event channel-II: Receiver design," IEEE Trans. on Circuits and Systems I, vol. 51, no. 7, pp. 1281-1291, 2004.

[52] K. Boahen, "A burst-mode word-serial address-event channel-III: Analysis and testing," IEEE Trans. on Circuits and Systems I, vol. 51, no. 7, pp. 1292-1300, 2004.

[53] F. Hoppensteadt and E. Izhikevich, Weakly Connected Neural Networks, Springer, New York, 1997.


[54] M. C. Ozturk, D. Xu, and J. C. Principe, "Modified Freeman model: a stability analysis and application to pattern recognition," in Proc. of Intl. Joint Conf. on Neural Networks, 2004, vol. 4, pp. 3207-3212.


BIOGRAPHICAL SKETCH

Dongming Xu was born April 28, 1978, in Wuhen, China. He received a Bachelor of Science degree from the Department of Electrical Engineering at Xi'an Jiaotong University, China, in 1999. Since the fall of 1999, he has been a research assistant of Dr. Jose C. Principe in the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida. In August 2001, he was awarded a Master of Science degree in electrical and computer engineering from the University of Florida. Since then, he has continued to pursue his Ph.D. degree in CNEL. His research interests include machine learning, biomedical signal processing, nonlinear dynamical systems, and information theory.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Jose C. Principe, Chair
Distinguished Professor of Electrical and Computer Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

John G. Harris
Associate Professor of Electrical and Computer Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Jianbo Gao
Assistant Professor of Electrical and Computer Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

James Keesling
Professor of Mathematics


This dissertation was submitted to the Graduate Faculty of the College of Engineering and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

August 2005

Pramod P. Khargonekar
Dean, College of Engineering

Kenneth J. Gerhardt
Interim Dean, Graduate School