Switching Hidden-Markov Model and Hardware Implementation for a Brain-Machine Interface

Material Information

Switching Hidden-Markov Model and Hardware Implementation for a Brain-Machine Interface
DARMANJIAN, SHALOM ( Author, Primary )


Subjects / Keywords: Algorithms; Markov models; Modeling; Monkeys; Neurons; Primates; Signals; Software; Trajectories; Velocity

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Shalom Darmanjian. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier: 71279531 (OCLC)





















































































Copyright 2005


Shalom Darmanjian


When I was around 6 years old, I saw a television program showing a little girl

being fitted with a robotic prosthetic. At that time, I thought she was controlling the

arm with her mind. It inspired me to go to college and learn how to help others regain

mobility with this ostensibly cyborg technology. As I grew older, I came to understand

that no such technology existed in the world and realized that the girl must have been

controlling the arm through muscle contractions. My childhood dream was washed away

in much the same way that most children who aspire to become astronauts have their

dream fade away.

Then one day, I made a left instead of a right. I had the choice of going to class

in Larson Hall (right side) or asking Dr. Gugel for a letter of recommendation to grad

school in Benton Hall (left side). By sheer chance, he was in his office and able to

talk without student interruptions. That simple letter turned into a meeting with Dr.

Nechyba, then a meeting with Dr. Principe and finally an introduction to the Brain

Machine Interface (BMI) project. All three professors have given me more than I can

ever repay, and I am truly grateful for the wonderful opportunities. I am also grateful

to my fellow Applied Digital Design Lab (ADDL), Computational Neural Engineering

Lab (CNEL), and Machine Intelligence Lab (MIL) comrades from whom I have also

learned a great deal. Scott Morrison, Jeremy Parks and Joel Fuster welcomed me into

the ADDL lab and put up with my sarcasm. In particular, Scott provided guidance and

expertise with all of the hardware that I helped to create. Additionally, Phil Sung Kim

greatly enhanced the work in this thesis with his work on the Least Mean Square (LMS)

and wiener filters for use in the Bi-Modal structure (discussed in this thesis). Greg and

Ben were also encouraging and helpful during the process of making this thesis and the

final hardware work. There are many MIL members that I have become friends with. I

appreciate them all. Finally, I would like to give special thanks to Jeremy Anderson and

Sinisa Todorovic for their very helpful criticisms of the thesis. I truly am nothing more

than the contributions from the professors and the students of ADDL, CNEL, MIL, all

wrapped into one lovable fuzzy ball of a fella. I personally thank them all for letting me

become an astronaut.


ACKNOWLEDGMENTS

LIST OF FIGURES

ABSTRACT

1 INTRODUCTION

1.1 Motivation
1.2 Brain Machine Interface: Collaborative Effort
1.3 Overview

2.1 Duke-Primate Behavioral Experiments
2.2 Duke-Data Acquisition
2.3 Elementary Statistical Classifier
2.3.1 Basic Feature Extraction
2.3.2 Basic Structure
2.3.3 Results
2.4 Discussion

3 HIDDEN MARKOV MODELS

3.1 Motivation
3.2 Hidden Markov Model Overview
3.3 Vector Quantizing-HMM
3.3.1 Structure
3.3.2 Training
3.3.3 Results
3.4 Factorial Hidden Markov Model
3.4.1 Motivation
3.4.2 Structure
3.4.3 Training
3.4.4 Results
3.5 Discussion

4.1 Structure
4.2 Results
4.3 Conclusion

FOR A BMI

5.1 Introduction
5.2 System Requirements
5.3 System Design
5.3.1 Processor
5.3.2 Wireless Connection
5.3.3 Boot Control
5.3.4 USB
5.3.5 SRAM and Expansion
5.3.6 Power Subsystem
5.4 Complex Programmable Logic Device
5.5 System Software
5.5.1 PC Software
5.6 Results

6 FUTURE WORK

6.1 Hardware
6.2 FHMM Applications

REFERENCES

BIOGRAPHICAL SKETCH













1-1 BMI concepts drawing
1-2 UF-BMI overview
2-1 Rhesus and owl monkeys
2-2 Feeding experiment
2-3 Neural prosthetics and operations
2-4 Example neural waveforms
2-5 Optotrak
2-6 N.M.A.P. system
2-7 Binned neural data and corresponding velocity (with threshold)
2-8 Instantaneous velocity
2-9 Velocity vs displacement
2-10 Sample spatial statistics (one-second sliding window)
2-11 Sample thresholded spatial statistics (one-second sliding window)
2-12 Performance of threshold-based classifier (spatial variance, all neurons)
3-1 Discrete HMM chain
3-2 Stationary/moving classifiers
3-3 LGB VQ algorithm on 2D synthetic data
3-4 "Leaving-k out" testing
3-5 Sequential testing
3-6 Single neural channels
3-7 Average ratios
3-8 Before and after log
3-9 General FHMM
3-10 Comparison of our FHMM to general model
3-11 Biasing plot for naive classifier
3-12 Biasing plot for modified naive classifier
3-13 FHMM evaluation
3-14 Training data on naive classifier
3-15 Training data on FHMM
4-1 Bimodal mapping overview
4-2 Local linear model
4-3 Predicted and actual hand trajectories A) Single LLM, ... C) Bi-modal
4-4 CEM plots
4-5 SER plot
5-1 Hardware Overview
5-2 Hardware Modules of WIFI DSP
5-3 DSP Features
5-4 Memory map of the C33 DSP showing internal SRAM blocks
5-5 Two different methods to boot the C33: EEPROM or USB
5-6 Block Diagram of USB Interface
5-7 512K by 32 bit external SRAM architecture
5-8 WIFI DSP System
5-9 User Interface
5-10 NLMS Performance
6-1 Derivatives of spherical coordinates with pink movement segmentations superimposed
6-2 Derivatives of spherical coordinates with blue movement segmentations superimposed
6-3 Spherical coordinates vs average ratios
6-4 Worst neuron to best neuron predictors
6-5 Worst neuron to best neuron predictors (with bias)

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science



Shalom Darmanjian

May 2005

Chair: Jose Principe
Major Department: Electrical and Computer Engineering

In pursuit of the University of Florida's (UF's) Brain Machine Interface (BMI)

goals, this thesis focused on how to improve the 3D modeling of a primate's arm

trajectory and the implementation of such algorithms. In terms of 3D modeling, we

argue that the best way to improve the trajectory prediction is by first using a switching

classifier to partition the primate's neural firings and corresponding arm movements

into different motion primitives. We show that by switching or delegating these isolated

neural/trajectory data to different local linear models, prediction of final 3-D arm

trajectories is markedly improved. Although our study focused on primitives of motion

and non-motion, we propose that our work can expand to include more primitives and

thus increase the final performance manifold.

Concerning implementation of BMI algorithms, our first aim was to achieve

a portable wireless computational DSP. Next we determined the software and

hardware component layers integrated in this evolving design. Finally, we detail

the distributed implementation of the switching model over a parallel computing

architecture, investigating offline training and evaluation as well as possible future

real-time implementations.


1.1 Motivation

The Olympics are not the only place to witness masterful control of muscles and

limbs. Simply attending a local ping-pong match or a basketball game can also provide

examples of precise dexterity and bodily control. Whether we are Olympians or regular

people, we all require the ability to plan trajectories and execute accurate motions to

serve the needs and desires of our daily lives.

To achieve this precision, we rely on the brain to decode our thoughts and control

many individual components in our bodies without our specifically commanding it to do

so. Along with the brain, the spinal cord and peripheral nerves make up a complex,

integrated information-processing and control system. Specifically, they help the brain

control blood flow, oxygen intake, blood pressure, and a myriad of other functions that

help us grow and stay healthy. The brain must continuously do all of these actions while

processing feedback from visual centers, tactile sensors, and many other internal systems;

and provide higher-order cognition for thesis writing.

Unfortunately, millions of people's brains suffer a partial or full disconnect from

their bodies, hindering the control of physical movement. Although some technological

solutions give these individuals some interaction with the external world, they often

fall short of what is required for daily life. The noted physicist, Stephen Hawking, is

an example of a human possessing an exceptional mind with little or no control of his

exterior limbs. He must navigate daily life with a simple computer joystick that often

limits his ability to present his theories to peers and at conferences. This crude

device and others like it can only help select disabled individuals who still possess some

motor control. If a communication pathway could bridge the gap between the brain and

the external world, it would empower millions of disabled people who now have little or

no motor control. This is the idealistic vision of a Brain Machine Interface (BMI).

Figure 1-1: BMI concepts drawing (labels: "a robotic prosthetic arm," "3D arm trajectory")

A Brain Machine Interface is one of the many ways researchers are trying to

develop this pathway between man and machine. Specifically, a BMI is a system that

can directly retrieve neuronal firing patterns from dozens to hundreds of neurons

and then translate this information into desired actions in the external world. The

functionality is similar to how the brain currently works but provides a bypass between

the patient's brain and an external device. A BMI is not limited to only patients with

paralysis in their extremities; future applications could serve military or commercial

sectors. Although progress has been made toward these goals since the 1970s [1], much

work remains before machines (under brain control) can hit a simple baseball as

well as a smiling 8-year-old child.

1.2 Brain Machine Interface: Collaborative Effort

To design a complicated system like a BMI, experts must come together from

multiple fields to solve the many technological and biological problems. To circumvent

these barriers, the Defense Advanced Research Projects Agency (DARPA) brought

biologists, neurologists and a broad spectrum of engineers to collaborate under a

single umbrella of leadership and finance. Dr. Miguel Nicolelis and his staff at Duke

University are undertaking this leadership role for the entire project by coordinating the

collaboration of all member institutions. His group also provides neurological expertise

and an experimental primate-testing platform, which is required before implanting

the BMI into humans. MIT, SUNY and UF are the other institutions providing their

expertise to complement the skills at Duke.

Specifically, our group at UF is involved in developing algorithms that can predict a

primate's arm trajectories based on the spatial sampling of hundreds of neurons within

the multiple cortical and subcortical areas. In turn, these models must be multiple-input

multiple-output systems of huge dimensionality to accommodate all of these data at

once. In designing such a structure, accuracy must be tempered with speed, especially

when implementing these models in hardware. Some of our group's initial experiments

examined linear and nonlinear models to determine which models are the most efficient

with regard to the constraints of accuracy and speed in our application [2].

Figure 1-2: UF-BMI overview (implants in the motor cortex collect neural spike
information while the monkey performs motor tasks; a C33 DSP runs real-time adaptive
and artificial neural networks to learn the neural-to-motor translation; the predicted
hand position is used to control the robot arm, and robot grip force and slip feedback is
given to the monkey via a pressure cuff)

In tandem with algorithm development, our group is investigating the use of hybrid

analog and digital technologies to achieve low-power portable devices that will run these

models. Consequently, we are evolving the digital and analog designs to evaluate their

feasibility as we shrink and integrate them into a single system. Our group's Very Large

Scale Integration (VLSI) expertise can assist in this type of system by designing custom

low-power hybrid (VLSI-DSP) chips. Eventually though, all processing needs to move

into a single-chip solution so that it can be implanted into a patient's body and can

independently control an external device with actionable thoughts.

1.3 Overview

In service of UF's BMI goals, this thesis focuses on how to improve the 3-D

modeling of a primate's arm trajectory and the implementation of such algorithms. In

terms of 3D modeling, we argue that the best way to improve the trajectory prediction

is by first using a 'switching' classifier to partition the primate's neural firings and

corresponding arm movements into different motion primitives. We show that by

switching or delegating these isolated neural/trajectory data to different local linear

models, prediction of final 3D arm trajectories is markedly improved. Although this

thesis focuses on primitives of motion and non-motion, we propose that our work can

expand to include more primitives and subsequently increase the final performance manifold.


Concerning implementation of BMI algorithms, we first discuss our initial step

in trying to achieve a portable wireless computational DSP. Secondly, we describe

the software and hardware component layers integrated within this evolving design.

Finally, we detail the distributed implementation of the switching model over a parallel

computing architecture, discussing the results in offline training/evaluation as well as

possible future real-time implementations.


In this chapter, we first depict the experimental environment in which the non-

human primates carried out their behavioral tasks. Secondly, we describe the retrieval

of neurological and trajectory data along with certain properties they exhibit. Finally,

we discuss our elementary analysis of these data and present the results of a rudimentary classifier.


2.1 Duke-Primate Behavioral Experiments

Dr. Nicolelis's laboratory at the Duke University Medical Center is responsible for

carrying out the behavioral experiments with primate subjects for the DARPA-funded

BMI project. The primate species they use within these experiments are the Rhesus

Macaque (Macaca mulatta) and the Owl Monkey (Aotus trivirgatus), each having

varying physical characteristics. Different exemplars of these small species train on a

multitude of experiments like 3-D food grasping, 1-D pole control, 2-D pole control and

2-D pole control plus gripping. For the purposes of this thesis, we focus on 3-D food

grasping experiments that involve a female Owl Monkey.

Figure 2-1: Rhesus and owl monkeys

In this particular 3-D food grasping experiment, once an opaque barrier lifts, the

female monkey is required to retrieve fruit from four fixed tray locations and then place

it in her mouth. In order to constrain the feeding movements, Dr. Nicolelis's group uses

a constraining apparatus to hold the primate's body in place. The neck, torso, left arm

and legs fasten into a clamp-like gripper so that the only motion that can take place

is right arm movement. In turn, the right arm movement is digitized and transmitted

to the prediction models along with the neural data and subsequently to a robotic arm

[21, 25]. This type of experiment is important for two reasons. First, it mimics the

behavior that a human would require in daily life, and second, the experiment is cyclic

so that the prediction models can have similar data examples with which to train and
evaluate.

Figure 2-2: Feeding experiment (four fixed food-tray locations)

2.2 Duke-Data Acquisition

The acquisition of data falls within two areas: the neurological data and the

physical arm trajectory data. Within these two areas, we explain how the data are

acquired and the characteristics they display.

To retrieve neurological data, Dr. Nicolelis and his staff first drill craniotomies into

the primates. For each primate subject, his team then places five to ten different cortical

implants in multiple cortical regions. Specifically, they implant different arrays in the

Figure 2-3: Neural prosthetics and operations

posterior parietal cortex (PP), primary motor cortex (M1), and dorsal pre-motor cortex

(PMd) since each region has been found to contribute to motor control [3]. The micro-

arrays placed in these regions consist of traditional metal microelectrodes [3] that are

electrolytically sharpened wires (pins), 25 to 50 µm in diameter and insulated to define

an exposed recording area at the tip of around 100 µm. On average, each microelectrode

records from four neurons, consequently requiring signal conditioning and spike sorting

of the waveforms at a later stage in order to distinguish single neural cells [19].

To record the primate's arm trajectory, Dr. Nicolelis's group employed a non-

invasive recording system to accurately measure the position of the monkey's arm in

3-D space. This commercial product, Optotrak, is accurate to 0.1 mm with a resolution

of 0.01 mm when recording real-time 3-D positions. The system employs three cameras

that track infrared markers placed on the monkey's wrist. By using different marker



Figure 2-4: Example neural waveforms

combinations, the Duke team can isolate and track different parts of the arm as it moves

through space.

Figure 2-5: Optotrak

As both data streams are simultaneously recorded from the monkey, it is necessary

to combine them in order for the mapping models to train and evaluate the trajectory.

This can be a difficult task since both types of data have different properties associated

with them. For example, the neural signals come in the form of raw electrode voltage

potentials that range from the noise level [3] (20 µV or so) to about 1 mV. The signals

can range in frequency from 30 Hz to 9 kHz depending on the spiking rate and activity,
whereas the Optotrak system generates digitized 200 Hz signals to define the tracking
of the monkey's 3-D arm trajectory. In order for this processing to be useful for the
models, the neural data are binned into 100 ms bins and the trajectory data are down-
sampled by 20 to generate a corresponding 10 Hz signal, which is consistent with other
investigators [12, 17, 19].
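As an illustrative sketch of this preprocessing (hypothetical code and function names, not the project's actual pipeline), the 100 ms binning and 20:1 trajectory decimation might look like:

```python
import numpy as np

def bin_spikes(spike_times_s, duration_s, bin_width_s=0.1):
    """Count each channel's spikes in fixed-width time bins (100 ms here)."""
    n_bins = int(np.ceil(duration_s / bin_width_s))
    counts = np.zeros((len(spike_times_s), n_bins), dtype=int)
    for ch, times in enumerate(spike_times_s):
        idx = (np.asarray(times) / bin_width_s).astype(int)
        np.add.at(counts[ch], idx[idx < n_bins], 1)
    return counts

def downsample_trajectory(xyz_200hz, factor=20):
    """Keep every 20th 200 Hz sample, giving a 10 Hz trajectory that
    lines up index-for-index with the 100 ms spike-count bins."""
    return xyz_200hz[::factor]
```

Taking every 20th sample of the 200 Hz trajectory yields one position per 100 ms bin, so the two streams align index-for-index.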



Figure 2-6: N.M.A.P. system

2.3 Elementary Statistical Classifier

2.3.1 Basic Feature Extraction

Among neural scientists, there is an on-going, unresolved debate regarding how
the motor cortex encodes the arm's movement [12]. Various researchers argue that the
motor cortex encodes the arm's velocity, position, or both within the neuronal firings
[12, 17]. In an attempt to observe any obvious patterns or correlations in this encoding,
we began with a visual and numerical inspection of the velocity-neural data. In viewing
the data, we were able to discern some basic properties. First, there appeared to be a
slight increase in binned firing counts when the hand moved at noticeable velocities.
Second, some of the 104 neurons rarely fire, while others fire virtually all the time; but
interestingly, we witnessed that specific neurons increased their firing rate at certain
points along the arm trajectory [18].

To complement our visual analysis of identifiable correlations, we also computed the
cross correlation with up to a one second shift in time (forward and backward) between
individual neural channels and hand velocities. Overall, visual and correlation analysis
showed that patterns existed between neural firings and hand movement [18].
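A minimal sketch of such a lagged-correlation scan (our own illustrative code; the thesis does not specify its implementation), shifting one binned channel against velocity by up to one second in either direction:

```python
import numpy as np

def lagged_corr(neural_bins, velocity, max_lag_bins=10):
    """Pearson correlation between one binned neural channel and hand
    velocity, for shifts of up to +/- max_lag_bins (one second at 10 Hz)."""
    corr = {}
    for lag in range(-max_lag_bins, max_lag_bins + 1):
        if lag > 0:                       # neural activity leads velocity
            a, b = neural_bins[:-lag], velocity[lag:]
        elif lag < 0:                     # neural activity lags velocity
            a, b = neural_bins[-lag:], velocity[:lag]
        else:
            a, b = neural_bins, velocity
        corr[lag] = float(np.corrcoef(a, b)[0, 1])
    return corr
```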
However, this method is limited in that only a few obvious (visually discernable)
patterns were perceivable with the velocity-neural data. This lack of correspondences
complicated our training and evaluation of the switching model, since we needed to have
clearly definable and separate classes.
Therefore, we wanted a more objective approach to define the segmented classes
before training and evaluating our model. To avoid exclusion of potential neural
encodings, we generated the data classes for two different segmented data sets, one based
on velocity, the other based on displacement.



Figure 2-7: Binned neural data and corresponding velocity (with threshold)

For the velocity hypothesis, ideally, the first class of neural firings should contain
data where the arm appears to be stationary, while the second class should contain data
where the monkey's arm appears to be moving. We used a simple threshold to achieve

this grouping: if the instantaneous velocity of the arm is below the noise threshold of the

sensor (determined by inspecting the velocity data visually), the corresponding neural

data were classified as stationary; otherwise, the neural data were classified as moving.

In Figure 2-8, we plot the instantaneous velocity of the monkey's arm for a 500 second

segment of the data, where the monkey is repeatedly performing a reaching task. Based

on this plot, we chose 4 mm/sec as the noise threshold for the above procedure.
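The thresholding rule can be sketched as follows (hypothetical code; the 4 mm/sec value is the noise threshold chosen above):

```python
import numpy as np

NOISE_THRESHOLD_MM_S = 4.0  # sensor noise level, read off the velocity plot

def label_motion(velocity_mm_s, threshold=NOISE_THRESHOLD_MM_S):
    """Label each 100 ms time index: 0 = stationary (below the sensor's
    noise threshold), 1 = moving (at or above it)."""
    return (np.asarray(velocity_mm_s) >= threshold).astype(int)
```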

Figure 2-8: Instantaneous velocity (x-axis: time index in 100 ms bins)

For the position hypothesis, we wanted to classify the monkey's arm movements

based on displacement. To demonstrate this concept, we plot a sample feeding session

for the monkey (Figure 2-9). The three colored trajectories represent displacement along

the Cartesian coordinates, as the monkey is moving its arm from rest to the food tray,

from the food tray to its mouth and, back to the rest position. Figure 2-9B indicates the

segmentation of this data into two distinct displacement classes: rest and active, which

are analogous to the stationary and moving classes in the velocity-based segmentation


In Figures 2-9C and 2-9D, we plot the velocity of the trajectories from Figures 2-9A

and 2-9B; here we see that this segmentation is not the same as the velocity-based

segmentation. Note from the dotted line (indicating the velocity threshold previously

described) that some of the active class in the displacement-based segmentation are

classified as stationary in the velocity-based segmentation. Now that we understood

what our segmented classes should represent, we wanted to apply simple statistical
methods to evaluate their performance.

Figure 2-9: Velocity vs displacement (panels A-D)

2.3.2 Basic Structure

In the next sequence of experiments, we computed the mean and variance for the
neural spike data, temporally across individual neural channels, as well as spatially

across all neural channels. Due to the relatively sparse nature of the neural data, we
computed the statistics over 50% overlapping sliding windows of one- and four-second
lengths. With a statistical description of the neural data, we proceeded to test if the
aggregate quantities could tell us anything about the corresponding hand movement. In
this initial analysis, we set out to distinguish hand movement from non-movement by
applying thresholds to the computed statistics. If a particular statistical indicator for a
given time index was below a corresponding threshold, we set the predicted hand motion
to stationary, while indicator values above that same threshold were labeled as moving.
For each time index, we also labeled the one-dimensional velocity data as stationary
or moving. We then checked the accuracy of this simple movement predictor. To
compensate for possible misalignment between the neural spike data and the hand

movement data, we repeated the same procedure for data shifted by a maximum of

two time indices both forward and backward in time. Finally, the entire analysis above

was repeated for a subset of neurons that appeared most relevant for hand-movement

prediction, as indicated by the weights in previously trained recurrent neural networks.
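As a rough sketch of these windowed statistics (illustrative code with assumed names, not the original analysis scripts, and assuming the "50%" refers to window overlap), computing spatially pooled mean and variance over overlapping windows:

```python
import numpy as np

def spatial_stats(binned, win_bins=10, hop=None):
    """Spatial mean and variance pooled across all channels, computed in
    50%-overlapping windows of `win_bins` 100 ms bins (10 -> one second)."""
    hop = hop if hop is not None else win_bins // 2   # 50% overlap
    means, variances = [], []
    for start in range(0, binned.shape[1] - win_bins + 1, hop):
        win = binned[:, start:start + win_bins]       # channels x time
        means.append(win.mean())
        variances.append(win.var())
    return np.array(means), np.array(variances)
```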


2.3.3 Results

With the rudimentary analysis (Section 2.3.2), we computed four basic quantities:

Temporal mean per neural channel.

Temporal variance per neural channel.

Spatial mean across neural channels.

Spatial variance across neural channels.

Of these computed quantities, the spatial statistics appeared to be the most useful

predictive indicator of hand movement. Figure 2-10 shows a sample of the computed

spatial statistics for all neurons as well as for a subset of neurons (determined from the

recurrent neural network models).

(Panels: spatial variance, all neurons; one-dimensional velocity; x-axis: time index)

Figure 2-10: Sample spatial statistics (one-second sliding window).

Given the statistical data (Figure 2-10), we proceeded to develop a threshold-based

classifier to discriminate between hand movement and non-movement. From Figure 2-10,

we observe that spatial variances appear to be a better predictor of hand movement than

spatial means. Therefore, we designed our classifier to rely on spatial variances, rather

than spatial means. We applied two different variance thresholds: the first was chosen as

the mean of the variances, while the second was somewhat lower to reduce the number of

hand movement (moving class) misclassifications.
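A sketch of this variance-threshold classifier and its per-class scoring (hypothetical code; thresholds and stationary/moving labels as described above):

```python
import numpy as np

def classify_and_score(spatial_var, labels, threshold):
    """Predict 'moving' (1) when the spatial variance exceeds the
    threshold, then report per-class accuracy as in Figure 2-12."""
    spatial_var = np.asarray(spatial_var)
    labels = np.asarray(labels)
    pred = (spatial_var > threshold).astype(int)
    stationary_acc = float(np.mean(pred[labels == 0] == 0))
    moving_acc = float(np.mean(pred[labels == 1] == 1))
    return pred, stationary_acc, moving_acc
```

Lowering the threshold trades stationary-class accuracy for fewer missed movements, which is the trade-off the two reported thresholds explore.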

(Panels: spatial variance for a neuron subset and for all neurons; one-dim. velocity;
spatial mean for all neurons and for a neuron subset; x-axis: time index)

Figure 2-11: Sample thresholded spatial statistics (one-second sliding window).

To test the classifier, we used 4,000 data samples, corresponding to 201 distinct

instances for statistics computed over four seconds, and 801 for statistics computed over

one second. In this data sample, 58 out of 201 instances corresponded to significant hand

movement for the four-second statistics, while 125 out of 801 instances corresponded

to significant hand movement for the one-second statistics. Figure 2-12 summarizes

the performance of this spatial-variance based classification scheme, while Figure 2-11

illustrates the effect of a threshold on the spatial statistics in Figure 2-10. Note from

both Figure 2-12 and Figure 2-11 that even this simple classification scheme is able to

achieve successful discrimination of hand movement from non-movement. Results for

neuron subsets selected from trained recurrent neural networks proved to be similar to

those in Figure 2-12.

                                     stationary hand    moving hand
                                     (% correct)        (% correct)
four-second statistics (γ = 0.500)   57.0% (81/142)     84.8% (49/58)
four-second statistics (γ = 0.557)   85.9% (122/142)    65.5% (38/58)
one-second statistics                57.4% (388/676)    90.4% (113/125)
one-second statistics (γ = 0.665)    92.9% (628/676)    62.4% (78/125)

Figure 2-12: Performance of threshold-based classifier (spatial variance, all neurons)

2.4 Discussion

The rudimentary classifier and analysis discussed in this chapter had two goals.

First, we sought to familiarize ourselves with the neural firing and trajectory data

to better understand the intricacies of the primate experiment. Second, we explored

whether or not even simple statistical analysis can yield useful insights into this problem.

To do this, we developed a threshold-based classifier of hand movement vs. non-

movement that relied exclusively on a single statistical indicator, namely, spatial

variance in the neural firing data.

However, the question remains as to which type of segmentation (velocity or

displacement) is likely to be more biologically plausible and, consequently, easier to

learn. While we will defer our thoughts on this question until Chapter 3, we do note

that keeping an arm stationary (1) at rest, or (2) in extension requires different muscle

actions. In the first case, muscles can be relaxed, while in the second case, at least some

muscles must be tensed.

Despite the simplicity of this approach, we nevertheless were able to achieve

surprisingly good results in classification performance (see Figure 2-12) of the two simple


primitives of motion and non-motion. Encouraged by these results, we proceeded to

develop an advanced classifier that relies on more sophisticated trainable statistical

models. We describe that work in the next chapter.


3.1 Motivation

We trained statistical models corresponding to the two classes of data discussed in

Chapter 2. Based on previous statistical work [18], we feel that these statistical models

should capture the temporal statistical properties of neural firings that characterize the

monkey's arm movement or lack thereof. One such statistical model, the Hidden Markov

Model (HMM), enforces only weak prior assumptions about the underlying statistical

properties of the data, and can encode relevant temporal properties. For these important

reasons, we choose to model the two classes of neural data (stationary vs. moving)

with HMMs. This choice follows a long line of research that has applied HMMs in the

analysis of stochastic signals, such as speech recognition [4, 5], modeling open-loop

human actions [13], and analyzing similarity between human control strategies [13].

3.2 Hidden Markov Model Overview

Although continuous and semi-continuous HMMs have been developed, discrete-

output HMMs are often preferred in practice because of their relative computational

simplicity and reduced sensitivity to initial parameter settings during training [16]. A

discrete Hidden Markov chain (Figure 3-1) consists of a set of n states, interconnected

through probabilistic transitions, and is completely defined by the triplet λ = {A, B, π}, where A is the probabilistic n x n state transition matrix, B is the L x n output probability matrix (with L discrete output symbols), and π is the n-length initial state probability distribution vector [14, 16]. For an observation sequence O, we locally maximize P(λ|O) (i.e., the probability of the model given the observation sequence O) with the Baum-Welch Expectation-Maximization (EM) algorithm.

The discrete HMMs discussed in this thesis are trained on finite-length sequences,

so that rare events with nonzero probability may be possible yet, at the same time,

may not be reflected in the data (i.e. may not have been observed). The probabilities
corresponding to such events will therefore converge to zero during HMM training.
Alternatively, a sample sequence may have erroneous readings due to sensor failure,
etc., and such sequences will evaluate to zero probability on HMMs previously trained
on less noisy data. In order to train discrete-output HMMs on continuous-valued data
effectively, we use discretization compensation, namely, semi-continuous evaluation.
In semi-continuous evaluation, the HMM is first trained on discrete data (vector
quantized -VQ from real-valued data). When new sequences of real-valued data need
to be evaluated, we assume that the VQ codebook previously generated represents
a mixture of Gaussians with some uniform variance σ that can be thought of as a
smoothing parameter. Below, we first discuss the overall structure of this VQ-HMM and
then detail the training of the classifier in section 3.3.2.
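For concreteness, the semi-continuous evaluation step can be sketched as follows; the toy codebook, test vector, and smoothing parameter σ are hypothetical. Each codevector is treated as a Gaussian center, and the resulting weights are normalized into a soft distribution over symbols:

```python
import math

def soft_symbol_probs(x, codebook, sigma):
    """Treat each VQ codevector as the center of a Gaussian with uniform
    variance sigma**2 and return a smoothed distribution over symbols."""
    weights = []
    for c in codebook:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical 2-D codebook of L = 3 prototype vectors.
codebook = [(0.0, 0.0), (1.0, 1.0), (3.0, 0.0)]
probs = soft_symbol_probs((0.9, 1.1), codebook, sigma=0.5)
```

A real-valued vector near a codevector thus contributes probability mass mostly, but not exclusively, to that symbol, avoiding the zero-probability problem described above.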

Figure 3-1: Discrete HMM chain

3.3 Vector Quantizing-HMM

3.3.1 Structure

In this section, we broadly describe our VQ-HMM-based classifier (Figure 3-2
illustrates the overall structure). The classifier works as follows:

1). At time index t, we convert a neural firing vector vt of length 104 (equal to the number of neural channels) into a discrete symbol Ot in preparation for discrete-output HMM evaluation. The method of signal-to-symbol conversion is discussed later in this section.

Figure 3-2: Stationary/moving classifiers
2). Next, we evaluate the conditional probabilities P(O|λs) and P(O|λm), where

O = {O_{t-N+1}, O_{t-N+2}, ..., O_{t-1}, O_t},   N ≥ 1, (3.1)


and λs and λm denote HMMs that correspond to the two possible states of the monkey's

arm (stationary vs. moving).

3). Finally, we decide that the monkey's arm is stationary if,

P(O|λs) > P(O|λm), (3.2)


and is moving if,

P(O|λm) > P(O|λs). (3.3)

In order to explicitly compute P(O|λ), we use the practical and efficient forward

algorithm [16]. For the forward algorithm, let us define a forward variable αt(i):



αt(i) = P(O1, ..., Ot, Xi | λ), (3.4)

which refers to the probability of the partial observation sequence {O1, ..., Ot} and being in state Xi at time t, given the model λ [16]. As explained by Rabiner [16] and others [14], the αt variables can be computed inductively with the use of the probabilistic transition and output matrices; in turn, we evaluate P(O|λ) with:

P(O|λ) = Σ_{i=1}^{n} αT(i). (3.5)
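The initialization, induction, and termination steps of the forward algorithm can be sketched directly from the definitions of A, B, and π; the 2-state, 3-symbol model below is hypothetical:

```python
def forward_prob(A, B, pi, O):
    """Compute P(O|lambda) with the forward algorithm.
    A[i][j]: transition prob from state i to state j; B[k][j]: prob of
    symbol k in state j (L x n, as in the text); pi[i]: initial prob."""
    n = len(pi)
    # initialization: alpha_1(i) = pi_i * b_i(O_1)
    alpha = [pi[i] * B[O[0]][i] for i in range(n)]
    for sym in O[1:]:
        # induction: alpha_{t+1}(j) = b_j(O_{t+1}) * sum_i alpha_t(i) * a_ij
        alpha = [B[sym][j] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    # termination: P(O|lambda) = sum_i alpha_T(i), as in (3.5)
    return sum(alpha)

# Hypothetical 2-state, 3-symbol HMM.
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.1], [0.4, 0.3], [0.1, 0.6]]   # rows: symbols, columns: states
pi = [0.6, 0.4]
p = forward_prob(A, B, pi, [0, 1, 2])
```

The recursion costs O(n²T) instead of the O(nᵀ) enumeration over all state paths, which is what makes run-time evaluation practical.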

Furthermore, the classification decision in Equations 3.2 and 3.3 is relatively

simplistic in that it does not optimize for overall classification performance, and does

not account for possible desirable performance metrics. For example, it may be very

important for an eventual modeling scheme to err on the side of predicting arm motion

(i.e. moving class). Therefore, we modify our previous classification decision to include

the following classification boundary:

P(O|λs) / P(O|λm) > γ, (3.6)

where γ now no longer has to be strictly equal to one.

Note that by varying the value of γ, we can essentially tune classification

performance to fit our particular requirements for such a classifier. Moreover,

optimization of the classifier is now no longer a function of the individual HMM

evaluation probabilities, but rather a function of overall classification performance.
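One simple way to tune γ is to sweep candidate values against labeled held-out data and keep the value with the best overall accuracy; the ratios, labels, and candidate grid below are hypothetical:

```python
def tune_gamma(ratios, labels, candidates):
    """ratios[i] = P(O_i|lam_s) / P(O_i|lam_m); labels[i] is 's' or 'm'.
    Return the candidate gamma giving the best overall accuracy."""
    best_gamma, best_acc = None, -1.0
    for g in candidates:
        # predict 's' (stationary) whenever the ratio exceeds gamma
        hits = sum((r > g) == (lab == 's') for r, lab in zip(ratios, labels))
        acc = hits / len(labels)
        if acc > best_acc:
            best_gamma, best_acc = g, acc
    return best_gamma, best_acc

# Hypothetical validation ratios and hand-segmented labels.
ratios = [1.8, 1.5, 0.4, 0.9, 2.1, 0.7]
labels = ['s', 's', 'm', 'm', 's', 'm']
gamma, acc = tune_gamma(ratios, labels, [0.5, 1.0, 1.5])
```

The accuracy criterion here could equally be replaced by an asymmetric cost that favors predicting the moving class, as discussed above.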

In the following subsection, we discuss signal-to-symbol conversion and HMM

training in somewhat greater detail.

3.3.2 Training

Our particular dataset contained 23000 discrete binned firing counts for the 104

neurons (each binned count corresponds to the number of firings per 100ms). As

discussed, we must first convert this multi-dimensional neural spike data to a sequence

of discrete symbols. This process involves vector quantizing the input-space vectors to

discrete symbols in order to use discrete-output Hidden Markov Models. We choose

the well-known LBG VQ algorithm [6], which iteratively generates vector codebooks of

size L = 2^m, and can be stopped at an appropriate level of discretization, as determined

by the amount of available data. By optimizing the vector codebook on the neural

spike data, we seek to minimize the amount of distortion introduced by the vector

quantization process. Figure 3-3 illustrates the LBG VQ algorithm on some synthetic,

two-dimensional data (gray area).
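A minimal sketch of the LBG procedure (repeated splitting of codevectors followed by Lloyd refinement, so that L = 2^m) on toy two-dimensional data; the split perturbation and iteration count are arbitrary choices:

```python
def lbg_codebook(data, L, n_iter=10, eps=0.01):
    """Grow a codebook by repeated splitting (L a power of two) followed
    by Lloyd refinement, reducing quantization distortion."""
    dim = len(data[0])
    centroid = [sum(x[d] for x in data) / len(data) for d in range(dim)]
    codebook = [centroid]
    while len(codebook) < L:
        # split every codevector into a +/- epsilon pair
        codebook = [[c * (1 + s * eps) for c in cv]
                    for cv in codebook for s in (1, -1)]
        for _ in range(n_iter):  # Lloyd iterations
            clusters = [[] for _ in codebook]
            for x in data:
                clusters[nearest(x, codebook)].append(x)
            for j, cl in enumerate(clusters):
                if cl:  # move each codevector to its cluster mean
                    codebook[j] = [sum(x[d] for x in cl) / len(cl)
                                   for d in range(dim)]
    return codebook

def nearest(x, codebook):
    """Index of the codevector closest to x (the discrete symbol)."""
    return min(range(len(codebook)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(x, codebook[j])))
```

After training, `nearest` performs the signal-to-symbol conversion used as HMM input.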

Figure 3-3: LBG VQ algorithm on 2D synthetic data

After vector quantizing the input, we use the generated discrete symbols as input to

a left-to-right (or Bakis) HMM chain; in this structure non-zero probability transitions

between states are only allowed from left to right, as depicted in the HMMs in Figure 3-2.

Given that we expect the monkey's arm movement to be dependent not only on current

neural firings, but also on a recent time history of firings, we train each of the HMM

models on observation sequences of length N. Since the neural spike data used in this

study is binned at 100 msec, N= 10, for example, would correspond to neural spike data

over the past one second (Equation 3.1). During run-time evaluation of P(O|λs) and

P(O|λm), we use the same value of N as was used during training.

In order to maximize the probability of the observation sequence O we must

estimate the model parameters (A, B, π) for both λm and λs. This is a difficult task;

first, there is no known way to analytically solve for the parameters that will maximize

the probability of the observation sequence [16]. Second, even with a finite amount

of observation sequences there is no optimal way to estimate these parameters [16].

In order to circumvent this issue, we use the iterative Baum-Welch method to choose

λ = {A, B, π} that will locally maximize P(O|λ) [14].

Specifically, for the Baum-Welch method we provide a current estimate of the HMM

λ = {A, B, π} and an observation sequence O = {O1, ..., OT} to produce a new estimate of the HMM given by λ' = {A', B', π'}, where the elements of the transition matrix A' are

a'_ij = Σ_{t=1}^{T-1} ξt(i,j) / Σ_{t=1}^{T-1} γt(i),   i, j ∈ {1, ..., n}. (3.7)

Similarly, the elements of the output probability matrix B' are

b'_j(k) = Σ_{t: Ot = k} γt(j) / Σ_{t=1}^{T} γt(j),   j ∈ {1, ..., n}, k ∈ {1, ..., L}, (3.8)

and finally the elements of the π vector are

π'_i = γ1(i),   i ∈ {1, ..., n}, (3.9)


(t t(i)aijbj (Ot+1)t+I(j(j)
(t(i,j) b= and (3.10)

7t(i) = (t i, (3.11)

Please note, β is the backward variable, which is similar to the forward variable α,

except that now we propagate the values back from the end of the observation sequence,

rather than forward from the beginning of O [14].
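A compact sketch of one Baum-Welch re-estimation pass, assuming the L x n output-matrix convention used above; the toy model and observation sequence are hypothetical. Each pass returns the likelihood under the incoming parameters, which EM guarantees to be non-decreasing across passes:

```python
def baum_welch_step(A, B, pi, O):
    """One Baum-Welch (EM) re-estimation pass for a discrete HMM.
    A[i][j]: transition probs; B[k][j]: prob of symbol k in state j;
    pi[i]: initial state probs. Returns updated (A, B, pi) and the
    likelihood P(O|lambda) under the *input* model."""
    n, T, L = len(pi), len(O), len(B)
    # forward pass
    alpha = [[pi[i] * B[O[0]][i] for i in range(n)]]
    for t in range(1, T):
        alpha.append([B[O[t]][j] * sum(alpha[t-1][i] * A[i][j] for i in range(n))
                      for j in range(n)])
    # backward pass
    beta = [[0.0] * n for _ in range(T)]
    beta[T-1] = [1.0] * n
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[O[t+1]][j] * beta[t+1][j] for j in range(n))
                   for i in range(n)]
    p_obs = sum(alpha[T-1])
    # state and transition posteriors, as in (3.10) and (3.11)
    gamma = [[alpha[t][i] * beta[t][i] / p_obs for i in range(n)] for t in range(T)]
    xi = [[[alpha[t][i] * A[i][j] * B[O[t+1]][j] * beta[t+1][j] / p_obs
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    # re-estimation, as in (3.7)-(3.9)
    newA = [[sum(xi[t][i][j] for t in range(T - 1)) /
             sum(gamma[t][i] for t in range(T - 1))
             for j in range(n)] for i in range(n)]
    newB = [[sum(gamma[t][j] for t in range(T) if O[t] == k) /
             sum(gamma[t][j] for t in range(T))
             for j in range(n)] for k in range(L)]
    return newA, newB, gamma[0][:], p_obs

# Hypothetical 2-state, 3-symbol model and a short training sequence.
A = [[0.6, 0.4], [0.3, 0.7]]
B = [[0.5, 0.2], [0.3, 0.3], [0.2, 0.5]]
pi = [0.5, 0.5]
O = [0, 0, 1, 2, 2, 2, 0, 1, 2, 2]
for _ in range(5):
    A, B, pi, p = baum_welch_step(A, B, pi, O)
```

A production implementation would also rescale α and β to avoid underflow on long sequences; that detail is omitted here for clarity.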

3.3.3 Results

Given our classifier structure in Figure 3-2 and our decision rule in Equation 3.6,

there are a number of design parameters that can be varied to optimize classification performance:

L = number of prototype vectors in VQ codebook;

N = length of observation sequences;

n = number of states for the HMM;

γ = classifier threshold boundary.

In Figures 3-4 and 3-5, we report experimental results for different combinations of the

four parameters and subsets of neural channels. These tables are a small representation

of the classification results produced from a large number of conducted experiments.

The L parameter (no. of prototype vectors) was varied from 8 to 256; the N parameter

(observation length) was varied from 5 to 10; and the n parameter (no. of states) was

varied from 2 to 8.

The two tables differ in how the data was split into training and test sets. In the

'leaving-k-out' approach (Figure 3-4), we took random segments of the complete data,

removed them from the training data, and reserved them for testing; care was taken that

no overlap occurred between the training and test data. In the second approach (Figure 3-5), we split the data sequentially into training and test data in equivalent fashion to

our group members at UF [2]. The advantage of the first testing approach is that we

can repeat the procedure an arbitrary number of times, leading to more test data, and

hence, more statistically significant results. Alternatively, the advantage of the second

testing approach is that it uses test data in a manner more likely to be encountered in

an eventual BMI system, where a period of training would be followed by a subsequent

period of testing.

At this point, we make some general observations about the results in Figure 3-4

and 3-5. First, the displacement-based segmentation results are substantially better than

the velocity-based segmentation results. Second, we note that the results in Figure 3-4

are marginally better than those in Figure 3-5. We -ii--.- -1 that one reason for this is

that the neural encoding of the small population of 104 neurons is non-stationary to

some extent. If the data is non-stationary, we should expect the first testing approach

to produce better results, since test data in the 'leaving-k-out' approach is taken from

within the complete data set, while the second testing approach takes the test data

from the tail end of the complete data set. Overall, we note that the sequential testing

with displacement-based data is probably the best. We also note that since a subset

of neural channels at the input yielded the best performance, some of the measured

neural channels may offer little information about the monkey's arm movement, which motivates the work presented in the next section.

Subset number stationary moving L N n
I 81.5% 83.8% 16 10 6
4 84.0% 75.1% 64 10 4
5 81.0% 82.0% 16 10 3
6 80.4% 81.2% 32 10 6
7 83.2% 82.9% 32 10 3

Subset number stationary moving L N n
1 87.0% 89.2% 16 7 4
4 84.4% 86.6% 64 10 3
5 86.8% 90.3% 32 10 4
6 83.7% 87.8% 32 7 4
7 '.3% 90.0% 32 10 3

Figure 3-4: "Leaving-k-out" testing

Subset number stationary moving L N n
1 82.1% 81.6% 8 7 7
4 81.1% 74.3% 128 10 7
5 1 1% 75.7% 16 10 3
6 80.9% 75.2% 128 10 7
7 81.7% 83.3% 128 10 6

Subset number stationary moving L N n
1 82.1% 85.6% 8 7 7
4 83.5% 87.5% 128 10 6
5 84.0% 87.5% 256 10 6
6 75.6% 81.3% 256 10 4
7 87.3% 86.1% 256 10 5

Figure 3-5: Sequential testing

3.4 Factorial Hidden Markov Model

3.4.1 Motivation

As mentioned in section 3.3.3, our previous classifier required the conversion of the

multi-dimensional neural data into a discrete symbol for the discrete-output HMMs

[7]. We used the LBG-VQ algorithm, since it has the ability to generate this discrete

symbol with a relatively minimal amount of distortion. Unfortunately, this 'minimal'

distortion was later revealed to hamper classification performance when used with the

neural data [7]. We note that since the 104-channel data does not form tight clusters

in the 104-dimensional input space, the VQ signal-to-symbol conversion introduces a

substantial loss of information and consequently degrades classification performance

[6]. Combining this result with the error that occurs from the linear models (within the

bimodal mapping framework), we are only able to produce neural-to-displacement mapping

results marginally better than our group's previous work [2].

As discussed, we attempted to circumvent these VQHMM limitations by exploring

different neural subsets to see if we could eliminate noisy unimportant neurons and

retain useful ones. To differentiate important neurons from unimportant neurons, we

examined how well an individual neuron can classify movement vs. non-movement when

trained and tested on an individual HMM chain. We are able to directly train a single

HMM chain since the neural data is already in discrete form, ranging in value from zero

to twenty (firings per 100ms bin).

During the evaluation of these particular HMMs, we compute the conditional probabilities P(O(k)|λs(k)) and P(O(k)|λm(k)) for each neural channel k with its respective

observation sequence of binned firing counts. To give a qualitative understanding of

these weak classifiers, we present in Figure 3-6 the probabilistic ratios from the top

14 single-channel HMM classifiers (shown between the top and bottom movement

segmentations). Specifically, we present the probabilistic ratio

P(O(k)|λm(k)) / P(O(k)|λs(k)) (3.12)

for each neural channel in a grayscale gradient format; darker bands represent ratios

larger than one (indicating a stronger probability toward movement) and lighter bands

for ratios smaller than one (indicating stronger probability toward non-movement).

The probabilities roughly equal to one another show up as grey bands. We glean from

this figure that the group of single-channel HMMs can roughly predict movement and

non-movement from the data.

In order to observe the relevance of these single HMM chains further, we compute

the average of the probabilistic ratios

(1/K) Σ_{k=1}^{K} P(O(k)|λm(k)) / P(O(k)|λs(k)) (3.13)

for a given observation sequence. Figure 3-7 presents the average of the ratios (light grey),

as well as the variance (dark grey) of the single-channel HMM output probabilities.

Figure 3-6: Single neural channels

Figure 3-7: Average ratios

We also superimpose our segmentations of movement and non-movement with a dotted

line in order to demonstrate that the larger of the two probabilities will cause the

output ratio to dip below or rise above the threshold boundary of 'one'. Specifically, the

averaged ratios appear to be significantly larger than the threshold boundary during

movement and less than the boundary during non-movement. The ratios that appear

near the threshold boundary of 'one' indicate that the probabilities of movement and

non-movement are equivalent.

The above analysis led us to believe that by combining these weak single-channel

predictions, we could generate an improved classification, and subsequently, an improved


final 3D mapping. In the next section, we investigate a framework that incorporates the probabilistic output from these weak classifiers (see Table 3-1) into a strong overall classifier.

Table 3-1: Classification performance of select neurons

Neuron # Stationary Moving
23 83.4 75.0
62 80.0 75.3
8 72.0 64.7
29 63.9 82.0
72 62.6 82.6

3.4.2 Structure

In section 3.4.1, we illustrated the result of finding the average ratio of the output

probabilities from individual movement/non-movement HMM chains using corresponding

neural channels as input. This simple measure motivated our first attempt to combine

the probabilities into a simple classifier. We used the average ratio and applied a

decision rule with a threshold in order to determine whether movement or non-movement is occurring:

(1/K) Σ_{k=1}^{K} P(O(k)|λm(k)) / P(O(k)|λs(k)) > γ. (3.14)

Although simplistic, we demonstrate in section 3.4.4 that this approach is

remarkably better than the VQ-HMM model. Unfortunately, the ratio itself is

susceptible to infinitesimal probabilities, which in turn can cause extremely large

output values. Consequently, these large ratios will bias the overall model and increase

erroneous classifications. To minimize this bias, we apply the log to Equation 3.14:

(1/K) Σ_{k=1}^{K} log( P(O(k)|λm(k)) / P(O(k)|λs(k)) ). (3.15)

Figure 3-8: Before and after log

Figure 3-9: General FHMM

In turn, this approximates:
Σ_{k=1}^{K} log( P(O(k)|λm(k)) / P(O(k)|λs(k)) ). (3.16)
We see that by applying the log to the ratios (Eq. 3.15), we are essentially finding the relative scaling between each chain and avoiding the effects of any infinitesimal output probabilities (Eq. 3.16). Similarly, summing the log ratios amounts to finding the log-likelihood and subsequently takes the form of a particular HMM variation known as the
Factorial Hidden Markov Model (FHMM) [8].
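A small sketch of why the log-domain sum is preferable: per-chain probabilities of long sequences can underflow to zero in the linear domain, while their log-likelihood differences remain well behaved. The per-chain log-likelihood values below are hypothetical:

```python
import math

def fhmm_log_score(loglik_m, loglik_s):
    """Sum of per-chain log ratios log P(O^(k)|lam_m) - log P(O^(k)|lam_s);
    a positive score favors the movement class (threshold log(gamma) = 0)."""
    return sum(m - s for m, s in zip(loglik_m, loglik_s))

def classify(loglik_m, loglik_s, log_gamma=0.0):
    return 'moving' if fhmm_log_score(loglik_m, loglik_s) > log_gamma else 'stationary'

# Hypothetical per-chain log-likelihoods for K = 3 chains; the raw
# probabilities exp(-800) would underflow to 0.0 in the linear domain,
# breaking a direct mean-of-ratios computation.
ll_m = [-800.0, -780.0, -810.0]
ll_s = [-805.0, -790.0, -808.0]
label = classify(ll_m, ll_s)
```

Because the K chains are uncoupled, summing per-chain log-likelihood differences is exactly the joint log-ratio of the factorial model.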
The graphical model for a general FHMM is shown in Figure 3-9. The system is
composed of a set of K chains indexed by k. The state node for the kth chain at time

Figure 3-10: Comparison of our FHMM to general model

t is represented by Xt(k) and the transition matrix for the kth chain is represented by

A(k). The overall transition probability for the system is obtained by taking the product

across the intra-chain transition probabilities:

P(Xt | Xt-1) = Π_{k=1}^{K} A(k)(Xt(k) | Xt-1(k)). (3.17)

Our departure from this general model occurs at the output vector node (Figure 3-10). Instead of each chain being stochastically coupled at the output node (represented by a vector), our HMM multi-chain structure independently uses a single element from the output vector node for each chain, leaving the chains fully uncoupled.

This FHMM variant is used in other research involving speech processing [9] and has an

association with another structure called parallel model composition [9].

Specifically during evaluation, our model uses the neural binned spike data from the

kth channel in order to evaluate the kth conditional probabilities

P(O(k)|λs(k)) and P(O(k)|λm(k)), (3.18)

where

O(k) = {O(k)_{t-N+1}, O(k)_{t-N+2}, ..., O(k)_{t-1}, O(k)_t},   N ≥ 1, (3.19)

and λs(k) and λm(k) denote HMM parameters that represent the two possible states of the

monkey's arm (moving vs non-moving) for a particular HMM chain k. Since our FHMM


reduces to a set of uncoupled HMM chains, we evaluate the individual chains with the

same procedure described in section 3.3.1. Before evaluation though, each HMM chain is

previously trained on the neural spike data (using the Baum-Welch algorithm) which we

describe in the next section.

3.4.3 Training

Updating the parameters for a FHMM is an iterative, two-phase procedure and is

only slightly different from the training of a single HMM chain as described in section 3.3.2.
In the first phase, we use the Baum-Welch method to calculate expectations for the

hidden states. This is done independently for each of the K chains, making reference to the current values of the parameters λ(k).

In the second phase, the parameters λ(k) are updated based on the expectations

computed in the first phase [16]. The procedure then returns to the first phase and

iterates. We note that the input into each left-to-right (or Bakis) HMM chain k is the

spiking bin counts of a corresponding neuron k.

Coincidentally, our simple variation of the FHMM naturally appears as a

substructure approximation (Figure 3-10) to the computationally difficult joint

probability distribution,
P(X, O) = Π_{k=1}^{K} [ π(k)(X1(k)) Π_{t=2}^{T} A(k)(Xt(k) | Xt-1(k)) Π_{t=1}^{T} B(k)(Ot(k) | Xt(k)) ] (3.20)

[8, 14, 16]. Consequently, our FHMM variant has an advantage in the training procedure

since it simplifies to the training procedure for a single HMM chain described in section 3.3.2

(just repeated K times). Additionally, with the reduction in computation, our model

can be trained with pre-existing software and even has the ability to be distributed

over a parallel computing architecture (as opposed to a general FHMM). In Chapter

6, we detail our particular distributed implementation of this model using such an architecture.


Figure 3-11: Biasing plot for naive classifier

3.4.4 Results

After observing some qualitative properties of the ratios, we now seek to quantify

our initial approach (Equation 3.14) as well as our FHMM classifier. Shown in Figure 3-11, we present a biasing plot of our initial classifier with the decision criterion

(1/K) Σ_{k=1}^{K} P(O(k)|λm(k)) / P(O(k)|λs(k)) > γ. (3.21)

From this graph, we notice that as the threshold boundary is manipulated,

classification performance for the movement/stationary classes shifts. We can clearly see

that the joint maximum (or equilibrium point) occurs near the area where the threshold

is 1.04 (a value of one represents the 1:1 relationship of the ratio of probabilities). We

also observe that this simple classifier has a significant improvement in classification as

compared to our previous VQ-HMM classifier since the equilibrium point shows that

movement and non-movement classifications occur around I 'I. (as opposed to S7'.

in our previous work). Note that without the threshold, the results do not show any

significant improvement (beyond being better than random).

As explained earlier, the criterion in Equation 3.14 is prone to extreme bias if an

individual classifier produces infinitesimal probabilities. In an attempt to avoid biased

probabilistic ratios, we also evaluated the ratios-of-means criterion

Figure 3-12: Biasing plot for modified naive classifier

Σ_{k=1}^{K} P(O(k)|λm(k)) / Σ_{k=1}^{K} P(O(k)|λs(k)) > γ. (3.22)

Unfortunately, we see in Figure 3-12 that despite the ability to avoid infinitesimal

probabilities, the ratio-of-means classifier produced inferior results compared to the

initial mean-of-ratios classifier. Considering our discussion earlier, we know that since some neurons are more tuned to movement/non-movement than others, by averaging

the probabilities we degrade individual neural contributions and simply provide a diluted

consensus as to which motion primitive is occurring. Therefore, finding the relative scaling

between the outputs of single HMM chains allows us to incorporate amplified predictions

into a better overall classification.

Our FHMM model has the ability to find the relative scaling between the outputs

and can circumvent infinitesimal probabilities. In Figure 3-13 we see that the FHMM is

able to achieve much better performance than the ratio-of-means classifier and slightly

better performance than the mean-of-ratios classifier. Consequently, we must now

observe which model, after training, can calculate a threshold boundary that will allow

maximal classifications during the testing of new data. Without the calculation of

this threshold boundary, none of the multi-chain models can perform better than the

VQHMM model.


Figure 3-13: FHMM evaluation



Figure 3-15: Training data on FHMM

Since the mean-of-ratios classifier retains problems with infinitesimal output values,

we only compare the ratio-of-means classifier and the FHMM. We see in Figures 3-14 and 3-15 that both methods demonstrate that a similar threshold point is achievable with the

training set. Unfortunately, we also notice a distinct classification difference. Specifically,

we observe that the ratio-of-means classifier in Figure 3-14 performs less effectively

than the FHMM, which confirms our suggestion that the overall averaging effect

dilutes the strong neural classifiers into a weak classification (since training data should

produce higher classifications). Alternatively, Figure 3-15 shows FHMM results that are

consistent with a model trained and tested with the same data (high classifications).


3.5 Discussion

We make several observations about our results. First, there appears to be a

significant statistical difference in the neural spike data for the two classes of arm

motion. Second, increased temporal structure leads to better classification performance;

that is, arm motion is correlated not only with the most recent neural firings, but a

short-time history of neural firings. We can also hypothesize a number of sources of

residual classification error.

First, the amount of data analyzed in these experiments is relatively limited, given the size of the statistical models employed. Note, for example, that the classification

results for the moving-arm class are worse than those for the stationary class; this may

be a reflection of the relative amount of data available for training and testing in each class.


Second, in the signal-to-symbol conversion of the multi-channel data (for VQHMM),

we lose a substantial amount of information. Even for 256 prototype vectors, the

consequent distortion (uncertainty) in the symbol data is substantial. We see that by

using the FHMM to break down the problem space into individual HMM chains, we

avoid the introduced distortion from VQ and can classify the monkey's arm state more accurately.


Overall, we see that the FHMM is a better switching model than our previous

attempts. We demonstrated that it could avoid infinitesimal probabilities and yet find

a relative scaling between the movement and non-movement probabilities. Additionally,

the FHMM is able to represent a large effective state space with a much smaller number

of parameters than a single unstructured HMM. Consequently, this model is easily

distributed across a parallel computing architecture and can be used with pre-existing

training/evaluation software. Finally, we remark that since the FHMM is a probabilistic

framework, we can find other unique connections between the nodes in this graphical

model to better exploit the spatial and temporal dependencies of the neuronal firings.

In the next Chapter, we describe how we use our switching classifier in combination

with multiple continuous local linear models to predict the 3D coordinates of the

monkey's hand.


4.1 Structure

The final step in our bimodal mapping structure is to take the outputs from

the HMM-based classifier and generate an overall mapping of neural data to 3D arm

position. To establish a baseline for this approach (namely, the prior segmenting of neural data at the input into multiple classes), we assign a single local-linear model

(LLM) to each class, and train each of the LLMs only on data that corresponds to its

respective class, as shown in Figure 4-1. Each LLM adapts its weights using normalized

least mean square (NLMS) [15]. After training, test inputs are fed first to the HMM-

based classifier, which acts as a switching function for the LLMs. Based on the relative

observation probabilities produced by the two HMMs and the decision boundary, as given in Equation 3.6, one of the two LLMs is selected to generate the continuous 3D arm

position. With properly trained HMMs, the bimodal system should be able to estimate

hand positions with reasonably small errors.
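The per-sample NLMS update that each LLM applies can be sketched as follows; the two-tap system being identified is hypothetical (the actual LLMs carry 3,120 weights over 104 channels and 10 lags), and the learning rate default matches the 0.03 used in the experiments:

```python
def nlms_step(w, x, d, mu=0.03, eps=1e-8):
    """One normalized-LMS update: w <- w + mu * e * x / (||x||^2 + eps),
    where e = d - w.x is the error against the desired output d."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    norm = sum(xi * xi for xi in x) + eps  # normalization by input power
    return [wi + mu * e * xi / norm for wi, xi in zip(w, x)], e

# Identify a hypothetical 2-tap linear system w* = (1.5, -0.5).
w_true = (1.5, -0.5)
w = [0.0, 0.0]
for x in [(0.3, 1.0), (1.0, 0.2), (0.5, 0.5), (1.0, 1.0)] * 500:
    d = sum(wt * xi for wt, xi in zip(w_true, x))  # noise-free desired output
    w, e = nlms_step(w, x, d)
```

Normalizing by the input power makes the step size insensitive to the scale of the neural firing counts, which is the reason NLMS is preferred over plain LMS here.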

Figure 4-1: Bimodal mapping overview

Given 104 neural channels, each LLM is defined with 10 time delays (1 sec), and

3 outputs so that its weight vector has 3,120 elements (Figure 4-2). The LLMs were

trained on a set of 10,000 consecutive bins (1,000 sec.) of data with a NLMS learning


rate of 0.03. Weights for each LLM were adapted for 100 cycles. After training, all

model parameters were fixed and 2,988 consecutive bins of test neural data were fed to

the model to predict hand positions. The results of the experiments were then evaluated

in terms of short-time correlation coefficients and the short-time signal-to-error ratio

(SER) between actual and estimated arm position. For each measure, the short-time

window was set to 40 bins (4 sec) since a typical hand movement lasts approximately

four seconds.

Figure 4-2: Local linear model

Of course, a correlation coefficient value of 1 indicates a perfect linear relationship

between the desired (actual) and predicted (system) trajectories, while 0 indicates no

linear relationship. The second measure, the SER, is defined as the power of the desired

signal divided by the power of the estimation error. Since a high correlation coefficient

cannot account for biases in the two trajectories, the SER complements the correlation

coefficient to give a more meaningful measure of prediction performance. Finally, all of

our bimodal mapping results are compared with two other approaches, namely, a recurrent

neural network (RNN) [11] and a single LLM.
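The two performance measures can be sketched as follows; the toy trajectories are hypothetical and chosen so that a constant bias leaves the correlation coefficient at 1 while the SER stays finite, illustrating why the SER complements the correlation coefficient:

```python
import math

def corr_coef(desired, predicted):
    """Pearson correlation coefficient between two trajectories."""
    n = len(desired)
    md = sum(desired) / n
    mp = sum(predicted) / n
    cov = sum((d - md) * (p - mp) for d, p in zip(desired, predicted))
    vd = sum((d - md) ** 2 for d in desired)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov / math.sqrt(vd * vp)

def ser_db(desired, predicted):
    """Signal-to-error ratio: power of the desired signal over the power
    of the estimation error, expressed in dB."""
    p_sig = sum(d * d for d in desired) / len(desired)
    p_err = sum((d - p) ** 2 for d, p in zip(desired, predicted)) / len(desired)
    return 10.0 * math.log10(p_sig / p_err)

desired = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
biased = [d + 0.5 for d in desired]   # perfectly correlated but offset
cc = corr_coef(desired, biased)       # correlation misses the constant bias
ser = ser_db(desired, biased)         # SER exposes it as a finite value
```

In the experiments these measures are computed over a sliding 40-bin (4 sec) window rather than the whole trajectory at once.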

4.2 Results

In this section, we report results for neural-to-motor mappings of a single LLM,

a recurrent neural network (RNN) and the bimodal system. Since the segmentation

results are better for the displacement-based segmentation, we use these HMMs in the

first stage of the bimodal system. In Figure 4-3, we plot the predicted hand trajectories

of each modeling approach, superimposed over the desired (actual) arm trajectories

for the test data; for simplicity, we only plot the trajectory along the z-coordinate.

Qualitatively, we observe that the bimodal system performs better than the others in

terms of reaching targets; this is especially evident for the first, second, and the seventh

peaks in the trajectory. We also observe that during stationary periods the bimodal

system produces less noisy outputs than the other models.

Figure 4-3: Predicted and actual hand trajectories. A) Single LLM, B) RNN, C) Bimodal

Overall, prediction performance of the bimodal system is better than the RNN,

and superior to the single LLM, as evidenced by the empirical cumulative error measure

(CEM) plotted in Figure 4-4. Figure 4-4 shows that the population distribution

functions of L2-norms of error vectors of the bimodal system and the RNN are similar,

and significantly better than the single LLM.

The correlation coefficients over the whole test set, averaged over all three

dimensions, are 0.64, 0.75, and 0.80 for the single LLM, the RNN, and the bimodal

system respectively. The mean of the SER averaged over all dimensions for the single

LLM, the RNN, and the bimodal system are -20.3dB +/- 1.6 (SD), -12.4dB +/- 16.5, and

-15dB +/-18.8, respectively. Although these measurements give us insight into the

overall performance of the models, they fail to express the difference in accuracy between

each model when predicting movement. The accuracy in movement is more important

than in non-movement since we can remove non-movement errors by using the output of


Figure 4-4: CEM plots

the HMM as a filter. This led us to compute the SER and CCs over movement sections

of the test set (partitioned by hand). The CCs for only the movement sections were

0.83+/-0.07, 0.84+/-0.14 and 0.86 +/- 0.11 for the LLM, RNN and bimodal system,

respectively. The SER over the movement sections are 2.96+/-2.68, 6.36+/-3.71 and

8.48+/-4.47 for the LLM, RNN and bimodal system, respectively.

4.3 Conclusion

We see the estimation performance of the hand trajectory of the proposed bimodal

system is better than the RNN, and superior to a single LLM. It is also apparent from

the results that using multiple models improves the estimation performance compared

to the single filter, although it adds more computational complexity. Compared to

the RNN, the bimodal system reduces the complexity of training significantly and


Figure 4-5: SER plot. (a) desired trajectory; (b) short-term SER over time

produces a more accurate estimation. The drawback with the bimodal system is that its

estimation mainly depends on the classification ability of the HMM (as seen in Figure

4-1). Chapter 6 focuses on ways to improve HMM classifications to remove these false predictions.



5.1 Introduction

As described in chapter 2, the feasibility of building brain machine interfaces (BMI)

has been demonstrated with the use of digital computational hardware [24, 25]. For these

interfaces, researchers first acquire analog neural recordings and process them through

spike detection hardware and software. Once the neural signals are processed, large

rack-mountable high-end processors are used in conjunction with Matlab-enabled PCs to

predict a subject's arm trajectory in real-time [10, 25].

The ultimate BMI goal envisions that free-roaming subjects will possess the

prediction algorithms and hardware in vivo as they physically interact in the world.

Unfortunately, at the current stage of development, most researchers require the subjects

to be tethered to a cluster of immobile machines. Other researchers have removed

this tether by wirelessly transmitting the analog neural signals off the subject but

still require a local cluster of PCs for predicting the output [20, 23]. Although this wireless approach is more in line with the ultimate BMI goal, research emphasis has been

placed on shrinking the wireless acquisition hardware which still requires large immobile

machines to predict trajectories [25]. Additionally, the transmission of neural waveforms

has bandwidth limitations. A bandwidth bottleneck occurs as more neurons are sampled

from the brain making the transmission of these signals to the digital hardware arduous

and power consuming (which is contrary to the necessities of such a device) [20].

We believe that by shrinking digital hardware that computes the prediction

algorithms (on the subject), the wireless limitations and immobility issues are both

solvable. Specifically, the proposed solution is to directly connect the analog and digital

subsystems with a high-speed data bus that is more power efficient and faster than

the fastest available wireless link. Furthermore, using on-board digital hardware to predict the trajectory removes the need for large external computers and approaches the ultimate BMI goal of patient mobility.

Figure 5-1: Hardware Overview (neural spike acquisition system, spike timing to hand trajectory mapping, robot arm control)

In this chapter, we present a design for a wearable computational DSP system that

is capable of processing various neural-to-motor translation algorithms. The system first

acquires the neural data through a high speed data bus in order to train and evaluate

our prediction models. Then via a widely used protocol, the low-bandwidth output

trajectory is wirelessly transmitted to a simulated robot arm. This system has been built

and successfully tested with real data.

The organization of the chapter is as follows. We first cover the system requirements and then outline the system design that meets these requirements in terms of the hardware modules and the software layers. Finally, we present the results of the system, followed by a conclusion.

5.2 System Requirements

In order to create a successful system, it is necessary to first address the technical

and practical aspects of such a system by determining its functionality within its

intended environment. For example, how will the hardware extract information from

the brain? How will it be powered? What type of processing power is necessary for a

successful algorithm to be implemented? What are the physical constraints? How can

faulty hardware be deb-ir_-_, .1 and fixed? How will the neural predictions be expressed in

the external world?


In addressing these broad requirements, we look to the ultimate BMI goal of having

a transplantable chip under the skin that can acquire neural firings from the brain and

translate them into action in the external world. Unfortunately, as with all long-term technological goals, we must evolve the hardware in order to verify the designs as we shrink and combine them into a final solution. At each stage of development, some of

the requirements must be relaxed or tightened depending on which portion of the design

we are trying to verify.

Specifically for our current stage of development, the hardware must first receive

digitized neural data during the training and evaluation of the neural-to-motor

prediction models. After evaluating the output, the predicted trajectory must then

be transmitted off-board through a wireless connection to a receiving computer/robot

arm representing the desired control. This wireless connection must also provide the

ability to remotely program and diagnose the system when being carried by a subject.

The described (WIFI-DSP) system serves as the digital portion of the overall BMI

structure and is responsible for translating the neural firings into action in the external

world. The first generation of this hardware was housed in a PCI slot of a personal computer and did not possess wireless capabilities [26]. For the second generation

(discussed in this chapter) we require a design that is portable, possesses wireless

capabilities and is computationally fast.

Since the system needs to be portable and contain a wireless connection, we must

resolve how the other system requirements are affected. First, a portable system must

be light-weight and small enough for a human or primate to carry. Second, a portable

system must also be self-contained and rely only on battery power. Consequently, the

hardware needs to be low-power in order to extend the life of this on-board battery. The

low power constraint will then influence the choice of the processor since we need a low

power device that can still achieve fast processing speeds. This power constraint also affects the wireless connection, since it needs to be low power yet retain enough bandwidth to transmit the output trajectory and any future data streams.

Figure 5-2: Hardware Modules of WIFI DSP

The prediction models running on the hardware platform also constrain the

system design. Since most of the prediction models require the use of floating point

numbers and arithmetic, we need a system that can process these floating point numbers

fast enough to attain real-time model computations. Additionally, these models are

sometimes large or contain multiple versions running simultaneously and therefore

require large memory banks to handle the data throughput.

5.3 System Design

In this section, we explain what components were chosen for the system, why they

were chosen and how they fulfill the requirements outlined in the previous section.

5.3.1 Processor

The central component to any computational system is the processor. The processor

can sometimes determine the speed, computational ability and power consumption

of the entire system. This central component also dictates what support devices are

required for the design. In choosing a processor for our particular system, we looked to

the previous work of our colleagues Scott Morrison and Jeremy Parks. They were able to verify that the Texas Instruments TMS320VC33 (C33) was an appropriate processor for our needs [26, 27]. It was also advantageous to use this processor since they had already developed a code library and hardware infrastructure, sparing us that burden.

* High-Performance Floating-Point Digital Signal Processor (DSP):
  13-ns Instruction Cycle Time
  150 Million Floating-Point Operations Per Second (MFLOPS)
  75 Million Instructions Per Second (MIPS)
* 34K x 32-Bit (1.1-Mbit) On-Chip Words of Dual-Access Static Random-Access Memory (SRAM)
* x5 Phase-Locked Loop (PLL) Clock Generator
* Very Low Power: Less Than 200 mW at 150 MFLOPS
* 32-Bit High-Performance CPU
* 16-/32-Bit Integer and 32-/40-Bit Floating-Point Operations
* Four Internally Decoded Page Strobes to Simplify Interface to I/O and Memory
* Boot-Program Loader
* EDGEMODE Selectable External Interrupts
* 32-Bit Instruction Word, 24-Bit Addresses
* Eight Extended-Precision Registers
* On-Chip Memory-Mapped Peripherals: One Serial Port, Two 32-Bit Timers, Direct Memory Access (DMA) Coprocessor for Concurrent I/O and CPU Operation
* Two Address Generators With Eight Auxiliary Registers and Two Auxiliary Register Arithmetic Units (ARAUs)
* Two Low-Power Modes
* Two- and Three-Operand Instructions
* Parallel Arithmetic/Logic Unit (ALU) and Multiplier Execution in a Single Cycle
* Block-Repeat Capability
* Zero-Overhead Loops With Single-Cycle Branches
* Conditional Calls and Returns
* Interlocked Instructions for Multiprocessing Support
* Bus-Control Registers Configure Strobe-Control Wait-State Generation
* 1.8-V (Core) and 3.3-V (I/O) Supply Voltages
* On-Chip Scan-Based Emulation Logic, IEEE Std 1149.1 (JTAG)
* 144-Pin Low-Profile Quad Flatpack (LQFP) (PGE Suffix)

Figure 5-3: DSP Features

Specifically, the C33 meets our floating-point and high-speed requirements since it is a floating-point DSP capable of up to 200 MFLOPS (with overclocking). It achieves such high speeds by taking advantage of its dedicated floating-point/integer multiplier. Since the multiplier works in tandem with the ALU, the C33 can compute two mathematical operations in a single cycle. Another reason this processor is so efficient is that it can perform two reads, one multiply, and one store in a single cycle by utilizing its dual address generators for simultaneous RAM access [26]. With regard to processing our BMI algorithms, Scott Morrison was able to verify that two of our group's cornerstone algorithms ran much faster on the C33 than on a 600 MHz Pentium III computer [26, 27].
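The algorithms that benefit most from this datapath are multiply-accumulate loops, such as the inner products inside the LMS/Wiener filters run later on this board. A reference C form of that inner loop is sketched below (the function name is illustrative; on the C33 the two operand reads, the multiply, and the accumulate of each iteration overlap in hardware, which plain C cannot show):

```c
/* Inner loop of an FIR/Wiener filter or LMS forward pass: a dot product.
 * Each iteration needs two operand reads, one multiply, and one
 * accumulate -- exactly the pattern the C33's dual address generators
 * and parallel ALU/multiplier execute in a single cycle. */
double dot(const double *w, const double *x, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += w[i] * x[i];   /* multiply-accumulate */
    return acc;
}
```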

The C33 also meets our low-power constraint since it uses less than 200mW at 200MFLOPS. It achieves such power savings due in part to its 1.8V core and other power-saving measures built into the processor [22]. Although the processor uses such a low voltage for its core processing, the I/O supports 3.3V signals. Unfortunately, the DSP requires external translation logic to interface with all 5V devices (like the PCMCIA card) in order to meet the 3.3V I/O specs. We used two components for translating 5V to 3.3V. One was the Texas Instruments SN74CBTD3384DBQR 10-bit level shifter, since it is fast (0.25ns) and available in small 24-pin SSOP packaging. The other was an Altera EPM3064 CPLD that was also used for the interconnection logic of the different hardware modules, which we discuss later in this section.

Finally, the C33 fulfills other requirements of the system. The first is its ability to support a large amount of memory, accomplished with its 24-bit address bus allowing 16 million different memory locations. It also allows quick access to these locations by using hardware strobe lines PAGE0-3 to directly provide control logic access to different memory blocks. This processor also meets the requirement of expandability, as discussed in the last section, since it has four hardware interrupts, two 32-bit timers, and a DMA controller.

5.3.2 Wireless Connection

The second most important hardware module in our system is the wireless

connection. We determined that 802.11B would be the most appropriate protocol since

it is easy for our group and our collaborators to interface with. The protocol not only

provides a large amount of bandwidth, it has also inspired a large code infrastructure for communication clients and servers.

Figure 5-4: Memory map of the C33 DSP showing internal SRAM blocks: the C33 bootloader code region plus main and smaller internal RAM blocks for fast, parallel access

Additionally, by using such a widely accepted

protocol, instead of developing a new one, we are able to communicate to any off-the-

shelf wireless device that supports 802.11b. Essentially the system can connect to any

computer or hardware device on the internet.

We designed the system to use a Netgear MA401 PCMCIA wireless 802.11b card. This PCMCIA card was the smallest and fastest card available during our design process. At the core of this card is an Intersil Prism 2 chipset, which is responsible for handling most of the physical layer and MAC addressing of the 802.11b protocol. The control of this chipset involves different sequences of register calls. Subsequently, these register calls help to configure, initialize, and transmit data to/from the card.

The PCMCIA card met the power requirements for this development stage since we have the ability to vary the power consumption from less than 100mA to 300mA depending on the bandwidth we require. It also meets our size requirements since it is slightly larger than a credit card, as shown in Figure 5-8. This device also meets the final requirement of high bandwidth since it is capable of transferring data at 10 Mbit/s. This high bandwidth is appropriate since it may become necessary to retrieve neural data wirelessly if a sub-dermal analog acquisition system is designed and requires data transmission through the skin.

5.3.3 Boot Control

In order for the system to be self-reliant, the DSP needs to boot from an EEPROM containing the appropriate system software. This booting process is accomplished through hardware interrupts on the DSP. Given a certain interrupt when the system comes out of reset or powers up, the DSP will assert certain memory locations to begin reading from. Using the CPLD to provide the control logic, along with a jumper system, we can cause the DSP to boot from either the EEPROM or the USB interface. The only time we would boot from USB is when the system is being debugged or tested during manufacturing.

The EEPROM device we chose had been selected previously by Scott and Jeremy for their design and proved successful with this DSP. The device is an Atmel AT29C256 in a small PLCC package that meets our size requirements. It also meets our power requirements since it only draws power during use (which is only at bootup). Another feature of the device that serves our needs is its 256K bits of memory, enough to hold fairly large software packages.

Figure 5-5: Two different methods to boot the C33: EEPROM or USB. In PC development mode, /INT1 is pulled low at RESET and boot code is loaded from a PC program into the DPRAM before RESET; in standalone deployment mode, /INT2 is pulled low at RESET and boot code is loaded from the EEPROM (jumper selectable).

5.3.4 USB

We chose the FTDI FT245BM USB FIFO device for a USB 2.0 interface. This device is small and simple to control. Additionally, it meets our low-power constraint since it only uses 25mA in continuous mode and 100uA in suspend mode. Consequently, this chip provides a range of options for trading data throughput against power consumption. This chip also provides an 8 Mbit/s data bus for the dual purposes of data communication and system diagnosis.

Figure 5-6: Block Diagram of USB Interface

5.3.5 SRAM and Expansion

The DSP has 34K by 32-bits internal high-speed SRAM. As mentioned earlier, the

prediction models require more than this internal limit. Therefore, additional external

32-bit SRAM is required to connect to the C33 data bus. Unfortunately, many of the

desired components and alternatives were not available during the design process.

Consequently, we chose four 8-bit Cypress CY7C1049B-15VC memory chips. These memory chips fulfill many of the requirements of this stage of development [27]. First, they possess fast access times (15 ns). Second, they have low active power (1320 mW max.) and low CMOS standby power (2.75 mW max.). Finally, they provide easy memory expansion with their chip enable (CE) and output enable (OE) features.

Having four CY7C1049B parts yields a total of 512k by 32-bits or 2MB of external

memory. These four parts are incorporated using the same chip enable line connected to

different bytes of the data bus, giving the appearance of a single 32-bit memory [26].

C33 DSP DATA[7:0]   -> 512K by 8-bit
        DATA[15:8]  -> 512K by 8-bit
        DATA[23:16] -> 512K by 8-bit
        DATA[31:24] -> 512K by 8-bit
        ADDR[18:0]  -> all four chips

Figure 5-7: 512K by 32-bit external SRAM architecture.
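As a software model of how the four byte lanes present a single 32-bit word to the processor (illustrative only; on the board this composition happens in the wiring and chip-enable decode, not in code):

```c
#include <stdint.h>

/* Four byte-wide SRAM lanes share one chip enable and one address bus;
 * each lane contributes one byte of every 32-bit word. */
uint32_t read_word32(const uint8_t *lane0, const uint8_t *lane1,
                     const uint8_t *lane2, const uint8_t *lane3,
                     uint32_t addr) {
    return (uint32_t)lane0[addr]           /* DATA[7:0]   */
         | ((uint32_t)lane1[addr] << 8)    /* DATA[15:8]  */
         | ((uint32_t)lane2[addr] << 16)   /* DATA[23:16] */
         | ((uint32_t)lane3[addr] << 24);  /* DATA[31:24] */
}
```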

5.3.6 Power Subsystem

There are three different power requirements to power the WIFI-DSP: 1.8V, 3.3V, and 5V. Texas Instruments offers a TPS70351 Dual-Output LDO Voltage

Regulator that includes both 1.8V and 3.3V voltages on a single chip but only requires

5V to operate. The two output voltages are not only used by the DSP, they are also

used for the other hardware modules in the system. This chip also provides the power-up

sequence required by the DSP once it is initialized. Additionally, having one required

voltage input source is an advantage for this portable system since only one 5V battery

supply is necessary.

Figure 5-8: WIFI DSP System

5.4 Complex Programmable Logic Device

The Complex Programmable Logic Device (CPLD) is responsible for interfacing 6 hardware components: 1) C33 DSP, 2) PCMCIA 802.11B, 3) USB, 4) External SRAM, 5) Bootable EEPROM, and 6) Power Regulator. This chip provides the control logic between the devices using various signals. These signals include Write Enable, Read Enable, Chip Select, and Reset signals. By using address and control signals, the CPLD is able to define multiple memory locations of the DSP so that specific hardware components can be read from and written to. Specifically, decoded address spaces were created for the EEPROM, SRAM, PCMCIA 802.11B card and the USB interface.

The CPLD also provides interrupt control to the DSP based on the real-time operating

system implemented in the DSP during the communication between the C33 and the

USB bus. The final requirement of the CPLD is to provide an interface for any future

expansion hardware that may become required.

Given the above requirements for the CPLD control and the number of interface

signals needed, the Altera EPM3064 CPLD was chosen. A member of the Altera MAX

family, this 100-pin TQFP chip provides 64 pins of I/O with 1200 usable gates and

64 Macrocells. Additionally, it is fast enough to meet our memory and bus speed

requirements as well as low power enough to meet our current power requirements.

To control the CPLD device, VHDL code provides the necessary control signals

for each component on the board. Once the VHDL code is compiled, the CPLD is

programmed through the 10-pin ByteBlaster port. This allows the reconfigurable

CPLD to become a flexible architecture as the BMI requirements change and necessary

modifications become important.

5.5 System Software

Software is one of the most important components to any hardware system. For

a BMI to be successful there will be necessary software that accompanies the final

solution. Specifically for our hardware solution, there are five major levels of software in the DSP Board environment: 1) PC Software, 2) DSP Operating System, 3) DSP Algorithms, 4) 802.11B Code, and 5) TCP/IP Protocol. This section will briefly describe the operation of the software and its interactions among the multiple hardware modules.

5.5.1 PC Software

Using Visual Basic, we wrote a PC console program to interface with the DSP through the USB. The console program calls functions within the DSP OS to initiate and control

the USB communication functions. The DSP OS is also responsible for reading/writing

memory locations and various program control functions.

In order to make the communication with the DSP work, the console program needs to talk via the USB bus. The drivers for the USB device support Windows 98, 2000, and XP, and provide the required function calls, as well as many others that were not used for this implementation.

Using the console program, the user may modify DSP configuration registers,

download and execute DSP code, and view and edit memory locations. The code is

easily modifiable to accommodate different testing requirements, such as different

methods for streaming data to the DSP.

Figure 5-9: User Interface

A simple and expandable protocol was created for PC-to-DSP communication. This

protocol involves a series of 32-bit commands containing different opcodes. The DSP

operating system was written to support these series of commands, which include the

ability to read from the DSP, write to the DSP, and execute a program in DSP memory.
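The exact field layout of these 32-bit commands is an implementation detail not reproduced here. The sketch below assumes a hypothetical encoding, with an 8-bit opcode in the high byte and a 24-bit operand (enough to span the C33's 24-bit address space) in the low bytes:

```c
#include <stdint.h>

/* Hypothetical 32-bit PC-to-DSP command word: 8-bit opcode, 24-bit operand.
 * The opcode values and field widths are illustrative, not the real protocol. */
enum { OP_READ = 0x01, OP_WRITE = 0x02, OP_EXEC = 0x03 };

uint32_t make_cmd(uint8_t opcode, uint32_t operand24) {
    return ((uint32_t)opcode << 24) | (operand24 & 0x00FFFFFFu);
}

uint8_t  cmd_opcode(uint32_t cmd)  { return (uint8_t)(cmd >> 24); }
uint32_t cmd_operand(uint32_t cmd) { return cmd & 0x00FFFFFFu; }
```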

In tandem with the OS layer of the DSP, there is low-level driver code for

initializing and controlling the 802.11B wireless controller. This code must interact with

the DSP OS and any UDP client code or algorithms that are running simultaneously

within the WIFI-DSP. Once the prediction model completes an epoch or computation

cycle, the program must interrupt the wireless card and transfer any required data.

This process also involves creating the correct UDP packets for transmission to the

appropriate UDP server (with a specific IP address).
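A minimal sketch of the payload-packing step is below. The real WIFI-DSP packet format is not documented here; the sequence-number field, the layout, and the use of host byte order are all assumptions for illustration:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Pack one epoch of predicted (x, y, z) trajectory samples into a byte
 * buffer to be handed to the UDP send routine. Fields are copied in host
 * byte order; a deployed protocol would fix the endianness explicitly. */
size_t pack_trajectory(uint8_t *buf, uint32_t seq,
                       const float *xyz, size_t nsamples) {
    size_t off = 0;
    memcpy(buf + off, &seq, sizeof seq);                  /* sequence number */
    off += sizeof seq;
    memcpy(buf + off, xyz, 3 * nsamples * sizeof(float)); /* x,y,z samples   */
    off += 3 * nsamples * sizeof(float);
    return off;   /* payload length in bytes */
}
```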

5.6 Results

The WIFI-DSP has been fully tested via its USB interface and wireless

communication in the following manner. Neural data was acquired through the USB port and used in both training and evaluation (forward) modes on the WIFI-DSP system.

Figure 5-10: NLMS Performance

Specifically, the DSP was programmed to train an NLMS algorithm and upon

completion, trajectory predictions were transmitted off board through the 802.11b

wireless interface using a UDP client protocol. This communication occurred bidirectionally with an external laptop running as a typical UDP server.
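For reference, one step of the normalized LMS algorithm has the form w <- w + (mu * e / (eps + x'x)) * x, with a-priori error e = d - w'x [15]. The sketch below is the textbook form, not a transcription of the DSP code; the step size, regularizer, and tap count are illustrative:

```c
#include <stddef.h>

/* One NLMS update: returns the a-priori error e = d - w.x and adapts w
 * in place with the normalized step mu * e / (eps + x.x). */
double nlms_step(double *w, const double *x, double d,
                 size_t n, double mu, double eps) {
    double y = 0.0, xx = 0.0;
    for (size_t i = 0; i < n; i++) { y += w[i] * x[i]; xx += x[i] * x[i]; }
    double e = d - y;
    double g = mu * e / (eps + xx);
    for (size_t i = 0; i < n; i++)
        w[i] += g * x[i];
    return e;
}
```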

The LMS output results collected at the receiving computer were directly compared to Matlab-computed outputs. These results are accurate to within 7 decimal places of the Matlab double-precision results.

The bandwidth of the wireless link (802.11b) is around 1.8 Mbit/s in continuous operation. This is comparable to what is expected on a 3GHz Pentium laptop using the same Netgear wireless adapter with a Prism II chipset.

The current consumption is approximately 350 mA for the entire board, which equates to 1750mW. Of this consumed power, over 80%, or 1400mW, is used by the PCMCIA wireless adapter. Overall, this is much less than the 4W previously attained by other acquisition hardware [21].

Speed was also measured with a digital oscilloscope: the DSP showed a speedup of almost 100x over MATLAB running on a Pentium III, an expected result because the PC is running multiple processes.


6.1 Hardware

We have a working system that demonstrates LMS training on a DSP platform. Further, we were able to verify that it wirelessly transmits results to 802.11b-enabled devices. However, we are disappointed in the size and power consumption of the WIFI-DSP board. As mentioned throughout the paper, for this development stage we relaxed some of the constraints in order to verify the technologies and fuse them into a single system. For the next generation, we want to shrink the system by half and reduce the power consumption. Because the majority of the power is being consumed by the wireless adapter, it is necessary to find a lower-power wireless link. We also want to verify the ability of the WIFI-DSP to directly communicate with analog acquisition hardware.

The WIFI-DSP system demonstrates that by shrinking digital hardware that

computes the prediction algorithms (on the subject), the wireless limitations and

immobility issues are both solvable. Specifically, connecting the analog and digital

subsystems with a high-speed data bus is more power efficient and faster than any

wireless link. Furthermore, using on-board digital hardware to predict the trajectory

removes the need for large external computers and approaches the ultimate BMI goal of

patient mobility.

6.2 FHMM Applications

We note that the final prediction performance of the proposed bimodal system is

much better than the RNN, and superior to that of the single LLM model. Clearly, the

use of multiple models improves prediction performance over a single LLM model, at

some additional computational cost. Furthermore, by increasing the repertoire of motion

primitives we may improve 3D mapping by allowing the linear/nonlinear models, on



Figure 6-1: Derivatives of spherical coordinates with pink movement segmentations superimposed

the output stage of the bimodal structure, to fine-tune to a specific primitive. Since the underlying motion 'primitives' are unknown, we seek in future work to form an unsupervised method of segmenting these 'primitives'.

We believe that the FHMM structure may lead us to the solution. Figure 6-1 shows where our hand segmentation incurred errors (red arrows). We want to find out what the FHMM is classifying during these user segmentation errors. Specifically, is it classifying these sections as movement despite the fact that we segmented them as stationary?

In Figure 6-2, there seems to be an indication that the FHMM is in some way

detecting our hand segmentation errors (in green) and producing correct classification

(despite our labeling it incorrectly). This leads us to believe that our classification

results may be higher than what we are reporting since our segmentation process is

slightly flawed.


Tb *



1 i

Figure 62: Derivatives of spherical coordinates with blue movement segmentations


t '

Figure 6-3: Spherical coordinates vs average ratios

The question arises as to why the FHMM failed to discover our error in the sections labeled A in both figures (6-1 and 6-2). The FHMM appears not to recognize this section as movement despite our claim that it is movement (based on the segmentation above). We see that the monkey makes no discernible motion similar to his food-grasping task. We still observe that the monkey did move its arm in some fashion (though not fitting our criteria for segmentation). In Figure 6-3, we see the probabilities plotted in time, aligned with the time slices of the desired data.

Figure 6-3 shows the average P(O|λM) plotted in green and the average P(O|λS) in blue. We see a rough approximation of when the FHMM is indicating that motion may in fact be taking place at both locations A and B. This tells us that the FHMM may be able to cluster the input. Looking within the FHMM, we believe the single HMM chains themselves may also be optimized to help in our clustering process.

Since we know the classification performance of the individual HMM chains during training, we believe that this information can allow us to weight the individual classifiers' performance using AdaBoost or some other scheme. In Figure 6-4, we used neuronal ranking information to see if we could observe the importance of the single HMM chains on the overall classifier. We start with the top-ranked neurons in this list and then

[1] W. T. Thach, "Correlation of neural discharge with pattern and force of muscular
activity, joint position, and direction of intended next movement in motor cortex
and cerebellum," Journal of Neurophysiology, vol. 41, pp. 654-676, 1978.

[2] J. C. Sanchez, D. Erdogmus, Y. Rao, J. C. Principe, M. Nicolelis, and J. Wessberg,
"Learning the contributions of the motor, premotor, and posterior parietal cortices
for hand trajectory reconstruction in a brain machine interface," presented at
IEEE EMBS Neural Engineering Conference, Capri, Italy, 2003.

[3] M. A. L. Nicolelis, D. F. Dimitrov, J. M. Carmena, R. E. Crist, G. Lehew, J. D.
Kralik, and S. P. Wise, "Chronic, multisite, multielectrode recordings in macaque
monkeys," PNAS, vol. 100, no. 19, pp. 11041-11046, 2003.

[4] X. D. Huang, Y. Ariki and M. A. Jack, Hidden Markov Models for Speech
Recognition, Edinburgh University Press, 1990.

[5] J. Yang, Y. Xu and C. S. Chen, "Human Action Learning Via Hidden Markov
Model," IEEE Trans. Systems, Man and Cybernetics, Part A, vol. 27, no. 1, pp.
34-44, 1997.

[6] Y. Linde, A. Buzo and R. M. Gray, "An Algorithm for Vector Quantizer
Design,"IEEE Trans. Communication, vol. COM-28, no. 1, pp. 84-95, 1980.

[7] S. Darmanjian, S. P. Kim, and M. Nechyba, "Bimodal brain-machine interfaces
for motor control of robotic prosthetics," IEEE IROS Conference, 2003.

[8] Z. Ghahramani and M. I. Jordan, "Factorial Hidden Markov Models," Machine
Learning, vol. 29, pp. 245-275, 1997.

[9] A. P. Varga and R. K. Moore, "Hidden Markov model decomposition of speech and
noise," IEEE Conf. Acoustics, Speech and Signal Processing (ICASSP-90), 1990.

[10] A. Georgopoulos, J. Kalaska, R. Caminiti, and J. Massey, "On the relations
between the direction of two-dimensional arm movements and cell discharge in
primate motor cortex.," Journal of Neuroscience, vol. 2, pp. 1527-1537, 1982.

[11] S. P. Kim, J. C. Sanchez, D. Erdogmus, Y. N. Rao, J. C. Principe, and M. A.
Nicolelis, "Divide-and-conquer approach for brain machine interfaces: nonlinear
mixture of competitive linear models," Neural Networks, vol. 16, pp. 865-871, 2003.

[12] A. B. Schwartz, D. M. Taylor, and S. I. H. Tillery, "Extraction algorithms for
cortical control of arm prosthetics," Current Opinion in Neurobiology, vol. 11, pp.
701-708, 2001.

[13] M. C. Nechyba and Y. Xu, "Learning and Transfer of Human Real-Time Control
Strategies," Journal of Advanced Computational Intelligence, vol. 1, no. 2, pp.
137-154, 1997.

[14] L. E. Baum, T. Petrie, G. Soules and N. Weiss, "A Maximization Technique
Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains,"
Ann. Mathematical Statistics, vol. 41, no. 1, pp. 164-171, 1970.

[15] Widrow B, Stearns S. Adaptive signal processing. Upper Saddle River (NJ):
Prentice Hall; 1985.

[16] Lawrence R. Rabiner, A tutorial on Hidden Markov Models and selected
applications in speech recognition, Proceedings of the IEEE, vol. 77, no. 2, pp.
257-286, 1989.

[17] E. Todorov, "Direct cortical control of muscle activation in voluntary arm
movements: a model," Nature Neuroscience, vol. 3, pp. 391-398, 2000.

[18] S. Darmanjian, Elementary Statistical Analysis of Neural Spike Data, Internal
Report, University of Florida, August 2002.

[19] M. A. Nicolelis, A. A. Ghazanfar, B. M. Faggin, S. Votaw, and L. M. Oliveira,
"Reconstructing the engram: simultaneous, multisite, many single neuron
recordings," Neuron, vol. 18, pp. 529-537, 1997.

[20] Obeid I, Nicolelis M, Wolf P, "A Multichannel Neural Telemetry System," Society
for Neuroscience Annual Meeting, Orlando, FL, November 2002.

[21] Obeid I, Nicolelis M, Wolf P, "A Multichannel Telemetry System for Single Unit
Neural Recordings," J Neurosci Methods, vol. 133, no. 1-2, pp. 33-38, February 2004.

[22] Texas Instruments Incorporated. TMS 320C3x users guide. Literature number
SPRU031E. Dallas (TX) USA; 1997.

[23] Nicolelis M, Obeid I, Morizio J, Wolf P, "Towards Wireless Multi-Electrode
Recordings in Freely Behaving Animals," Society for Neuroscience Annual
Meeting, New Orleans, LA, November 2000.

[24] M. A. L. Nicolelis, Methods for Neural Ensemble Recordings. Boca Raton: CRC
Press, 1999.

[25] M. A. L. Nicolelis, "Brain-machine interfaces to restore motor function and probe
neural circuits," Nature Reviews Neuroscience, vol. 4, pp. 417-422, 2003.

[26] S. Morrison, J. Parks, K. Gugel, "A High-Performance Multi-Purpose DSP
Architecture for Signal Processing Research," Intl. Conf. on Acoustics, Speech,
and Signal Processing, 2003.


[27] S. Morrison, A DSP-Based Computational Engine for a Brain-Machine Interface,
M.S. Thesis, University of Florida, 2003.


Shalom Darmanjian graduated from the University of Florida with a Bachelor of Science in Computer Engineering in December 2003. After completing this thesis in May 2005, Shalom plans to continue his pursuit of knowledge in the Ph.D. program at the

University of Florida.

Full Text
xml version 1.0 encoding UTF-8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd






ACKNOWLEDGMENTSWhenIwasaround6yearsold,Isawatelevisionprogramshowingalittlegirlbeingttedwitharoboticprosthetic.Atthattime,Ithoughtshewascontrollingthearmwithhermind.Itinspiredmetogotocollegeandlearnhowtohelpothersregainmobilitywiththisostensiblycyborgtechnology.AsIgrewolder,Icametounderstandthatnosuchtechnologyexistedintheworldandrealizedthatthegirlmusthavebeencontrollingthearmthroughmusclecontractions.Mychildhooddreamwaswashedawayinmuchthesamewaythatmostchildrenwhoaspiretobecomeastronautshavetheirdreamfadeaway.ThenonedayImadealeftinsteadofaright.IhadthechoiceofgoingtoclassinLarsonHallrightsideoraskingDr.GugelforaletterofrecommendationtogradschoolinBentonHallleftside.Byshearchance,hewasinhisoceandabletotalkwithoutstudentinterruptions.ThatsimpleletterturnedintoameetingwithDr.Nechyba,thenameetingwithDr.PrincipeandnallyanintroductiontotheBrainMachineInterfaceBMIproject.AllthreeprofessorshavegivenmemorethanIcaneverrepayandIamtrulygratefulforthewonderfulopportunities.IamalsogratefultomyfellowAppliedDigitalDesignLabADDL,ComputationalNeuralEngineeringLabCNEL,andMachineIntelligenceLabMILcomradesfromwhomIhavealsolearnedagreatdeal.ScottMorrison,JeremyParksandJoelFusterwelcomedmeintotheADDLlabandputupwithmysarcasm.Inparticular,ScottprovidedguidanceandexpertisewithallofthehardwarethatIhelpedtocreate.Additionally,PhilSungKimgreatlyenhancedtheworkinthisthesiswithhisworkontheLeastMeanSquareLMSandwienerltersforuseintheBi-Modalstructurediscussedinthisthesis.GregandBenwerealsoencouragingandhelpfulduringtheprocessofmakingthisthesisandthenalhardwarework.TherearemanyMILmembersthatIhavebecomefriendswith.Iappreciatethemall.Finally,IwouldliketogivespecialthankstoJeremyAndersonand iii


SinisaTodorvichfortheirveryhelpfulcriticismsofthethesis.ItrulyamnothingmorethanthecontributionsfromtheprofessorsandthestudentsofADDL,CNEL,MIL,allwrappedintoonelovablefuzzyballofafella.Ipersonallythankthemallforlettingmebecomeanastronaut. iv


TABLEOFCONTENTS page ACKNOWLEDGMENTS ................................ iii LISTOFFIGURES ................................... vii ABSTRACT ....................................... ix CHAPTER 1INTRODUCTION ................................. 1 1.1Motivation .................................. 1 1.2BrainMachineInterface:CollaborativeEort .............. 2 1.3Overview ................................... 4 2EXPERIMENTALSETUPANDINITIALANALYSIS ............. 5 2.1Duke-PrimateBehavioralExperiments .................. 5 2.2Duke-DataAcquisition ........................... 6 2.3ElementaryStatisticalClasser ...................... 9 2.3.1BasicFeatureExtraction ...................... 9 2.3.2BasicStucture ............................ 12 2.3.3Results ................................ 13 2.4Discussion .................................. 15 3HIDDENMARKOVMODELS .......................... 17 3.1Motivation .................................. 17 3.2HiddenMarkovModelOverview ...................... 17 3.3VectorQuantizing-HMM .......................... 18 3.3.1Structure ............................... 18 3.3.2Training ............................... 20 3.3.3Results ................................ 23 3.4FactorialHiddenMarkovModel ...................... 25 3.4.1Motivation .............................. 25 3.4.2Structure ............................... 28 3.4.3Training ............................... 31 3.4.4Results ................................ 32 3.5Discussion .................................. 35 4LOCALLINEARMODELSWITHHMMPARTITIONING .......... 37 4.1Structure ................................... 37 4.2Results .................................... 38 4.3Conclusion .................................. 40 v


5DISCRETESIGNALPROCESSINGHARDWAREIMPLEMENTATIONFORABMI ................................... 42 5.1Introduction ................................. 42 5.2SystemRequirements ............................ 43 5.3SystemDesign ................................ 45 5.3.1Processor ............................... 46 5.3.2Wirelessconnection ......................... 47 5.3.3BootControl ............................. 49 5.3.4USB ................................. 50 5.3.5SRAMandExpansion ........................ 50 5.3.6PowerSubsystem .......................... 51 5.4ComplexProgrammableLogicDevice .................. 52 5.5SystemSoftware ............................... 53 5.5.1PCSoftware ............................. 53 5.6Results .................................... 54 6FUTUREWORK .................................. 56 6.1Hardware .................................. 56 6.2FHMMApplications ............................ 56 REFERENCES ...................................... 62 BIOGRAPHICALSKETCH ............................... 65 vi


LISTOFFIGURES Figure page 1{1BMIconceptsdrawing ............................. 2 1{2UF-BMIoverview ................................ 3 2{1Rhesusandowlmonkeys ............................ 5 2{2Feedingexperiment ............................... 6 2{3Neuralprostheticsandoperations ....................... 7 2{4Exampleneuralwaveforms ........................... 8 2{5Optotrak ..................................... 8 2{6N.M.A.P.system ................................ 9 2{7Binnedneuraldataandcorrespondingvelocitywiththreshold ...... 10 2{8Instantaneousvelocity ............................. 11 2{9Velocityvsdisplacement ............................ 12 2{10Samplespatialstatisticsone-secondslidingwindow. ............ 13 2{11Samplethresholdedspatialstatisticsone-secondslidingwindow. ..... 14 2{12Performanceofthreshold-basedclassierspatialvariance,allneurons .. 15 3{1DiscreteHMMchain .............................. 18 3{2Stationary/movingclassiers .......................... 19 3{3LGBVQalgorithmon2Dsyntheticdata ................... 21 3{4"Leaving-kout"testing ............................. 24 3{5Sequentialtesting ................................ 25 3{6Singleneuralchannels ............................. 27 3{7Averageratios .................................. 27 3{8Beforeandafterlog ............................... 29 3{9GeneralFHMM ................................. 29 3{10ComparisionofourFHMMtogeneralmodel ................. 30 vii


3{11Biasingplotfornaiveclassier ......................... 32 3{12Biasingplotformodiednaiveclassier .................... 33 3{13FHMMevaluation ................................ 34 3{14Trainingdataonnaiveclassier ........................ 34 3{15TrainingdataonFHMM ............................ 35 4{1Bimodalmappingoverview ........................... 37 4{2Locallinearmodel ............................... 38 4{3PredictedandactualhandtrajectorisASingleLLM,BRNNCBi-modal 39 4{4CEMplots .................................... 40 4{5SERplot ..................................... 41 5{1HardwareOverview ............................... 43 5{2HardwareModulesofWIFIDSP ....................... 45 5{3DSPFeatures .................................. 46 5{4MemorymapoftheC33DSPshowinginternalSRAMblocks ....... 48 5{5TwodierentmethodstoboottheC33:EEPROMorUSB ......... 49 5{6BlockDiagramofUSBInterface ........................ 50 5{7512Kby32bitexternalSRAMarchitecture. ................. 51 5{8WIFIDSPSystem ............................... 52 5{9UserInterface .................................. 54 5{10NLMSPerformance ............................... 55 6{1Derivativesofsphericalcoordinateswithpinkmovementsegmentationssuperimposed ................................. 57 6{2Derivativesofsphericalcoordinateswithbluemovementsegmentationssuperimposed ................................. 58 6{3Sphericalcoordinatesvsaverageratios .................... 59 6{4Worstneurontobestneuronpredictors ................... 60 6{5Worstneurontobestneuronpredictorswithbias ............. 61 viii


AbstractofThesisPresentedtotheGraduateSchooloftheUniversityofFloridainPartialFulllmentoftheRequirementsfortheDegreeofMasterofScienceSWITCHINGHIDDEN-MARKOVMODELANDHARDWAREIMPLEMENTATIONFORABRAIN-MACHINEINTERFACEByShalomDarmanjianMay2005Chair:JosePrincipeMajorDepartment:ElectricalandComputerEngineeringInpursuitoftheUniversityofFloridaaUF'sBrainMachineInterfaceBMIgoals,thisthesisfocusedonhowtoimprovethe3Dmodelingofaprimate'sarmtrajectoryandtheimplementationofsuchalgorithms.Intermsof3Dmodeling,wearguethatthebestwaytoimprovethetrajectorypredictionisbyrstusingaswitchingclassiertopartitiontheprimate'sneuralringsandcorrespondingarmmovementsintodierentmotionprimitives.Weshowthatbyswitchingordelegatingtheseisolatedneural/trajectorydatatodierentlocallinearmodels,predictionofnal3-Darmtrajectoriesismarkedlyimproved.Althoughourstudyfocusedonprimitivesofmotionandnon-motion,weproposethatourworkcanexpandtoincludemoreprimitivesandthusincreasethenalperformancemanifold.ConcerningimplementationofBMIalgorithms,ourrstaimwastoachieveaportablewirelesscomputationalDSP.Nextwedeterminedthesoftwareandhardwarecomponentlayersintegratedinthisevolvingdesign.Finally,wedetailthedistributedimplementationoftheswitchingmodeloveraparallelcomputingarchitecture,investigatingoinetrainingandevaluationaswellaspossiblefuturereal-timeimplementations. ix


CHAPTER1INTRODUCTION 1.1 MotivationTheOlympicsarenottheonlyplacetowitnessmasterfulcontrolofmusclesandlimbs.Simplyattendingalocalping-pongmatchorabasketballgamecanalsoprovideexamplesofprecisedexterityandbodilycontrol.WhetherweareOlympiadsorregularpeople,weallrequiretheabilitytoplantrajectoriesandexecuteaccuratemotionstoservetheneedsanddesiresofourdailylives.Toachievethisprecision,werelyonthebraintodecodeourthoughtsandcontrolmanyindividualcomponentsinourbodieswithoutspecicallycommandingittodoso.Alongwiththebrain,thespinalcordandperipheralnervesmakeupacomplex,integratedinformation-processingandcontrolsystem.Specically,theyhelpthebraincontrolbloodow,oxygenintake,bloodpressure,andamyriadofotherfunctionsthathelpusgrowandstayhealthy.Thebrainmustcontinuouslydoalloftheseactionswhileprocessingfeedbackfromvisualcenters,tactilesensors,andmanyotherinternalsystems;andprovidehigher-ordercognitionforthesiswriting.Unfortunately,millionsofpeople'sbrainssuerapartialorfulldisconnectfromtheirbodies,hinderingthecontrolofphysicalmovement.Althoughsometechnologicalsolutionsgivetheseindividualsvariousinteractionwiththeexternalworld,theyoftenfallshortofwhatisrequiredfordailylife.Thenotedphysicist,StephenHawking,isanexampleofahumanpossessinganexceptionalmindwithlittleornocontrolofhisexteriorlimbs.Hemustnavigatedailylifewithasimplecomputerjoystickthatoftenlimitshisabilitytopresenthistheoriestopeersandconferenceproceedings.Thiscrudedeviceandotherslikeitcanonlyhelpselectdisabledindividualswhostillpossessomemotorcontrol.Ifacommunicationpathwaycouldbridgethegapbetweenthebrainand 1


2 theexternalworld,itwouldempowermillionsofdisabledpeoplewhonowhavelittleornomotorcontrol.ThisistheidealisticvisionofaBrainMachineInterfaceBMI. Figure1{1:BMIconceptsdrawing ABrainMachineInterfaceisoneofmanywaysthatresearchersaretryingtodevelopthispathwaybetweenmanandmachine.Specically,aBMIisasystemthatcandirectlyretrieveneuronalringpatternsfromdozenstohundredsofneuronsandthentranslatethisinformationintodesiredactionsintheexternalworld.Thefunctionalityissimilartohowthebraincurrentlyworksbutprovidesabypassbetweenthepatient'sbrainandanexternaldevice.ABMIisnotlimitedtoonlypatientswithparalysisintheirextremities;futureapplicationscouldservemilitaryorcommercialsectors.Althoughprogresshasbeenmadetowardthesegoalssincethe1970s[ 1 ],workmustbeaccomplishedbeforemachinesunderbraincontrolcanhitasimplebaseballaswellasasmiling8yearoldchild. 1.2 BrainMachineInterface:CollaborativeEortTodesignacomplicatedsystemlikeaBMI,expertsmustcometogetherfrommultipleeldstosolvethemanytechnologicalandbiologicalproblems.Tocircumventthesebarriers,theDefenseAdvancedResearchProjectsAgencyDARPAbrought


3 biologists,neurologistsandabroadspectrumofengineerstocollaborateunderasingleumbrellaofleadershipandnance.Dr.MiguelNicolelisandhisstaatDukeUniversityareundertakingthisleadershiprolefortheentireprojectbycoordinatingthecollaborationofallmemberinstitutions.Hisgroupalsoprovidesneurologicalexpertiseandanexperimentalprimate-testingplatform,whichisrequiredbeforeimplantingtheBMIintohumans.MIT,SUNYandUFaretheotherinstitutionsprovidingtheirexpertisetocomplementtheskillsatDuke.Specically,ourgroupatUFisinvolvedindevelopingalgorithmsthatcanpredictaprimate'sarmtrajectoriesbasedonthespatialsamplingofhundredsofneuronswithinthemultiplecorticalandsubcorticalareas.Inturn,thesemodelsmustbemultiple-inputmultiple-outputsystemsofhugedimensionalitytoaccommodateallofthesedataatonce.Indesigningsuchastructure,accuracymustbetemperedwithspeed,especiallywhenimplementingthesemodelsinhardware.Someofourgroup'sinitialexperimentsexaminedlinearandnonlinearmodelstodeterminewhichmodelsarethemostecientwithregardtotheconstraintsofaccuracyandspeedinourapplication[ 2 ]. Figure1{2:UF-BMIoverview


4 Intandemwithalgorithmdevelopment,ourgroupisinvestigatingtheuseofhybridanaloganddigitaltechnologiestoachievelow-powerportabledevicesthatwillrunthesemodels.Consequently,weareevolvingthedigitalandanalogdesignstoevaluatetheirfeasibilityasweshrinkandintegratethemintoasinglesystem.Ourgroup'sVeryLargeScaleIntegrationVLSIexpertisecanassistinthistypeofsystembydesigningcustomlow-powerhybridVLSI-DSPchips.Eventuallythough,allprocessingneedstomoveintoasingle-chipsolutionsothatitcanbeimplantedintoapatient'sbodyandcanindependentlycontrolanexternaldevicewithactionablethoughts. 1.3 OverviewInserviceofUF'sBMIgoals,thisthesisfocusesonhowtoimprovethe3-Dmodelingofaprimate'sarmtrajectoryandtheimplementationofsuchalgorithms.Intermsof3Dmodeling,wearguethatthebestwaytoimprovethetrajectorypredictionisbyrstusinga'switching'classiertopartitiontheprimate'sneuralringsandcorrespondingarmmovementsintodierentmotionprimitives.Weshowthatbyswitchingordelegatingtheseisolatedneural/trajectorydatatodierentlocallinearmodels,predictionofnal3Darmtrajectoriesismarkedlyimproved.Althoughthisthesisfocusesonprimitivesofmotionandnon-motion,weproposethatourworkcanexpandtoincludemoreprimitivesandsubsequentlyincreasethenalperformancemanifold.ConcerningimplementationofBMIalgorithms,werstdiscussourinitialstepintryingtoachieveaportablewirelesscomputationalDSP.Secondly,wedescribethesoftwareandhardwarecomponentlayersintegratedwithinthisevolvingdesign.Finally,wedetailthedistributedimplementationoftheswitchingmodeloveraparallelcomputingarchitecture,discussingtheresultsinoinetraining/evaluationaswellaspossiblefuturereal-timeimplementations.


CHAPTER2EXPERIMENTALSETUPANDINITIALANALYSISInthischapter,werstdepicttheexperimentalenvironmentinwhichthenon-humanprimatescarriedouttheirbehavioraltasks.Secondly,wedescribetheretrievalofneuralogicalandtrajectorydataalongwithcertainpropertiestheyexhibit.Finally,wediscussourelementaryanalysisofthisdataandpresenttheresultsofarudimentaryclassier. 2.1 Duke-PrimateBehavioralExperimentsDr.Nicolelis'slaboratory,attheDukeUniversityMedicalcenter,isresponsibleforcarryingoutthebehavioralexperimentswithprimatesubjectsfortheDARPAfundedBMIproject.TheprimatespeciestheyusewithintheseexperimentsaretheRhesusMacaqueMacacamulattaandtheOwlMonkeyAotustrivirgatus,eachhavingvaryingphysicalcharacteristics.Dierentexemplarsofthesesmallspeciestrainonamultitudeofexperimentslike3-Dfoodgrasping,1-Dpolecontrol,2-Dpolecontroland2-Dpolecontrolplusgripping.Forthepurposesofthisthesis,wefocuson3-DfoodgraspingexperimentsthatinvolveafemaleOwlMonkey. Figure2{1:Rhesusandowlmonkeys 5


6 Inthisparticular3-Dfoodgraspingexperiment,onceanopaquebarrierlifts,thefemalemonkeyisrequiredtoretrievefruitfromfourxedtraylocationsandthenplaceitinhermouth.Inordertoconstrainthefeedingmovements,Dr.Nicolelis'sgroup,useaconstrainingapparatustoholdtheprimate'sbodyinplace.Theneck,torso,leftarmandlegsfastenintoaclamp-likegrippersothattheonlymotionthatcantakeplaceisrightarmmovement.Inturn,therightarmmovementisdigitizedandtransmittedtothepredictionmodelsalongwiththeneuraldataandsubsequentlytoaroboticarm[ 21 25 ].Thistypeofexperimentisimportantfortworeasons.First,itmimicsthebehaviorthatahumanwouldrequireindailylife,andsecond,theexperimentiscyclicsothatthepredictionmodelscanhavesimilardataexampleswithwhichtotrainandevaluate. Figure2{2:Feedingexperiment 2.2 Duke-DataAcquisitionTheacquisitionofdatafallswithintwoareas:theneurologicaldataandthephysicalarmtrajectorydata.Withinthesetwoareas,weexplainhowthedataareacquiredandthecharacteristicstheydisplay.Toretrieveneurologicaldata,Dr.Nicolelisandhisstarstdrillcraniotomiesintotheprimates.Foreachprimatesubject,histeamthenplacesvetotendierentcorticalimplantsinmultiplecorticalregions.Specically,theyimplantdierentarraysinthe


7 Figure2{3:Neuralprostheticsandoperations posterialparietalcortexPP,primarymotorcortexM1,anddorsalpre-motorcortexPMdsinceeachregionhasbeenfoundtocontributetomotorcontrol[ 3 ].Themicro-arraysplacedintheseregionsconsistoftraditionalmetalmicroelectrodes[ 3 ]thatareelectrolyticallysharpenedwirespins,25to50umindiameterandinsulatedtodeneanexposedrecordingareaatthetipofaround100um.Onaverage,eachmicroelectroderecordsfromfourneurons,consequentlyrequiringsignalconditioningandspikesortingofthewaveformsatalaterstageinordertodistinguishsingleneuralcells[ 19 ].Torecordtheprimate'sarmtrajectory,Dr.Nicholelis'sgroupemployedanon-invasiverecordingsystemtoaccuratelymeasurethepositionofthemonkey'sarmin3-Dspace.Thiscommercialproduct,Optotrak,isaccurateto.1mmwitharesolutionof.01mmwhenrecordingreal-time3-Dpositions.Thesystememploysthreecamerasthattrackinfraredmarkersplacedonthemonkey'swrist.Byusingdierentmarker


8 Figure2{4:Exampleneuralwaveforms combinations,theDuketeamcanisolateandtrackdierentpartsofthearmasitmovesthroughspace. Figure2{5:Optotrak Asbothdatastreamsaresimultaneouslyrecordedfromthemonkey,itisnecessarytocombinetheminorderforthemappingmodelstotrainandevaluatethetrajectory.Thiscanbeadiculttasksincebothtypesofdatahavedierentpropertiesassociatedwiththem.Forexample,theneuralsignalscomeintheformofrawelectrodevoltagepotentialsthatrangefromthenoiselevel[ 3 ]0Vorsotoabout1mV.Thesignals


9 canrangeinfrequencyfrom30Hzto9KHzdependingonthespikingrateandactivity.WhereastheOptotraksystemgeneratesdigitized200Hzsignalstodenethetrackingofthemonkey's3-Darmtrajectory.Inorderforthisprocessingtobeusefulforthemodels,theneuraldataarebinnedinto100msbinsandthetrajectorydataaredown-sampledby20togenerateacorresponding10Hzsignal,whichisconsistentwithotherinvestigators[ 12 17 19 ]. Figure2{6:N.M.A.P.system 2.3 ElementaryStatisticalClasser 2.3.1 BasicFeatureExtractionAmongneuralscientists,thereisanon-going,unresolveddebateregardinghowthemotorcortexencodesthearmsmovement[ 12 ].Variousresearchersarguethatthemotorcortexencodesthearmsvelocity,positionorbothwithintheneuronalrings[ 12 17 ].Inanattempttoobserveanyobviouspatternsorcorrelationsinthisencoding,webeganwithavisualandnumericalinspectionofthevelocity-neuraldata.Inviewingthedata,wewereabletodiscernsomebasicproperties.First,thereappearedtobeaslightincreaseinbinnedringcountswhenthehandmovedatnoticeablevelocities.Second,someofthe104neuronsrarelyre,whileothersvirtuallyreallthetime;butinterestingly,wewitnessedthatspecicneuronsincreasedtheirringrateatcertainpointsalongthearmtrajectory[ 18 ].


10 Tocomplementourvisualanalysisofidentiablecorrelations,wealsocomputedthecrosscorrelationwithuptoaonesecondshiftintimeforwardandbackwardbetweenindividualneuralchannelsandhandvelocities.Overall,visualandcorrelationanalysisshowedthatpatternsexistedbetweenneuralringsandhandmovement[ 18 ].However,thismethodislimitedinthatonlyafewobviousvisuallydiscernablepatternswereperceivablewiththevelocity-neuraldata.Thislackofcorrespondencescomplicatedourtrainingandevaluationoftheswitchingmodel,sinceweneededtohaveclearlydenableandseparateclasses.Therefore,wewantedamoreobjectiveapproachtodenethesegmentedclassesbeforetrainingandevaluatingourmodel.Toavoidexclusionofpotentialneuralencodings,wegeneratedthedataclassesfortwodierentsegmenteddatasets,onebasedonvelocity,theotherbasedondisplacement. Figure2{7:Binnedneuraldataandcorrespondingvelocitywiththreshold Forthevelocityhypothesis,ideally,therstclassofneuralringsshouldcontaindatawherethearmappearstobestationary,whilethesecondclassshouldcontaindatawherethemonkey'sarmappearstobemoving.Weusedasimplethresholdtoachieve


11 thisgrouping:iftheinstantaneousvelocityofthearmisbelowthenoisethresholdofthesensordeterminedbyinspectingthevelocitydatavisually,thecorrespondingneuraldatawereclassiedasstationary;otherwise,theneuraldatawereclassiedasmoving.InFigure2-8,weplottheinstantaneousvelocityofthemonkey'sarmfora500secondsegmentofthedata,wherethemonkeyisrepeatedlyperformingareachingtask.Basedonthisplot,wechose4mm/secasthenoisethresholdfortheaboveprocedure. Figure2{8:Instantaneousvelocity Forthepositionhypothesis,wewantedtoclassifythemonkey'sarmmovementsbasedondisplacement.Todemonstratethisconcept,weplotasamplefeedingsessionforthemonkeyFigure2-9.ThethreecoloredtrajectoriesrepresentdisplacementalongtheCartesiancoordinates,asthemonkeyismovingitsarmfromresttothefoodtray,fromthefoodtraytoitsmouthand,backtotherestposition.Figure2-9Bindicatesthesegmentationofthisdataintotwodistinctdisplacementclasses:restandactive,whichareanalogoustothestationaryandmovingclassesinthevelocity-basedsegmentationabove.InFigure2-9Cand2-9D,weplotthevelocityofthetrajectoriesfromgures2-9Aand2-9B,hereweseethatthissegmentationisnotthesameasthevelocity-basedsegmentation.Notefromthedottedlineindicatingthevelocitythresholdpreviouslydescribedthatsomeoftheactiveclassinthedisplacement-basedsegmentationareclassiedasstationaryinthevelocity-basedsegmentation.Nowthatweunderstood


12 whatoursegmentedclassesshouldrepresent,wewantedtoapplysimplestatisticalmethodstoevaluatetheirperformance. Figure2{9:Velocityvsdisplacement 2.3.2 BasicStuctureInthenextsequenceofexperiments,wecomputedthemeanandvariancefortheneuralspikedata,temporallyacrossindividualneuralchannels,aswellasspatiallyacrossallneuralchannels.Duetotherelativelysparsenatureoftheneuraldata,wecomputedthestatisticsover50%slidingwindowsofone-andfour-secondlengths.Withastatisticaldescriptionoftheneuraldata,weproceededtotestiftheaggregatequantitiescouldtellusanythingaboutthecorrespondinghandmovement.Inthisinitialanalysis,wesetouttodistinguishhandmovementfromnon-movementbyapplyingthresholdstothecomputedstatistics.Ifaparticularstatisticalindicatorforagiventimeindexwasbelowacorrespondingthreshold,wesetthepredictedhandmotiontostationary.Whileindicatorvaluesabovethatsamethresholdwerelabeledasmoving.Foreachtimeindex,wealsolabeledtheone-dimensionalvelocitydataasstationaryormoving.Wethencheckedtheaccuracyofthissimplemovementpredictor.Tocompensateforpossiblemisalignmentbetweentheneuralspikedataandthehand


13 movementdata,werepeatedthesameprocedurefordatashiftedbyamaximumoftwotimeindicesbothforwardandbackwardintime.Finally,theentireanalysisabovewasrepeatedforasubsetofneuronsthatappearedmostrelevantforhand-movementprediction,asindicatedbytheweightsinpreviouslytrainedrecurrentneuralnetworks[ 18 ]. 2.3.3 ResultsWiththerudimentaryanalysisSection2.3.2,wecomputedfourbasicquantities: Temporalmeanperneuralchannel. Temporalvarianceperneuralchannel. Spatialmeanacrossneuralchannels. Spatialvarianceacrossneuralchannels.Ofthesecomputedquantities,thespatialstatisticsappearedtobethemostusefulpredictiveindicatorofhandmovement.Figure2-10showsasampleofthecomputedspatialstatisticsforallneuronsaswellasforasubsetofneuronsdeterminedfromtherecurrentneuralnetworkmodels. Figure2{10:Samplespatialstatisticsone-secondslidingwindow.


14 GiventhestatisticaldataFigure2-10,weproceededtodevelopathreshold-basedclassiertodiscriminatebetweenhandmovementandnon-movement.FromFigure2-10,weobservethatspatialvariancesappeartobeabetterpredictorofhandmovementthanspatialmeans.Therefore,wedesignedourclassiertorelyonspatialvariances,ratherthanspatialmeans.Weappliedtwodierentvariancethresholds:therstwaschosenasthemeanofthevariances,whilethesecondwassomewhatlowertoreducethenumberofhandmovementmovingclassmisclassications. Figure2{11:Samplethresholdedspatialstatisticsone-secondslidingwindow. Totesttheclassier,weused4,000datasamples,correspondingto201distinctinstancesforstatisticscomputedoverfourseconds,and801forstatisticscomputedoveronesecond.Inthisdatasample,58outof201instancescorrespondedtosignicanthandmovementforthefour-secondstatistics,while125outof801instancescorrespondedtosignicanthandmovementfortheone-secondstatistics.Figure2-12summarizestheperformanceofthisspatial-variancebasedclassicationscheme,whileFigure2-11illustratestheeectofathresholdonthespatialstatisticsinFigure2-10.NotefrombothFigure2-12andFigure2-11thateventhissimpleclassicationschemeisabletoachievesuccessfuldiscriminationofhandmovementfromnon-movement.Resultsfor


15 neuronsubsetsselectedfromtrainedrecurrentneuralnetworksprovedtobesimilartothoseinFigure2-12. Figure2{12:Performanceofthreshold-basedclassierspatialvariance,allneurons 2.4 DiscussionTherudimentaryclassierandanalysisdiscussedinthischapterhadtwogoals.First,wesoughttofamiliarizeourselveswiththeneuralringandtrajectorydatatobetterunderstandtheintricaciesoftheprimateexperiment.Second,weexploredwhetherornotevensimplestatisticalanalysiscanyieldusefulinsightsintothisproblem.Todothis,wedevelopedathreshold-basedclassierofhandmovementvs.non-movementthatreliedexclusivelyonasinglestatisticalindicator{namely,spatialvarianceintheneuralringdata.However,thequestionremainsastowhichtypeofsegmentationvelocityordisplacementislikelytobemorebiologicallyplausibleand,consequentlyeasiertolearn.WhilewewilldeferourthoughtsonthisquestionuntilChapter3,wedonotethatkeepinganarmstationary1atrest,orinextensionrequiresdierentmuscleactions.Intherstcase,musclescanberelaxed,whileinthesecondcase,atleastsomemusclesmustbetensed.Despitethesimplicityofthisapproach,weneverthelesswereabletoachievesurprisinglygoodresultsinclassicationperformanceseeFigure2-12ofthetwosimple


16 primitivesofmotionandnon-motion.Encouragedbytheseresults,weproceededtodevelopanadvancedclassierthatreliesonmoresophisticatedtrainablestatisticalmodels.Wedescribethatworkinthenextchapter.


CHAPTER3HIDDENMARKOVMODELS 3.1 MotivationWetrainedstatisticalmodelscorrespondingtothetwoclassesofdatadiscussedinChapter2.Basedonpreviousstatisticalwork[ 18 ],wefeelthatthesestatisticalmodelsshouldcapturethetemporalstatisticalpropertiesofneuralringsthatcharacterizethemonkey'sarmmovementorlackthereof.Onesuchstatisticalmodel,theHiddenMarkovModelHMM,enforcesonlyweakpriorassumptionsabouttheunderlyingstatisticalpropertiesofthedata,andcanencoderelevanttemporalproperties.Fortheseimportantreasons,wechoosetomodelthetwoclassesofneuraldatastationaryvs.movingwithHMMs.ThischoicefollowsalonglineofresearchthathasappliedHMMsintheanalysisofstochasticsignals,suchasinspeechrecognition[ 4 5 ]modelingopen-loophumanactions[ 13 ],andanalyzingsimilaritybetweenhumancontrolstrategies[ 13 ]. 3.2 HiddenMarkovModelOverviewAlthoughcontinuousandsemi-continuousHMMshavebeendeveloped,discrete-outputHMMsareoftenpreferredinpracticebecauseoftheirrelativecomputationalsimplicityandreducedsensitivitytoinitialparametersettingsduringtraining[ 16 ].AdiscreteHiddenMarkovChainFigure3-1consistsofasetofnstates,interconnectedthroughprobabilistictransitions,andiscompletelydenedbythetriplet,=fA;B;g,whereAistheprobabilisticnxnstatetransitionmatrix,BistheLxnoutputprobabilitymatrixwithLdiscreteoutputsymbols,andisthen-lengthinitialstateprobabilitydistributionvector[ 14 16 ].ForanobservationsequenceO,welocallymaximizePjOi.e.probabilityofmodelgivenobservationsequenceOwiththeBaum-WelchExpectation-MaximizationEMalgorithm.ThediscreteHMMsdiscussedinthisthesis,aretrainedonnite-lengthsequences,sothatrareeventswithnonzeroprobabilitymaybepossibleyet,atthesametime, 17


18 maynotbereectedinthedatai.e.maynothavebeenobserved.TheprobabilitiescorrespondingtosucheventswillthereforeconvergetozeroduringHMMtraining.Alternatively,asamplesequencemayhaveerroneousreadingsduetosensorfailure,etc.,andsuchsequenceswillevaluatetozeroprobabilityonHMMspreviouslytrainedonlessnoisydata.Inordertotraindiscrete-outputHMMsoncontinuous-valueddataeectively,weusediscretizationcompensation,namely,semi-continuousevaluation.Insemi-continuousevaluation,theHMMisrsttrainedondiscretedatavectorquantized{VQ{fromreal-valueddata.Whennewsequencesofreal-valueddataneedtobeevaluated,weassumethattheVQcodebookpreviouslygeneratedrepresentsamixtureofGaussianswithsomeuniformvariancethatcanbethoughtofasasmoothingparameter.Below,werstdiscusstheoverallstructureofthisVQ-HMMandthendetailthetrainingoftheclassierinsection3.2.2. Figure3{1:DiscreteHMMchain 3.3 VectorQuantizing-HMM 3.3.1 StructureInthissection,webroadlydescribeourVQ-HMM-basedclassierFigure3-2illustratestheoverallstructure.Theclassierworksasfollows:1.Attimeindext,weconvertaneuralringvectorvtoflength104equaltonumberofneuralchannelsintoadiscretesymbolOtinpreparationfordiscreteoutputHMM


19 Figure3{2:Stationary/movingclassiers evaluation.Themethodofsignal-to-symbolconversionwillbediscussedlaterinthissection.2.Next,weevaluatetheconditionalprobabilitiesPOjsandPOjm,where,O=fOt)]TJ/F23 7.97 Tf 6.587 0 Td[(N+1;Ot)]TJ/F23 7.97 Tf 6.587 0 Td[(N+2;Ot)]TJ/F21 7.97 Tf 6.587 0 Td[(1;Otg;N>1; .1 andsandmdenoteHMMsthatcorrespondtothetwopossiblestatesofthemonkey'sarmstationaryvs.moving.3.Finally,wedecidethatthemonkey'sarmisstationaryif,POjs>POjm; .2 andismovingif,POjm>POjs: .3 InordertoexplicitlycomputePOj,weusethepracticalandecientforwardalgorithm[ 16 ].Fortheforwardalgorithm,letusdeneaforwardvariableti:ti=PO1;:::;Ot;Xij; .4


20 whichreferstoprobabilityofthepartialobservationsequencefO1;:::;OtgandbeinginstateXiattimet,giventhemodel[ 16 ].AsexplainedbyRabner[ 16 ]andothers[ 14 ]thetvariablescanbecomputedinductivelywiththeuseoftheprobabilistictransitionandoutputmatrixes,inturn,weevaluatePOjwith:POj=NXi=1Ti: .5 Furthermore,theclassicationdecisioninEquations3.2and3.3isrelativelysimplisticinthatitdoesnotoptimizeforoverallclassicationperformance,anddoesnotaccountforpossibledesirableperformancemetrics.Forexample,itmaybeveryimportantforaneventualmodelingschemetoerronthesideofpredictingarmmotioni.e.movingclass.Therefore,wemodifyourpreviousclassicationdecisiontoincludethefollowingclassicationboundary:POjm POjs=y; .6 whereynownolongerhastobestrictlyequaltoone.Notethatbyvaryingthevalueofy,wecanessentiallytuneclassicationperformancetotourparticularrequirementsforsuchaclassier.Moreover,optimizationoftheclassierisnownolongerafunctionoftheindividualHMMevaluationprobabilities,butratherafunctionofoverallclassicationperformance.Inthefollowingsubsection,wediscusssignal-to-symbolconversionandHMMtraininginsomewhatgreaterdetail. 3.3.2 TrainingOurparticulardatasetcontained23000discretebinnedringcountsforthe104neuronseachbinnedcountcorrespondstothenumberofringsper100ms.Asdiscussed,wemustrstconvertthismulti-dimensionalneuralspikedatatoasequenceofdiscretesymbols.Thisprocessinvolvesvectorquantizingtheinput-spacevectorsto


21 discretesymbolsinordertousediscrete-outputHiddenMarkovModels.Wechoosethewell-knownLBGVQalgorithm[ 6 ],whichiterativelygeneratesvectorcodebooksofsizeL=2m,andcanbestoppedatanappropriatelevelofdiscretization,asdeterminedbytheamountofavailabledata.Byoptimizingthevectorcodebookontheneuralspikedata,weseektominimizetheamountofdistortionintroducedbythevectorquantizationprocess.Figure2-3illustratestheLBGVQalgorithmonsomesynthetic,two-dimensionaldatagrayarea. Figure3{3:LGBVQalgorithmon2Dsyntheticdata Aftervectorquantizingtheinput,weusethegenerateddiscretesymbolsasinputtoaleft-to-rightorBakisHMMchain;inthisstructurenon-zeroprobabilitytransitionsbetweenstatesareonlyallowedfromlefttoright,asdepictedintheHMMsinFigure6.Giventhatweexpectthemonkey'sarmmovementtobedependentnotonlyoncurrentneuralrings,butalsoonarecenttimehistoryofrings,wetraineachoftheHMMmodelsonobservationsequencesoflengthN.Sincetheneuralspikedatausedinthisstudyisbinnedat100msec,N=10,forexample,wouldcorrespondtoneuralspikedataoverthepastonesecondEquation3.1.Duringrun-timeevaluationofPOjsandPOjm,weusethesamevalueofNaswasusedduringtraining.


In order to maximize the probability of the observation sequence O, we must estimate the model parameters (A, B, \pi) for both \lambda_M and \lambda_S. This is a difficult task; first, there is no known way to analytically solve for the parameters that will maximize the probability of the observation sequence [16]. Second, even with a finite amount of observation sequences, there is no optimal way to estimate these parameters [16]. In order to circumvent this issue, we use the iterative Baum-Welch method to choose \lambda = \{A, B, \pi\} that will locally maximize P(O|\lambda) [14]. Specifically, for the Baum-Welch method we provide a current estimate of the HMM \lambda = \{A, B, \pi\} and an observation sequence O = \{O_1, ..., O_T\} to produce a new estimate of the HMM given by \bar{\lambda} = \{\bar{A}, \bar{B}, \bar{\pi}\}, where the elements of the transition matrix \bar{A} are

\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \quad i, j \in \{1, ..., N\};    (3.7)

similarly, the elements of the output probability matrix \bar{B} are

\bar{b}_j(k) = \frac{\sum_{t=1,\, O_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}, \quad j \in \{1, ..., N\},\; k \in \{1, ..., L\};    (3.8)

and finally the vector \bar{\pi},

\bar{\pi}_i = \gamma_1(i), \quad i \in \{1, ..., N\};    (3.9)

where

\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)}{P(O|\lambda)}    (3.10)

and

\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j).    (3.11)

Please note, \beta is the backward variable, which is similar to the forward variable \alpha except that now we propagate the values back from the end of the observation sequence, rather than forward from the beginning of O [14].
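One pass of the re-estimation in Equations 3.7-3.11 can be sketched as below. This is a minimal, unscaled implementation (suitable only for short sequences); the toy model and observation sequence are assumptions for illustration.

```python
def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch re-estimation step for a discrete-output HMM.

    Computes the forward (alpha) and backward (beta) variables, the
    xi and gamma quantities of Equations 3.10-3.11, and returns the
    re-estimated (A, B, pi) of Equations 3.7-3.9 plus P(O|lambda).
    """
    n, T, L = len(pi), len(obs), len(B[0])
    alpha = [[0.0] * n for _ in range(T)]
    beta = [[1.0] * n for _ in range(T)]
    for i in range(n):
        alpha[0][i] = pi[i] * B[i][obs[0]]
    for t in range(1, T):
        for j in range(n):
            alpha[t][j] = sum(alpha[t - 1][i] * A[i][j]
                              for i in range(n)) * B[j][obs[t]]
    for t in range(T - 2, -1, -1):
        for i in range(n):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(n))
    p_obs = sum(alpha[T - 1])
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / p_obs
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    gamma = [[alpha[t][i] * beta[t][i] / p_obs for i in range(n)]
             for t in range(T)]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(n)] for i in range(n)]
    new_B = [[sum(gamma[t][j] for t in range(T) if obs[t] == k) /
              sum(gamma[t][j] for t in range(T))
              for k in range(L)] for j in range(n)]
    new_pi = gamma[0][:]
    return new_A, new_B, new_pi, p_obs

# Toy 2-state, 2-symbol model; the likelihood must not decrease across steps.
A = [[0.5, 0.5], [0.5, 0.5]]
B = [[0.6, 0.4], [0.3, 0.7]]
pi = [0.5, 0.5]
obs = [0, 1, 0, 0, 1]
A1, B1, pi1, p0 = baum_welch_step(A, B, pi, obs)
_, _, _, p1 = baum_welch_step(A1, B1, pi1, obs)
```

The monotone-likelihood property of Baum-Welch (p1 >= p0 here) is what makes iterating the step a valid local maximization of P(O|\lambda).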


3.3.3 Results

Given our classifier structure in Figure 4 and our decision rule in Equation 3.4, there are a number of design parameters that can be varied to optimize classification performance:

L = number of prototype vectors in the VQ codebook;
N = length of observation sequences;
n = number of states for the HMM;
y = classifier threshold boundary.

In Tables 2 and 3, we report experimental results for different combinations of the four parameters and subsets of neural channels. These tables are a small representation of the classification results produced from a large number of conducted experiments. The L parameter (no. of prototype vectors) was varied from 8 to 256; the N parameter (observation length) was varied from 5 to 10; and the n parameter (no. of states) was varied from 2 to 8. The two tables differ in how the data was split into training and test sets. In the 'leaving-k-out' approach (Figure 3-4), we took random segments of the complete data, removed them from the training data, and reserved them for testing; care was taken that no overlap occurred between the training and test data. In the second approach (Table 3), we split the data sequentially into training and test data in equivalent fashion to our group members at UF [2]. The advantage of the first testing approach is that we can repeat the procedure an arbitrary number of times, leading to more test data, and hence, more statistically significant results. Alternatively, the advantage of the second testing approach is that it uses test data in a manner more likely to be encountered in an eventual BMI system, where a period of training would be followed by a subsequent period of testing.
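The 'leaving-k-out' split described above can be sketched as follows; the segment count, segment length, and toy data are assumptions for illustration, not the actual experimental values.

```python
import random

def leave_k_out_split(data, k_segments, seg_len, seed=0):
    """Reserve k random, non-overlapping segments of `data` for testing;
    everything outside those segments is kept for training."""
    rng = random.Random(seed)
    starts = []
    while len(starts) < k_segments:
        s = rng.randrange(0, len(data) - seg_len + 1)
        if all(abs(s - s0) >= seg_len for s0 in starts):  # no overlap
            starts.append(s)
    test_idx = sorted(i for s in starts for i in range(s, s + seg_len))
    reserved = set(test_idx)
    train = [x for i, x in enumerate(data) if i not in reserved]
    test = [data[i] for i in test_idx]
    return train, test

# Toy stand-in for the binned firing-count sequence.
data = list(range(100))
train, test = leave_k_out_split(data, k_segments=3, seg_len=10)

# The sequential alternative simply splits once at a fixed point, e.g.:
seq_train, seq_test = data[:70], data[70:]
```

Repeating `leave_k_out_split` with different seeds yields the multiple test sets that make the first approach more statistically powerful, while the one-shot sequential split mimics eventual BMI deployment.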


At this point, we make some general observations about the results in Figures 3-4 and 3-5. First, the displacement-based segmentation results are substantially better than the velocity-based segmentation results. Second, we note that the results in Figure 3-4 are marginally better than those in Figure 3-5. We suggest that one reason for this is that the neural encoding of the small population of 104 neurons is non-stationary to some extent. If the data is non-stationary, we should expect the first testing approach to produce better results, since test data in the 'leaving-k-out' approach is taken from within the complete data set, while the second testing approach takes the test data from the tail end of the complete data set. Overall, we note that the sequential testing with displacement-based data is probably the best. We also note that since a subset of neural channels at the input yielded the best performance, some of the measured neural channels may offer little information about the monkey's arm movement, which subsequently directs the motivation in the next section.

Figure 3-4: "Leaving-k-out" testing


Figure 3-5: Sequential testing

3.4 Factorial Hidden Markov Model

3.4.1 Motivation

As mentioned in Section 3.3.3, our previous classifier required the conversion of the multi-dimensional neural data into a discrete symbol for the discrete-output HMMs [7]. We used the LBG-VQ algorithm, since it has the ability to generate this discrete symbol with a relatively minimal amount of distortion. Unfortunately, this 'minimal' distortion was later revealed to hamper classification performance when used with the neural data [7]. We note that since the 104-channel data does not form tight clusters in the 104-dimensional input space, the VQ signal-to-symbol conversion introduces a substantial loss of information and consequently degrades classification performance [6]. Combining this result with the error that occurs from the linear models within the bi-model mapping framework, we are only able to produce neural-displacement mapping results marginally better than our group's previous work [2].


As discussed, we attempted to circumvent these VQ HMM limitations by exploring different neural subsets to see if we could eliminate noisy (unimportant) neurons and retain useful ones. To differentiate important neurons from unimportant neurons, we examined how well an individual neuron can classify movement vs. non-movement when trained and tested on an individual HMM chain. We are able to directly train a single HMM chain since the neural data is already in discrete form, ranging in value from zero to twenty firings per 100 ms bin. During the evaluation of these particular HMMs, we compute the conditional probabilities P(O^k|\lambda^k_s) and P(O^k|\lambda^k_m) for each neural channel k with its respective observation sequence of binned firing counts. To give a qualitative understanding of these weak classifiers, we present in Figure 3-6 the probabilistic ratios from the top 14 single-channel HMM classifiers (shown between the top and bottom movement segmentations). Specifically, we present the probabilistic ratio

\frac{P(O^k|\lambda^k_m)}{P(O^k|\lambda^k_s)},    (3.12)

for each neural channel in a grayscale gradient format; darker bands represent ratios larger than one (indicating a stronger probability toward movement) and lighter bands represent ratios smaller than one (indicating a stronger probability toward non-movement). Probabilities roughly equal to one another show up as grey bands. We glean from this figure that the group of single-channel HMMs can roughly predict movement and non-movement from the data. In order to observe the relevance of these single HMM chains further, we compute the average of the probabilistic ratios

\frac{1}{K} \sum_{k=1}^{K} \frac{P(O^k|\lambda^k_M)}{P(O^k|\lambda^k_S)}    (3.13)

for a given observation sequence. Figure 3-7 presents the average of the ratios (light grey), as well as the variance (dark grey) of the single-channel HMM output probabilities.
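The per-channel ratio of Equation 3.12 and its average and variance across channels (Equation 3.13) can be computed as below; the per-channel likelihoods are made-up placeholders for what the trained single-channel HMMs would emit.

```python
def ratio_stats(p_move, p_stat):
    """Per-channel ratios P(O^k|m)/P(O^k|s) (Equation 3.12), plus their
    mean and variance across the K channels (Equation 3.13)."""
    ratios = [m / s for m, s in zip(p_move, p_stat)]
    mean = sum(ratios) / len(ratios)
    var = sum((r - mean) ** 2 for r in ratios) / len(ratios)
    return ratios, mean, var

# Hypothetical per-channel likelihoods for one observation window.
p_move = [1e-3, 4e-4, 9e-4, 2e-4]
p_stat = [5e-4, 8e-4, 3e-4, 4e-4]
ratios, mean, var = ratio_stats(p_move, p_stat)
is_moving = mean > 1.0   # threshold boundary of 'one'
```

Mapping each entry of `ratios` to a gray level (darker above one, lighter below) reproduces the banded display described for Figure 3-6.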


Figure 3-6: Single neural channels

Figure 3-7: Average ratios

We also superimpose our segmentations of movement and non-movement with a dotted line in order to demonstrate that the larger of the two probabilities will cause the output ratio to dip below or rise above the threshold boundary of 'one'. Specifically, the averaged ratios appear to be significantly larger than the threshold boundary during movement and less than the boundary during non-movement. The ratios that appear near the threshold boundary of 'one' indicate that the probabilities of movement and non-movement are equivalent. The above analysis led us to believe that by combining these weak single-channel predictions, we could generate an improved classification, and subsequently, an improved


final 3D mapping. In the next section, we investigate a framework that will incorporate the probabilistic output from these weak classifiers (see Table 3-1) into a strong (correct) classification.

Table 3-1: Classification performance of select neurons

Neuron #   Stationary %   Moving %
23         83.4           75.0
62         80.0           75.3
8          72.0           64.7
29         63.9           82.0
72         62.6           82.6

3.4.2 Structure

In Section 3.4.1, we illustrated the result of finding the average ratio of the output probabilities from individual movement/non-movement HMM chains using corresponding neural channels as input. This simple measure motivated our first attempt to combine the probabilities into a simple classifier. We used the average ratio and applied a decision rule to the threshold in order to determine whether movement or non-movement occurred:

\frac{1}{K} \sum_{k=1}^{K} \frac{P(O^k|\lambda^k_M)}{P(O^k|\lambda^k_S)} > y.    (3.14)

Although simplistic, we demonstrate in Section 3.3.5 that this approach is remarkably better than the VQ-HMM model. Unfortunately, the ratio itself is susceptible to infinitesimal probabilities, which in turn can cause extremely large output values. Consequently, these large ratios will bias the overall model and increase erroneous classifications. To minimize this bias, we apply the log to Equation 3.2:

\frac{1}{K} \sum_{k=1}^{K} \log \frac{P(O^k|\lambda^k_M)}{P(O^k|\lambda^k_S)}.    (3.15)


Figure 3-8: Before and after log

Figure 3-9: General FHMM

In turn, this approximates:

\log \prod_{k=1}^{K} \frac{P(O^k|\lambda^k_M)}{P(O^k|\lambda^k_S)}.    (3.16)

We see that by applying the log to the ratios (Eq. 3.6), we are essentially finding the relative scaling between each chain and avoiding the effects of any infinitesimal output probabilities (Eq. 3.7). Similarly, summing the log ratios amounts to finding the log likelihood, and subsequently takes the form of a particular HMM variation known as the Factorial Hidden Markov Model (FHMM) [8]. The graphical model for a general FHMM is shown in Figure 3-9. The system is composed of a set of K chains indexed by k. The state node for the kth chain at time


Figure 3-10: Comparison of our FHMM to general model

t is represented by X_t^k, and the transition matrix for the kth chain is represented by A^k. The overall transition probability for the system is obtained by taking the product across the intra-chain transition probabilities:

P(X_t | X_{t-1}) = \prod_k A^k(X_t^k | X_{t-1}^k).    (3.17)

Our departure from this general model occurs at the output (vector) node (Figure 3-3). Instead of each chain being stochastically coupled at the output node (represented by a vector), our HMM multi-chain structure independently uses a single element from the output vector node for each chain (Figure 3-4), leaving the chains fully uncoupled. This FHMM variant is used in other research involving speech processing [9] and has an association with another structure called parallel model composition [9]. Specifically, during evaluation, our model uses the binned neural spike data from the kth channel in order to evaluate the kth conditional probabilities

P(O^k|\lambda^k_S) \text{ and } P(O^k|\lambda^k_M),    (3.18)

where

O^k = \{O^k_{t-N+1}, O^k_{t-N+2}, \ldots, O^k_{t-1}, O^k_t\}, \quad N > 1,    (3.19)

and \lambda^k_s and \lambda^k_m denote HMM parameters that represent the two possible states of the monkey's arm (moving vs. non-moving) for a particular HMM chain k. Since our FHMM


reduces to a set of uncoupled HMM chains, we evaluate the individual chains with the same procedure described in Section 3.3.2. Before evaluation, though, each HMM chain is first trained on the neural spike data using the Baum-Welch algorithm, which we describe in the next section.

3.4.3 Training

Updating the parameters for a FHMM is an iterative, two-phase procedure and is only slightly different from the training of a single HMM chain as described in Section 3.3.3. In the first phase, we use the Baum-Welch method to calculate expectations for the hidden states. This is done independently for each of the K chains, making reference to the current values of the parameters \phi^k. In the second phase, the parameters \phi^k are updated based on the expectations computed in the first phase [16]. The procedure then returns to the first phase and iterates. We note that the input into each left-to-right (or Bakis) HMM chain k is the spiking bin counts of a corresponding neuron k. Coincidentally, our simple variation of the FHMM naturally appears as a substructure approximation (Figure 3-10) to the computationally difficult joint probability distribution

P(\{X_t^k, Y_t\} | \phi) = \prod_{k=1}^{K} \left[ \pi^k(X_1^k) \prod_{t=2}^{T} A^k(X_t^k | X_{t-1}^k) \right] \prod_{t=1}^{T} P(Y_t | X_t^k)    (3.20)

[8, 14, 16]. Consequently, our FHMM variant has an advantage in the training procedure, since it simplifies to the training procedure for a single HMM chain described in Section 3.2.2, just repeated K times. Additionally, with the reduction in computation, our model can be trained with pre-existing software and even has the ability to be distributed over a parallel computing architecture (as opposed to a general FHMM). In Chapter 6, we detail our particular distributed implementation of this model using such an architecture.
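Because the chains are fully uncoupled, evaluating this FHMM variant amounts to summing per-chain log likelihood ratios (Equation 3.16) and comparing the sum against a log-domain threshold. A sketch follows; the per-chain log likelihoods are placeholders for what the trained single-channel HMMs would produce.

```python
def fhmm_decision(loglik_move, loglik_stat, log_y=0.0):
    """Uncoupled-FHMM classifier: sum log P(O^k|m) - log P(O^k|s)
    over the K chains; working in log space sidesteps infinitesimal
    probabilities while preserving each chain's relative scaling."""
    score = sum(lm - ls for lm, ls in zip(loglik_move, loglik_stat))
    return score, score > log_y

# Hypothetical per-chain log likelihoods (one pair per neural channel).
loglik_move = [-7.2, -8.1, -6.9, -7.5]
loglik_stat = [-7.9, -7.8, -8.4, -8.0]
score, is_moving = fhmm_decision(loglik_move, loglik_stat)
```

Because each chain contributes an independent additive term, the K sums can be computed on separate processors and merged, which is what makes the model straightforward to distribute.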


Figure 3-11: Biasing plot for naive classifier

3.4.4 Results

After observing some qualitative properties of the ratios, we now seek to quantify our initial approach (Equation 3.2) as well as our FHMM classifier. In Figure 3-12, we present a biasing plot of our initial classifier with the decision criterion

\frac{1}{K} \sum_{k=1}^{K} \frac{P(O^k|\lambda^k_M)}{P(O^k|\lambda^k_S)} > y.    (3.21)

From this graph, we notice that as the threshold boundary is manipulated, classification performance for the movement/stationary classes shifts. We can clearly see that the joint maximum (or equilibrium point) occurs near the area where the threshold is 1.04 (a value of one represents the 1:1 relationship of the ratio of probabilities). We also observe that this simple classifier is a significant improvement in classification as compared to our previous VQ-HMM classifier, since the equilibrium point shows that movement and non-movement classifications occur around 94%, as opposed to 87% in our previous work. Note that without the threshold, the results do not show any important significance (beyond being better than random). As explained earlier, the criterion in Equation 3.14 is prone to extreme bias if an individual classifier produces infinitesimal probabilities. In an attempt to avoid biased probabilistic ratios, we also evaluated the ratio-of-means criterion


Figure 3-12: Biasing plot for modified naive classifier

\frac{\frac{1}{K}\sum P(O^k|\lambda^k_M)}{\frac{1}{K}\sum P(O^k|\lambda^k_S)} = \frac{\sum P(O^k|\lambda^k_M)}{\sum P(O^k|\lambda^k_S)} > y.    (3.22)

Unfortunately, we see in Figure 3-12 that despite its ability to avoid infinitesimal probabilities, the ratio-of-means classifier produced inferior results compared to the initial mean-of-ratios classifier. Considering our discussion earlier, we know that since some neurons are more tuned to movement/non-movement than others, by averaging the probabilities we degrade individual neural contributions and simply provide a diluted consensus as to which motion primitive is occurring. Therefore, finding the relative scaling between the outputs of single HMM chains allows us to incorporate amplified predictions into a better overall classification. Our FHMM model has the ability to find the relative scaling between the outputs and can circumvent infinitesimal probabilities. In Figure 3-13 we see that the FHMM is able to achieve much better performance than the ratio-of-means classifier and slightly better performance than the mean-of-ratios classifier. Consequently, we must now observe which model, after training, can calculate a threshold boundary that will allow maximal classifications during the testing of new data. Without the calculation of this threshold boundary, none of the multi-chain models can perform better than the VQ HMM model.
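The bias discussed above can be demonstrated numerically: a single infinitesimal denominator dominates the mean-of-ratios criterion, while the ratio-of-means and the log-domain FHMM score remain well behaved. The probabilities below are contrived solely to illustrate this.

```python
import math

# Hypothetical per-channel likelihoods; channel 2's stationary
# likelihood is infinitesimal.
p_move = [2e-4, 3e-4, 2.5e-4, 3e-4]
p_stat = [1e-4, 1.5e-4, 1e-300, 1.5e-4]
K = len(p_move)

mean_of_ratios = sum(m / s for m, s in zip(p_move, p_stat)) / K
ratio_of_means = sum(p_move) / sum(p_stat)
fhmm_score = sum(math.log(m) - math.log(s)
                 for m, s in zip(p_move, p_stat))

# One extreme ratio (2.5e296) swamps the mean-of-ratios criterion,
# whereas the other two criteria stay finite and interpretable.
```

The trade-off mirrors the text: ratio-of-means is stable but dilutes strong channels into a consensus, while the log-sum keeps each channel's relative scaling without the raw ratio's explosive bias.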


Figure 3-13: FHMM evaluation

Figure 3-14: Training data on naive classifier


Figure 3-15: Training data on FHMM

Since the mean-of-ratios classifier retains problems with infinitesimal output values, we only compare the ratio-of-means classifier and the FHMM. We see in Figures 3-14 and 3-15 that both methods demonstrate that a similar threshold point is achievable with the training set. Unfortunately, we also notice a distinct classification difference. Specifically, we observe that the ratio-of-means classifier in Figure 3-14 performs less effectively than the FHMM, which confirms our suggestion that the overall averaging effect dilutes the strong neural classifiers into a weak classification (since training data should produce higher classifications). Alternatively, Figure 3-15 shows FHMM results that are consistent with a model trained and tested with the same data (high classifications: 97%).

3.5 Discussion

We make several observations about our results. First, there appears to be a significant statistical difference in the neural spike data for the two classes of arm motion. Second, increased temporal structure leads to better classification performance; that is, arm motion is correlated not only with the most recent neural firings, but with a short-time history of neural firings. We can also hypothesize a number of sources of residual classification error.


First, the amount of data analyzed in these experiments is relatively limited, given the size of the statistical models employed. Note, for example, that the classification results for the moving-arm class are worse than those for the stationary class; this may be a reflection of the relative amounts of data available for training and testing in each class. Second, in the signal-to-symbol conversion of the multi-channel data for the VQ HMM, we lose a substantial amount of information. Even for 256 prototype vectors, the consequent distortion (uncertainty) in the symbol data is substantial. We see that by using the FHMM to break down the problem space into individual HMM chains, we avoid the distortion introduced by VQ and can classify the monkey's arm state more accurately. Overall, we see that the FHMM is a better switching model than our previous attempts. We demonstrated that it could avoid infinitesimal probabilities and yet find a relative scaling between the movement and non-movement probabilities. Additionally, the FHMM is able to represent a large effective state space with a much smaller number of parameters than a single unstructured HMM. Consequently, this model is easily distributed across a parallel computing architecture and can be used with pre-existing training/evaluation software. Finally, we remark that since the FHMM is a probabilistic framework, we can find other unique connections between the nodes in this graphical model to better exploit the spatial and temporal dependencies of the neuronal firings. In the next chapter, we describe how we use our switching classifier in combination with multiple continuous local linear models to predict the 3D coordinates of the monkey's hand.


CHAPTER 4
LOCAL LINEAR MODELS WITH HMM PARTITIONING

4.1 Structure

The final step in our bimodal mapping structure is to take the outputs from the HMM-based classifier and generate an overall mapping of neural data to 3D arm position. To establish a baseline for this approach (namely, the prior segmenting of neural data at the input into multiple classes), we assign a single local-linear model (LLM) to each class, and train each of the LLMs only on data that corresponds to its respective class, as shown in Figure 4-1. Each LLM adapts its weights using normalized least mean squares (NLMS) [15]. After training, test inputs are fed first to the HMM-based classifier, which acts as a switching function for the LLMs. Based on the relative observation probabilities produced by the two HMMs and the decision boundary, as given in Equation 3.4, one of the two LLMs is selected to generate the continuous 3D arm position. With properly trained HMMs, the bimodal system should be able to estimate hand positions with reasonably small errors.

Figure 4-1: Bimodal mapping overview

Given 104 neural channels, each LLM is defined with 10 time delays (1 sec) and 3 outputs, so that its weight vector has 3,120 elements (Figure 4-2). The LLMs were trained on a set of 10,000 consecutive bins (1,000 sec) of data with an NLMS learning


rate of 0.03. Weights for each LLM were adapted for 100 cycles. After training, all model parameters were fixed and 2,988 consecutive bins of test neural data were fed to the model to predict hand positions. The results of the experiments were then evaluated in terms of short-time correlation coefficients and the short-time signal-to-error ratio (SER) between actual and estimated arm position. For each measure, the short-time window was set to 40 bins (4 sec), since a typical hand movement lasts approximately four seconds.

Figure 4-2: Local linear model

Of course, a correlation coefficient value of 1 indicates a perfect linear relationship between the desired (actual) and predicted system trajectories, while 0 indicates no linear relationship. The second measure, the SER, is defined as the power of the desired signal divided by the power of the estimation error. Since a high correlation coefficient cannot account for biases between the two trajectories, the SER complements the correlation coefficient to give a more meaningful measure of prediction performance. Finally, all of our bimodal-system results are compared with two other approaches, namely, a recurrent neural network (RNN) [11] and a single LLM.

4.2 Results

In this section, we report results for neural-to-motor mappings of a single LLM, a recurrent neural network (RNN) and the bimodal system. Since the segmentation


results are better for the displacement-based segmentation, we use these HMMs in the first stage of the bimodal system. In Figure 4-3, we plot the predicted hand trajectories of each modeling approach, superimposed over the desired (actual) arm trajectories for the test data; for simplicity, we only plot the trajectory along the z-coordinate. Qualitatively, we observe that the bimodal system performs better than the others in terms of reaching targets; this is especially evident for the first, second, and seventh peaks in the trajectory. We also observe that during stationary periods the bimodal system produces less noisy outputs than the other models.

Figure 4-3: Predicted and actual hand trajectories: (A) Single LLM, (B) RNN, (C) Bi-modal

Overall, prediction performance of the bimodal system is better than the RNN, and superior to the single LLM, as evidenced by the empirical cumulative error measure (CEM), plotted in Figure 4-4. Figure 4-4 shows that the population distribution functions of the L2-norms of the error vectors of the bimodal system and the RNN are similar, and significantly better than that of the single LLM. The correlation coefficients over the whole test set, averaged over all three dimensions, are 0.64, 0.75, and 0.80 for the single LLM, the RNN, and the bimodal system, respectively. The means of the SER averaged over all dimensions for the single LLM, the RNN, and the bimodal system are -20.3 dB (+/- 1.6 SD), -12.4 dB (+/- 16.5), and -15 dB (+/- 18.8), respectively. Although these measurements give us insight into the overall performance of the models, they fail to express the difference in accuracy between each model when predicting movement. The accuracy in movement is more important than in non-movement, since we can remove non-movement errors by using the output of


Figure 4-4: CEM plots

the HMM as a filter. This led us to compute the SER and CCs over movement sections of the test set (partitioned by hand). The CCs for only the movement sections were 0.83 +/- 0.07, 0.84 +/- 0.14 and 0.86 +/- 0.11 for the LLM, RNN and bimodal system, respectively. The SER over the movement sections are 2.96 +/- 2.68, 6.36 +/- 3.71 and 8.48 +/- 4.47 for the LLM, RNN and bimodal system, respectively.

4.3 Conclusion

We see that the estimation performance of the hand trajectory of the proposed bimodal system is better than the RNN, and superior to a single LLM. It is also apparent from the results that using multiple models improves the estimation performance compared to the single filter, although it adds more computational complexity. Compared to the RNN, the bimodal system reduces the complexity of training significantly and


Figure 4-5: SER plot

produces a more accurate estimation. The drawback with the bimodal system is that its estimation mainly depends on the classification ability of the HMM, as seen in Figure 4-1. Chapter 6 focuses on ways to improve HMM classifications to remove these false errors.
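The NLMS adaptation used by each local linear model can be sketched as below. The dimensions are shrunk from the 104-channel, 10-tap case, the data is synthetic, and the step size is enlarged so the toy example converges quickly (the experiments above used a learning rate of 0.03 over many cycles).

```python
def nlms_update(w, x, d, mu=0.5, eps=1e-8):
    """Normalized LMS step: w <- w + mu * e * x / (||x||^2 + eps)."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # linear model output
    e = d - y                                  # estimation error
    norm = sum(xi * xi for xi in x) + eps
    return [wi + mu * e * xi / norm for wi, xi in zip(w, x)], e

# Synthetic target mapping from 3-tap 'firing count' inputs to one output.
target = [0.5, -0.2, 0.1]
inputs = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0],
          [2.0, 1.0, 1.0], [1.0, 0.0, 2.0]] * 50
w = [0.0, 0.0, 0.0]
for x in inputs:
    d = sum(ti * xi for ti, xi in zip(target, x))   # desired output
    w, e = nlms_update(w, x, d)
```

Normalizing by the input power makes the step size insensitive to the scale of the firing counts, which is why NLMS is preferred over plain LMS for this kind of input.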


CHAPTER 5
DISCRETE SIGNAL PROCESSING HARDWARE IMPLEMENTATION FOR A BMI

5.1 Introduction

As described in Chapter 2, the feasibility of building brain-machine interfaces (BMIs) has been demonstrated with the use of digital computational hardware [24, 25]. For these interfaces, researchers first acquire analog neural recordings and process them through spike-detection hardware and software. Once the neural signals are processed, large rack-mountable high-end processors are used in conjunction with Matlab-enabled PCs to predict a subject's arm trajectory in real time [10, 25]. The ultimate BMI goal envisions that free-roaming subjects will possess the prediction algorithms and hardware in vivo as they physically interact with the world. Unfortunately, at the current stage of development, most researchers require the subjects to be tethered to a cluster of immobile machines. Other researchers have removed this tether by wirelessly transmitting the analog neural signal off the subject, but still require a local cluster of PCs for predicting the output [20, 23]. Although this wireless approach is more in line with the ultimate BMI goal, research emphasis has been placed on shrinking the wireless acquisition hardware, which still requires large immobile machines to predict trajectories [25]. Additionally, the transmission of neural waveforms has bandwidth limitations. A bandwidth bottleneck occurs as more neurons are sampled from the brain, making the transmission of these signals to the digital hardware arduous and power consuming, which is contrary to the necessities of such a device [20]. We believe that by shrinking the digital hardware that computes the prediction algorithms on the subject, the wireless limitations and immobility issues are both solvable. Specifically, the proposed solution is to directly connect the analog and digital subsystems with a high-speed data bus that is more power efficient and faster than the fastest available wireless link. Furthermore, using on-board digital hardware to


Figure 5-1: Hardware Overview

predict the trajectory removes the need for large external computers and approaches the ultimate BMI goal of patient mobility. In this chapter, we present a design for a wearable computational DSP system that is capable of processing various neural-to-motor translation algorithms. The system first acquires the neural data through a high-speed data bus in order to train and evaluate our prediction models. Then, via a widely used protocol, the low-bandwidth output trajectory is wirelessly transmitted to a simulated robot arm. This system has been built and successfully tested with real data. The organization of the chapter is as follows. We first cover the system requirements and then outline the system design that meets these requirements in terms of the hardware modules and the software layers. Finally, we present the results of the system, followed by a conclusion.

5.2 System Requirements

In order to create a successful system, it is necessary to first address the technical and practical aspects of such a system by determining its functionality within its intended environment. For example, how will the hardware extract information from the brain? How will it be powered? What type of processing power is necessary for a successful algorithm to be implemented? What are the physical constraints? How can faulty hardware be debugged and fixed? How will the neural predictions be expressed in the external world?


In addressing these broad requirements, we look to the ultimate BMI goal of having a transplantable chip under the skin that can acquire neural firings from the brain and translate them into action in the external world. Unfortunately, as with all long-term technological goals, we must evolve the hardware in order to verify the designs as we shrink and combine them into a final solution. At each stage of development, some of the requirements must be relaxed or tightened depending on which portion of the design we are trying to verify. Specifically, for our current stage of development, the hardware must first receive digitized neural data during the training and evaluation of the neural-to-motor prediction models. After evaluating the output, the predicted trajectory must then be transmitted off-board through a wireless connection to a receiving computer/robot arm representing the desired control. This wireless connection must also provide the ability to remotely program and diagnose the system while it is being carried by a subject. The described WIFI-DSP system serves as the digital portion of the overall BMI structure and is responsible for translating the neural firings into action in the external world. The first generation of this hardware was housed in a PCI slot of a personal computer and did not possess wireless capabilities [26]. For the second generation, discussed in this chapter, we require a design that is portable, possesses wireless capabilities and is computationally fast. Since the system needs to be portable and contain a wireless connection, we must resolve how the other system requirements are affected. First, a portable system must be lightweight and small enough for a human or primate to carry. Second, a portable system must also be self-contained and rely only on battery power. Consequently, the hardware needs to be low-power in order to extend the life of this on-board battery. The low-power constraint will in turn influence the choice of the processor, since we need a low-power device that can still achieve fast processing speeds. This power constraint also


Figure 5-2: Hardware Modules of WIFI DSP

affects the wireless connection, since it needs to be low power yet retain enough bandwidth to transmit the output trajectory and any future data streams. The prediction models running on the hardware platform also constrain the system design. Since most of the prediction models require the use of floating-point numbers and arithmetic, we need a system that can process these floating-point numbers fast enough to attain real-time model computations. Additionally, these models are sometimes large, or contain multiple versions running simultaneously, and therefore require large memory banks to handle the data throughput.

5.3 System Design

In this section, we explain what components were chosen for the system, why they were chosen and how they fulfill the requirements outlined in the previous section.


5.3.1 Processor

The central component of any computational system is the processor. The processor can sometimes determine the speed, computational ability and power consumption of the entire system. This central component also dictates what support devices are required for the design. In choosing a processor for our particular system, we looked to the previous work of our colleagues Scott Morrison and Jeremy Parks. They were able to verify that the Texas Instruments TMS320VC33 (C33) was an appropriate processor for our needs [26, 27]. It was also advantageous to use this processor since they had developed a code library and hardware infrastructure that we would not be burdened with recreating.

Figure 5-3: DSP Features

Specifically, the C33 meets our floating-point and high-speed requirements since it is a floating-point DSP capable of up to 200 MFLOPS (with overclocking). It achieves such high speeds by taking advantage of its dedicated floating-point/integer multiplier. Since this multiplier works in tandem with the ALU, the processor also has the ability to compute two mathematical operations in a single cycle. Another reason this processor is so


efficient in processing is that it can perform two reads, one multiply, and one store in a single cycle by utilizing its dual address generators for simultaneous RAM access [26]. With regard to processing our BMI algorithms, Scott Morrison was able to verify that two of our group's cornerstone algorithms worked much faster on the C33 than on a 600 MHz Pentium III computer [26, 27]. The C33 also meets our low-power constraint since it uses less than 200 mW at 200 MFLOPS. It achieves such power savings due in part to its 1.8V core and other power-saving measures built into the processor [22]. Although the processor uses such a low voltage for its core processing, the I/O supports 3.3V signals. Unfortunately, the DSP requires external translation logic to interface all 5V devices (like the PCMCIA card) in order to meet the 3.3V I/O specs. We used two components for translating 5V to 3.3V. One was the Texas Instruments SN74CBTD3384DBQR 10-bit level shifter, since it is fast (0.25 ns) and available in small 24-pin SSOP packaging. The other was an EPM3064C44 CPLD that was also used for the interconnection logic of the different hardware modules, which we discuss later in this section. Finally, the C33 was able to fulfill other requirements of the system. The first is its ability to support a large amount of memory. This is accomplished with its 24-bit address bus, allowing 16 million different memory locations. It also allows quick access to these locations by using hardware strobe lines (PAGE0-3) to directly provide access control logic for different memory blocks. This processor also meets the requirement of expandability, as discussed in the last section. It is expandable since it has four hardware interrupts, two 32-bit timers, and a DMA controller.

5.3.2 Wireless Connection

The second most important hardware module in our system is the wireless connection. We determined that 802.11b would be the most appropriate protocol since it is easy for our group and our collaborators to interface with. The protocol not only provides a large amount of bandwidth, it also has inspired a large code infrastructure


Figure 5-4: Memory map of the C33 DSP showing internal SRAM blocks

for communication clients and servers. Additionally, by using such a widely accepted protocol, instead of developing a new one, we are able to communicate with any off-the-shelf wireless device that supports 802.11b. Essentially, the system can connect to any computer or hardware device on the internet. We designed the system to use an MA401 PCMCIA wireless 802.11b card. This PCMCIA card was the smallest and fastest card available during our design process. At the core of this card is an Intersil Prism 2 chipset, which is responsible for handling most of the physical layer and MAC addressing of the 802.11b protocol. The control of this chipset involves different sequences of register calls. Subsequently, these register calls help to configure, initialize, and transmit data to/from the card. The PCMCIA card met the power requirements for this development stage since we have the ability to vary the power consumption from less than 100 mA to 300 mA


depending on the bandwidth we require. It also meets our size requirements since it is slightly larger than a credit card, as shown in Figure 5-8. This device also meets the final requirement of high bandwidth since it is capable of transferring data at 10 Mbits/s. This high bandwidth is appropriate since it may become necessary to retrieve neural data wirelessly if a sub-dermal analog acquisition system is designed and requires data transmission through the skin.

5.3.3 Boot Control

In order for the system to be self-reliant, the DSP needs to boot from an EEPROM with the appropriate system software. This booting process is accomplished through hardware interrupts on the DSP. Given a certain interrupt when the system comes out of reset or powers up, the DSP will assert certain memory locations to begin reading from. Using the CPLD to provide the control logic, along with a jumper system, we can cause the DSP to boot from either the EEPROM or the USB interface. The only time we would boot from USB is when the system is being debugged or tested during manufacturing. The EEPROM device we chose had been selected previously by Scott and Jeremy for their design and was shown to be successful with this DSP. The device is an Atmel AT29C256 and has a small PLCC package to meet our size requirements. This chip also meets our power requirements, since it only draws power during use, which occurs only at boot-up. Another feature of the device that serves our needs is its 256K of memory, so that we can place fairly large software packages within it.

Figure 5-5: Two different methods to boot the C33: EEPROM or USB


5.3.4 USB

We chose the FT245BM USB FIFO device for a USB 2.0 interface. This device is small and basic to control. Additionally, it meets our low-power constraint since it only uses 25 mA in continuous mode and 100 uA in suspend mode. Consequently, this chip provides a range of options for data throughput requirements versus power consumption. This chip also provides an 8 Mbits/s data bus for the dual purposes of data communications and system diagnosis.

Figure 5-6: Block Diagram of USB Interface

5.3.5 SRAM and Expansion

The DSP has 34K by 32-bits of internal high-speed SRAM. As mentioned earlier, the prediction models require more than this internal limit. Therefore, additional external 32-bit SRAM is required to connect to the C33 data bus. Unfortunately, many of the desired components and alternatives were not available during the design process. Consequently, we chose four 8-bit Cypress CY7C1049B-15VC memory chips. These memory chips fulfill many of the requirements of this stage of development [27]. First, they possess a fast access time (15 ns). Second, they have low active power, 320 mW


max., and low CMOS standby power, 0.75 mW max. Finally, they provide easy memory expansion with their chip enable (CE) and output enable (OE) features. Having four CY7C1049B parts yields a total of 512K by 32-bits, or 2 MB, of external memory. These four parts are incorporated using the same chip enable line connected to different bytes of the data bus, giving the appearance of a single 32-bit memory [26].

Figure 5-7: 512K by 32-bit external SRAM architecture.

5.3.6 Power Subsystem

There are three different power requirements for the WIFI-DSP: 1.8V, 3.3V, and 5V. Texas Instruments offers the TPS70351 Dual-Output LDO Voltage Regulator, which provides both 1.8V and 3.3V outputs on a single chip but requires only 5V to operate. The two output voltages are not only used by the DSP; they are also used by the other hardware modules in the system. This chip also provides the power-up sequence required by the DSP once it is initialized. Additionally, having one required input voltage source is an advantage for this portable system, since only one 5V battery supply is necessary.


Figure 5-8: WIFI-DSP system

5.4 Complex Programmable Logic Device

The Complex Programmable Logic Device (CPLD) is responsible for six hardware components: (1) the C33 DSP, (2) the PCMCIA 802.11B card, (3) the USB interface, (4) the external SRAM, (5) the bootable EEPROM, and (6) the power regulator. This chip provides the control logic between the devices using various signals, including write-enable, read-enable, chip-select, and reset signals. By using address and control signals, the CPLD is able to define multiple memory regions of the DSP so that specific hardware components can be read from and written to. Specifically, decoded address spaces were created for the EEPROM, the SRAM, the PCMCIA 802.11B card, and the USB interface. The CPLD also provides interrupt control to the DSP, based on the real-time operating system implemented in the DSP, during communication between the C33 and the USB bus. The final requirement of the CPLD is to provide an interface for any future expansion hardware that may become necessary.

Given the above control requirements for the CPLD and the number of interface signals needed, the Altera EPM3064 CPLD was chosen. A member of the Altera MAX family, this 100-pin TQFP chip provides 64 pins of I/O with 1,200 usable gates and 64 macrocells. Additionally, it is fast enough to meet our memory and bus speed requirements, as well as low-power enough to meet our current power requirements. To control the CPLD, VHDL code provides the necessary control signals for each component on the board. Once the VHDL code is compiled, the CPLD is programmed through the 10-pin ByteBlaster port. This allows the reconfigurable CPLD to serve as a flexible architecture as the BMI requirements change and modifications become necessary.

5.5 System Software

Software is one of the most important components of any hardware system. For a BMI to be successful, supporting software must accompany the final solution. Specifically for our hardware solution, there are five major levels of software in the DSP board environment: (1) PC software, (2) the DSP operating system, (3) the DSP algorithms, (4) the 802.11B code, and (5) the TCP/IP protocol. This section briefly describes the operation of the software and its interaction with the multiple hardware modules.

5.5.1 PC Software

Using Visual Basic, we wrote a PC console program to interface with the DSP through USB. The console program calls functions within the DSP OS to initiate and control the USB communication functions. The DSP OS is also responsible for reading/writing memory locations and various program-control functions. In order for communication with the DSP to work, the console program needs to talk over the USB bus. The drivers for the USB device support Windows 98, 2000, and XP, and support the function calls used here, as well as many others that were not needed for this implementation. Using the console program, the user may modify DSP configuration registers, download and execute DSP code, and view and edit memory locations. The code is easily modifiable to accommodate different testing requirements, such as different methods for streaming data to the DSP.
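The decoded address spaces that Section 5.4 attributes to the CPLD can be pictured as a simple range-to-chip-select mapping. The sketch below is a software model of that idea only: the address boundaries are hypothetical placeholders, since the real memory map lives in the board's VHDL.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the CPLD's address decoding: a DSP address
 * maps to a chip select for the EEPROM, external SRAM, PCMCIA
 * 802.11B card, or USB FIFO. All boundaries below are hypothetical
 * placeholders, not the actual memory map. */

enum chip_select { CS_NONE, CS_EEPROM, CS_SRAM, CS_PCMCIA, CS_USB };

enum chip_select decode_address(uint32_t addr)
{
    if (addr < 0x010000u) return CS_EEPROM;   /* boot code           */
    if (addr < 0x090000u) return CS_SRAM;     /* 512K x 32 ext. SRAM */
    if (addr < 0x0A0000u) return CS_PCMCIA;   /* wireless adapter    */
    if (addr < 0x0A1000u) return CS_USB;      /* FT245 FIFO          */
    return CS_NONE;
}
```

In the hardware this function corresponds to combinational logic on the upper address lines that asserts exactly one chip-select output per bus cycle.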


Figure 5-9: User interface

A simple and expandable protocol was created for PC-to-DSP communication. This protocol involves a series of 32-bit commands containing different opcodes. The DSP operating system was written to support this series of commands, which include the ability to read from the DSP, write to the DSP, and execute a program in DSP memory. In tandem with the OS layer of the DSP, there is low-level driver code for initializing and controlling the 802.11B wireless controller. This code must interact with the DSP OS and any UDP client code or algorithms running simultaneously within the WIFI-DSP. Once the prediction model completes an epoch or computation cycle, the program must interrupt the wireless card and transfer any required data. This process also involves creating the correct UDP packets for transmission to the appropriate UDP server with a specific IP address.

5.6 Results

The WIFI-DSP has been fully tested via its USB interface and wireless communication in the following manner. Neural data was acquired through the USB port and used in both training and evaluation (forward) modes on the WIFI-DSP system. Specifically, the DSP was programmed to train an NLMS algorithm and, upon completion, trajectory predictions were transmitted off board through the 802.11b wireless interface using a UDP client protocol. This communication occurred bi-directionally with an external laptop running as a typical UDP server. The LMS output results collected at the receiving computer were directly compared to Matlab-computed outputs. These results are accurate to within 7 decimal places of the Matlab double-precision results. The bandwidth of the 802.11b wireless link is around 1.8 Mbit/s in continuous operation. This is comparable to what is expected on a 3 GHz Pentium laptop using the same Netgear wireless adapter with a Prism II chipset. The current consumption is approximately 350 mA for the entire board, which equates to 1,750 mW. Of this consumed power, over 80% (1,400 mW) is used by the PCMCIA wireless adapter. Overall, this is much less than the 4 W previously attained by other acquisition hardware [21].

Figure 5-10: NLMS performance
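As a reference point for the on-board training described in the Results section, a minimal NLMS update step looks like the following. The filter order, step size mu, and regularizer are our illustrative choices; the actual tap counts and parameters used on the WIFI-DSP are not reproduced here.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal normalized LMS (NLMS) step of the kind trained on the
 * WIFI-DSP: predict, compute the error, then update the weights
 * with a step normalized by the input power, so adaptation speed
 * is insensitive to input scale. NTAPS and mu are illustrative. */

#define NTAPS 4

static double w[NTAPS];                  /* adaptive weight vector */

double nlms_step(const double x[NTAPS], double desired, double mu)
{
    double y = 0.0, power = 1e-8;        /* eps guards divide-by-zero */
    for (size_t i = 0; i < NTAPS; i++) {
        y += w[i] * x[i];                /* filter output            */
        power += x[i] * x[i];            /* instantaneous input power */
    }
    double e = desired - y;              /* prediction error */
    for (size_t i = 0; i < NTAPS; i++)
        w[i] += (mu * e / power) * x[i]; /* normalized weight update */
    return e;
}
```

Repeating this step over the neural input windows drives the error toward zero; on the board, the converged weights are then used in the forward (evaluation) mode to produce the trajectory predictions that were streamed over 802.11b.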


CHAPTER 6
FUTURE WORK

6.1 Hardware

We have a working system that demonstrates LMS training on a DSP platform. Further, we were able to verify that it wirelessly transmits results to 802.11b-enabled devices. However, we are disappointed in the size and power consumption of the WIFI-DSP board. As mentioned throughout the paper, for this development stage we relaxed some of the constraints in order to verify the technologies and fuse them into a single system. For the next generation, we want to shrink the system in half and reduce the power consumption. Because the majority of the power is consumed by the wireless adapter, it is necessary to find a lower-power wireless link. We also want to verify the ability of the WIFI-DSP to communicate directly with analog acquisition hardware.

The WIFI-DSP system demonstrates that by shrinking the digital hardware that computes the prediction algorithms onto the subject, the wireless limitations and immobility issues are both solvable. Specifically, connecting the analog and digital subsystems with a high-speed data bus is more power-efficient and faster than any wireless link. Furthermore, using on-board digital hardware to predict the trajectory removes the need for large external computers and approaches the ultimate BMI goal of patient mobility.

6.2 FHMM Applications

We note that the final prediction performance of the proposed bimodal system is much better than the RNN, and superior to that of the single LLM model. Clearly, the use of multiple models improves prediction performance over a single LLM model, at some additional computational cost. Furthermore, by increasing the repertoire of motion primitives we may improve 3D mapping by allowing the linear/nonlinear models, on the output stage of the bimodal structure, to fine-tune to a specific primitive. Since the underlying motion 'primitives' are unknown, we seek in future work to form an unsupervised method of segmenting these 'primitives'. We believe that the FHMM structure may lead us to the solution.

Figure 6-1: Derivatives of spherical coordinates with pink movement segmentations superimposed

Figure 6-1 shows where our hand segmentation incurred errors (red arrows). We want to find out what the FHMM is classifying during these user segmentation errors. Specifically, is it classifying these sections as movement despite the fact that we segmented them as stationary? In Figure 6-2, there seems to be an indication that the FHMM is in some way detecting our hand-segmentation errors (in green) and producing correct classifications despite our labeling them incorrectly. This leads us to believe that our classification results may be higher than what we are reporting, since our segmentation process is slightly flawed.


Figure 6-2: Derivatives of spherical coordinates with blue movement segmentations superimposed


Figure 6-3: Spherical coordinates vs. average ratios

The question arises as to why the FHMM failed to discover our error in the sections labeled A in both Figures 6-1 and 6-2. The FHMM appears not to recognize this section as movement, despite our claim that it is movement based on the above assumptions. We see that the monkey makes no discernible motion similar to its food-grasping task. We still observe that the monkey did move its arm in some fashion, though not fitting our criteria for segmentation. Figure 6-3 shows the probabilities plotted in time, respective to the time slice, for the desired data: (1/n) Σ P(O|M) is plotted in green and (1/n) Σ P(O|S) in blue, where M and S denote the movement and stationary models. We see a rough approximation of when the FHMM is indicating that motion may in fact be taking place at both locations A and B, which we can confirm with Figure 6-3. This tells us that the FHMM may be able to cluster the input.

Looking within the FHMM, we believe the single HMM chains themselves may also be optimized to help in our clustering process. Since we know the classification performance of the individual HMM chains during training, we believe that this information can allow us to weight the individual classifiers' performance using AdaBoost or some other scheme. In Figure 6-4, we used neuronal ranking information to see if we could observe the importance of the single HMM chains on the overall classifier. We start with the top-ranked neurons in the list and then continually add neurons from best to worst, plotting their classification performance. This experiment involved no bias/threshold. We note from Figure 6-4 that after a certain point of continually adding weaker neural classifiers, our classification performance only plateaus.

Figure 6-4: Worst-neuron to best-neuron predictors

In Figure 6-5, we again display a best-to-worst neuron-adding experiment. However, in this experiment we use a bias/threshold and find the equilibrium point (joint maximum) between both classes. In contrast to our previous graph, we see that despite adding weaker neural classifiers we observe continuing improvement in our classification results. Also of interest, the bias/threshold (in red) continues to converge to zero as we add more and more single-neural classifiers. Finally, we remark on the fact that we have not yet optimized the single HMM chains within the FHMM framework. In our previous work we exhaustively manipulated the parameters of the VQ-HMM (sequence length, states, etc.) and were able to increase our results by a few percentage points. Perhaps changing these parameters and the decision criterion can garner a performance increase.


Figure 6-5: Worst-neuron to best-neuron predictors with bias
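The biased decision rule explored in the Figure 6-5 experiment can be sketched as follows. The function name, the use of plain (rather than log) likelihoods, and the way the bias enters are our illustrative choices; only the averaging over n chains and the threshold comparison come from the discussion above.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the decision rule discussed above: average the per-chain
 * likelihoods P(O|M) and P(O|S) across the n single-neuron HMM
 * chains, then declare movement when the bias-shifted movement
 * average wins. The bias plays the role of the threshold tuned to
 * the equilibrium point between the two classes. */

int classify_movement(const double p_move[], const double p_still[],
                      size_t n, double bias)
{
    double avg_m = 0.0, avg_s = 0.0;
    for (size_t i = 0; i < n; i++) {
        avg_m += p_move[i];
        avg_s += p_still[i];
    }
    avg_m /= (double)n;
    avg_s /= (double)n;
    return (avg_m + bias) > avg_s;   /* 1 = movement, 0 = stationary */
}
```

As weaker single-neuron chains are added, the averages become smoother estimates, which is consistent with the observation that the tuned bias converges toward zero in Figure 6-5.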


REFERENCES

[1] W. T. Thach, "Correlation of neural discharge with pattern and force of muscular activity, joint position, and direction of intended next movement in motor cortex and cerebellum," Journal of Neurophysiology, vol. 41, pp. 654-676, 1978.

[2] J. C. Sanchez, D. Erdogmus, Y. Rao, J. C. Principe, M. Nicolelis, and J. Wessberg, "Learning the contributions of the motor, premotor, and posterior parietal cortices for hand trajectory reconstruction in a brain machine interface," presented at the IEEE EMBS Neural Engineering Conference, Capri, Italy, 2003.

[3] M. A. L. Nicolelis, D. F. Dimitrov, J. M. Carmena, R. E. Crist, G. Lehew, J. D. Kralik, and S. P. Wise, "Chronic, multisite, multielectrode recordings in macaque monkeys," PNAS, vol. 100, no. 19, pp. 11041-11046, 2003.

[4] X. D. Huang, Y. Ariki, and M. A. Jack, Hidden Markov Models for Speech Recognition, Edinburgh University Press, 1990.

[5] J. Yang, Y. Xu, and C. S. Chen, "Human action learning via hidden Markov model," IEEE Trans. Systems, Man and Cybernetics, Part A, vol. 27, no. 1, pp. 34-44, 1997.

[6] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Communication, vol. COM-28, no. 1, pp. 84-95, 1980.

[7] S. Darmanjian, S. P. Kim, and M. Nechyba, "Bimodal brain-machine interfaces for motor control of robotic prosthetics," IEEE IROS Conference, 2003.

[8] Z. Ghahramani and M. I. Jordan, "Factorial hidden Markov models," Machine Learning, vol. 29, pp. 245-275, 1997.

[9] A. P. Varga and R. K. Moore, "Hidden Markov model decomposition of speech and noise," IEEE Conf. Acoustics, Speech and Signal Processing (ICASSP), 1990.

[10] A. Georgopoulos, J. Kalaska, R. Caminiti, and J. Massey, "On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex," Journal of Neuroscience, vol. 2, pp. 1527-1537, 1982.

[11] S. P. Kim, J. C. Sanchez, D. Erdogmus, Y. N. Rao, J. C. Principe, and M. A. Nicolelis, "Divide-and-conquer approach for brain machine interfaces: nonlinear mixture of competitive linear models," Neural Networks, vol. 16, pp. 865-871, 2003.

[12] A. B. Schwartz, D. M. Taylor, and S. I. H. Tillery, "Extraction algorithms for cortical control of arm prosthetics," Current Opinion in Neurobiology, vol. 11, pp. 701-708, 2001.

[13] M. C. Nechyba and Y. Xu, "Learning and transfer of human real-time control strategies," Journal of Advanced Computational Intelligence, vol. 1, no. 2, pp. 137-154, 1997.

[14] L. E. Baum, T. Petrie, G. Soules, and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," Ann. Mathematical Statistics, vol. 41, no. 1, pp. 164-171, 1970.

[15] B. Widrow and S. Stearns, Adaptive Signal Processing, Upper Saddle River, NJ: Prentice Hall, 1985.

[16] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.

[17] E. Todorov, "Direct cortical control of muscle activation in voluntary arm movements: a model," Nature Neuroscience, vol. 3, pp. 391-398, 2000.

[18] S. Darmanjian, "Elementary statistical analysis of neural spike data," Internal Report, University of Florida, August 2002.

[19] M. A. Nicolelis, A. A. Ghazanfar, B. M. Faggin, S. Votaw, and L. M. Oliveira, "Reconstructing the engram: simultaneous, multisite, many single neuron recordings," Neuron, vol. 18, pp. 529-537, 1997.

[20] I. Obeid, M. Nicolelis, and P. Wolf, "A multichannel neural telemetry system," Society for Neuroscience Annual Meeting, Orlando, FL, November 2002.

[21] I. Obeid, M. Nicolelis, and P. Wolf, "A multichannel telemetry system for single unit neural recordings," Journal of Neuroscience Methods, vol. 133, no. 1-2, pp. 33-38, February 2004.

[22] Texas Instruments Incorporated, TMS320C3x User's Guide, Literature number SPRU031E, Dallas, TX, 1997.

[23] M. Nicolelis, I. Obeid, J. Morizio, and P. Wolf, "Towards wireless multi-electrode recordings in freely behaving animals," Society for Neuroscience Annual Meeting, New Orleans, LA, November 2000.

[24] M. A. L. Nicolelis, Methods for Neural Ensemble Recordings, Boca Raton: CRC Press, 1999.

[25] M. A. L. Nicolelis, "Brain-machine interfaces to restore motor function and probe neural circuits," Nature Reviews Neuroscience, vol. 4, pp. 417-422, 2003.

[26] S. Morrison, J. Parks, and K. Gugel, "A high-performance multi-purpose DSP architecture for signal processing research," Intl. Conf. on Acoustics, Speech, and Signal Processing, 2003.

[27] S. Morrison, "A DSP-Based Computational Engine for a Brain-Machine Interface," M.S. thesis, University of Florida, 2003.


BIOGRAPHICAL SKETCH

Shalom Darmanjian graduated from the University of Florida with a Bachelor of Science in Computer Engineering in December 2003. After completing this thesis in May 2005, Shalom plans to continue his pursuit of knowledge in the Ph.D. program at the University of Florida.
