Department of Computer and Information Science and Engineering Technical Report
University of Florida, Gainesville, Fla.
Publication Date: December 4, 2008
Permanent Link: http://ufdc.ufl.edu/UF00095727/00001
Training an STDP-Enabled Neuron with an Innocuous Teaching Signal

Nathan D. VanderKraats
Department of Computer and Information Science and Engineering
University of Florida
Gainesville, FL 32611
ndv@cise.ufl.edu
Subhajit Sengupta
Department of Computer and Information Science and Engineering
University of Florida
Gainesville, FL 32611
ss5@cise.ufl.edu
Arunava Banerjee
Department of Computer and Information Science and Engineering
University of Florida
Gainesville, FL 32611
arunava@cise.ufl.edu

December 4, 2008


Abstract
For a spiking neural system consisting of multiple excitatory synapses and
a single output neuron utilizing Spike Timing Dependent Plasticity
(STDP), we investigate the effects of adding a single teaching input
to train the system in a physiologically realistic fashion. This teaching
signal, by directly affecting the output spike train, is made to indi-
rectly affect the entire set of inputs' synaptic weights through STDP.
Remarkably, this method is shown to increase the performance of an
output neuron executing a symbolic classification task on the inputs.
Further, the resultant teaching signal is innocuous: statistically, it is
virtually indistinguishable from a constant rate Poisson spike train
over the duration of the inputs.


1 Introduction

Spike timing dependent plasticity (STDP) has been observed in many exper-
imental situations, becoming the dominant theory of how synapses are up-
dated by the neural signals they process. However, connecting STDP, a local
phenomenon, to an overarching learning strategy for realistically-modeled
spiking networks has heretofore remained elusive.
Any viable learning scheme for a system of neurons should contain a few
assumptions. For one, STDP's effects should be solely local: one synapse's
changes should not affect other synapses. Also, feedback within the sys-
tem should be realistic. Rather than assuming a specific kind of code, such
as rate coding [?] or synfire chains [?], general representation-independent
spiketrains are desirable. Since cortical spiketrains are commonly known
to have Poisson-like statistics [?], feedback in a model system should show
similar statistics. Furthermore, the overall procedure should be biologically
plausible, not relying on any analyses that are impossible for actual neurons.
Previous work in connecting STDP to global learning has begun by des-
ignating a system-wide objective function, then attempting to derive STDP
using the assumed objective. For instance, Bell assumes that a neural sys-
tem seeks to maximize network sensitivity [?] to increase the entropy of its
outputs. Starting with this principle, he attempts to derive a local rule
that bears similarity to STDP. Similarly, spiketrain variability [?] and infor-
mation maximization [?] have been explored as global objective functions.
Unfortunately, these attempts have fallen short in explaining the connection
between STDP and learning, becoming mired in the inherent complexity of
spiking systems. While the existence of a simple global objective function is
attractive, a biological system may utilize an objective that is complicated or
inefficient, meaning that there may be no clear mathematical principle from
which to start.









1.1 A Reinforcement Learning Model
To overcome this issue, we explore a paradigm shift: rather than beginning
with the system-level learning goals and attempting to derive STDP, we
start by assuming STDP in a model spiking neuron. Using the task-based
framework described in the Decoder chapter, a feedback loop is envisioned that
mirrors the psychological notion of learning through reinforcement.
Constructing a complete neural learning loop is an ambitious task, and
achieving this goal requires several nontrivial developments. The work of
this proposal, therefore, focuses on a necessary piece of the larger solution.
Assuming the existence of a teaching signal and knowledge of whether each
training example is correctly classified, we address the question of whether a
well-chosen teaching signal could push the system in a direction such that fu-
ture classification is improved. Rather than allowing any arbitrary feedback
signal, constraints are placed upon the teacher so that it will be compati-
ble with the larger plan. For this proposal, the teacher is modified from a
randomly-generated Poisson spiketrain, ensuring that the new teacher will
be innocuous. A definition and justification of this innocuous signal is given
in Section ??. Precisely how to push the system in the right direction with
such a teacher, and how this affects learning in the system, is examined
throughout Section ??.
The construction of the teaching signal, as well as the exact circuitry of
the feedback loop, is beyond the scope of this proposal. However, research in
closed-loop learning, hinted at by phenomena such as motor babbling, sug-
gests that a teaching signal could be presented alongside the same stimulus
that created it for the purposes of reinforcement learning [?].


2 Model of the Neuron and Inputs

The model is a simple network consisting of a single output neuron innervated
by a large number of inputs and a single teaching input, as depicted in Figure
??. Spiking dynamics in the system are governed by the first-order Spike
Response Model [?], which was described in Section ??. In all experiments,
$\beta = 6$ (dimensionless), $\tau = 15$ msec, $R = -1000$ mV, and $\gamma = 1.6$ msec. The
synapses on the output neuron implement additive STDP [?] as detailed in
Section ??. For this work, $g_{max} = 40$ by default, $\tau_+ = \tau_- = 20$ msec, and the
values of $A_+$ and $A_-$ will be given for each trial.











[Figure: spike rasters for the Output, Teaching, Input 1, and Input 2 lines over a 50 ms window, marked Past, Present, and Future.]

Figure 1: The two-pass teaching algorithm. Desired output perturbations,
the dashed spikes in the output line, are used to find optimal teaching per-
turbations, the dashed teaching spikes. The simulation is then rerun using
the new teaching spiketrain.


The synaptic weights between inputs and output are initialized randomly
between 15 and 25. As necessary in additive STDP models, these values
are bounded during runtime at a minimum value $g_{min} = 1$ and a maximum
value $g_{max}$. Although additive STDP is used, the method generalizes easily
to other forms of STDP [?].
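As a concrete illustration, a minimal sketch of an additive STDP update with hard bounds follows. The pairwise pairing scheme, the default A+/A- values, and all function names are illustrative assumptions, not the exact procedure used in the experiments (the text supplies A+ and A- per trial).

```python
# Sketch of additive STDP with hard bounds (illustrative pairing scheme;
# a_plus/a_minus defaults are placeholders, since the text sets them per trial).
import math

G_MIN, G_MAX = 1.0, 40.0          # weight bounds from the text
TAU_PLUS = TAU_MINUS = 20.0       # STDP time constants (msec)

def stdp_update(g, dt, a_plus=0.005, a_minus=0.00525):
    """One pairwise additive update; dt = t_post - t_pre in msec.

    Potentiate when the presynaptic spike precedes the postsynaptic spike
    (dt > 0), depress otherwise, then clip to [G_MIN, G_MAX].
    """
    if dt > 0:
        g += G_MAX * a_plus * math.exp(-dt / TAU_PLUS)
    else:
        g -= G_MAX * a_minus * math.exp(dt / TAU_MINUS)
    return min(max(g, G_MIN), G_MAX)
```

A pre-before-post pairing (`stdp_update(20.0, 5.0)`) potentiates, the reversed order depresses, and any update that would leave the prescribed range is clipped to the boundary.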
For inputs to the model neuron, the Meddis Inner-Hair Cell model is again
employed for the generation of realistic auditory nerve data. All experiments
use 40 exponentially-increasing center frequencies from 100Hz to 5000Hz.
Two auditory nerve responses are created for each of the center frequencies,
producing a total of 80 input neurons.










3 Definition of the Learning Task


3.1 Experimental Overview
Given the basic system above, consider a general framework that separates
symbolic inputs using the output neuron, via the task-based discrimina-
tion method described in the Decoder chapter. Each symbolic input, then, cor-
responds to an equivalence class of spiketrains. Reinforcement is introduced
into the system through a two-stage process. The input to the system is
defined as a sequence of short, fixed length spike trains, each representing
some semantic symbol. First, an input segment, plus the non-informative
teaching input, is passed through the system, yielding an output spike train.
Based on the method described in Section ??, the output is evaluated, pro-
ducing a vector in the direction in which the output should be changed to
maximally improve performance. Using this vector, changes are made to the
teaching signal that would move the output in the desired direction given
the same inputs. Finally, the simulation is rewound and rerun again using
the modified teacher, effectively moving the system in the desired direction.
One important assumption is contained in this plan. At any point in
time, the state of the system is described by its synaptic weights, which
completely determine its spike responses to any input. The optimal correc-
tion for a given sample, therefore, would be achieved by directly changing the
weights. Unfortunately, this manipulation is not biologically plausible, since
the weights must only be changed through STDP. Because of this require-
ment, any system relying on a teacher for learning must make the assumption
that output corrections alone are capable of pulling the synaptic weights in
the right direction, on average.
Ideally, given any set of synaptic weights, a Bayesian classifier would be
used to determine the most probable input class given any observed output
spike train [?]. However, due to the variety of potential inputs, not to men-
tion the astronomical number of synaptic weight combinations, creating the
underlying distributions for a Bayesian method is clearly impractical. There-
fore, an online classifier is desirable. An online technique has the additional
benefit of being robust in the face of concept drift [?], where the underlying
model slowly changes as examples are presented. In the case of a neural sys-
tem with STDP, this drifting occurs because the synaptic weights that define
the system are constantly updated. While an appraisal of the system could
be performed by temporarily fixing the synaptic weights for an evaluation









[Figure: schematic of the output neuron with Input 1 through Input 80 and the Teaching Input converging on it.]

Figure 2: Output neuron with 80 input synapses and one teaching synapse.



period, this workaround should be avoided since no guarantees exist that the
performance with a fixed weight set will mimic performance in the natural
situation of ever-changing weights. Furthermore, such a fixed evaluation is
not physiologically realizable.
As a consequence of the above, we make the assumption that moving an
output spiketrain in a greedy direction, improving its own classifiability, will
translate into synaptic weight movements that increase average performance
for the system on the whole.


3.2 Learning Task

As an initial demonstration of the global learning capabilities of the method,
consider a simple puretone frequency discrimination task, as discussed in
Section ??. The auditory stimuli are 50msec iso-amplitudinal puretone blips
ranging in frequency between 90Hz and 5200Hz. The classes are divided
at 2645Hz, the midpoint of the range. While this task seems superficially
simple, any dichotomy of frequencies can be obtained by asking a series of
such high/low questions.
In some respects, working with symbolic tasks presents more of a chal-
lenge than the homogeneous modification rules examined by past researchers
[?][?]. Specifically, any rule that specifies the relationship between a partic-
ular output spike and its input spikes operates merely by making a similar
modification on each observation, without incorporating the performance of
the system at that point in time. On the other hand, a task-based method
utilizes actual feedback for arbitrary symbolic classes, rather than just ad-









justing the sensitivity of a group of neurons to a particular set of inputs.

3.3 Innocuous Reinforcement
To attain effective, biologically-plausible reinforcement, the nature of the
teaching signal must be deliberated. For simple binary classification tasks,
one could imagine a naive teaching signal that spikes with an extremely high
rate for class 1 and a low rate for class 0. However, such rate-based rein-
forcements are undesirable. For one, this teaching signal cannot be turned
off without effect: its mere presence or absence denotes that some degree of
reinforcement is occurring. Further, rate-based reinforcement lacks physio-
logical appeal, and it is unclear whether such an approach can be generalized
to arbitrary symbolic input classes.
Therefore, the teaching signal should encode its reinforcements tempo-
rally. First, for the non-informative first pass teaching input, a constant rate
Poisson spiketrain is randomly generated. To make the signal informative for
the second pass, the spikes in this teacher are slightly perturbed, incremen-
tally changing the output spiketrains. Teaching signals of this nature will
have very nearly Poisson spike distributions, reflecting biologically-observed
spike statistics. Comparisons between some perturbed teaching signals and
their base Poisson spiketrains are presented in Figures ?? and ??.
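The first-pass teacher and its perturbed counterpart can be sketched as below. The rate, duration, and uniform jitter scale are illustrative stand-ins: the actual second-pass perturbations are the optimal ones derived in Section ??, not random jitter.

```python
# Sketch: a constant-rate Poisson teacher, then a slightly jittered copy.
# Rate, duration, and jitter scale are illustrative assumptions.
import random

def poisson_train(rate_hz, duration_ms, rng):
    """Homogeneous Poisson spike times (msec) via exponential ISIs."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz / 1000.0)  # mean ISI = 1000/rate msec
        if t >= duration_ms:
            return spikes
        spikes.append(t)

def perturb(spikes, jitter_ms, rng):
    """Shift each spike by a small uniform jitter, keeping the train sorted."""
    return sorted(s + rng.uniform(-jitter_ms, jitter_ms) for s in spikes)

rng = random.Random(0)
base = poisson_train(20.0, 5000.0, rng)   # non-informative first-pass teacher
teacher = perturb(base, 1.0, rng)         # slightly perturbed second-pass teacher
```

Because each spike moves by at most the jitter scale, the perturbed train keeps the rate and ISI statistics of its Poisson base essentially intact.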


4 Teaching Method

4.1 Objective Function
As stated in Section ??, the ideal reinforcement strategy would be to derive
an objective function of the synaptic weights of the inputs: $E(Q_1, \ldots, Q_n)$.
For every given set of synaptic weights, or Q-configuration, this function
would relate how good or bad the system is at making its classifications.
Then, a gradient descent over the synaptic weight space could locate the
optimal discriminant. Unfortunately, this objective function seems impossi-
ble to determine directly. Even if one had access to the prodigious amounts
of data necessary to estimate conditional probability distributions for every
potential Q-configuration, enacting such a solution is physiologically unrea-
sonable, since real neurons must operate in an online manner.
Since determining the gradient of the system itself is not possible for any









online scheme, the only information that can be utilized to direct a given
training period is the output spiketrain. Therefore, we propose a method by
which the system is taught to move each output spiketrain a small amount
in a desirable direction, with the hope that this greedy decision will lead to
a long-term performance gain for the system as a whole.
As in Section ??, each output modulation period is considered as a point
in the spike times feature space. For the selection of the statistical classi-
fier, a linear discriminant has several attractive properties, including ease of
computation and physiological validity. Additionally, since a linear classi-
fier produces a hyperplane boundary between positive and negative classes,
the orientation of this hyperplane can be used to readily assess the perfor-
mance of any given data point. A correction may be applied to a data point
even when the point is correctly classified, which will serve to increase class
separation.
Any incremental learning algorithm may be implemented as a classifier.
For ease of use, we select the perceptron, as presented in Section ??, which is
both online and linear [?]. The learning rate $\eta$ is set to 0.001 by default. It
should be noted that the use of the perceptron is only a tool to gain insight
into the performance of the current system. The full input discrimination is
performed by first multiplexing the input signals through the nonlinear out-
put neuron, and then effecting the linear separation. Therefore, the overall
discrimination power is superior to a linear classifier, being more reminiscent
of a kernel-based technique.
The output corrections are applied in the following manner. First, the
perceptron's hyperplane is initialized. To avoid pathological convergence
issues, a small portion of the training data is examined to ensure that the
starting hyperplane is in the same feature space vicinity as the data set. The
initial weight vector is:

$$\mathbf{w} = \mu(C_1) - \mu(C_2) \tag{1}$$

where $\mu(C_i)$ denotes the mean of all points in the initial set belonging to
class $i$. Likewise, the initial bias is set to:

$$b = -\frac{\|\mu(C_1)\|^2 - \|\mu(C_2)\|^2}{2} \tag{2}$$
After the initial set, each training output spiketrain is evaluated with
respect to the current perceptron hyperplane. Regardless of whether the










point was correctly classified, a correction is computed, serving to separate
the two classes on every example. This correction is simply a normalized
vector orthogonal to the discriminating hyperplane:

$$\Delta\mathbf{y} = z\,\frac{\mathbf{w}}{\|\mathbf{w}\|} \tag{3}$$

where $z \in \{-1, +1\}$ is the correct class label of the current point. If
the point is misclassified, the hyperplane is adjusted by the perceptron rules
detailed in Section ??.
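The mean-based initialization of Eqs. (1)-(2) and the normalized correction of Eq. (3) can be sketched directly; the feature vectors, class points, and function names below are illustrative.

```python
# Sketch of the perceptron initialization (Eqs. 1-2) and the normalized
# correction vector (Eq. 3). Points and labels are illustrative.
import math

def init_hyperplane(class1_pts, class2_pts):
    """Weight vector from the class-mean difference; the bias places the
    boundary through the midpoint of the two class means."""
    d = len(class1_pts[0])
    mu1 = [sum(p[k] for p in class1_pts) / len(class1_pts) for k in range(d)]
    mu2 = [sum(p[k] for p in class2_pts) / len(class2_pts) for k in range(d)]
    w = [a - b for a, b in zip(mu1, mu2)]                         # Eq. (1)
    b = -(sum(a * a for a in mu1) - sum(a * a for a in mu2)) / 2  # Eq. (2)
    return w, b

def correction(w, z):
    """Eq. (3): unit normal to the hyperplane, signed by the correct class
    label z in {-1, +1}."""
    norm = math.sqrt(sum(a * a for a in w))
    return [z * a / norm for a in w]
```

With class means at [2, 0] and [0, 0], the resulting boundary passes through x = 1, the midpoint between the means.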


4.2 Optimal Teaching Perturbations

Using the sample output's correction vector, a suitable perturbation of the
teaching spikes (and, indirectly, the synaptic weights) can be calculated
to produce the desired output perturbation. We emphasize that all of the
synapses onto the output neuron are perturbed indirectly through the per-
turbations in the output spike train. As a first step, the output spikes and
synaptic weight perturbations must be written as functions of the teaching
perturbations.
The synaptic weights of the system are only updated when a spike is
generated. The following calculations presuppose that the system is about
to produce an output spike, denoting the impending output spike as $y_{new}$.
Using the SRM with global threshold $T$:

$$\sum_{i=0}^{n} \sum_{j=1}^{M_i} Q_i^j \, P_i(x_i^j) = T \tag{4}$$

where $n$ denotes the total number of neurons, and $M_i$ signifies the total
number of spikes for neuron $i$ within the time window $T$. By convention,
let $i = 0$ represent the output spiketrain, $i = 1$ denote the teaching input,
and $i = 2, \ldots, n$ be the non-teaching inputs. Therefore, $x_i^j$ is the time elapsed
since the $j$th most recent input spike for synapse $i = 1, \ldots, n$. Similarly, $x_0^j$
denotes the time of the $j$th most recent output spike. Finally, $Q_i^j$ denotes the
synaptic strength between input $i$ and the output neuron at the time of spike
$x_i^j$. These synaptic weight histories must be retained until the spike is no
longer in the active window, as a consequence of the SRM. For compactness
in the derivation, we define $Q_0^j$ as a constant, 1, for all output spikes $j$.
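Numerically, the threshold condition of Eq. (4) amounts to summing kernel responses over every recent spike. In the sketch below the alpha-function PSP and exponential AHP are stand-in kernel shapes, not the paper's exact first-order SRM profiles; only the parameter values are taken from the text.

```python
# Numeric sketch of the Eq. (4) threshold sum. The PSP/AHP kernel shapes
# are illustrative stand-ins for the first-order SRM profiles.
import math

TAU, GAMMA, R = 15.0, 1.6, -1000.0  # time constants (msec) and AHP amplitude (mV)

def psp(t):
    """Input kernel: alpha-function stand-in, peaking at t = TAU."""
    return (t / TAU) * math.exp(1.0 - t / TAU) if t > 0 else 0.0

def ahp(t):
    """Output kernel: decaying after-hyperpolarization."""
    return R * math.exp(-t / GAMMA) if t > 0 else 0.0

def potential(input_spikes, weights, output_spikes):
    """input_spikes[i] lists elapsed times x_i^j (msec) for synapse i."""
    v = sum(w * psp(x) for xs, w in zip(input_spikes, weights) for x in xs)
    v += sum(ahp(x) for x in output_spikes)
    return v
```

An output spike fires when `potential` reaches the global threshold; a recent output spike drags the sum far below threshold through the AHP term, enforcing refractoriness.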









Finally, $P_i(t)$ denotes the fixed PSP/AHP profile for a given spike on neuron
$i$:

$$P_i(t) = \begin{cases} \dfrac{t}{\tau}\, e^{-t/\tau} + C_{AHP} & \text{if } i \neq 0 \\[4pt] R\, e^{-t/\gamma} + C_{PSP} & \text{if } i = 0 \end{cases} \tag{5}$$

where all model parameters other than $t$ remain static. Specifically, $C_{AHP}$
denotes the constant AHP response with respect to input spike movements,
and $C_{PSP}$ denotes the constant PSP response with respect to output spike
movements. Now, to describe the output perturbation $\Delta y_{new}$ caused by a
set of past spike perturbations $\Delta x_i^j$ and past synaptic weight perturbations
$\Delta Q_i^j$, for $i = 0, \ldots, n$, $j = 1, \ldots, M_i$, note that:
$$\sum_{i=0}^{n} \sum_{j=1}^{M_i} (Q_i^j + \Delta Q_i^j)\, P_i\!\left(x_i^j + \Delta x_i^j - \Delta y_{new}\right) = T \tag{6}$$
Applying a first-order Taylor expansion for $P_i(x_i^j)$:

$$\sum_{i,j} (Q_i^j + \Delta Q_i^j)\left[ P_i(x_i^j) + P_i'(x_i^j)\,(\Delta x_i^j - \Delta y_{new}) \right] \approx T \tag{7}$$
Rearranging, dropping all non-first-order terms, and noting the equality
in Equation ??:

$$\sum_{i,j} \Delta Q_i^j \, P_i(x_i^j) + \sum_{i,j} Q_i^j \, P_i'(x_i^j)\,(\Delta x_i^j - \Delta y_{new}) \approx 0 \tag{8}$$
i,j i,3
It follows that:

$$\Delta y_{new} = \frac{\displaystyle\sum_{i,j} P_i(x_i^j)\,\Delta Q_i^j + \sum_{i,j} Q_i^j\, P_i'(x_i^j)\,\Delta x_i^j}{\displaystyle\sum_{i,j} Q_i^j\, P_i'(x_i^j)} \tag{9}$$
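Eq. (9) can be transcribed directly. In the sketch below, the kernel P and its derivative dP are caller-supplied functions, and the flat (i, j) indexing of the lists is an implementation convenience.

```python
# Direct transcription of Eq. (9): the first-order shift of the impending
# output spike given past spike perturbations dx and weight perturbations dQ.
# P and dP are caller-supplied kernels; all lists run over the flat (i, j) index.

def delta_y_new(Q, P, dP, x, dx, dQ):
    num = sum(dq * P(xi) for dq, xi in zip(dQ, x))
    num += sum(q * dP(xi) * dxi for q, xi, dxi in zip(Q, x, dx))
    den = sum(q * dP(xi) for q, xi in zip(Q, x))
    return num / den
```

As a sanity check: with a linear kernel and no weight changes, shifting every past spike by 0.5 shifts the impending output spike by exactly 0.5.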

Next, we must formulate the perturbation in a synaptic weight update as a
function of all past perturbations of input spikes, output spikes, and synaptic
weights. To avoid notational confusion, we denote this update perturbation
as $\Delta R_i$, with $R_i$ representing the current weight between neuron $i$ and the
output. Using the notation from above with an impending spike $y_{new}$, a










positive synaptic update is denoted $R_i \leftarrow R_i + g_{max} F_i$. The cumulative
effect of all positive synaptic updates is thus:

$$F_i = \sum_{j=1}^{M_i} A_+ \, e^{-x_i^j/\tau_+} \tag{10}$$

where $M_i$ denotes the total number of spikes for neuron $i$ lying within the
past STDP efficacy window, which is assumed for simplicity to be the same
length as the spike efficacy time window $T$. For a set of input perturbations
$\Delta x_i^j$, $j = 1, \ldots, M_i$, and output perturbation $\Delta y_{new}$ as above, the new $F_i$ will
be:

$$F_i^{(pert)} = F_i + \Delta F_i = \sum_{j=1}^{M_i} A_+ \, e^{-(x_i^j + \Delta x_i^j - \Delta y_{new})/\tau_+} \tag{11}$$

Applying a first-order Taylor expansion for $e^{-(x_i^j + \Delta x_i^j - \Delta y_{new})/\tau_+}$, it follows
that:

$$\Delta F_i \approx \sum_{j=1}^{M_i} -A_+ \, e^{-x_i^j/\tau_+} \, (\Delta x_i^j - \Delta y_{new}) \left(\frac{1}{\tau_+}\right) \tag{12}$$
j=1
Now, the change in the update of $R_i$ due to all perturbations, for positive
updates only, is:

$$\Delta R_i = g_{max} F_i^{(pert)} - g_{max} F_i = -g_{max} \sum_{j=1}^{M_i} \frac{A_+}{\tau_+} \, e^{-x_i^j/\tau_+} \, (\Delta x_i^j - \Delta y_{new}) \tag{13}$$

where $\Delta y_{new}$, from Equation ??, is a function of fixed spike and weight
perturbations. Similarly, the cumulative negative synaptic updates for a
synapse $i$ with impending input spike $x_{i,new}$ are:

$$\Delta R_i = -g_{max} \sum_{j=1}^{M_0} \frac{A_-}{\tau_-} \, e^{-x_0^j/\tau_-} \, (\Delta x_0^j - \Delta x_{i,new}) \tag{14}$$

where $\Delta x_{i,new}$ is the prescribed perturbation of the impending input spike.









As mentioned in Section ??, hard bounds are commonly added to STDP
models to prevent unlimited reinforcement. When a synaptic update would
result in a value beyond the prescribed range $[g_{min}, g_{max}]$, the synaptic strength
$R_i$ is set to the boundary, and $\Delta R_i$ is set to 0.
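The potentiation trace of Eq. (10) and its first-order perturbation, Eq. (12), can be checked against direct re-evaluation; the spike times, perturbations, and A+ value below are illustrative.

```python
# Sketch of Eq. (10) (cumulative potentiation trace F_i) and Eq. (12)
# (its first-order change under spike perturbations). Values are illustrative.
import math

A_PLUS, TAU_PLUS = 0.005, 20.0  # A_+ is a placeholder; the text sets it per trial

def trace(xs):
    """Eq. (10): F_i = sum_j A_+ exp(-x_i^j / tau_+)."""
    return sum(A_PLUS * math.exp(-x / TAU_PLUS) for x in xs)

def delta_trace(xs, dxs, dy_new):
    """Eq. (12): first-order change when spike j moves by dxs[j] and the
    impending output spike moves by dy_new."""
    return sum(-(A_PLUS / TAU_PLUS) * math.exp(-x / TAU_PLUS) * (dx - dy_new)
               for x, dx in zip(xs, dxs))
```

The linearization tracks the exactly recomputed trace to second order in the perturbations, which is the approximation Eq. (13) relies on.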
Next, perturbations to the spikes on the teaching synapse, $\Delta x_1^j$ for all $j$ within
the current window $T$, must be applied so that the output is perturbed in the
appropriate direction. Using Equation ??, the incremental change in each
output spike, $\Delta y_k$, for all $k$ within the output window, may be described as
a function of the change in each teaching input spike $\Delta x_1^j$ and the change
in each synaptic weight $\Delta Q_i^j$, forming a Jacobian matrix $J$. The optimal
teaching spike perturbations are then a solution to the linear system:

$$\begin{pmatrix} \Delta y_1 \\ \Delta y_2 \\ \vdots \\ \Delta y_K \end{pmatrix} = J \begin{pmatrix} \Delta x_1^1 \\ \Delta x_1^2 \\ \vdots \\ \Delta x_1^{M_1} \end{pmatrix}, \qquad J_{kj} = \frac{\partial y_k}{\partial x_1^j} \tag{15}$$

The rows of this matrix can be constructed in an iterative fashion by
considering a sliding window starting at [-T, 0] and moving forward to [0, T].
Whenever a new output spike is encountered at the forward edge of the
window, a new row is created in J. Whenever an input spike is encountered,
the synaptic weight structures are updated according to the negative updates
of Equation ??.
Since this system may be overdetermined or underdetermined, depending
on the number of input and output spikes and their relative positions for a
given modulation period, the system is solved using the Moore-Penrose pseu-
doinverse. This solution is desirable in the overdetermined case, because it
will minimize the norm of the error $\|J\Delta x - \Delta y\|$, and in the underdetermined
case because it will yield the solution $\Delta x$ that minimizes $\|\Delta x\|$.
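A sketch of this solve, using NumPy's pseudoinverse (an implementation choice; any minimum-norm least-squares solver would do):

```python
# Solving J dx = dy with the Moore-Penrose pseudoinverse: least-squares in
# the overdetermined case, minimum-norm in the underdetermined case.
import numpy as np

def teaching_perturbations(J, dy):
    return np.linalg.pinv(J) @ dy

# Overdetermined: two output constraints, one teaching spike.
dx_over = teaching_perturbations(np.array([[1.0], [1.0]]), np.array([1.0, 3.0]))
# Underdetermined: one constraint, two teaching spikes.
dx_under = teaching_perturbations(np.array([[1.0, 1.0]]), np.array([2.0]))
```

Here `dx_over` is the least-squares compromise [2.0] between the two conflicting constraints, and `dx_under` is the minimum-norm split [1.0, 1.0] of the single constraint.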












[Figure: moving-average accuracy curves for several runs (legend entries include thresh80-th60 and thresh80-tch44), plotted from 2000 to 13000 modulation periods.]

Figure 3: Moving average accuracy computed over 1000 modulation periods
for runs initialized at various parameter values.


5 Results

5.1 Classification Improvement

Plotting a moving average allows visualization of the performance of a clas-
sifier that is changing over time. Figure ?? shows such a plot, using 1000
modulation periods per average. To avoid contamination of the accuracies
by the teaching process, the performance is plotted before any teaching cor-
rections occur.
Synaptic distances are initialized to random values between 1 and 2 for all
experiments. Similarly, the synaptic weights are initialized randomly between
15 and 25. Each experiment uses a fixed threshold for the output neuron, and
a constant spike rate for the initial Poisson teaching signal. Results for several
experiments, each with different parameters, and random initializations are
shown in Figure ??. For most parameter settings, the teaching signal was
capable of driving the synaptic strengths of the neuron in a direction that
improved classification.
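The moving average itself is a trailing mean over per-period 0/1 classification outcomes; a minimal sketch, with the 1000-period window from the text as the default:

```python
# Sketch of the trailing moving-average accuracy plotted in Figure 3.

def moving_average(outcomes, window=1000):
    """Running mean of 0/1 classification outcomes over the trailing window."""
    out, total = [], 0
    for k, o in enumerate(outcomes):
        total += o
        if k >= window:
            total -= outcomes[k - window]
        out.append(total / min(k + 1, window))
    return out
```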


5.2 Innocuousness

The differences between the original non-teaching Poisson spike train and
the appropriately perturbed teaching spike train were hardly distinguish-













[Figure: two histograms of spike counts versus ISI (msec), 0 to 180 ms.]

Figure 4: Histogram of the interspike interval (ISI) distribution of the origi-
nal non-teaching Poisson spike train (left), and the appropriately perturbed
teaching spike train (right). Note that the ISI distribution is largely undis-
turbed.


able. The coefficient of variation (CV) was 0.9665 for the Poisson spike train
and 0.9651 for the teaching spike train. Visually, the interspike interval
distributions were also barely differentiable, as shown in Figure ??. The
conditional distribution function $p(x_i \mid x_{j,new})$, the probability of a spike oc-
curring for neuron $i$ at time $t$ given a current spike on neuron $j$ at time $t = 0$,
fully determines the rate at which the synaptic strength drifts. As displayed
in Figure ??, no statistically significant differences were found between the
drift for the teaching synapses and the drift for non-teaching synapses.
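The CV reported above is the standard deviation of the interspike intervals divided by their mean (1 for an ideal Poisson process); a minimal sketch:

```python
# Sketch of the coefficient of variation (CV) of interspike intervals.
import math

def isi_cv(spike_times):
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean
```

A perfectly regular train gives CV = 0, while Poisson-like irregularity pushes the CV toward 1, as in the values above.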



6 Conclusion

While the preliminary results presented here have not yet been completely
explored, the implications of the initial findings are quite intriguing. The
evidence suggests a mechanism by which feedback could be used in the brain
to learn an arbitrary symbolic task, using only STDP and the most basic spike
time dynamics. Even more surprising is the conclusion that such signals may
be virtually undetectable in biology, masquerading as mundane background
noise.

















[Figure: four conditional-probability histograms.]

Figure 5: Top left: Histogram of the conditional probability $p(x_0^t \mid x_{j,new})$ for
$j = 2, \ldots, 80$, the probability of finding an output spike $t$ ms into the past given
a non-teaching input spike at present. Top right: Histogram for $p(x_i^t \mid x_{0,new})$
for $i = 2, \ldots, 80$. Bottom left and right: Corresponding histograms for $j = 1$,
$i = 1$ respectively, comparing output spikes to teaching input spikes.



References


[1] Squire, L. R., Bloom, F. E., McConnell, S. K., Roberts, J. L., Spitzer, N.
C., & Zigmond, M. J. (2003). Fundamental neuroscience. (2nd ed.). London:
Academic Press.

[2] Bear, M. F., Conners, B. W., & Paradiso, M. A. (2007). Neuroscience: explor-
ing the brain. (3rd ed.). Baltimore: Lippincott Williams & Wilkins.

[3] Banerjee, A. (2001). On the Phase-Space Dynamics of Systems of Spiking
Neurons. I: Model and Experiments. Neural Computation, 13(1), 161-193.

[4] Altschuler, R. A., Bobbin, R. P., Clopton, B. M., & Hoffman, D. W. (1991).
Neurobiology of hearing: The central auditory system. New York: Raven Press.

[5] Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997).
Spikes: Exploring the neural code. Cambridge: MIT Press.

[6] Dayan, P. & Abbott, L. F. (2001). Theoretical neuroscience: computational
and mathematical modeling of neural systems. Cambridge: MIT Press.

[7] Mainen, Z. F., & Sejnowski, T. J. (1995). Reliability of spike timing in neocor-
tical neurons. Science, 268, 1503-1506.

[8] Gerstner, W. & Kistler, W. (2002). Spiking neuron models: single neurons,
populations, plasticity. Cambridge: Cambridge Univ. Press.

[9] MacGregor, R. J., & Lewis, E. R. (1977). Neural Modeling. New York: Plenum
Press.

[10] Kistler, W., Gerstner, W., & van Hemmen, J. L. (1997). Reduction of the
hodgkin-huxley equations to a threshold model. Neural Computation, 9, 1069-
1100.

[11] Bi, G. Q. & Poo, M. M. (1998). Synaptic modifications in cultured hippocam-
pal neurons: dependence on spike timing, synaptic strength, and postsynaptic
cell type. J. Neuro., 18, 10464-10472.

[12] Song, S., Miller, K., & Abbott, L. (2000). Competitive hebbian learning
through spike-timing-dependent plasticity. Nat Neuro, 3, 919-926.

[13] van Rossum, M. C. W., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian
Learning from Spike Timing-Dependent Plasticity. J. Neuro., 20(23), 8812-8821.










[14] Moore, B. C. J. (2004). An introduction to the psychology of hearing. (5th
ed). London: Academic Press.

[15] Meddis, R. (1986). Simulation of mechanical to neural transduction in the
auditory receptor. J. Acoust. Soc. Am., 79, 702-711.

[16] Meddis, R. (1988). Simulation of auditory-neural transduction: Further stud-
ies. J. Acoust. Soc. Am., 83, 1056-1063.

[17] Lopez-Poveda, E. A., OMard, L. P., & Meddis, R. (2001). A human nonlinear
cochlear filterbank. J. Acoust. Soc. Am., 110, 3107-3118.

[18] Sumner, C. J., Lopez-Poveda, E. A., OMard, L. P., & Meddis, R. (2002). A
revised model of the inner-hair cell and auditory nerve complex. J. Acoust. Soc.
Am., 111, 2178-2188.

[19] http://www.pdn.cam.ac.uk/groups/dsam/index.html. Online.

[20] Vapnik, V. N. (1999). The Nature of Statistical Learning Theory. (2nd ed).
Berlin: Springer.

[21] Burges, C. J. C. (1998). A tutorial on support vector machines for pattern
recognition. Data Mining and Knowledge Discovery, 2, 121-167.

[22] Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern Recognition. New
York: John Wiley & Sons.

[23] Dvoretzky, A., Kiefer, J., & Wolfowitz, J. (1956). Asymptotic minimax char-
acter of the sample distribution function and of the classical multinomial esti-
mator. Ann. Math. Stat., 27(3), 642-669.

[24] Shlens, J., Kennel, M. B., Abarbanel, H. D. I., & Chichilnisky, E. J. (2007).
Estimating information rates with confidence intervals in neural spike trains.
Neural Computation, 19, 1683-1719.

[25] Cover, T. M., & Thomas, J. A. (2006). Elements of Information Theory. (2nd
ed). New Jersey: Wiley.

[26] Shannon, C. E. (1948) A mathematical theory of communication. The Bell
System Technical Journal, 27, 379-423, 623-656.

[27] Warland, D. K., Reinagel, P., & Meister, M. (1997). Decoding visual infor-
mation from a population of retinal ganglion cells. J. Neurophysiol., 78, 2336-2350.










[28] Wessberg, J., Stambaugh, C. R., Kralik, J. D., Beck, P. D., Laubach, M.,
Chapin, J. K., Kim, J., Biggs, S. J., Srinivasan, M. A., Nicolelis, M. A. (2000).
Real-time prediction of hand trajectory by ensembles of cortical neurons in
primates. Nature, 408(6810), 361-5.

[29] Serruya, M. D., Hatsopoulous, N. G., Paninski, L., Fellows, M. R., Donoghue,
J. P. (2002). Nature, 416(6877), 141-2.

[30] Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., & Black, M. J. (2005).
Bayesian population decoding of motor cortical activity using a Kalman filter.
Neural Comp, 18, 80-118.

[31] Nirenberg, S., Jacobs, A., Fridman, G., Latham, P., Douglas, R., Alam, N.,
& Prusky, G. (2006). Ruling out and ruling in neural codes. J. Vision, 6, 889.

[32] Pouget, A., Dayan, P., Zemel, R. (2000). Information processing with popu-
lation codes. Nat Rev Neuro, 1, 125-132.

[33] Strong, S., Koberle, R., de Ruyter van Steveninck, R., & Bialek, W. (1998).
Entropy and information in neural spike trains. Phys Rev Lett, 80, 197-200.

[34] Kennel, M. B., Shlens, J., Abarbanel, H. D. I., & Chichilnisky, E. J. (2005).
Estimating entropy rates with Bayesian confidence intervals. Neural Computa-
tion, 17, 1531-1576.

[35] Wolpert, D. & Wolf, D. (1995). Estimating functions of probability distribu-
tions from a finite set of samples. Phys Rev E, 52, 6841-6854.

[36] Miller, G. (1955). Note on the bias of information estimates. In H. Quastler
(Ed.), Information theory in psychology II-B (pp. 95-100). Glencoe, IL: Free
Press.

[37] Paninski, L. Estimation of entropy and mutual information. (2003). Neural
Computation, 15, 1191-1253.

[38] Pola, G., Petersen, R. S., Thiele, A., Young, M. P., & Panzeri, S. (2005).
Data-robust tight lower bounds to the information carried by spike times of a
neuronal population. Neural Computation, 17, 1962-2005.

[39] Montemurro, M. A., Senatore, R., & Panzeri, S. (2007). Tight data-robust
bounds to mutual information combining shuffling and model selection tech-
niques. Neural Computation, 19, 2913-2957.










[40] Mehring, C., Hehl, U., Kubo, M., Diesmann, M., & Aertsen, A. (2003). Ac-
tivity dynamics and propagation of synchronous spiking in locally connected
random networks. Biol Cybern, 88, 395-408.

[41] Mazurek, M. E. & Shadlen, M. N. (2002). Limits to the temporal fidelity of
cortical spike rate signals. Nat Neuro, 5, 463-471.

[42] Diesmann, M., Gewaltig, M. O., & Aertsen, A. (1999). Stable propagation of
synchronous spiking in cortical neural networks. Nature, 402(6761), 529-33.

[43] Nirenberg, S., & Victor, J. D. (2007). Analyzing the activity of large pop-
ulations of neurons: how tractable is the problem? Curr Opin Neurobiol, 17,
397-400.

[44] Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher,
A., Litke, A. M., & Chichilnisky, E. J. (2006). The structure of multi-neuron
firing patterns in primate retina. J. Neuro, 26, 8254-8266.

[45] Schneidman, E. Berry, M. J. II, Segev, R., & Bialek, W. (2006). Weak pairwise
correlations imply strongly correlated network states in a neural population.
Nature, 440(7087), 1007-1012.

[46] Latham, P. & Nirenberg, S. (2005). Synergy, redundancy, and independence
in population codes, revisited. J. Neuro, 25, 5195-5206.

[47] VanRullen, R., Guyonneau, R., & Thorpe, S. J. (2005). Spike times make
sense. Trends Neuro, 28(1), 1-4.

[48] Reich, D. S., Mechler, F., Purpura, K. P., & Victor, J. D. (2000). Interspike
intervals, receptive fields, and information encoding in primary visual cortex. J
Neuro, 20(5), 1964-1974.

[49] Gollisch, T., & Meister, M. (2008). Rapid Neural Coding in the Retina with
Relative Spike Latencies. Science, 319(5866), 1108-1111.

[50] Joachims, T. (1999). Making large-scale SVM learning practical. In Schölkopf,
B., Burges, C., & Smola, A. (Eds.), Advances in kernel methods: support vector
learning. Cambridge: MIT Press.

[51] Joachims, T. (2002). Learning to classify text using support vector machines.
Norwell: Kluwer Press.

[52] http://svmlight.joachims.org/. Online.










[53] Fletcher, N. H., & Rossing, T. D. (2005). The physics of musical instruments.
New York: Springer.

[54] Bell, A. J. & Parra, L. C. (2004). Maximising sensitivity in a spiking network.
Advances in Neural Information Processing Systems 17, Cambridge: MIT Press.

[55] Bohte, S. M. & Mozer, M. C. (2004). Reducing spike train variability: A
computational theory of spike-timing dependent plasticity. Advances in Neural
Information Processing Systems 17, Cambridge: MIT Press.

[56] Chechik, G. (2003). Spike-timing dependent plasticity and relevant mutual
information maximization. Neural Computation, 15(7), 1481-1510.

[57] Zelaznik, H. N. (1996). Advances in Motor Learning and Control. Champaign:
Human Kinetics.

[58] Widmer, G., & Kubat, M. (1996). Learning in the presence of concept drift
and hidden contexts. Machine Learning, 23, 69-101.

[59] Wellner, J. A. (1992). Empirical processes in action: a review. Inter. Stat.
Rev., 60(3), 247-269.

[60] DeStefano, J., & Learned-Miller, E. (2008). A probabilistic upper bound on
differential entropy. IEEE Transactions on Information Theory. Under Revision.



