On the Complexity of
Acyclic Networks of Neurons
Venkatakrishnan Ramaswamy and Arunava Banerjee
University of Florida
September 8, 2009
Abstract

We study acyclic networks of neurons, viewing them as transformations
that map input spike trains to output spike trains. Our neurons are ab-
stract mathematical objects that satisfy a small set of axioms. We observe
that, even for single neurons, the notion of an input-output transforma-
tion is not well defined, in general (in a sense that we will make precise).
We establish biologically-realistic sufficient conditions under which these
transformations are well-defined. Notions of relative complexity of spike-
train to spike-train transformations are then introduced according to the
architecture of acyclic networks that can/cannot carry them out. Some
complexity results are then proved.
1 Introduction

Neurons and their networks, fundamentally, are machines that transform spike
trains into spike trains. It is these transformations that are the basis of infor-
mation processing, indeed even cognition, in the brain. This work is broadly
motivated by the following question: What constraints, if any, do local proper-
ties of neurons impose on the computational abilities of various networks, purely
from a theoretical perspective? For example, given a network architecture, are
there transformations that cannot be performed by any network with this ar-
chitecture, assuming that the constituent neurons are constrained to obey a
set of abstract (biologically-realistic) properties? In order to then rule out the
possibility that this transformation is unachievable by any network, we would
also need to show a network made up of very simple neurons that can effect
this transformation. It is these kinds of questions that motivate our definition
of complexity of spike-train to spike-train transformations. In this paper, we re-
strict our study to acyclic1 networks of neurons, i.e. networks that do not have
1While the term feedforward network is widely used to indicate this type of network, we
prefer to call these acyclic networks to emphasize that these networks are not feedforward in
a directed cycle. While even single neurons are functionally recurrent2, acyclic
networks quickly settle down to quiescence upon receiving no input. On the
other hand, recurrent networks have been known  to have complex dynamics,
in general, even on receiving no input for unbounded time.
Several researchers have studied related questions. Poirazi et al. model
a compartmental model of a pyramidal neuron using a two-layer neural
network, assuming rate codes. Bohte et al. derive a supervised learning rule
for a network of spiking neurons, where the output is restricted to a single spike
in a given period of observation. Gütig and Sompolinsky describe a model
that learns spike-timing decisions. They also have a task with two outcomes, which
are mapped to notions of a presence or absence of spikes. Maass investigates
the computational power of networks of Spike Response Model neurons, relat-
ing them to well-known models of computation like Turing Machines. Finally,
Bartlett and Maass analyze the discriminative capacity of a pulse-coded neu-
ron from a statistical learning theory perspective. In contrast to all the above
approaches, we seek to investigate the relative complexity of the physical spike
train to spike train transformations that are instantiated by systems of spiking
neurons, without making overarching assumptions about the underlying com-
putational dynamics of the system. Our results are at a level more fundamental
than the computational framework most other work assumes and are therefore
more widely applicable.
Roadmap. In Section 2 we introduce notation. In Section 3, we describe
our abstract mathematical model of a neuron with its biological underpinnings.
In Section 4, we show that even single neurons cannot, in general, be consistently viewed
as spike-train to spike-train transformations. In Section 5, we develop two criteria
that provide sufficient conditions for spike-train to spike-train transformations to
be well-defined; one is biologically-realistic but mathematically unwieldy and the
other while being mathematically-tractable is not biologically well-motivated.
In Section 6, we introduce notions of relative complexity of acyclic networks
and prove the surprising result that our more tractable criterion can be used to
prove every complexity result that is accessible to our more biologically-realistic
criterion. In Section 7, we prove some complexity results for the abstract model;
notably, we show that acyclic networks with at most two neurons are more com-
plex than single neurons. We conclude in Section 8.
2 Notation and Preliminaries
An action potential or spike is a stereotypical event characterized by the time
instant at which it is initiated (at the soma), which is referred to as its spike
time. Spike times are represented relative to the present by real numbers, with
positive values denoting past spike times and negative values denoting future
spike times. A spike train x⃗ = (x1, x2, ..., xi, ...) is an increasing sequence
of spike times, with every pair of spike times being more than α apart, where
α > 0 is the absolute refractory period and xi is the spike time of spike i.3
1(contd.) the system-theoretic sense.
2Owing to the membrane potential also depending on past output spikes, to account for
effects during the relative refractory period.
An empty spike train, denoted by φ, is one which has no spikes. Let S denote the
set of all spike trains. A time-bounded spike train is one where all spike times
lie in the bounded interval [a, b], for some a, b ∈ ℝ. Note that a time-bounded
spike train is also a finite sequence. A spike train is said to have a gap in
the interval [a, b], if it has no spikes in [a, b]. Two spike trains are said to be
identical in the interval [a, b], if they have exactly the same spike times in that
time interval. A spike configuration χ = (x⃗1, ..., x⃗m) is a tuple of spike trains.
A time-bounded spike configuration is one where each of its spike trains is time-
bounded. Let x⃗ = (x1, x2, ..., xi, ...) be a spike train and χ = (x⃗1, ..., x⃗m) be
a spike configuration. We define some operations on these objects. The shift
operator for spike trains is defined as σt(x⃗) = (x1 − t, x2 − t, ..., xi − t, ...). The
shift operator for spike configurations is defined as σt(χ) = (σt(x⃗1), ..., σt(x⃗m)).
The truncation operator for spike trains is defined as follows: Ξ[a,b](x⃗) is a time-
bounded spike train so that it is the longest subsequence of x⃗ that has each
of its elements lying in the interval [a, b]. The truncation operator for spike
configurations is defined as Ξ[a,b](χ) = (Ξ[a,b](x⃗1), ..., Ξ[a,b](x⃗m)).
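As a concrete illustration (not part of the paper's formal development), the shift and truncation operators above can be sketched in Python, representing a spike train as a sorted tuple of real spike times; all function names below are our own choices:

```python
# Sketch of the Section 2 objects: spike trains as increasing tuples of spike
# times (larger time = further in the past), spike configurations as tuples of
# spike trains, plus the shift and truncation operators.

def shift(x, t):
    """Shift operator sigma_t: subtract t from every spike time."""
    return tuple(xi - t for xi in x)

def truncate(x, a, b):
    """Truncation operator Xi_[a,b]: keep the spikes with times in [a, b]."""
    return tuple(xi for xi in x if a <= xi <= b)

def shift_config(chi, t):
    """Shift applied componentwise to a spike configuration."""
    return tuple(shift(x, t) for x in chi)

def truncate_config(chi, a, b):
    """Truncation applied componentwise to a spike configuration."""
    return tuple(truncate(x, a, b) for x in chi)

x = (1.0, 3.5, 7.0)  # a time-bounded spike train
print(truncate(shift(x, 1.0), 0.0, 4.0))  # -> (0.0, 2.5)
```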
3 The Abstract Model of a Neuron

In this section, we informally describe the assumptions that underlie our model.
For want of space, the treatment will be brief. The approach taken here closely
follows the one taken in earlier work.
1. We assume that the neuron is a device that receives input from other
neurons exclusively by spikes which are received at chemical synapses.
2. Without loss of generality, we assume the resting membrane potential to
be 0. Let τ > 0 be the threshold membrane potential.
3. Owing to the absolute refractory period α > 0, no two input or output
spikes can occur closer than α.
4. The neuron is a finite-precision device with fading memory. Hence, the
underlying potential function can be determined by working with a bounded
past. That is, we assume that the current membrane potential of the neuron
can be determined as a function of the input spikes received in the past Υ
seconds and the spikes produced by the neuron in the past ρ seconds.4
5. We assume that the membrane potential of the neuron can be written
down as a real-valued, C∞, everywhere bounded function P(χ; x⃗0), where x⃗0
is a time-bounded spike train, with bound [0, ρ], and χ = (x⃗1, ..., x⃗m) is a
time-bounded spike configuration with x⃗i, 1 ≤ i ≤ m, being a time-bounded
spike train, with bound [0, Υ]. Informally, x⃗i, 1 ≤ i ≤ m, is the sequence of
spikes afferent in synapse i in the past Υ seconds and x⃗0 is the sequence of
spikes efferent from the current neuron in the past ρ seconds. The function
P(·) characterizes the entire spatiotemporal response of the neuron to spikes,
including synaptic strengths, their location on dendrites and their modulation
of each other's effects at the soma, spike-propagation delays, and the postspike
hyperpolarization.
3We assume a single fixed absolute refractory period for all neurons, although our results
would be no different if different neurons had different absolute refractory periods.
4ρ corresponds to the notion of relative refractory period.
6. The neuron outputs a spike whenever P(·) = τ. Additionally, since the
first derivative of the potential need not be continuous with time, when a new output
spike is produced, we assume that the first derivative is sufficiently negative, so as
to keep the membrane potential from going above the threshold.
7. Past output spikes have an inhibitory effect, in the following sense: P(χ; x⃗0) ≤
P(χ; φ).
8. Finally, on receiving no input spikes in the past Υ seconds, the neuron
settles to its resting potential. That is, P((φ, φ, ..., φ); φ) = 0.
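A minimal numerical sketch of a P(χ; x⃗0) in the spirit of axioms 1-8 is given below. The kernels, constants, and the "P ≥ threshold" firing test are all our own illustrative simplifications (the paper only requires the axioms, and axiom 6 is a threshold-crossing condition, not an inequality):

```python
# Toy membrane-potential function: smooth, bounded EPSPs from input spikes in
# the past UPSILON seconds, minus a non-negative AHP from the neuron's own
# output spikes in the past RHO seconds. Past output is then inhibitory
# (axiom 7), and the potential is 0 with no recent spikes at all (axiom 8).
import math

UPSILON, RHO, TAU = 10.0, 2.0, 1.0  # assumed window, rel. refractory, threshold

def epsp(age, w=0.6):
    # smooth bounded excitatory kernel, zero outside (0, UPSILON)
    return w * age * math.exp(1.0 - age) if 0.0 < age < UPSILON else 0.0

def ahp(age, a=5.0):
    # non-negative after-hyperpolarization, zero outside (0, RHO)
    return a * math.exp(-age) if 0.0 < age < RHO else 0.0

def P(chi, x0):
    """Current potential from input configuration chi and own past output x0."""
    drive = sum(epsp(s) for train in chi for s in train)
    inhibition = sum(ahp(s) for s in x0)
    return drive - inhibition

def fires(chi, x0):
    # crude stand-in for axiom 6's threshold-crossing condition
    return P(chi, x0) >= TAU

print(P(((), ()), ()))                            # resting potential: 0.0
print(P(((1.0,),), ()) >= P(((1.0,),), (0.5,)))   # past output inhibits: True
```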
An acyclic network of neurons, informally, is a Directed Acyclic Graph where
each vertex corresponds to an instantiation of the neuron model, with some
vertices designated input vertices (which are placeholders for input spike trains),
and one vertex (a neuron) designated the output vertex. The depth of an acyclic
network is the length of the longest path from an input vertex to the output
vertex.
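The depth of an acyclic network, as just defined, can be computed over any DAG representation; the sketch below (our own code, with edge u → v meaning that u feeds a synapse of v) illustrates the definition:

```python
# Depth of an acyclic network: the length of the longest path from an input
# vertex to the output vertex of a DAG given as an edge list.
from functools import lru_cache

def depth(edges, inputs, output):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)

    @lru_cache(maxsize=None)
    def longest_from(u):
        # longest path length from u to the output vertex; -1 if unreachable
        if u == output:
            return 0
        best = -1
        for v in adj.get(u, []):
            d = longest_from(v)
            if d >= 0:
                best = max(best, d + 1)
        return best

    return max(longest_from(i) for i in inputs)

# input I feeds neurons N1 and N2; N1 also feeds N2 (the output): depth 2
print(depth([("I", "N1"), ("I", "N2"), ("N1", "N2")], ["I"], "N2"))  # -> 2
```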
4 Acyclic Networks as Input-Output Transformations
We wish to look at acyclic networks as transformations that map input spike
trains to output spike trains. Therefore, we first need to define in what sense
they constitute the said transformations.
Let us first consider the simplest acyclic network, namely the single neuron.
Recall that the membrane potential of the neuron depends not only on the
input spikes received in the past Υ seconds; it also depends on the output spikes
produced by it in the past ρ seconds. Therefore, knowledge of just the input spikes in
the past Υ seconds does not uniquely determine the current membrane potential
(and therefore the output spike train produced from it). It might be tempting to
then somehow use the fact that the past output spikes are themselves a function
of input and output received in the more distant past, and attempt to make the
membrane potential a function of a finite albeit larger window of input spikes
alone. The example in Figure 1(a) shows that this does not work. In particular,
the current membrane potential of the neuron may depend on the position of
an input spike that occurred arbitrarily long ago in the past. (See the
caption of Figure 1(a).) One is then forced to ask if, given the infinite history of
input spikes received by the neuron, the membrane potential is then uniquely
determined. Before we can answer this question, we need to define under what
conditions we can consistently associate an (unbounded) output spike train with
an (unbounded) input spike configuration, for a single neuron.
Definition 1. An output spike train x⃗0 is said to be consistent with an input
spike configuration χ, for a neuron, if for every t ∈ ℝ, t ∈ x⃗0 iff the neuron
produces a spike when presented with the time-bounded input spike configuration
and output spike train Ξ[0,Υ](σt(χ)) and Ξ[0,ρ](σt(x⃗0)).
Our question then is : For every (unbounded) input spike configuration,
does there exist exactly one consistent (unbounded) output spike train? The
example in Figure 1(b) answers this question in the negative by describing a
neuron which has two consistent output spike trains for an input.
The underlying difficulty in defining even single neurons as spike train to
spike train transformations is the sensitivity, in general, of the current membrane po-
tential to the "initial state". Can we just consider a subset of input/output spike
trains, which have the property of the current membrane potential being inde-
pendent of the input spike train beyond a certain time in the past?
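The non-uniqueness phenomenon can be reproduced in a toy discrete-time setting (our own simplification, in the spirit of Figure 1(b)): a neuron driven by an input spike at every integer tick, with relative refractory period 1.5 ticks, whose firing rule is "fire on an input iff no own output within the past 1.5 ticks". Both the even ticks and the odd ticks are then consistent output spike trains:

```python
# Two distinct output spike trains, both consistent (in the sense of
# Definition 1) with the same doubly infinite periodic input.
RHO = 1.5  # assumed relative refractory period, in tick units

def consistent(out):
    # Check consistency at every tick except the oldest, which stands in for
    # the infinite past. With input at every integer and RHO < 2, "no output
    # within the past RHO" reduces to: the previous tick is not in out.
    for t in range(-49, 50):
        should_fire = (t - 1) not in out
        if (t in out) != should_fire:
            return False
    return True

evens = {t for t in range(-50, 50) if t % 2 == 0}
odds = {t for t in range(-50, 50) if t % 2 != 0}
print(consistent(evens), consistent(odds))  # -> True True
```

The same rule on a time-bounded input has a unique output; it is only the unbounded past that admits the two "phases".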
5 The Gap and Flush Criteria
For a neuron, the way input spikes that happened sufficiently earlier affect
current membrane potential is via a causal sequence of output spikes, causal
in the sense that each of them had an effect on the membrane potential while
the subsequent one in the sequence was being produced. The condition in the
Gap Theorem basically seeks to break this causal chain. To see the main idea
that leads to the condition, see Figure 1(c). Suppose the spikes in the shaded
region (which is an interval of length ρ) occurred at the exact same positions for all
input spike configurations with spikes occurring at arbitrary positions older than
time instant t'; then the current membrane potential depends on at most the
input spikes in the interval [t, t']. Suppose we choose this fixed interval of time of
length ρ to have no spikes, i.e. to be a gap of length ρ. Now, just requiring this
does not imply that this gap is preserved when input spikes sufficiently old are
changed, as is clear from Figure 1(a). However, surprisingly, a gap of 2ρ suffices.
2ρ also happens to be the smallest gap for which this works. The details are in
the following theorem. For a picture, see Figure 1(d).
Theorem 1 (Gap Theorem). For a neuron, consider an unbounded input spike
configuration χ* with at least one consistent unbounded output spike train x⃗0*,
so that x⃗0* has a gap in the interval [t, t + 2ρ]. Let χ be an arbitrary spike
configuration that is identical to χ* in the interval [t, t + Υ + ρ]. Then, every
output spike train consistent with χ, with respect to the same neuron, has a gap
in the interval [t, t + ρ]. Furthermore, 2ρ is the smallest gap length in x⃗0* for
which this is true.
Proof. Since, in each x⃗0 consistent with χ, the interval [t + 2ρ, t + 3ρ] of x⃗0 and
the interval [t + Υ + ρ, t + Υ + 2ρ] of χ are arbitrary, the sequence of spikes present
in the interval [t + ρ, t + 2ρ] of x⃗0 could be arbitrary. However, χ* and χ are
identical in [t, t + ρ + Υ]. Thus, it follows from our axiom to the effect that
P(χ; x⃗0) ≤ P(χ; φ) that, for every t' ∈ [t, t + ρ], P(Ξ[0,Υ](σt'(χ)), Ξ[0,ρ](σt'(x⃗0))) is
at most the value of P(Ξ[0,Υ](σt'(χ*)), Ξ[0,ρ](σt'(x⃗0*))), because Ξ[0,ρ](σt'(x⃗0*))
is φ. Since P(Ξ[0,Υ](σt'(χ*)), Ξ[0,ρ](σt'(x⃗0*))) is less than τ for every t' ∈ [t, t +
ρ], P(Ξ[0,Υ](σt'(χ)), Ξ[0,ρ](σt'(x⃗0))) is less than τ in the same interval, as well.
Therefore, x⃗0 has no output spikes in [t, t + ρ].
That 2ρ is the smallest possible gap length in x⃗0* for this to hold follows
from the example in Figure 1(a), where the conclusion did not hold when x⃗0*
had gaps of length 2ρ − δ, for arbitrary δ > 0. □
Corollary 1. For a neuron, consider an unbounded input spike configuration χ*
with at least one consistent unbounded output spike train x⃗0*, so that x⃗0* has a gap
in the interval [t, t + 2ρ]. Then, for every t' more recent than t, the membrane
potential at t' is precisely a function of Ξ[t, t+Υ+ρ](χ*). That is, it
is independent of input and output spikes that occurred before the time instant
t + Υ + ρ. Moreover, every output spike train consistent with χ* has the same
output after t.
The upshot of the Gap Theorem and its corollary is that whenever a neuron
goes through a period of time equal to twice its relative refractory period during which
it has produced no output spikes, its membrane potential from then on becomes
independent of input spikes that are older than Υ + ρ seconds before the end of
that gap.
Large gaps in the output spike trains of neurons seem to be extensively
prevalent in the human brain. In parts of the brain where the neurons spike
persistently, such as in the frontal cortex, the spike rate is very low (0.1Hz-10Hz).
In contrast, the typical spike rate of retinal ganglion cells can be very high,
but the activity is generally interspersed with large gaps during which no spikes
are emitted.
These observations motivate our definition of a criterion for input spike con-
figurations afferent on single neurons. The criterion dictates that there be inter-
mittent gaps of length at least twice the relative refractory period in an output
spike train consistent with the spike configuration.
Definition 2 (Gap Criterion for a single neuron). An input spike configuration
χ is said to satisfy a T-Gap Criterion for a single neuron if there is an output
spike train consistent with χ for the neuron so that every time interval of length
T − Υ + 2ρ contains at least one gap of length 2ρ in that output spike train.
Proposition 1. For an input spike configuration that satisfies a T-Gap crite-
rion for a neuron, there is exactly one output spike train consistent with it, with
respect to the same neuron.
For an input spike configuration χ that satisfies a T-Gap criterion, the mem-
brane potential at any point in time is dependent on at most T seconds of input
spikes in χ before it. This can be seen from Figure 1(e), which illustrates a
section of the input spike configuration and the output spike train.
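Checking a gap condition of this kind on a given time-bounded output spike train is mechanical. The sketch below (our own code; the grid-scan over window starts is an approximation, not the paper's construction) tests whether every window of a given length contains a spike-free gap of length at least 2ρ:

```python
# Gap-criterion-style check on a time-bounded output spike train.

def max_gap_inside(spikes, lo, hi):
    """Longest spike-free subinterval of [lo, hi] for a sorted spike list."""
    pts = [lo] + [s for s in spikes if lo <= s <= hi] + [hi]
    return max(b - a for a, b in zip(pts, pts[1:]))

def satisfies_gap(spikes, horizon, window, rho, step=0.01):
    """Every interval [s, s + window] inside [0, horizon] must contain a
    spike-free gap of length >= 2*rho (checked on a grid of starts)."""
    spikes = sorted(spikes)
    s = 0.0
    while s + window <= horizon + 1e-9:
        if max_gap_inside(spikes, s, s + window) < 2 * rho:
            return False
        s += step
    return True

out = [0.0, 1.0, 6.0, 7.0, 12.0, 13.0]  # bursts separated by 5-unit gaps
print(satisfies_gap(out, horizon=15.0, window=9.0, rho=2.0))  # -> True
print(satisfies_gap(out, horizon=15.0, window=5.0, rho=2.0))  # -> False
```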
The Gap Criterion we have defined for single neurons can be naturally ex-
tended to acyclic networks. The criterion is simply that the input spike config-
uration to the network is such that every neuron's input obeys a Gap criterion
for single neurons.
Definition 3 (Gap Criterion for an acyclic network). An input spike configura-
tion χ is said to satisfy a T-Gap Criterion for an acyclic network if each neuron
in the network satisfies a (T/d)-Gap Criterion when the network is driven by χ,
where d is the depth of the acyclic network.
The membrane potential of the output neuron at any point is dependent on
at most T seconds of past input, if the input spike configuration to the acyclic
network satisfies a T-Gap criterion. The situation is illustrated in Figure 1(g).
Additionally, the output spike train is unique.
Lemma 1. Let χ be an input spike configuration to an acyclic network that
satisfies a T-Gap Criterion for it. Then, the membrane potential of the output
neuron at any time instant depends on at most the past T seconds of input in χ.
Also, χ has a unique output spike train.
The proof follows from an inductive argument on neurons with successively
higher depth in the network. We are thus at a juncture where questions we
initially posed can be asked in a coherent manner that is also biologically well-
motivated. Before we proceed, we introduce some more notation. Given an
acyclic network N, let G_N^T be the set of all input spike configurations that
satisfy a T-Gap Criterion for N. Let G_N = ∪_{T∈ℝ+} G_N^T. Therefore, every acyclic
network N induces a transformation T_N : G_N → S that maps each element
of G_N to a unique output spike train in S. Suppose G' ⊆ G_N. Then, let
T_N^{G'} : G' → S be the map T_N with its domain restricted to G'.
The Gap Criteria are very general and biologically realistic. However, given
a neuron or an acyclic network, there does not appear to be an easy way to char-
acterize all the input spike configurations that satisfy a certain Gap Criterion
for it. For an acyclic network, this is even more complex, since intermediate
neurons must satisfy Gap Criteria, with the inputs they get being outputs of
other neurons. This appears to make the problem of comparing the transforma-
tions performed by two different neurons/acyclic networks difficult, because of
the difficulty in finding spike configurations that satisfy Gap Criteria for both
of them. Next, we show how to overcome this difficulty.
The idea of the Flush Criterion is to force the neuron to produce no output
spikes for sufficiently long so as to guarantee that a Gap criterion is being
satisfied. This is done by making input time-bounded so that the neuron has
been at resting potential for arbitrarily long. In an acyclic network, the flush is
propagated so that all neurons have had a sufficiently long gap in their output
spike trains. Note that the Flush Criterion is not defined with reference to any
neuron.
Definition 4 (Flush Criterion). A spike configuration χ is said to satisfy a
T-Flush Criterion if all its spikes lie in the interval [0, T], i.e. it has no spikes
before time instant T and after time instant 0.
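Unlike the Gap Criteria, the Flush Criterion is trivial to test; a one-line predicate suffices (our own sketch, with the same past-positive time convention):

```python
# T-Flush check: every spike of every train in the configuration lies in [0, T].

def satisfies_flush(chi, T):
    return all(0.0 <= s <= T for train in chi for s in train)

chi = ((0.5, 3.0), (1.25,))  # a spike configuration with two trains
print(satisfies_flush(chi, 4.0), satisfies_flush(chi, 2.0))  # -> True False
```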
Lemma 2. An input spike configuration χ for a neuron that satisfies a T-Flush
Criterion also satisfies a (T + 2Υ + 2ρ)-Gap Criterion for that neuron.
The proof follows from bounds on output gaps induced by the flush (see
Figure 1(f)). As indicated previously, inductive arguments can be applied to
extend this result to networks.
We introduce some more notation. Let the set of spike configurations con-
taining exactly m spike trains that satisfy the T-Flush Criterion be F_m^T. Let
F_m = ∪_{T∈ℝ+} F_m^T. What we have established in this section is that F_m ⊆ G_N,
where N has exactly m input vertices.
6 Notions of Relative Complexity

In this section, we define notions of relative complexity of sets of acyclic networks
of neurons. What we would like to capture with the definition is the following:
Given two classes of networks with the second class subsuming the first, we wish
to ask if there are transformations in the second class that cannot be performed
by networks in the first class. That is, do the extra networks in the second class
make it richer in terms of transformational power? The classes could correspond
to network architectures, although for the purpose of the definition, there is no
reason to require this to be the case. While comparing a set of networks, we
always treat their transformations only on inputs for which all of them satisfy
a certain Gap Criterion (though, not necessarily for the same T).
Definition 5. Let Σ1 and Σ2 be two sets of acyclic networks, each of whose
constituent networks has exactly m input vertices. Define G_12 = ∩_{N ∈ Σ1∪Σ2} G_N.
Σ2 is said to be at least as complex as Σ1 if ∀N1 ∈ Σ1, ∃N2 ∈ Σ2 so that
T_{N2}^{G12} = T_{N1}^{G12}. Σ2 is said to be more complex than Σ1 if Σ2 is at least as
complex as Σ1 and ∃N' ∈ Σ2 such that ∀N ∈ Σ1, T_{N'}^{G12} ≠ T_N^{G12}.
Note that G_12 is always nonempty because F_m ⊆ G_12. Next is the main theorem
of this section. We show that if it is already known that one class of networks is
at least as complex as another, and if it is true, in addition, that that class is also
more complex, then inputs that satisfy the Flush Criterion are sufficient to prove
this. That is, to prove this type of complexity result, one can work exclusively
with Flushed inputs without losing any generality. This is not obvious because
Flush inputs form a subset of the more biologically-realistic Gap inputs.
Theorem 2 (Equivalence of Flush and Gap Criteria with respect to Complex-
ity). Let Σ1 and Σ2 be two sets of acyclic networks, each of whose constituent
networks has exactly m input vertices. Additionally, let Σ2 be at least as com-
plex as Σ1. Then, Σ2 is more complex than Σ1 if and only if ∃N' ∈ Σ2 such
that ∀N ∈ Σ1, T_{N'}^{F_m} ≠ T_N^{F_m}.
Proof. If ∃N' ∈ Σ2 such that ∀N ∈ Σ1, T_{N'}^{F_m} ≠ T_N^{F_m}, then T_{N'}^{G12} ≠ T_N^{G12},
because F_m ⊆ G_12. For the other direction, let N' ∈ Σ2 be such that ∀N ∈
Σ1, T_{N'}^{G12} ≠ T_N^{G12}. We construct F' ⊆ F_m so that T_{N'}^{F'} ≠ T_N^{F'}. This immedi-
ately implies T_{N'}^{F_m} ≠ T_N^{F_m}. Consider an arbitrary N ∈ Σ1. From the hypothesis
we have T_{N'}^{G12} ≠ T_N^{G12}. Therefore ∃χ ∈ G_12 such that T_{N'}^{G12}(χ) ≠ T_N^{G12}(χ). Ad-
ditionally, there exist T1, T2 ∈ ℝ+ so that χ satisfies a T1-Gap Criterion for N
and a T2-Gap Criterion for N'. Let T = max(T1, T2). Let T_{N'}^{G12}(χ) = x⃗0' and
T_N^{G12}(χ) = x⃗0. Let F = ∪_{t∈ℝ} {Ξ[0,2T](σt(χ))}. Note that each element of F sat-
isfies a 2T-Flush Criterion. The claim, then, is that T_{N'}^{F} ≠ T_N^{F}. We have
Ξ[0,T](T_{N'}(Ξ[0,2T](σt(χ)))) = Ξ[0,T](σt(x⃗0')) and Ξ[0,T](T_N(Ξ[0,2T](σt(χ)))) =
Ξ[0,T](σt(x⃗0)). This follows from the fact that χ satisfies the T-Gap Criterion
with both N' and N, and therefore when N' and N are driven by any segment
of χ of length 2T, the output produced in the last T seconds of that interval
agrees with x⃗0' and x⃗0, respectively. Therefore, if x⃗0 ≠ x⃗0', it is clear that there
exists a t so that T_{N'}(Ξ[0,2T](σt(χ))) ≠ T_N(Ξ[0,2T](σt(χ))). F' is obtained by
taking the union of such F for every N ∈ Σ1. The result follows. □
Thus, whenever we prove complexity results, we prescribe a transformation
whose domain is a subset of inputs that satisfy Flush Criteria, without loss of
generality.
7 Some Complexity Results for the Abstract Model
First, we point out that it is easy to construct a transformation that cannot be
effected by any acyclic network. One of its input spike configurations with the
prescribed output is shown in Figure 1(k). For larger T, the shaded region is
simply replicated over and over again. Briefly, the reason this transformation
cannot be effected by any network is that, for any network, beyond a certain
value of T, the shaded region tends to act as a flush erasing memory of the first
spike, so that the network, on receiving another input spike, is in the exact
same state it was in when it received the first spike, and therefore produces no
output. Whether there are nontrivial transformations that cannot be effected by
any acyclic network remains to be investigated.
Next is the main theorem of this section. We prove that the set of networks
with at most two neurons is more complex than the set of single neurons. The
proof is by prescribing a transformation which cannot be effected by any single
neuron. We then construct a network with two neurons that can effect this
transformation.
Theorem 3. For m ≥ 2, the set of acyclic networks with at most two neu-
rons which have exactly m input vertices is more complex than the set of single
neurons with exactly m afferent synapses.
Proof. That the former set is at least as complex as the latter is clear. We
first prove the result for m = 2 and indicate how it can be trivially extended
for larger values of m. The following transformation is prescribed for m = 2.
Let the two input spike trains in each input spike configuration, which satisfies
a Flush Criterion, be I1 and I2. I1 has evenly-spaced spikes starting at time
instant T until 0. For the sake of exposition, we call the distance between
consecutive spikes one time unit, and we number the spikes of I1 with the first
spike being the oldest one. The ith input spike configuration in the prescribed
transformation satisfies a T-Flush Criterion, where T = 4i + 3 time units. In the
ith configuration, I2 has spikes at the time instants at which spike numbers 2i + 1
and 4i + 3 occur in I1. Finally, the output spike train corresponding to the
ith input spike configuration has exactly one spike, at the time instant at which
I1 has spike number 4i + 3. Figure 1(h) illustrates the transformation for i = 2.
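The prescribed family of input-output pairs is simple to write down explicitly; the generator below (our own sketch, with spikes indexed by their time-unit number counting from the oldest) reproduces the construction:

```python
# The i-th configuration of the prescribed transformation: I1 spikes at every
# unit 1..4i+3, I2 spikes at units 2i+1 and 4i+3, and the prescribed output
# has a single spike at unit 4i+3.

def prescribed(i):
    I1 = list(range(1, 4 * i + 4))
    I2 = [2 * i + 1, 4 * i + 3]
    out = [4 * i + 3]
    return I1, I2, out

I1, I2, out = prescribed(2)  # the i = 2 case of Figure 1(h)
print(I2, out)               # -> [5, 11] [11]
```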
Next, we prove that the transformation prescribed above cannot be effected
by any single neuron. For the sake of contradiction, assume it can, by a neuron
with associated Υ and ρ. Let max(Υ, ρ) be bounded from above by k time units.
We show that the kth input spike configuration and above cannot be mapped
by this neuron to the prescribed output spike train. Consider the output of the
neuron at the time instants corresponding to the (k + 1)th spike number and
the (2k + 3)rd spike number of I1. At each of these time instants, the input received
in the past k time units and the output produced by the neuron in the past
k time units are the same. Therefore, the neuron's membrane potential must
be identical at both instants. However, the transformation prescribes no spike at the first time
instant and a spike at the second, which is a contradiction. It follows that no
single neuron can effect the prescribed transformation.
We now construct a two-neuron network which can carry out the prescribed
transformation. The network is shown in Figure 1(i). I1 and I2 arrive instanta-
neously at N2. I1 arrives instantaneously at N1, but I2 arrives at N1 after a delay
of 1 time unit. Spikes output by N1 take one time unit to arrive at N2, which is
the output neuron of the network. The functioning of this network for i = 2 is
described in Figure 1(j). All inputs are excitatory. N1 is akin to the neuron
described in the example of Figure 1(a), in that, while the depolarization due to
a spike in I1 causes the potential to cross threshold, if, additionally, the previous
output spike happened one time unit ago, the associated hyperpolarization is
sufficient to keep the membrane potential below threshold now. However, if
there is a spike from I2 also at the same time, the depolarization is sufficient to
cause an output spike, irrespective of whether there was an output spike one time
unit ago. The Υ corresponding to N2 is shorter than 1 time unit. Further, N2
produces a spike if and only if all three of its afferent synapses receive spikes at
the same time. In the figure, N1 spikes at times 1, 3, 5. It spikes at 6 because
it received spikes both from I1 and I2 at that time instant. Subsequently, it
spikes at 8 and 10. The only time at which N2 receives spikes at all three
synapses at the same time is 11, which is the prescribed time. The
generalization for larger i is straightforward.
For larger m, one can simply have no input on the extra input spike trains, and
the same proof generalizes trivially. This completes the proof. □
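The behavior of the two-neuron construction can be checked in a discrete-time simulation (our own simplification of the two membrane-potential functions; the firing rules below encode only the coincidence and suppression logic described above, not the full model):

```python
# Discrete-time sketch of the two-neuron network: N1 receives I1 instantly and
# I2 with a 1-unit delay; it fires on an I1 spike unless it fired 1 unit ago,
# except that a coincident delayed-I2 spike overrides the suppression. N2
# fires iff I1, I2, and N1-delayed-by-1 all spike simultaneously.

def simulate(i):
    T = 4 * i + 3
    I1 = set(range(1, T + 1))          # I1 spikes at every unit
    I2 = {2 * i + 1, T}                # I2 spikes at units 2i+1 and 4i+3
    n1 = set()
    for t in range(1, T + 1):
        if t in I1:
            suppressed = (t - 1) in n1
            if not suppressed or (t - 1) in I2:  # delayed I2 arrives at t
                n1.add(t)
    n2 = {t for t in range(1, T + 1)
          if t in I1 and t in I2 and (t - 1) in n1}
    return sorted(n1), sorted(n2)

n1, n2 = simulate(2)
print(n1)  # -> [1, 3, 5, 6, 8, 10]  (as described for Figure 1(j))
print(n2)  # -> [11]                 (the prescribed output time)
```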
The above proof also suggests a large class of transformations that cannot be
effected by a single neuron. Informally, these are transformations for which there
is no fixed bound so that one can always determine whether there is an output
spike or not just by looking at a window of past input and past output, where
the window has length at most this bound.
8 Conclusion

In this paper, we have established a mathematically-rigorous and biologically-
satisfying framework to study acyclic networks as spike-train to spike-train
transformations. We defined notions of relative complexity and proved a number
of useful theorems. The complexity results we have obtained here suggest the
existence of rich underlying structure, much of which remains to be discovered.
Not only are the results themselves of theoretical significance; we believe
that in these proofs lie the seeds of computation in the brain. That is, they force
us to directly address questions about which properties of neurons are crucial
for which properties of networks and how local properties of neurons constrain
global network behavior.
We also note that the framework we have constructed here to treat acyclic
networks as spike-train to spike-train transformations is ripe for investiga-
tions other than the ones pertaining to complexity, which has been our primary
interest in this paper.
Finally, to what extent these results can afford us insights into the larger
class of recurrent networks remains to be investigated.
References

W. Maass. Lower bounds for the computational power of networks of spiking
neurons. Neural Computation, 8(1):1-40 (1996).
Tim Gollisch and Markus Meister. Rapid Neural Coding in the Retina with
Relative Spike Latencies. Science 319, 1108 (2008).
Arunava Banerjee. On the Sensitive Dependence on Initial Conditions of
the Dynamics of Networks of Spiking Neurons. Journal of Computational
Neuroscience, 20(3), pp. 321-348 (2006).
Panayiota Poirazi, Terrence Brannon and Bartlett W. Mel. Pyramidal Neuron
as Two-Layer Neural Network. Neuron, Vol. 37, 989-999 (2003).
Sander M. Bohte, Joost N. Kok and Han La Poutre. Error-backpropagation in
temporally encoded networks of spiking neurons. Neurocomputing 48, 17-37 (2002).
Robert Gütig and Haim Sompolinsky. The tempotron: a neuron that learns
spike timing-based decisions. Nature Neuroscience, Vol. 9(3) (2006).
Peter L. Bartlett and Wolfgang Maass. Vapnik-Chervonenkis Dimension of
Neural Nets. In M. A. Arbib, editor, The Handbook of Brain Theory and
Neural Networks, pages 1188-1192. 2nd edition (2003).
S. Nirenberg, S. Carcieri, A. Jacobs and P. Latham. Retinal ganglion cells
act largely as independent encoders. Nature 411, 698-701 (2001).
Gordon M. Shepherd. The Synaptic Organization of the Brain, Fifth Edition.
Oxford University Press (2004).
Arunava Banerjee. On the Phase-Space Dynamics of Systems of Spiking
Neurons. I: Model and Experiments. Neural Computation, 13(1), pp. 161-.
Wulfram Gerstner and Werner M. Kistler. Spiking Neuron Models: Single
Neurons, Populations, Plasticity. Cambridge University Press (2002).
Figure 1: panels (a)-(k).
(a) This example describes a single neuron which has just one afferent synapse. Until time t' in the past, it
received no input. After this time, its input was spikes that arrived every ρ - δ/2 time units, where δ > 0.
An input spike alone (if there were no output spikes in the past ρ seconds) can cause the neuron to
produce an output spike. However, if there were an output spike within the past ρ seconds, the AHP is
sufficient to bring the potential below threshold, so that the neuron does not spike currently. Thus, if the
first spike is absent, then the output spike train drastically changes. Note that this change occurs no matter
how often the shaded segment in the middle is repeated, i.e. it does not depend on how long ago the first spike occurred.
(b) The example here is very similar to the one in (a), except that this is an unbounded-time input spike configuration with
the same periodic input spikes occurring since the infinite past. Note that both output spike trains are consistent with the input.