Many Slow Spikes, One Fast Beat: How Neuron Populations Make Rapid Rhythms
Individually, neurons in the brain often fire at relatively slow rates. When a population of neurons is combined, however, either through a postsynaptic neuron summing their inputs or through a local field potential (LFP) summing the activity of the whole population, the resulting signal can appear to oscillate much faster than any individual neuron. In the case of a postsynaptic neuron, the membrane potential integrates the contributions of many presynaptic neurons: even if each presynaptic neuron fires slowly, the combined input can drive the postsynaptic neuron at a higher apparent rate. Similarly, an LFP, which reflects the summed currents from a local population of neurons, can exhibit fast oscillations even when the underlying neurons are firing slowly, due to constructive interference of their individual periodic signals.
I want to show formally how a population of neurons can give rise to an apparent frequency higher than the firing rate of the underlying neurons. With a few simple assumptions this is not hard to prove. Assume we have $N$ neurons, each firing at frequency $\omega$, and that these neurons are perfectly periodic with period $T=\frac{1}{\omega}$. Let us further assume that the shape of the voltage trace (or synaptic variable) is completely arbitrary except that it is strictly $T$-periodic. We will denote the trace of the $j$th of the $N$ oscillators by $X_j(t)$; it is worth stressing that $X_j(t+T) = X_j(t)$. I want to show that a sum over the $N$ neurons (whether a downstream postsynaptic neuron or a local field potential) can appear to have frequency $m \omega$ with $m \in \mathbb{N}$, i.e.\ an integer multiple of $\omega$.
Perhaps the easiest way to see this is to construct an apparent LFP with frequency $N\omega$. Denote the summed signal by $Y(t)$; it is the sum of all neurons (whether it represents a postsynaptic input or an LFP does not matter):
$$Y(t)=\sum_{j=1}^N X_j(t)$$
Let us further stipulate that each neuron has a phase offset following the rule
$$X_j(t)=X\!\left(t+\frac{T(j-1)}{N}\right)$$
This ensures that each oscillator is equally spaced out along a single period.
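The construction above can be sketched numerically. This is a minimal sketch, assuming $N=5$, $\omega=1$, and a triangle wave as the (arbitrary) $T$-periodic shape; the names `X` and `Y` mirror the notation in the text.

```python
import numpy as np

# Assumptions for illustration: N = 5 oscillators at omega = 1 Hz,
# and a triangle wave as the arbitrary T-periodic shape X.
N, omega = 5, 1.0
T = 1.0 / omega

def X(t):
    """Arbitrary T-periodic waveform: a triangle wave on [0, T)."""
    phase = (t / T) % 1.0
    return 1.0 - 4.0 * np.abs(phase - 0.5)

def Y(t):
    """Population signal: sum of N equally phase-shifted copies of X."""
    return sum(X(t + T * (j - 1) / N) for j in range(1, N + 1))

t = np.linspace(0.0, 2.0 * T, 1000, endpoint=False)
# Shifting Y by T/N only relabels the N copies, so Y is (T/N)-periodic.
print(np.allclose(Y(t), Y(t + T / N)))  # True
```

The final check already hints at the result: $Y$ repeats every $T/N$, i.e.\ at frequency $N\omega$.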
Now, we said the shape of $X$ is arbitrary, but periodic. Let us also assume $X(t)$ is bounded (does not blow up to infinity) and continuous (and differentiable except at a finite number of points). We also assume the Fourier series of $X$ exists and is well defined.
Recall, by the definition of the Fourier series,
$$X(t)=\sum_{k=-\infty}^\infty c_k e^{2\pi i \omega k t}$$
The constants $c_k$ are fixed for $X(t)$. Even if we time-shift $X(t)$, the $c_k$ remain the same; the time shift only multiplies the $k$th term by the complex constant $e^{2\pi i \omega k \frac{T(j-1)}{N}}=e^{2\pi i k \frac{(j-1)}{N}}$.
This gives us the following Fourier series for the $j$th neuron:
$$X\!\left(t+\frac{T(j-1)}{N}\right)=\sum_{k=-\infty}^\infty c_k \, e^{2\pi i k \frac{(j-1)}{N}} \, e^{2\pi i \omega k t}$$
Now recall, $Y(t)$ is the sum of all neurons. This gives us
$$Y(t)=\sum_{j=1}^N X\!\left(t+\frac{T(j-1)}{N}\right)=\sum_{j=1}^N \sum_{k=-\infty}^\infty c_k \, e^{2\pi i k \frac{(j-1)}{N}} \, e^{2\pi i \omega k t}$$
Assuming $|c_k|$ decays at least on the order of $k^{-2}$ (which holds, for instance, when $X(t)$ is continuous and piecewise differentiable), the double sum converges absolutely, so we can interchange the sums and write
$$Y(t)=\sum_{k=-\infty}^\infty c_k \, e^{2\pi i \omega k t} \sum_{j=1}^N e^{2\pi i k \frac{(j-1)}{N}}$$
Now we need to evaluate the inner sum
$$\sum_{j=1}^N e^{2\pi i k \frac{(j-1)}{N}}$$
One way to see the value of this sum is that it adds $N$ points equally spaced around the unit circle in the complex plane. These points form the vertices of a regular polygon with centroid $0$, so they sum to $0$, unless they all stack up perfectly at $1$. Let us prove this fact algebraically.
This is a geometric series. Note, very specifically, if $z=e^{2\pi i k \frac{1}{N}}$, then
$$\sum_{j=1}^N e^{2\pi i k \frac{(j-1)}{N}}=\sum_{j=1}^N z^{\,j-1}$$
We can compute
$$\sum_{j=1}^N z^{\,j-1}=\frac{1-z^N}{1-z}$$
First let's note that for any integer $k$ the numerator vanishes:
$$1-z^N=1-e^{2 \pi i k \frac{1}{N} N}=1-e^{2 \pi i k}=0$$
This is because $e^{2\pi i k}=1$ whenever $k \in \mathbb{Z}$.
Now let's examine the case $k \neq m N$ for every integer $m$. Here the denominator is
$$1-z = 1-e^{2 \pi i \frac{k}{N}} \neq 0$$
because $\frac{k}{N} \notin \mathbb{Z}$, so $z \neq 1$.
Therefore,
$$ \frac{1-z^N}{1-z} = \frac{0}{1-z} = 0$$
This means every Fourier term whose frequency is not a multiple of $N\omega$ disappears; in particular, the frequencies $\omega,2\omega,\dots,(N-1)\omega$ all vanish. Equivalently, the $k$th term of the outer sum is
$$Y_k = 0 \cdot c_k \, e^{2\pi i \omega k t} = 0 \quad \text{when } k \neq mN.$$
Now let's consider the case $k=mN$. Then $z=e^{2\pi i m}=1$, so $1-z=0$ and both the numerator and the denominator vanish:
$$\frac{1-z^N}{1-z}=\frac{0}{0}$$
Taking the limit
$$\lim_{z\to 1}\frac{1-z^N}{1-z}$$
and applying L'Hôpital's rule gives
$$\lim_{z\to 1}\frac{-N z^{N-1}}{-1} = N \neq 0.$$
(Alternatively, when $z=1$ every term of the geometric series equals $1$, so the sum is simply $N$.)
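Both cases of the inner sum can be sanity-checked numerically. A small sketch, assuming $N=5$ for concreteness:

```python
import numpy as np

# Sketch: the sum over N-th roots of unity is 0 unless k is a
# multiple of N, in which case it equals N.  N = 5 is an assumption.
N = 5

def inner_sum(k):
    return sum(np.exp(2j * np.pi * k * (j - 1) / N) for j in range(1, N + 1))

for k in range(-7, 8):
    expected = N if k % N == 0 else 0
    assert abs(inner_sum(k) - expected) < 1e-9
print("verified: sum is N when k is a multiple of N, else 0")
```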
Critically, this means only the terms with $k=mN$ survive in the outer sum. More explicitly,
$$Y_{k} = N c_{k} \, e^{2\pi i \omega k t} \neq 0 \quad \text{when } k = mN.$$
Because all terms with $k \neq mN$ vanish, we can now write
$$Y(t) = \sum^{\infty}_{m=-\infty} N c_{mN} e^{2\pi i m N \omega t} $$
That means every multiple $mN\omega$ (with $m\in\mathbb{Z}$) is an allowed frequency of the Fourier series of $Y(t)$, with fundamental frequency $N\omega$. Every term associated with $k\omega$ vanishes exactly unless $k=mN$. Thus the Fourier series of $Y(t)$ appears to have frequency $N\omega$, not $\omega$, despite $Y$ being composed of much slower oscillators.
Below I show this with square and triangle pulses, although any pulse shape works. Here I choose $N=5$ and $\omega=1$. Notice that with 5 oscillators the dominant peak is at $N\omega=5$.
*(Figure: square wave and triangle wave boosted to a frequency of $5$ cycles. Note the peaks at $5$ cycles and the absence of peaks at $1$ cycle.)*
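The spectrum in the figure can be reproduced along the following lines. This is a sketch, assuming a triangle-wave shape, $N=5$, $\omega=1$ Hz, a 100 Hz sampling rate, and a 10 s window (all illustrative choices, not taken from the original figure):

```python
import numpy as np

# Sketch: sum N = 5 phase-shifted triangle waves at omega = 1 Hz and
# inspect the spectrum; it should peak at N*omega = 5 Hz with no
# energy at 1-4 Hz.  fs and dur are arbitrary illustrative choices.
N, omega, fs, dur = 5, 1.0, 100, 10.0
t = np.arange(0.0, dur, 1.0 / fs)

def triangle(t):
    phase = (t * omega) % 1.0
    return 1.0 - 4.0 * np.abs(phase - 0.5)

Y = sum(triangle(t + (j - 1) / (N * omega)) for j in range(1, N + 1))

mag = np.abs(np.fft.rfft(Y))
freqs = np.fft.rfftfreq(len(Y), 1.0 / fs)
peak = freqs[np.argmax(mag[1:]) + 1]      # skip the DC bin
print(peak)                               # 5.0: the apparent frequency
print(mag[np.isclose(freqs, 1.0)][0] / mag.max() < 1e-6)  # no 1 Hz peak
```

The 1 Hz through 4 Hz bins cancel to numerical precision, exactly as the geometric-series argument predicts.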
I can also add only 2 or 3 of the oscillators. As I add oscillators, the peak at $N\omega=5$ becomes more and more prominent. Once I reach 5 oscillators, all peaks between multiples of $N\omega$ disappear and $Y(t)$ appears to have frequency $N\omega$, not $\omega$.
*(Figure: two and three neurons added instead of all $5$. As the count goes from 2 to 3 the peak at $5$ cycles grows; note, however, that a peak at $1$ cycle and its harmonics remains.)*
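The partial-sum behavior can be checked the same way. A sketch, reusing the triangle waveform and the same offsets (multiples of $T/N$ with $N=5$, which are illustrative assumptions):

```python
import numpy as np

# Sketch: with only M < N oscillators the low harmonics no longer
# cancel, so a 1 Hz component survives; at M = N it vanishes.
N, omega, fs, dur = 5, 1.0, 100, 10.0
t = np.arange(0.0, dur, 1.0 / fs)
tri = lambda t: 1.0 - 4.0 * np.abs((t * omega) % 1.0 - 0.5)

def spectrum(M):
    Y = sum(tri(t + (j - 1) / (N * omega)) for j in range(1, M + 1))
    return np.abs(np.fft.rfft(Y)), np.fft.rfftfreq(len(t), 1.0 / fs)

for M in (2, 3, 5):
    mag, freqs = spectrum(M)
    at1 = mag[np.isclose(freqs, 1.0)][0]
    print(M, at1 > 1.0)   # 1 Hz survives for M = 2, 3; gone at M = 5
```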
To conclude, there are several ways to generalize this that I will not work through here. One can change the phase offset to $\frac{p (j-1)}{N}$ with $p\in\mathbb{N}$, and one can instead create a frequency $m \omega$ where $m$ is a divisor of $N$. Both follow from slightly modified versions of the same proof, which the interested reader can try themselves.
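For the first generalization, a quick numerical check is easy. A sketch, assuming $p=2$ and $N=5$ (so $\gcd(p,N)=1$ and the offsets visit the same $N$ roots of unity in a different order):

```python
import numpy as np

# Sketch: with offsets p*(j-1)/N and gcd(p, N) = 1, the inner sum still
# vanishes unless k is a multiple of N.  p = 2, N = 5 are assumptions.
N, p = 5, 2

def inner_sum(k):
    return sum(np.exp(2j * np.pi * k * p * (j - 1) / N)
               for j in range(1, N + 1))

assert abs(inner_sum(1)) < 1e-9          # fundamental still cancels
assert abs(inner_sum(5) - N) < 1e-9      # k = N survives with weight N
print("offsets p*(j-1)/N behave like (j-1)/N when gcd(p, N) = 1")
```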
Finally, I want to note that for small to moderate noise (i.e.\ when the spike times jitter around the mean slow frequency $\omega$) the results still mostly hold. Proving this is much more difficult, so I won't go into it here beyond illustrating it with jittered square and triangle waves.
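A crude version of the jitter experiment can be sketched as follows. Here I assume square-wave neurons and a fixed Gaussian phase jitter per neuron (standard deviation $0.01\,T$) as a rough stand-in for spike-time jitter; the claim checked is only that the fast peak still dominates the slow one.

```python
import numpy as np

# Jitter sketch (assumptions: square waves, fixed per-neuron Gaussian
# phase jitter with sigma = 0.01*T).  The cancellation at 1 Hz is now
# imperfect, but the 5 Hz peak should still dominate.
rng = np.random.default_rng(0)
N, omega, fs, dur = 5, 1.0, 100, 10.0
t = np.arange(0.0, dur, 1.0 / fs)
square = lambda t: np.where((t * omega) % 1.0 < 0.5, 1.0, -1.0)

jitter = rng.normal(0.0, 0.01, size=N)     # phase jitter per neuron
Y = sum(square(t + j / (N * omega) + jitter[j]) for j in range(N))

mag = np.abs(np.fft.rfft(Y))
freqs = np.fft.rfftfreq(len(Y), 1.0 / fs)
at1 = mag[np.isclose(freqs, 1.0)][0]
at5 = mag[np.isclose(freqs, 5.0)][0]
print(at5 > at1)   # the fast peak survives moderate jitter
```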
As a final note, square waves are not differentiable everywhere, so one needs to be careful when interchanging the order of the sums: their Fourier series is not absolutely convergent. In this case the interchange still works, but the proof is a bit more delicate.
Author: Alexander White







