Part 3 Izhikevich's simple neurons: Resonators and Integrators.
$\newcommand{\Vminus}{V_-}
\newcommand{\Wminus}{W_-}
\newcommand{\Vplus}{V_+}
\newcommand{\dvdv}{\frac{\partial\frac{dV}{dt}}{\partial V}}
\newcommand{\dvdw}{\frac{\partial\frac{dV}{dt}}{\partial W}}
\newcommand{\dwdv}{\frac{\partial\frac{dW}{dt}}{\partial V}}
\newcommand{\dwdw}{\frac{\partial\frac{dW}{dt}}{\partial W}}
$
From the outset, we said the Izhikevich model captures many different types of possible neural computations. It does so by taking advantage of the quadratic nonlinearity $V^2$: by choosing different parameters, one can create many different types of neurons capable of many different types of computations.
What does the word "computation" mean? Simply put, computation describes how a neuron transforms a given input into an output. More specifically: which kinds of inputs does the neuron respond to by firing?
One of the most important classifications of computational properties, according to Dr. Izhikevich, is the split between integrators and resonators. Very simply put, a resonator has sub-threshold oscillations and an integrator does not. This seemingly trivial property is very important in understanding the dynamics of neurons. To quote him, "This feature is so important that many other neuronal properties discussed later are mere consequences of the existence or absence of such oscillations." (Chapter 7, pg. 230).
What is a sub-threshold oscillation? Let's look at a simulation of the famous Hodgkin-Huxley model. It starts out at rest; then, at 62 ms, we inject a small current. As the neuron returns to rest, it doesn't decay monotonically. Instead, it oscillates above and below the rest potential as it decays. This oscillatory return to rest is a sub-threshold oscillation, and thus the Hodgkin-Huxley model is a resonator. Izhikevich neurons can also be resonators; we just need to choose the correct set of parameters. In the same figure I have created an Izhikevich neuron with the exact same sub-threshold oscillation frequency! The graphs are hard to tell apart; if you look closely at the highest peak, you'll see a slight difference.
To compare with a model without this sub-threshold oscillation, let's look at the Izhikevich model we discussed in the first blog. This model is an integrator. We give the same input to this integrator neuron, and it monotonically returns to rest. It stays strictly above the rest potential and never drops below it.
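To make this concrete, here is a minimal sketch (with illustrative regular-spiking-style parameters in the dimensional form of the model, not the exact values from the first post) that integrates the equations with forward Euler and injects a small current pulse:

```python
import numpy as np

# Dimensional Izhikevich model (illustrative integrator parameters):
#   C_m dV/dt = k (V - V_r)(V - V_th) - W + I
#     dW/dt   = a (b (V - V_r) - W)
C_m, k = 100.0, 0.7          # membrane capacitance and quadratic gain
V_r, V_th = -60.0, -40.0     # rest and threshold potentials (mV)
a, b = 0.03, -2.0            # recovery-variable dynamics

def simulate(I_fn, T=200.0, dt=0.01):
    """Forward-Euler integration; returns time and voltage arrays (ms, mV)."""
    n = int(T / dt)
    t = np.arange(n) * dt
    V = np.empty(n); W = np.empty(n)
    V[0], W[0] = V_r, 0.0     # start exactly at the rest point
    for i in range(n - 1):
        dV = (k * (V[i] - V_r) * (V[i] - V_th) - W[i] + I_fn(t[i])) / C_m
        dW = a * (b * (V[i] - V_r) - W[i])
        V[i + 1] = V[i] + dt * dV
        W[i + 1] = W[i] + dt * dW
    return t, V

# Small sub-threshold pulse between 62 and 67 ms.
t, V = simulate(lambda t: 20.0 if 62.0 <= t < 67.0 else 0.0)
print(round(V.max(), 2), round(V[-1], 2))
```

The pulse stays well below threshold, and the trace relaxes back to $V_r$ from above without undershooting it, which is exactly the integrator behaviour described above.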
Only resonators have sub-threshold oscillations; integrators never do. But let's add some math here. The voltage in the Izhikevich integrator decays approximately as the sum of two exponentials, $V(t) \approx \xi_1 e^{\lambda_1 t} + \xi_2 e^{\lambda_2 t}$, where the $\lambda$ and $\xi$ are the eigenvalues and eigenvectors respectively. The Izhikevich resonator solution, however, is $V(t) \approx C e^{\alpha t}\sin(\omega t + \phi)$. Note that here $\lambda_{1,2}=\alpha \pm \omega i$, while $C$ and $\phi$ depend on the eigenvectors and the initial conditions. The Hodgkin-Huxley resonator can likewise be approximated as a sum of four exponentials multiplied by a sine wave, and the values of these terms are exactly the eigenvalues and eigenvectors of the linearization around the rest point. There are four eigenvalues because the Hodgkin-Huxley model has 4 state variables, while the Izhikevich model has only 2. If you are unfamiliar with linearization, see this blog.
To give a quick summary of linearization: we look at a very small region around an equilibrium point and pretend the system is linear there. If we pretend the neighborhood is linear, then we can use all the tools developed for linear ordinary differential equations, including eigenvalues and eigenvectors. Usually these regions are small; in neuroscience the region typically spans only a few millivolts in either direction before the nonlinear effects become too big to ignore. Nonetheless, this information is very useful.
One of the most important things to note here is that the eigenvalues tell us a lot about the existence of sub-threshold oscillations. Recall that the solution of any linear system is a sum of exponentials: $V(t) \approx \xi_1 e^{\lambda_1 t} + \xi_2 e^{\lambda_2 t}$, where the $\lambda$ are the eigenvalues and the $\xi_{1,2}$ are the voltage components of the eigenvectors. More importantly, note that $\sin(\omega t)$ is also a sum of exponentials: $\sin(\omega t) = \frac{e^{i\omega t} - e^{-i\omega t}}{2 i}$, where $i$ is the imaginary unit. Thus, if we have a sub-threshold oscillation like the Hodgkin-Huxley model, where $V(t)\approx C e^{\alpha t} \sin(\omega t)$, and we assume $\phi=0$ for convenience, then substituting and collecting terms we find
$$ V(t)\approx C e^{\alpha t} \sin(\omega t) $$
$$V(t)\approx C e^{\alpha t} \frac{e^{(\omega i t)} - e^{(-\omega i t)}}{2i} $$
$$V(t)\approx \frac{C}{2i} e^{(\alpha+\omega i) t} + \frac{-C}{2i}e^{(\alpha-\omega i) t}$$
$$\xi_1 = \frac{C}{2i}\text{ and } \xi_2 = \frac{-C}{2i}$$
This means that $\lambda = \alpha \pm \omega i$! Thus our sub-threshold oscillation frequency is given by the imaginary part of the eigenvalues. If the eigenvalues have no imaginary part, then there are no sub-threshold oscillations and the neuron is not a resonator! This is why linearization is such an important tool in neuroscience. It allows us to get a handle on the very complex interaction of nonlinear terms and still classify computational properties.
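This classification can be checked mechanically: build the Jacobian of the Izhikevich model at rest (the same matrix derived in part 2) and look at the imaginary parts of its eigenvalues. A minimal sketch, with two illustrative parameter sets chosen purely for demonstration:

```python
import numpy as np

def jacobian(C_m, k, V_r, V_th, a, b):
    """Jacobian of the dimensional Izhikevich model at rest (V = V_r, I = 0)."""
    return np.array([[k * (V_r - V_th) / C_m, -1.0 / C_m],
                     [a * b,                  -a        ]])

def classify(J):
    """Complex eigenvalues -> resonator, real eigenvalues -> integrator."""
    lam = np.linalg.eigvals(J)
    return "resonator" if np.abs(np.imag(lam)).max() > 1e-12 else "integrator"

# Same membrane, different recovery coupling (illustrative values only).
J_int = jacobian(C_m=100, k=0.7, V_r=-60, V_th=-40, a=0.03, b=-2)
J_res = jacobian(C_m=100, k=0.7, V_r=-60, V_th=-40, a=0.1,  b=5)
print(classify(J_int), classify(J_res))

# For the resonator, Im(lambda) is the sub-threshold frequency in rad/ms.
omega = np.abs(np.imag(np.linalg.eigvals(J_res))).max()
print(round(omega / (2 * np.pi) * 1000, 1), "Hz")
```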
Now, why would we care about resonators in the first place? Why does biology even use them? Not only are they found in squid, they are also found in mammals. Take, for example, mesencephalic trigeminal neurons or inferior olive neurons. Even thalamic neurons can support sub-threshold oscillations. But why? What can they do that integrators cannot?
Resonators prefer to respond to a specific input frequency, while integrators do not. Integrators tend to sum their input over time, and if the input signal oscillates it tends to cancel out. This makes integrators great low-pass filters. Resonators, however, respond most strongly to the input frequency that matches their sub-threshold oscillations. Resonators make great band-pass filters.
We can see this clearly with a simple experiment. Let's look at our two different Izhikevich models, an integrator and a resonator. We will inject a zap current given by $I(t) = \sin(\Omega t^2)$, which slowly sweeps through a range of frequencies. The resulting voltage trace shows an attenuated sine wave. This gives us a sort of makeshift Fourier transform and allows us to estimate the gain of each neuron at a given frequency.
Notice the integrator only fires a spike when the zap frequency is slow. It responds only to low-frequency input; when the signal is too fast, the fast oscillations cancel out and are attenuated more strongly. The resonator, however, reaches its maximum somewhere in the middle of the zap current. This is the resonance frequency $\omega$: that input frequency matches the natural frequency of the resonating neuron. As stated before, if the eigenvalues are $\lambda = \alpha \pm i\omega$, then $\omega$ is the resonant frequency as well as the sub-threshold oscillation frequency. This has immediate consequences for biology: resonators can act as band-pass filters, and of course we creative computational neuroscientists can make useful circuits with them.
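The low-pass versus band-pass picture can also be read straight off the linearization: the steady-state gain of $dx/dt = Jx + e_1\sin(\omega t)$ at each input frequency is the magnitude of the top-left entry of $(i\omega I - J)^{-1}$ (up to a constant factor $1/C_m$). A sketch with two hypothetical Jacobians:

```python
import numpy as np

# Linearizations at rest, J = [[k(V_r - V_th)/C_m, -1/C_m], [ab, -a]]
# (illustrative values, not fit to any figure in the post).
J_int = np.array([[-0.14, -0.01], [-0.06, -0.03]])   # real eigenvalues
J_res = np.array([[-0.14, -0.01], [ 0.50, -0.10]])   # complex eigenvalues

def gain(J, w):
    """Steady-state voltage amplitude for a sin(w t) current drive."""
    return abs(np.linalg.inv(1j * w * np.eye(2) - J)[0, 0])

ws = np.linspace(1e-3, 0.5, 500)                     # rad/ms
g_int = np.array([gain(J_int, w) for w in ws])
g_res = np.array([gain(J_res, w) for w in ws])
# Integrator: gain is largest at the lowest frequency (low-pass).
# Resonator: gain peaks at an interior frequency (band-pass).
print(ws[np.argmax(g_int)], ws[np.argmax(g_res)])
```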
Here is an example of a fairly simple circuit with four neurons, inspired by chapter 7 of Dr. Izhikevich's book. There is an input neuron that connects to three different resonating neurons: Rez A, Rez B, and Rez C. As the current injected into the input neuron increases from low to high, different resonators fire. At the low injection, the green input neuron fires at just the right frequency to fall inside Rez A's frequency band, so Rez A fires. The signal is outside Rez B's and Rez C's frequency bands, so they do not fire.
When the signal is of medium strength, the green neuron's firing rate is higher. It no longer falls in Rez A's frequency band but in Rez B's, so only Rez B fires. Finally, when the injected current is very strong, the frequency is too fast for Rez A and Rez B, but now it is perfect for Rez C, which fires.
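A stripped-down version of this circuit can be sketched with three linear resonators acting as band-pass filters. The natural frequencies below are hypothetical, and the "input neuron" is reduced to a sine drive at each band's centre frequency:

```python
import numpy as np

def resonator_J(omega0, alpha=-0.05):
    """Real 2x2 matrix with eigenvalues alpha +/- i*omega0 (companion form)."""
    return np.array([[0.0, 1.0], [-(alpha**2 + omega0**2), 2 * alpha]])

def gain(J, w):
    """Steady-state response amplitude to a sin(w t) drive on the first state."""
    return abs(np.linalg.inv(1j * w * np.eye(2) - J)[0, 0])

# Hypothetical natural frequencies for Rez A, B, C (rad/ms).
bands = [0.2, 0.4, 0.8]
Js = [resonator_J(w0) for w0 in bands]

# Drive the bank at each band's centre frequency; the matched unit wins.
winners = [int(np.argmax([gain(J, w) for J in Js])) for w in bands]
print(["Rez " + "ABC"[i] for i in winners])
```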
Obviously Rez A, Rez B, and Rez C all have different parameters. Each parameter set is chosen so that the resonant frequency band of each neuron corresponds to the right strength of current injected into the green input neuron. So how do we find these particular parameters? It takes a fair amount of math and relies heavily on linearization. If you haven't read part 2, I'd suggest you read it now.
If we recall from the last blog article, an Izhikevich neuron is a resonator if
$$
(a + \frac{k}{C_m}(V_r+V_{th}-2\Vminus))^2<4\left[a\frac{b}{C_m}\right]
$$
is true. That is a particularly nasty inequality, but when it holds the neuron will have a pair of eigenvalues with an imaginary part. So the goal is to find a set of parameters that satisfies it. But there is always an easier trick hiding around the corner: we will make things easy and set $I=0$, so that $V_- = V_r$.
Thus, we know that the Jacobian matrix at rest is given by
$$
J = \left(
\begin{array}{cc}
\frac{k(V_r-V_{th})}{C_m} & -\frac{1}{C_m}\\
ab & -a
\end{array}
\right)
$$
We also know that the eigenvectors $\xi_1$ and $\xi_2$ form the columns of a matrix $\Xi$, and that the eigenvalues form a diagonal matrix $\Lambda$ such that
$$
\Lambda= \left(
\begin{array}{cc}
\lambda_1 & 0\\
0 & \lambda_2
\end{array}
\right)
$$
We also know that
$$ J = \Xi \Lambda \Xi^{-1} $$
So our goal is to choose eigenvectors and eigenvalues that correspond to the system we want. Say I want a neuron with sub-threshold oscillations at $63.6\ \mathrm{Hz}$ (or $0.0636\ \mathrm{kHz}$) that decays with a time constant of 20 ms. That implies $\lambda_{1,2}= -\frac{1}{20}\pm 2\pi i(0.0636)\approx -\frac{1}{20} \pm 0.4i$. (I chose this odd-looking frequency so that multiplying by $2\pi$ would give a number that is easy to work with.) Now, the eigenvectors need to be complex conjugates of each other, which means $\xi_1 = \langle 1, a + b i\rangle$ and $\xi_2= \langle 1, a - b i\rangle$; the exact choice doesn't matter too much, so $\xi_{1,2}= \langle 1, -\frac{1}{20} \pm 0.4i \rangle$ works.**
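Before grinding through the algebra by hand, here is a minimal numerical sketch of the same construction, using $J = \Xi \Lambda \Xi^{-1}$ and pairing each eigenvalue with the eigenvector $\langle 1, \lambda \rangle$ (one convenient choice among many):

```python
import numpy as np

# Target: decay constant 20 ms, oscillation at 0.0636 kHz (time in ms).
alpha = -1.0 / 20.0
omega = 2 * np.pi * 0.0636                     # ~0.4 rad/ms
lam = np.array([alpha + 1j * omega, alpha - 1j * omega])

# Conjugate eigenvector pair xi = <1, lambda> as columns of Xi.
Xi = np.array([[1.0, 1.0], [lam[0], lam[1]]])

# Reconstruct the matrix; the imaginary parts cancel, leaving a real J.
J = (Xi @ np.diag(lam) @ np.linalg.inv(Xi)).real
print(np.round(np.linalg.eigvals(J), 4))
```

Recomputing the eigenvalues of the reconstructed real matrix recovers exactly the $\lambda_{1,2}$ we asked for.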
Now the eigenvectors can be thought of as the columns of the matrix $\Xi$. Thus we can construct the Jacobian using the equation $J= \Xi \Lambda \Xi^{-1}$. This means that
$$
\left(
\begin{array}{cc}
1 & 1\\
-\frac{1}{20} - .4i & -\frac{1}{20} + .4i
\end{array}
\right)
\left(
\begin{array}{cc}
-\frac{1}{20} + .4i & 0\\
0 & -\frac{1}{20}- .4i
\end{array}
\right)
\left(
\begin{array}{cc}
1 & -\frac{1}{20} + .4i \\
1 & -\frac{1}{20} - .4i
\end{array}
\right)
= \left(
\begin{array}{cc}
\frac{k(V_r-V_{th})}{C_m} & -\frac{1}{C_m}\\
ab & -a
\end{array}
\right)
$$
Computing the left hand side we see that
$$
\left(
\begin{array}{cc}
1 & 17.004\\
1 & -1.04
\end{array}
\right)= \left(
\begin{array}{cc}
\frac{k(V_r-V_{th})}{C_m} & -\frac{1}{C_m}\\
ab & -a
\end{array}
\right)
$$
That immediately gives us the parameters $C_m\approx 0.05$, $a\approx 1.04$, and, since $ab = 1$, $b \approx 0.96$. However, we still have to solve the equation
$$
\frac{k(V_r-V_{th})}{.05}=1
$$
Now, I will choose $V_r = -60\ \mathrm{mV}$ so that the neuron is more realistic. The choice of rest potential isn't very important, but you must choose either $V_{th}$ or $k$ and then solve for the one you didn't choose. I tend to prefer choosing $k$ because it gives more control over the time it takes to spike: a larger $k$ means quicker spikes. There is no way to determine the spike time exactly, but I chose $k=0.11$ here. Choosing parameters is always a bit of an art, but here I've boiled the only artistic choice down to the parameter $k$.
When we follow this method, the parameters we get automatically satisfy the condition
$$
(a + \frac{k}{C_m}(V_r+V_{th}-2\Vminus))^2<4\left[a\frac{b}{C_m}\right]
$$
And we only needed to arbitrarily choose one parameter, $V_r$, and make an educated guess for another, $k$. Now, this doesn't give us $c$ and $d$, obviously, but that's where the separatrix graph from the first blog comes in. I have chosen $c = -55$ and $d = 10$ so that the neuron resets to the left side of the separatrix after firing. I did this by eye, but it could also be done more explicitly in code.
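As a sanity check, we can rebuild the Jacobian from the fitted values and confirm that its eigenvalues are a complex pair. This is a minimal sketch: $c$ and $d$ do not enter the linearization, $V_{th}$ is a hypothetical value recovered from the constraint above, and $b$ comes from $ab = 1$:

```python
import numpy as np

# Parameters read off from the worked example above.
C_m, a, k, V_r = 0.05, 1.04, 0.11, -60.0
b = 1.0 / a                   # from the ab = 1 entry of the matrix
V_th = V_r - C_m / k          # from k (V_r - V_th) / C_m = 1

J = np.array([[k * (V_r - V_th) / C_m, -1.0 / C_m],
              [a * b,                  -a        ]])
lam = np.linalg.eigvals(J)
# A nonzero imaginary part means the resonance condition holds.
print(np.abs(np.imag(lam)).max() > 0)
```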
As a note, tuning a model for a network is very hard; it is often best described as an art. In this example I was playing with $k$, $c$, and $d$ all at the same time, not just $k$ alone. Perhaps in another blog I will discuss how to guess parameters more systematically. But no matter which parameters you are guessing, practice makes perfect.
** This isn't quite true. The choice of eigenvectors helps control how wide the band-pass filter is. Perhaps I will revisit this in a future blog.