Part 2 Izhikevich's simple neuron: Stability Analysis
$\newcommand{\Vminus}{V_-}
\newcommand{\Wminus}{W_-}
\newcommand{\Vplus}{V_+}
\newcommand{\dvdv}{\frac{\partial\frac{dV}{dt}}{\partial V}}
\newcommand{\dvdw}{\frac{\partial\frac{dV}{dt}}{\partial W}}
\newcommand{\dwdv}{\frac{\partial\frac{dW}{dt}}{\partial V}}
\newcommand{\dwdw}{\frac{\partial\frac{dW}{dt}}{\partial W}}
$
Izhikevich neurons and proving the rest state is stable
Chinese version here:
http://neuroinfo-cclolab.blogspot.tw/2018/04/part-2-izhikevichs-simple-neuron_8.html
Many neurons are highly nonlinear, and the Izhikevich model is no exception. The $V^2$ term in the voltage equation plays a major role in the dynamics of the model neuron, so it is natural to use the theory of nonlinear dynamics to study the Izhikevich model. Oddly, one of the most important concepts in nonlinear dynamics is linearization: the process of approximating a nonlinear system as a linear one. This should feel counterintuitive. The great success of nonlinear models, including the Izhikevich model, in neuroscience is the vast repertoire of phenomena they can represent, including the all-or-nothing action potential, which is fundamentally a nonlinear phenomenon. Now I am claiming that the secret to understanding these complicated nonlinear phenomena is to approximate away the nonlinear $V^2$ via linearization. How can I make this bizarre claim?
The reason linearization is such a powerful tool in neuroscience is that the voltage spends quite a bit of time near the rest equilibrium. Linearization tells us that when the voltage is near any equilibrium, nonlinear effects are weak, weak enough to ignore mathematically. Once we do this we can ask a few very important questions.
- How quickly does a neuron return to rest after spiking?
- Does it oscillate as it returns to rest?
- When the neuron receives an excitatory input, why does it leave the rest equilibrium?
- How quickly does it leave rest?
- Why does it not come back?
Many of these seemingly simple questions can only be answered by first linearizing the system. We will answer the first question in this blog post, and the second question in the sister blog post HERE. Some of the later questions need extra theory provided by bifurcation theory (which we will talk about in later posts yet to be released). But all of these questions are answered by first approximating the area around the equilibrium points as linear systems, and then leveraging those approximations to understand the extra behavior the nonlinear dynamics add to the system.
In this blog post, I want to show how to use linearization in all of its gory detail. Much of this is usually done via computer in a few quick commands, but I feel that understanding exactly how the whole process works, with all the complicated algebra filled in, is important. To motivate this exposition, we will answer a seemingly simple question: how do we know when the Izhikevich model has a rest equilibrium, and that this rest equilibrium is stable? Stability means that in the absence of input, the Izhikevich model will always return to rest. We will define exactly what it means to be stable, show under what conditions the equilibrium points exist, and show that when they do exist, one of them is always stable.
Let's walk through the linearization of the biologically realistic Izhikevich model. If you're completely unfamiliar with eigenvalues, I suggest watching this video from 3blue1brown. I also suggest you have a pencil and some paper with you; it will help to fill in some of the small gaps in the algebra.
For now we will ignore the spiking mechanism and focus on the differential equation.
$$ C_m \frac{dV}{dt} = k (V-V_r)(V-V_{th}) - W + I $$
$$ \frac{dW}{dt} = a(b(V-V_r) - W)$$
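If you want to play with this subthreshold system yourself, here is a minimal forward-Euler sketch. The parameter values are hypothetical placeholders I chose so that $b>0$, as this post assumes; they are not fit to any particular cell type:

```python
# Illustrative placeholder parameters (chosen so that b > 0, as this post
# assumes; they are not fit to any particular cell type).
Cm, k, Vr, Vth = 100.0, 0.7, -60.0, -40.0
a, b, I = 0.03, 5.0, 0.0

def derivs(V, W):
    """Right-hand sides of the subthreshold Izhikevich equations."""
    dV = (k * (V - Vr) * (V - Vth) - W + I) / Cm
    dW = a * (b * (V - Vr) - W)
    return dV, dW

# Forward-Euler integration, starting slightly depolarized from rest.
V, W, dt = -55.0, 0.0, 0.01
for _ in range(int(1000 / dt)):  # 1000 ms of model time
    dV, dW = derivs(V, W)
    V, W = V + dt * dV, W + dt * dW

print(round(V, 1))  # settles back toward the rest value Vr = -60
```

With these values the trajectory relaxes back to rest, which is exactly the behavior we set out to prove below.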
If you need a reminder of what the parameters represent I suggest reading part 1.
First we need to find the equilibrium points. Recall, equilibrium points are those points where both derivatives vanish: $\frac{dV}{dt}=\frac{dW}{dt}=0$. We can visualize this by graphing the nullclines. The blue line represents where $\frac{dV}{dt}=0$ and, likewise, the yellow line represents where $\frac{dW}{dt}=0$. They cross in two places: the leftmost equilibrium $Eq_{-}=(V_{-},W_{-})$ and the rightmost equilibrium $Eq_{+}=(V_{+},W_{+})$.
$$ 0 = k (V-V_r)(V-V_{th}) - W + I $$
$$ 0 = a(b(V-V_r) - W)$$
Our goal here is to solve the equation for $V$, so we use the bottom equation and solve for $W$ in terms of $V$
$$ W = b(V- V_r)$$
Then plug this into the Voltage equation to get
$$ 0 = k (V-V_r)(V-V_{th}) - b(V- V_r) + I $$
Notice that this equation is quadratic in $V$ and does not depend on $W$. We can therefore treat it as a quadratic equation and use the quadratic formula to find
$$V=\frac{b+ k(V_r+V_{th})\pm \sqrt{(b+k(V_r+V_{th}))^2- 4 k(I+b V_r+k V_r V_{th})}}{2k}$$
Here we want $\Vminus$, which is the smaller root, so we take the negatively signed radical, giving us
$$\Vminus=\frac{b+ k(V_r+V_{th})- \sqrt{(b+k(V_r+V_{th}))^2- 4 k(I+b V_r+k V_r V_{th})}}{2k}$$
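We can sanity-check these roots numerically. A short sketch, using hypothetical placeholder parameter values (with $b>0$, as this post assumes):

```python
import math

# Illustrative placeholder parameters with b > 0 (not from a fitted neuron).
k, Vr, Vth = 0.7, -60.0, -40.0
b, I = 5.0, 0.0

# Quadratic in V: k V^2 - (b + k(Vr + Vth)) V + (k Vr Vth + b Vr + I) = 0
disc = (b + k * (Vr + Vth)) ** 2 - 4 * k * (I + b * Vr + k * Vr * Vth)
V_minus = (b + k * (Vr + Vth) - math.sqrt(disc)) / (2 * k)
V_plus  = (b + k * (Vr + Vth) + math.sqrt(disc)) / (2 * k)

# Both roots must zero the voltage equation on the W-nullcline W = b(V - Vr).
for V in (V_minus, V_plus):
    W = b * (V - Vr)
    assert abs(k * (V - Vr) * (V - Vth) - W + I) < 1e-9

print(V_minus, V_plus)  # at I = 0 these come out to Vr and b/k + Vth
```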
Let's take a minute and pause to see what this equation means and represents, because we want to show two things here: that $\Vminus<\Vplus$, and what happens when the roots disappear.
Let's build our geometric intuition first. Take a look at the GIF. It begins with $I=0$. As $I$ shifts upward, $\Vminus$ and $\Vplus$ move closer and closer together and eventually collide at $V_o$. We want to show that if $b>0$ then $\Vminus<V_o$. This fact will come in very useful later.
$$\Vminus= \frac{b+ k(V_r+V_{th}) - \sqrt{[b+k(V_r+V_{th})]^2- 4 k(I+b V_r+k V_r V_{th})}}{2k}$$
We will focus here solely on the part inside the radical. Our goal is to simplify it so that we can use it in some inequalities later. To do that, we first expand everything inside the radical and then recollect the terms so that most of them cancel. We start by looking at just the radical
$$- \sqrt{[b+k(V_r+V_{th})]^2- 4 k(I+b V_r+k V_r V_{th})}$$
We expand everything inside to get
$$- \sqrt{[b^2 + 2 b k V_r +2 b k V_{th} +k^2 V_r^2 + 2 k^2 V_r V_{th}+k^2 V_{th}^2] +(-4b k V_r- 4 k^2 V_r V_{th} -4 I k)}$$
then collect like terms
$$- \sqrt{[b^2 - 2 b k V_r +2 b k V_{th} +k^2 V_r^2 - 2 k^2 V_r V_{th}+k^2 V_{th}^2] - 4 I k}$$
Now notice how everything inside the square brackets is again a perfect square, the same as before except for sign changes on the $V_r$ cross-terms. This means we can rearrange everything into this form
$$- \sqrt{[b+k(-V_r+V_{th})]^2- 4 k I}$$
Substituting this back into the expression for $\Vminus$, we get
$$\Vminus= \frac{b+ k(V_r+V_{th}) - \sqrt{[b+k(-V_r+V_{th})]^2- 4 k I}}{2k}$$
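If you distrust the algebra, the simplification is easy to spot-check numerically. Here is a small sketch; the sampling ranges are arbitrary choices, not physiological bounds:

```python
import random

random.seed(0)
# Spot-check the simplification with random parameter draws (arbitrary ranges).
max_err = 0.0
for _ in range(1000):
    b   = random.uniform(0.1, 10.0)
    k   = random.uniform(0.1, 5.0)
    Vr  = random.uniform(-80.0, -50.0)
    Vth = random.uniform(-50.0, -20.0)
    I   = random.uniform(-50.0, 50.0)
    original   = (b + k * (Vr + Vth)) ** 2 - 4 * k * (I + b * Vr + k * Vr * Vth)
    simplified = (b + k * (Vth - Vr)) ** 2 - 4 * k * I
    max_err = max(max_err, abs(original - simplified) / max(1.0, abs(original)))

print(max_err)  # should be at the level of floating-point round-off
```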
This equation is much simpler than the previous expression for $\Vminus$, and the effect that the input current $I$ has on the system is now much clearer. As $I$ increases, the value inside the radical gets smaller and smaller, which means the two equilibrium points $\Vplus$ and $\Vminus$ get closer and closer together. To show this more clearly, let's start by setting $I=0$. This gives us the following
$$\Vminus= \frac{b+ k(V_r+V_{th}) - \sqrt{[b+k(-V_r+V_{th})]^2}}{2k}$$
$$\Vminus= \frac{b+ k(V_r+V_{th}) - {[b+k(-V_r+V_{th})]}}{2k}$$
$$\Vminus =V_r$$
which is exactly the value we expect for $\Vminus$: with no input, the rest equilibrium sits at the resting potential. Likewise, we can do the same for $\Vplus$
$$\Vplus= \frac{b+ k(V_r+V_{th}) + {[b+k(-V_r+V_{th})]}}{2k}$$
$$\Vplus=\frac{2b+ 2kV_{th}}{2k}=\frac{b}{k}+V_{th}$$
Note that $\Vminus = V_r$, but $\Vplus \neq V_{th}$: as $b$ gets more and more positive, $\Vplus$ gets more and more positive. However, as we saw earlier, as $I$ increases, $\Vminus$ and $\Vplus$ move together, and eventually the roots will collide at some value $V_o$ before disappearing. In fact, the roots collide precisely when $$\sqrt{[b+k(-V_r+V_{th})]^2- 4 k I}=0$$
Solving this equation for $I$, we see this happens when
$$I = \frac{[b+k(-V_r+V_{th})]^2}{4 k}$$
which leaves us with
$$V_o = \frac{b+ k(V_r+V_{th})}{2k}$$
A small note: $V_o$ is known as the saddle-node bifurcation point, and we will discuss it in a future article. For now, you can note that when $I \geq \frac{[b+k(-V_r+V_{th})]^2}{4 k}$ the neuron will always spike, because there is no rest equilibrium.
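As a quick sanity check, here is a sketch that computes this critical current and the collision voltage $V_o$ for a hypothetical, illustrative parameter set:

```python
# Illustrative placeholder parameters with b > 0.
k, Vr, Vth, b = 0.7, -60.0, -40.0, 5.0

# Current at which the radical vanishes and the two equilibria collide.
I_sn = (b + k * (Vth - Vr)) ** 2 / (4 * k)
# Voltage at which they collide (the saddle-node point).
V_o = (b + k * (Vr + Vth)) / (2 * k)

disc = (b + k * (Vth - Vr)) ** 2 - 4 * k * I_sn
assert abs(disc) < 1e-9                 # discriminant is zero exactly at I_sn
# Just above I_sn there are no real roots, hence no rest equilibrium.
assert (b + k * (Vth - Vr)) ** 2 - 4 * k * (I_sn + 1.0) < 0

print(I_sn, V_o)
```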
Now we can begin the linearization process. Recall, our goal is to find the approximate linear system around the rest equilibrium fixed point. What does it mean to be near an equilibrium point? I have attached two graphs to make this clear. One is the whole phase plane with both $Eq_+$ and $Eq_-$ labeled. I've also attached a graph where I zoom in on a region around $Eq_-$. You'll notice that on that scale the blue quadratic nullcline looks like a line. This is what it means to be almost linear: the system, when zoomed in, looks linear.
We can use this approximate linear system to determine if the equilibrium is stable. An equilibrium point is stable if and only if all the eigenvalues of the corresponding linear system have negative real parts.
Figure 3: Top, the full system. Again, the rainbow dots are various solutions to the system moving in phase space. Bottom, the system zoomed in on the rest equilibrium $Eq_{-}$. Notice that the yellow and blue curves look mostly like lines; thus we can linearize them. The red and black lines are eigenvectors. Notice that the solutions first move towards the black eigenvector, and then move towards rest.
In order to figure out what this linear system looks like when we zoom in, we need to find the Jacobian matrix. The Jacobian matrix is exactly the mathematical description of these straight blue and yellow nullclines. Mathematically, the Jacobian matrix $J$ satisfies $ \frac{d\vec{x}}{dt} \approx J\vec{x} $ when $\vec{x}$ is a small displacement from $<\Vminus,\Wminus>$. To find $J$ we approximate the nullclines with their tangent lines at the equilibrium point $(\Vminus,\Wminus)$. To do this we calculate the partial derivatives with respect to $V$ and $W$ of both $\frac{dV}{dt}$ and $\frac{dW}{dt}$. This gives us four quantities, arranged in a matrix that looks like
$$
J = \left(
\begin{array}{cc}
\dvdv & \dvdw \\
\dwdv & \dwdw
\end{array}
\right)
$$
where $\frac{\partial}{\partial V}$ and $\frac{\partial}{\partial W}$ are just the partial derivatives of each function with respect to $V$ and $W$. Performing this algebra we get the following:
$$ \dvdv = \frac{\partial}{\partial V}\frac{k (V-V_r)(V-V_{th}) - W + I}{C_m} = \frac{k(2 V-V_r-V_{th})}{C_m}$$
$$ \dvdw = \frac{\partial}{\partial W}\frac{k (V-V_r)(V-V_{th}) - W + I}{C_m} = -\frac{1}{C_m}$$
$$ \dwdv = \frac{\partial}{\partial V} a(b(V-V_r) - W) = ab $$
$$ \dwdw = \frac{\partial}{\partial W} a(b(V-V_r) - W) = -a $$
Setting $V= \Vminus$ we can now write down our Jacobian matrix as
$$
J = \left(
\begin{array}{cc}
\frac{k(2 \Vminus-V_r-V_{th})}{C_m} & -\frac{1}{C_m}\\
ab & -a
\end{array}
\right)
$$
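As a sanity check on the calculus, we can compare this analytic Jacobian against a finite-difference approximation. A sketch with hypothetical, illustrative parameter values (at $I=0$, where $\Vminus = V_r$):

```python
import numpy as np

# Illustrative placeholder parameters with b > 0.
Cm, k, Vr, Vth = 100.0, 0.7, -60.0, -40.0
a, b, I = 0.03, 5.0, 0.0

def f(x):
    """Vector field (dV/dt, dW/dt) of the subthreshold system."""
    V, W = x
    return np.array([(k * (V - Vr) * (V - Vth) - W + I) / Cm,
                     a * (b * (V - Vr) - W)])

V_minus = Vr  # at I = 0 the rest equilibrium sits exactly at Vr

J_analytic = np.array([[k * (2 * V_minus - Vr - Vth) / Cm, -1 / Cm],
                       [a * b,                             -a]])

# Central-difference Jacobian at the equilibrium as a sanity check.
x0, eps = np.array([V_minus, 0.0]), 1e-5
J_fd = np.column_stack([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
assert np.allclose(J_analytic, J_fd, atol=1e-6)
print(J_analytic)
```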
Now our next goal is to find the eigenvalues of this matrix. As described above, eigenvalues tell us how quickly a system returns to rest: a large negative eigenvalue corresponds to a quick return. A two-dimensional system has two eigenvalues ($\lambda_1$ and $\lambda_2$) and returns to rest along two directions called eigenvectors, $\vec{\xi_1}$ and $\vec{\xi_2}$. In the above picture I have plotted the full Izhikevich phase plane with the vector field in light gray, along with the zoomed-in versions. This time there are red and black lines on the graph: these are the eigenvectors of the linear system. Geometrically, if a linear solution starts on one of these eigenvectors, it stays on it; it never comes off. However, most solutions start off the eigenvectors. The system returns to rest faster along the red eigenvector than along the black eigenvector, which means the red eigenvalue is more negative than the black eigenvalue.
As an aside, this is the geometric interpretation of the formula:
\begin{align}
V(t) = C_1 \xi_{1_V} e^{{\lambda_1}t} +C_2 \xi_{2_V} e^{{\lambda_2}t}\\
W(t) = C_1 \xi_{1_W} e^{{\lambda_1}t} +C_2 \xi_{2_W} e^{{\lambda_2}t}
\end{align}
Note here $\vec{\xi_1}=<\xi_{1_V},\xi_{1_W}>$. We can use this formula to estimate the behavior of our neuron near rest. Looking at it, we can see why eigenvalues are important: they define stability. If an eigenvalue $\lambda$ is positive, then as $t\rightarrow \infty$, $e^{\lambda t}\rightarrow \infty$. However, if $\lambda$ is negative, then as $t\rightarrow \infty$, $e^{\lambda t}\rightarrow 0$. This tells us which direction a solution moves along its eigenvectors. Thus if all of a system's eigenvalues have negative real part, the system is stable, and how quickly it returns to rest is given by those eigenvalues. Note that $Eq_+$ has a positive eigenvalue. This gives rise to the separatrix, and is also the reason the neuron is capable of spiking.
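To see this solution formula in action, here is a sketch that builds it from numerically computed eigenvalues and eigenvectors and checks it against direct integration of the linear system $\frac{d\vec{x}}{dt}=J\vec{x}$; the parameter values are hypothetical placeholders:

```python
import numpy as np

# Linearized system dx/dt = J x around the rest state, with illustrative
# placeholder parameters (I = 0, so the rest state sits at Vr).
Cm, k, Vr, Vth, a, b = 100.0, 0.7, -60.0, -40.0, 0.03, 5.0
J = np.array([[k * (2 * Vr - Vr - Vth) / Cm, -1 / Cm],
              [a * b,                        -a]])

lam, xi = np.linalg.eig(J)      # eigenvalues lam and eigenvector columns xi
x0 = np.array([2.0, 0.0])       # small displacement (V - V_-, W - W_-)
C = np.linalg.solve(xi, x0)     # coefficients C1, C2 from the initial condition

t = 50.0
x_formula = (xi @ (C * np.exp(lam * t))).real  # C1 xi1 e^{l1 t} + C2 xi2 e^{l2 t}

# Cross-check against a fine Euler integration of the same linear system.
x, dt = x0.copy(), 1e-3
for _ in range(int(t / dt)):
    x = x + dt * (J @ x)

assert np.allclose(x_formula, x, atol=1e-2)
print(x_formula)
```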
We will now find the values of these two eigenvalues $\lambda_1$ and $\lambda_2$ at $\Vminus$, and prove that they are always negative. Later, we will use the formula we find for these eigenvalues to create a wide variety of interesting neurons.
To find the eigenvalues of the Jacobian we need to find the determinant of this system
$$
\det(J-\lambda \boldsymbol{I}) = \left|
\begin{array}{cc}
\frac{k(2 \Vminus-V_r-V_{th})}{C_m}-\lambda & -\frac{1}{C_m}\\
ab & -a-\lambda
\end{array}
\right|= 0
$$
where $\boldsymbol{I}$ is the identity matrix. If you're unfamiliar with the determinant, see here. The determinant of this matrix yields the characteristic polynomial given below:
$$
0=\left(\frac{k(2 \Vminus-V_r-V_{th})}{C_m}-\lambda\right)\left(-a-\lambda\right) - \left(-\frac{1}{C_m}\right)(ab)
$$
Collecting all the $\lambda$'s gives us
$$
\lambda^2 +\left(a +\frac{k(V_r+V_{th}-2\Vminus)}{C_m}\right)\lambda +\frac{ab + a k(V_r+V_{th}-2\Vminus)}{C_m} = 0
$$
Solving for $\lambda$ yields another nasty quadratic formula.
$$
\lambda = \frac{-a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus) \pm \sqrt{\left(a +\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4\left[a\frac{b}{C_m}+a\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right]}}{2}
$$
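Rather than trusting the algebra blindly, we can check this closed form against numerically computed eigenvalues. A sketch with hypothetical, illustrative parameters (taking $I=0$, so $\Vminus=V_r$):

```python
import numpy as np

# Illustrative placeholder parameters; at I = 0 the rest state is V_minus = Vr.
Cm, k, Vr, Vth = 100.0, 0.7, -60.0, -40.0
a, b = 0.03, 5.0
V_minus = Vr

J = np.array([[k * (2 * V_minus - Vr - Vth) / Cm, -1 / Cm],
              [a * b,                             -a]])

# Closed-form eigenvalues from the quadratic formula derived above.
kap = (k / Cm) * (Vr + Vth - 2 * V_minus)
radicand = (a + kap) ** 2 - 4 * (a * b / Cm + a * kap)
lam = (-(a + kap) + np.array([1.0, -1.0]) * np.sqrt(radicand + 0j)) / 2

assert np.allclose(np.sort_complex(lam),
                   np.sort_complex(np.linalg.eigvals(J).astype(complex)))
print(lam)  # both eigenvalues have negative real part: the rest state is stable
```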
Now this equation has lots of terms, but this is where all the seemingly pointless work we did before pays off. Let's start by looking at the part outside the radical. We want to show $$ -a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus)<0 $$ We note that $a>0$, so $-a$ is always negative. Therefore our goal is to show that $$2\Vminus < V_r +V_{th}$$ since whenever this inequality holds, $\frac{k}{C_m}(V_r+V_{th}-2\Vminus)>0$ and therefore $$ -a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus)< 0$$ meaning that the term outside the radical is negative.
In the absence of input this is immediate: we showed above that at $I=0$ the rest equilibrium sits at $\Vminus = V_r$, and since $V_r < V_{th}$, $$2\Vminus = 2V_r < V_r + V_{th}$$ For $I>0$, the inequality $2\Vminus < V_r+V_{th}$ is the same as $\Vminus < V_o - \frac{b}{2k}$, so it continues to hold as long as the rest state stays far enough below the collision point $V_o$. Thus the expression outside the radical is negative, which at least means one of the eigenvalues is always negative.
Our next goal is to show that when we take the positive root, the radical we add is always smaller in magnitude than the part outside the radical. More formally, we want
$$
a + \frac{k}{C_m} (V_r+V_{th} -2\Vminus) > \sqrt{\left(a +\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4\left[a\frac{b}{C_m}+a\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right]}
$$
Now you can probably already see what we are going to do. Just like above, we expand everything inside the radical and collect the terms.
$$\sqrt{\left(a +\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4\left[a\frac{b}{C_m}+a\frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right]}$$
$$\sqrt{a^2 +2 a \frac{k}{C_m}(V_r+V_{th}-2\Vminus)+ \frac{k^2}{C_m^2}(V_r+V_{th}-2\Vminus)^2-4a\frac{b}{C_m}-4a\frac{k}{C_m}(V_r+V_{th}-2\Vminus)}$$
$$\sqrt{\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4a\frac{b}{C_m}}$$
Plugging this back into the $\lambda$ equation (taking the positive root, i.e. the larger of the two eigenvalues), we see that
$$
\lambda = \frac{-a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus) + \sqrt{\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4a\frac{b}{C_m}}}{2}
$$
Now let's take a step back and see what this means. Recall $b>0$, but it is instructive to look at the limiting situation $b=0$; note that the bigger $b$ gets, the smaller the number under the radical gets. Setting $b=0$ gives us
$$
\lambda = \frac{-a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus) + \sqrt{\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2}}{2}
$$
$$
\lambda = \frac{-a - \frac{k}{C_m} (V_r+V_{th} -2\Vminus) + \left|a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right|}{2}
$$
$$\lambda = -\min\left(a,\ \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)$$
What does $b=0$ mean? Recall that
$$
\frac{dW}{dt} = a(b(V-V_r) - W)
$$
With $b=0$, $W$ no longer depends on $V$, and the system is in a sense broken apart: the recovery variable decays on its own with eigenvalue $-a$, while the voltage relaxes independently with eigenvalue $-\frac{k}{C_m}(V_r+V_{th}-2\Vminus)$. However, in our model $b\neq 0$, and the more positive $b$ becomes, the smaller the radical becomes. More formally:
$$
\sqrt{\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2}>\sqrt{\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4a\frac{b}{C_m}}
$$
Since $\left|a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right| < a + \frac{k}{C_m}(V_r+V_{th}-2\Vminus)$ whenever both of those terms are positive, even the larger eigenvalue is strictly negative once $b>0$. Thus we can conclude that our system always has two negative eigenvalues.
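We can also check this conclusion numerically by sweeping $b$. A sketch at $I=0$ (where $\Vminus = V_r$ for any $b$), with otherwise hypothetical, illustrative parameter values:

```python
import numpy as np

# At I = 0 the rest state is V_- = Vr for any b, so we can sweep b and check
# that both eigenvalues stay negative (illustrative placeholder parameters).
Cm, k, Vr, Vth, a = 100.0, 0.7, -60.0, -40.0, 0.03

worst = -np.inf
for b in np.linspace(0.01, 50.0, 200):
    J = np.array([[k * (2 * Vr - Vr - Vth) / Cm, -1 / Cm],
                  [a * b,                        -a]])
    worst = max(worst, np.linalg.eigvals(J).real.max())

print(worst)  # largest real part found; negative means stable for every b > 0
```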
Now that was quite a handful of mathematics, but hopefully it gives a flavor of how linearization is done in the Izhikevich model. I want to use this process to show something even more powerful. Notice that the bigger $b$ becomes, the smaller the radicand gets. In fact, if
$$
\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2<4a\frac{b}{C_m}
$$
then
$$
\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2-4a\frac{b}{C_m}<0
$$
which means that we are taking the square root of a negative number! As we know, the square root of a negative number is imaginary, which means our eigenvalues are complex. We also know that the real part is always negative. Thus our two eigenvalues take the form $\lambda = \alpha \pm i \beta$, where $\alpha = -\frac{1}{2}\left(a + \frac{k}{C_m} (V_r+V_{th} -2\Vminus)\right)$ and $\beta=\frac{1}{2}\sqrt{4a\frac{b}{C_m}-\left(a - \frac{k}{C_m}(V_r+V_{th}-2\Vminus)\right)^2}$.
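We can compute the value of $b$ at which the eigenvalues turn complex and verify both the transition and the stability numerically. Again the parameter values are hypothetical placeholders, and $I=0$ so that $\Vminus = V_r$:

```python
import numpy as np

# Illustrative placeholder parameters; at I = 0 the rest state is V_- = Vr,
# so the k/Cm term reduces to (k/Cm)(Vth - Vr).
Cm, k, Vr, Vth, a = 100.0, 0.7, -60.0, -40.0, 0.03
kap = (k / Cm) * (Vth - Vr)

# The radicand turns negative (complex eigenvalues) once b exceeds this value.
b_star = Cm * (a - kap) ** 2 / (4 * a)

for b, expect_complex in ((0.5 * b_star, False), (2.0 * b_star, True)):
    J = np.array([[-k * (Vth - Vr) / Cm, -1 / Cm],
                  [a * b,                -a]])
    eig = np.linalg.eigvals(J)
    assert bool(np.any(np.abs(eig.imag) > 1e-12)) == expect_complex
    assert eig.real.max() < 0   # still stable: trajectories spiral inward

print(b_star)  # threshold coupling strength for damped subthreshold oscillations
```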
What does this mean graphically? It gives rise to spirals. Because of the imaginary part of the eigenvalue, the solution spirals around the equilibrium point, and because $\alpha$ is negative, it slowly spirals inward towards the equilibrium. In the next blog article we will show how to use this fact to build a very special kind of neuron that responds only to input frequencies near this $\beta$.