Simplifying a Neuron’s Morphology Using Krylov Subspace Projection
In my last post, I talked about how to make neural simulators more efficient when integrating forward in time. However, an accurate simulation doesn’t only vary in time, but also in space. Neurons have intricate, highly branched dendrites and axons, so an accurate simulation needs to account for a neuron’s morphology. In other words, the spatial dimension matters! Accounting for the spatial dimension, though, is very computationally expensive. Numerically, we are trying to solve a partial differential equation (PDE). Traditionally we do this by discretizing the neuron and its PDE into thousands of little interconnected cylinders, creating a system of first-order ordinary differential equations (ODEs). A single neuron can require 100 to 1,000 cylinders, and thus 100 to 1,000 equations. A fly brain with 100,000 neurons would then have roughly 10,000,000 ODEs to solve every time step. Simulate 10,000 time steps (about 1 second), and that’s 100,000,000,000 equations to solve!
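To make the discretization concrete, here is a minimal sketch of how a passive, unbranched cable turns into a system of coupled ODEs, one per compartment. The parameter values and the simple leak-plus-axial-coupling model are illustrative placeholders, not values from the paper.

```python
import numpy as np

n = 1000                              # compartments -> n coupled ODEs per neuron
c_m, g_leak, g_ax = 1.0, 0.1, 2.0     # capacitance, leak, and axial conductances

def dVdt(V, I_inj):
    """Right-hand side: c_m * dV_i/dt = axial coupling - leak + injected current."""
    dV = np.empty_like(V)
    for i in range(n):
        axial = 0.0
        if i > 0:
            axial += g_ax * (V[i - 1] - V[i])   # current from the left neighbor
        if i < n - 1:
            axial += g_ax * (V[i + 1] - V[i])   # current from the right neighbor
        dV[i] = (axial - g_leak * V[i] + I_inj[i]) / c_m
    return dV

# Example: inject current into the first compartment and take one Euler step.
V = np.zeros(n)
I_inj = np.zeros(n); I_inj[0] = 1.0
V = V + 0.025 * dVdt(V, I_inj)
```

Every compartment adds one equation like this, which is why the count balloons so quickly.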
Enter Drs. Hedrick and Cox. In their paper, “Structure-preserving model reduction of passive and quasi-active neurons,” they propose an algorithm based on Krylov subspace projection that can reduce the number of equations per neuron from a thousand to three! They make a number of assumptions about the neurons in order to do this, and the method does introduce some numerical error, but the resulting speed-up is impressive.
The first thing to understand about Krylov subspace projection is that the algorithm only works on linear equations, and neurons are known for being highly nonlinear. To get around this, Dr. Hedrick and Dr. Cox assume that the axon (and/or soma) is nonlinear while the dendrites are linear or quasi-active (weakly nonlinear), and thus well approximated by a linear system. They therefore apply the Krylov reduction only to the dendrites. This is of course an approximation of the truth, as dendrites can also be highly nonlinear, but it is a good simplifying assumption if you’re short on computing power.
The next step is to apply the algorithm to the system of equations, usually represented in matrix form. For linear dendrites the equations take the form C dV/dt = G V + I, where C is the capacitance matrix, dV/dt is the time derivative of the voltage, and I is the injected current. The Krylov reduction is applied to the matrix G, which encodes how voltage spreads along the spatial dimension, from compartment to compartment. In an oversimplified way, G represents a linear dendrite.
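Here is a toy sketch of what C and G look like for the same passive cable written in matrix form, with one explicit time step of C dV/dt = G V + I. The parameters are hypothetical stand-ins; note the tridiagonal structure of G, which matters later.

```python
import numpy as np

n = 1000
c_m, g_leak, g_ax = 1.0, 0.1, 2.0     # hypothetical parameter values

C = c_m * np.eye(n)                   # diagonal capacitance matrix
G = np.zeros((n, n))                  # conductance matrix: tridiagonal
for i in range(n):
    G[i, i] = -g_leak                 # leak out of compartment i
    if i > 0:
        G[i, i] -= g_ax
        G[i, i - 1] = g_ax            # coupling to the left neighbor
    if i < n - 1:
        G[i, i] -= g_ax
        G[i, i + 1] = g_ax            # coupling to the right neighbor

def step(V, I, dt=0.025):
    """One explicit Euler step of C dV/dt = G V + I."""
    return V + dt * np.linalg.solve(C, G @ V + I)
```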
Remember from linear algebra and calculus that a matrix’s smallest eigenvalues (the ones with real part closest to zero) are the most important for the dynamics of the system: the more negative an eigenvalue, the faster its corresponding mode decays to zero, so the slowest modes dominate the response. The idea behind Dr. Hedrick and Dr. Cox’s paper is to find a new matrix G' whose three eigenvalues match the three smallest eigenvalues of G. This is exactly what the Krylov reduction does: it produces a reducer matrix X, and applying it to the equation above gives G' = X^T G X and C' = X^T C X. Using this reducer matrix, we can compress the thousands of equations represented by the matrix G into a much smaller and more manageable number of equations (three) in the matrix G'.
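As a rough sketch of the reduction step, assuming a standard Arnoldi-built orthonormal Krylov basis (the paper’s exact construction may differ in detail), one can build the reducer matrix X and project. The toy G, C, and input vector b below are hypothetical stand-ins for a discretized dendrite.

```python
import numpy as np

def krylov_basis(G, b, r):
    """Arnoldi iteration: orthonormal basis of span{b, G b, ..., G^(r-1) b}."""
    X = np.zeros((len(b), r))
    X[:, 0] = b / np.linalg.norm(b)
    for j in range(1, r):
        w = G @ X[:, j - 1]
        for k in range(j):                    # Gram-Schmidt against earlier columns
            w -= (X[:, k] @ w) * X[:, k]
        X[:, j] = w / np.linalg.norm(w)
    return X

# Toy tridiagonal G and diagonal C standing in for a discretized dendrite.
n, r = 1000, 3
off = 2.0 * np.ones(n - 1)
G = np.diag(-4.1 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)
C = np.eye(n)

b = np.zeros(n); b[0] = 1.0                   # input enters at the proximal end
X = krylov_basis(G, b, r)                     # the n x r "reducer" matrix

G_red = X.T @ G @ X                           # reduced 3 x 3 conductance matrix
C_red = X.T @ C @ X                           # reduced 3 x 3 capacitance matrix
# Reduced system: C_red dv/dt = G_red v + X.T @ I, with the full V ~ X @ v.
```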
It’s worth noting, however, that the simulations aren’t hundreds of times faster. Unfortunately, synapses slow the simulation down. The reason is complex, and is mentioned only briefly in the paper, but it comes down to the computational cost of working with the new matrix G'. In a very oversimplified sense, G has a special structure: it is almost tridiagonal, and so the system can be solved in O(n) time, where n is the number of compartments. G', however, is dense, and solving with it costs O(r^3) time, where r is the reduced matrix size (in the paper’s examples r is usually 3). So in general, for the algorithm to be “worth it”, we need r^3 < n.
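A small sketch of where the n-versus-r^3 comparison comes from: the full, nearly tridiagonal system can use a banded O(n) solver, while the dense reduced system falls back on a general O(r^3) solve. The matrices here are toy stand-ins, not the paper’s.

```python
import numpy as np
from scipy.linalg import solve_banded

n, r = 1000, 3
rng = np.random.default_rng(0)

# Full system: tridiagonal matrix in banded storage (super, main, sub diagonals).
ab = np.vstack([np.full(n, 2.0),          # superdiagonal
                np.full(n, -4.1),         # main diagonal
                np.full(n, 2.0)])         # subdiagonal
rhs_full = rng.standard_normal(n)
V = solve_banded((1, 1), ab, rhs_full)    # banded solve: O(n) work

# Reduced system: dense r x r matrix, general solve: O(r^3) work.
A_red = rng.standard_normal((r, r))
rhs_red = rng.standard_normal(r)
v = np.linalg.solve(A_red, rhs_red)
# The reduction pays off only while r**3 stays well below n.
```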
There is one final caveat: for larger numbers of compartments, and for physically big neurons, r = 3 may be too small. There is always a trade-off between error and simulation speed. There is never a one-size-fits-all approach to neuron simulation, so understanding the limitations of this algorithm is just as important as using it to speed up linear dendrite simulations. While Dr. Hedrick and Dr. Cox’s algorithm has limited scope and can’t be applied in every case, it can still deliver substantial simulation speed-ups when its conditions are met, namely linearized dendrites.
Author: Alex White
Source: Hedrick, K.R. & Cox, S.J. "Structure-preserving model reduction of passive and quasi-active neurons" J Comput Neurosci (2013) 34: 1. https://doi.org/10.1007/s10827-012-0403-y