GLM captures properties of adaptation: gain scaling and fractional differentiation

This paper explores the extent to which properties of adaptation can be captured by the generalized linear model (GLM). Two well-known properties of adaptation, gain scaling and fractional differentiation, can both be reproduced by the Hodgkin-Huxley (HH) neuronal model; however, the mechanisms behind them are often obscured when viewed in terms of the HH equations. Here, the authors replicate these behaviors in the GLM, a simpler and more interpretable framework.

Before we dive in, let us first explain what “gain scaling”, “fractional differentiation”, and “generalized linear models” are. Gain scaling means that the neuron’s input-output relation rescales with the standard deviation of the stimulus: the dynamic range used for encoding adjusts itself to the statistics of the stimulus, so as to transmit more information. For a neuron with perfect gain scaling, the spike-triggered stimulus distribution, once the stimulus is normalized by its standard deviation, is independent of the stimulus variance. That is, if everything is perfectly rescaled, the variance no longer has any influence.
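To make this concrete, here is a minimal Python sketch of how one might check for gain scaling. The toy neuron, threshold rule, and window length are all illustrative assumptions, not anything from the paper: the unit spikes whenever a smoothed stimulus crosses a threshold proportional to $\sigma$, which makes it gain-scale by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_triggered_stimuli(stimulus, spikes, window):
    """Collect the stimulus segment preceding each spike."""
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= window]
    return np.stack([stimulus[i - window:i] for i in idx])

def run_toy_neuron(sigma, n=200_000):
    """Toy unit that spikes when a smoothed stimulus exceeds a
    threshold proportional to sigma -- perfect gain scaling."""
    x = rng.normal(0.0, sigma, n)
    drive = np.convolve(x, np.ones(5) / 5, mode="same")
    return x, drive > 1.0 * sigma

for sigma in (0.5, 1.0, 2.0):
    x, spikes = run_toy_neuron(sigma)
    sta = spike_triggered_stimuli(x, spikes, window=50).mean(axis=0)
    # After dividing by sigma, the spike-triggered averages coincide
    # across sigma values: the signature of perfect gain scaling.
    print(sigma, np.round(sta[-5:] / sigma, 2))
```

The printed normalized spike-triggered averages are (up to sampling noise) identical for all three $\sigma$ values, which is exactly the invariance described above.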

Fractional differentiation is a generalization of integer-order differentiation. Just as applying two first-order derivatives yields a second-order derivative, applying two half-order derivatives yields a first-order derivative. The order of differentiation $\alpha$ can be probed using stimuli with well-known differentiation properties, for instance sine waves (whose gain and phase shift are functions of $\alpha$) and square waves (whose steps produce a power-law decay $\propto t^{-\alpha}$).
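For intuition, a fractional derivative is easiest to define in the frequency domain, where order-$\alpha$ differentiation multiplies each Fourier component by $(i\omega)^\alpha$. The sketch below (illustrative, not the paper’s code) verifies both properties: two half-derivatives compose into one first derivative, and a sine of angular frequency $\omega$ comes out scaled by $\omega^\alpha$ and phase-advanced by $\alpha\pi/2$.

```python
import numpy as np

def frac_diff(signal, alpha, dt):
    """Order-alpha derivative via the Fourier multiplier (i*omega)**alpha."""
    omega = 2 * np.pi * np.fft.rfftfreq(len(signal), d=dt)
    spectrum = np.fft.rfft(signal) * (1j * omega) ** alpha
    return np.fft.irfft(spectrum, n=len(signal))

dt = 0.01
t = np.arange(0, 100, dt)
sine = np.sin(2 * np.pi * 0.2 * t)

# Half-derivative of a sine: gain is omega**0.5, phase leads by pi/4.
half = frac_diff(sine, 0.5, dt)
print(half.max().round(3), round((2 * np.pi * 0.2) ** 0.5, 3))

# Two half-order derivatives compose into one first-order derivative.
twice = frac_diff(frac_diff(sine, 0.5, dt), 0.5, dt)
print(np.allclose(twice, frac_diff(sine, 1.0, dt), atol=1e-8))
```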

The GLM considered here assumes that the mean firing rate at time $t$ is a function of both the past stimulus and the past spike history. More precisely, it sums a weighted (filtered) version of the past stimulus and a weighted version of the spike history, and passes the result through a nonlinear function to obtain the mean firing rate, from which spikes are drawn stochastically.
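In equation form (a standard Poisson-GLM formulation; the paper’s exact filters and nonlinearity may differ in detail), the rate is

$$\lambda(t) = f\Big(b + \sum_i k_i\, x(t-i) + \sum_j h_j\, y(t-j)\Big),$$

where $x$ is the stimulus, $y$ the spike train, $k$ the stimulus filter, $h$ the spike-history filter, $b$ a baseline, and $f$ a nonlinearity (commonly exponential); the spike count in each bin is drawn from a Poisson distribution with mean $\lambda(t)\,\Delta t$. A minimal simulation sketch in Python, with filter shapes and parameters that are illustrative assumptions rather than the paper’s fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_glm(stimulus, k, h, b, dt=0.001):
    """Simulate a Poisson GLM with an exponential nonlinearity."""
    n = len(stimulus)
    drive = np.convolve(stimulus, k)[:n] + b   # causal stimulus drive + bias
    spikes = np.zeros(n)
    for t in range(n):
        # Feedback from recent spikes through the history filter h.
        recent = spikes[max(0, t - len(h)):t][::-1]
        rate = np.exp(drive[t] + recent @ h[:len(recent)])
        spikes[t] = rng.poisson(rate * dt)
    return spikes

k = np.exp(-np.arange(20) / 5.0)          # toy stimulus filter
h = -3.0 * np.exp(-np.arange(50) / 10.0)  # toy (refractory) history filter
y = simulate_glm(rng.normal(size=5000), k, h, b=2.0)
print(y.sum(), "spikes in", len(y), "bins")
```

The negative history filter suppresses firing right after a spike, which is how the GLM expresses refractoriness and, with longer filters, adaptation.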

The authors successfully fitted GLMs to the output of HH models, with some quantitative differences and limitations. For gain scaling, a GLM trained at one stimulus standard deviation $\sigma$ generalized to other values; however, the GLM also exhibited gain scaling in HH parameter regimes where the HH neuron itself did not gain scale. As for fractional differentiation, the GLM captures this property well when the spike-history filter is long (on the order of tens of seconds); that is, the longer the spike-history filter, the better the fit. In summary, the authors showed that GLMs can be fit to the output of HH neurons and can be used to investigate adaptation.


Author: Belle Liu


Check out the original article: Latimer, Kenneth W., and Adrienne L. Fairhall. "Capturing multiple timescales of adaptation to second-order statistics with generalized linear models: gain scaling and fractional differentiation." Frontiers in Systems Neuroscience 14 (2020): 60.
