How do efficient coding strategies depend on the origin of noise?

 The efficient coding hypothesis, proposed by Horace Barlow, suggests that neurons encode sensory information using as few spikes as possible. Previous research on this hypothesis has relied heavily on particular assumptions, such as the location or correlation of noise. In this paper, the authors relax these assumptions and show how different origins of noise lead to different coding strategies, and how multiple noise sources interact.
 The authors employ a model that mimics the retinal pathway, specifically the bipolar-to-ganglion synapse (Fig 1). Noise is introduced at three different locations: upstream noise mixed with the incoming signal, Poisson noise in the synapse (e.g., vesicle release), and downstream noise in the ganglion cell. The magnitudes of these noise sources are denoted $\sigma_{up}^2$, $\kappa^2$, and $\sigma_{down}^2$, respectively. Aside from assuming that the nonlinearity is monotonic and that parallel pathways receive identical stimuli, the authors make no further assumptions about the location, correlation, or type of noise; there are also no assumptions about the shape of the nonlinearity. The optimal nonlinearity as a function of noise location and relative noise magnitude is first discussed for a single pathway and later generalized to parallel pathways.



Fig 1. Feedforward model of the bipolar-to-ganglion synapse. The input is first corrupted by upstream noise, then by Poisson noise, and finally by downstream noise.
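The three noise stages in the model above can be sketched as a small simulation. This is a minimal sketch, assuming a sigmoid nonlinearity and illustrative parameter values; the function name, the `gain`/`offset` parameters, and the specific numbers are assumptions for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pathway_response(stimulus, sigma_up, kappa, sigma_down,
                     gain=4.0, offset=0.0):
    """One feedforward pathway: upstream Gaussian noise -> monotonic
    nonlinearity -> Poisson (synaptic) noise -> downstream Gaussian noise.
    gain/offset stand in for the nonlinearity's slope and midpoint."""
    # Upstream noise corrupts the stimulus before the nonlinearity.
    s = stimulus + rng.normal(0.0, sigma_up, size=stimulus.shape)
    # A monotonic sigmoid nonlinearity sets the mean synaptic output.
    rate = 1.0 / (1.0 + np.exp(-gain * (s - offset)))
    # Poisson noise at the synapse: relative variance set by kappa^2
    # (more vesicles per unit rate -> smaller relative Poisson noise).
    if kappa > 0:
        n = 1.0 / kappa**2
        rate = rng.poisson(rate * n) / n
    # Downstream noise is added after the synapse, independent of the signal.
    return rate + rng.normal(0.0, sigma_down, size=stimulus.shape)

stimuli = rng.normal(0.0, 1.0, size=10_000)
responses = pathway_response(stimuli, sigma_up=0.2, kappa=0.1, sigma_down=0.05)
```

Because each noise stage is a separate line, one can set any two of the three noise magnitudes to zero to reproduce the single-noise-source regimes discussed next.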


For a single pathway, as upstream noise increases, the slope of the optimal nonlinearity decreases (Fig 2). Because this noise is mixed in with the signal before the nonlinearity, it cannot be separated out by tuning the nonlinearity; to ensure optimal information transmission, the nonlinearity should instead encode as wide a range of stimuli as possible. As the noise grows, so does the range to be covered, hence the shallower slope. For Poisson noise, increasing the noise shifts the curve off-center (Fig 2). Poisson noise grows with the stimulus, so the weakest stimuli are the least corrupted, and the nonlinearity shifts so that the likeliest outputs correspond to the weakest stimuli. For downstream noise (Fig 2), the slope's trend is the opposite of the upstream case: it increases with the noise. A steeper slope amplifies the response to the stimulus while leaving the size of the downstream noise unchanged, thereby increasing the signal-to-noise ratio.
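The downstream-noise argument can be checked numerically: a steeper sigmoid increases the variance of the response while the downstream noise variance stays fixed, so the output signal-to-noise ratio rises with slope. A minimal sketch with illustrative values (the sigmoid form and the specific gains are assumptions, not the paper's optimal solutions):

```python
import numpy as np

rng = np.random.default_rng(1)
stimuli = rng.normal(size=50_000)
sigma_down = 0.3  # downstream noise std, fixed regardless of slope

def output_snr(gain):
    """Ratio of signal variance at the sigmoid output to the fixed
    downstream noise variance; larger gain = steeper nonlinearity."""
    response = 1.0 / (1.0 + np.exp(-gain * stimuli))
    return np.var(response) / sigma_down**2

# A steeper nonlinearity amplifies the signal but not the downstream noise.
print(output_snr(1.0) < output_snr(5.0))  # prints True
```

The same comparison run with noise added before the sigmoid would show the opposite preference, since amplifying the input amplifies upstream noise along with the signal.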
 For parallel pathways, two new variables are introduced: the polarity of the pathways (ON/ON, OFF/OFF, or ON/OFF) and the correlation of the signals. After accounting for these new variables, the results concerning slopes are the same as in the single-pathway case, and the overlap (offset) of the nonlinearities follows the same principles outlined above. For a more detailed explanation, please refer to the original paper cited below.

 When no single noise source dominates in magnitude, the effects that each source has on the shape of the nonlinearity compete with one another. This can be seen in cuts through the parameter space $\{\eta, \kappa, \zeta\}$, shown in Fig 4 of the original paper.


Fig 2. Optimal nonlinearity and stimulus distribution under different amounts of noise.





Source: Brinkman, Braden A. W., et al. “How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits?” PLOS Computational Biology, vol. 12, no. 10, 2016, doi:10.1371/journal.pcbi.1005150.
Summary written by: Pei-Hsien Liu
