Unifying efficient coding, predictive coding and sparse coding (part II)

Previously, we discussed a framework in which optimal filters can be derived under different stimuli and objectives, one that neatly unifies efficient coding with predictive coding. Here, the target changes from a one-dimensional stimulus to natural stimuli, which are high-dimensional and non-Gaussian. A natural scene is used, and the stimulus samples the scene via a stochastic walk, with white noise added (Fig 3A). The optimization was run across a population of neurons instead of just one. Two different ∆ values (-6 and 1) are shown to demonstrate how things change when the objective shifts from encoding the past to encoding the future.
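The paper does not spell out the stimulus construction in code, but the idea — sampling patches from a scene along a random walk and adding white noise — can be sketched in a few lines of numpy. Patch size, step size, and noise level below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a natural scene: any 2-D grayscale array works.
scene = rng.standard_normal((256, 256))

def random_walk_stimulus(scene, n_steps=1000, patch=16, step_std=2.0, noise_std=0.5):
    """Sample a sequence of patches from `scene` along a stochastic walk,
    adding white noise at each step (all parameters are illustrative)."""
    h, w = scene.shape
    pos = np.array([h // 2, w // 2], dtype=float)  # start in the middle
    frames = np.empty((n_steps, patch, patch))
    for t in range(n_steps):
        pos += rng.normal(0.0, step_std, size=2)   # stochastic walk
        pos[0] = np.clip(pos[0], 0, h - patch)     # keep the patch in bounds
        pos[1] = np.clip(pos[1], 0, w - patch)
        r, c = int(pos[0]), int(pos[1])
        frames[t] = scene[r:r + patch, c:c + patch] \
                    + rng.normal(0.0, noise_std, (patch, patch))  # white noise
    return frames

stim = random_walk_stimulus(scene)
print(stim.shape)  # (1000, 16, 16)
```

Because consecutive patches overlap heavily, nearby frames are strongly correlated in time — exactly the temporal structure a predictive code can exploit.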

For ∆=-6, the spatiotemporal filters of different neurons are spatially displaced across time, suggesting that each neuron encodes a preferred direction of stimulus motion (and indeed its directionality index is high, as shown in Fig 3E). On the other hand, ∆=1 yields separable spatiotemporal filters that are not direction-selective, as evidenced by a directionality index near 0.
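The paper's exact definition of the directionality index is not reproduced here; a common convention probes a filter with drifting sinusoids in both directions and compares the energy responses, DI = (pref − null)/(pref + null). A space-time separable filter gives equal energy in both directions (DI ≈ 0), while a space-time-tilted filter prefers one direction. A hedged sketch under that assumed definition:

```python
import numpy as np

def directionality_index(filt, n_freqs=8):
    """Assumed DI: compare quadrature-pair energy responses of a
    spatiotemporal filter (shape (T, X)) to left- vs right-drifting sinusoids."""
    T, X = filt.shape
    t = np.arange(T)[:, None]
    x = np.arange(X)[None, :]
    best, di = 0.0, 0.0
    for ft in np.linspace(0.05, 0.45, n_freqs):      # temporal freq (cyc/frame)
        for fx in np.linspace(0.05, 0.45, n_freqs):  # spatial freq (cyc/pixel)
            r = []
            for sign in (+1, -1):                    # the two drift directions
                phase = 2 * np.pi * (fx * x + sign * ft * t)
                r.append(np.sum(filt * np.cos(phase)) ** 2
                         + np.sum(filt * np.sin(phase)) ** 2)
            pref, null = max(r), min(r)
            if pref > best:                          # DI at the best-driving grating
                best = pref
                di = (pref - null) / (pref + null + 1e-12)
    return di

# A separable filter (outer product of a temporal and a spatial profile)
# gives DI near 0; a tilted (drifting-Gabor-like) filter gives high DI.
sep = np.outer(np.hanning(12), np.hanning(16))
tt = np.arange(12)[:, None]
xx = np.arange(16)[None, :]
tilted = np.cos(2 * np.pi * (0.15 * xx - 0.15 * tt)) * sep
print(directionality_index(sep), directionality_index(tilted))
```

One can verify algebraically that for a separable filter the two direction energies factor into identical products, so its DI is exactly zero up to floating-point error.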


To better understand these results, the authors used a simpler stimulus that reproduced them: Gaussian bumps drifting randomly along one spatial dimension. This revealed that although having individual neurons insensitive to direction seems counterintuitive for prediction, viewed as a population such a strategy did allow them to encode information about the future. The change from direction-selective to unselective filters can also be explained by taking sparsity into account. Sparse coding has been shown to be an instance of efficient coding for stimuli with sparse latent structure, which both this stimulus and natural stimuli possess. For sparse coding to work, neurons must be precise about when to fire, but such precision can only be obtained by integrating enough information over time. This integration takes time and is therefore disadvantageous for prediction. Hence a tradeoff with precision arises when the objective shifts from encoding the past to the future, which explains why only the filters optimized for encoding the past are direction-selective.
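The simplified stimulus described above can also be sketched concisely. The bump count, widths, and velocity dynamics below are illustrative assumptions (the paper's exact parameters are not given here); the essential ingredients are Gaussian bumps whose positions drift stochastically along one dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

def bump_stimulus(n_steps=500, n_pix=64, n_bumps=3, width=3.0, vel_std=0.2):
    """Illustrative sketch: Gaussian bumps drifting randomly in 1-D.
    Velocities follow a random walk, so motion direction changes slowly."""
    x = np.arange(n_pix)
    pos = rng.uniform(0, n_pix, n_bumps)   # bump centres
    vel = np.zeros(n_bumps)                # slowly varying drift velocities
    frames = np.zeros((n_steps, n_pix))
    for t in range(n_steps):
        vel += rng.normal(0.0, vel_std, n_bumps)   # random-walk velocity
        pos = (pos + vel) % n_pix                  # periodic boundary for simplicity
        for p in pos:
            d = np.minimum(np.abs(x - p), n_pix - np.abs(x - p))  # circular distance
            frames[t] += np.exp(-d ** 2 / (2 * width ** 2))
        frames[t] /= frames[t].max()               # illustrative normalization
    return frames

stim = bump_stimulus()
print(stim.shape)  # (500, 64)
```

The sparse latent structure is explicit here: at any moment only a few pixels are strongly driven, and a bump's future position is predictable from its current position and velocity.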

In summary, this study unifies efficient coding (and sparse coding) with predictive coding, two central topics in sensory encoding. The authors also showed that, within this framework, different objectives yield different optimal filters, which might explain why V1 neurons have such diverse receptive fields (RFs).


author: Pei-Hsien Liu


Disclaimer: All the figures (and results, of course) shown here are from the original paper. 
Original paper: Chalk, M., Marre, O., & Tkačik, G. (2017). Toward a unified theory of efficient, predictive, and sparse coding. Proceedings of the National Academy of Sciences, 115(1), 186-191. doi:10.1073/pnas.1711114115
