Stimulus domain transfer in recurrent models for large scale cortical population prediction on video

2018 | preprint


Cite this publication

Stimulus domain transfer in recurrent models for large scale cortical population prediction on video
Sinz, F. H.; Ecker, A. S.; Fahey, P. G.; Walker, E. Y.; Cobos, E.; Froudarakis, E.; Yatsenko, D. et al. (2018). DOI: https://doi.org/10.1101/452672

Documents & Media

License

GRO License

Details

Authors
Sinz, Fabian H.; Ecker, Alexander S.; Fahey, Paul G.; Walker, Edgar Y.; Cobos, Erick; Froudarakis, Emmanouil; Yatsenko, Dimitri; Pitkow, Xaq; Reimer, Jacob; Tolias, Andreas S.
Abstract
To better understand the representations in visual cortex, we need to generate better predictions of neural activity in awake animals presented with their ecological input: natural video. Despite recent advances in models for static images, models for predicting responses to natural video are scarce and standard linear-nonlinear models perform poorly. We developed a new deep recurrent network architecture that predicts inferred spiking activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal's gaze position and brain state changes related to running state and pupil dilation. Powerful system identification models provide an opportunity to gain insight into cortical functions through in silico experiments that can subsequently be tested in the brain. However, in many cases this approach requires that the model is able to generalize to stimulus statistics that it was not trained on, such as band-limited noise and other parameterized stimuli. We investigated these domain transfer properties in our model and found that our model trained on natural movies is able to correctly predict the orientation tuning of neurons in response to artificial noise stimuli. Finally, we show that we can fully generalize from movies to noise and maintain high predictive performance on both stimulus domains by fine-tuning only the final layer's weights of a network otherwise trained on natural movies. The converse, however, is not true.
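The last point of the abstract describes the domain-transfer recipe: keep a network core trained on natural movies fixed and fine-tune only the final readout layer on the new stimulus domain. The sketch below illustrates that general idea in PyTorch; the `core`, `readout`, input resolution, neuron count, and loss are hypothetical placeholders, not the architecture or training setup from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a movie-trained feature "core" and a per-neuron readout.
core = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=7, padding=3),
    nn.ELU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ELU(),
)
readout = nn.Linear(32 * 36 * 64, 5000)  # e.g. ~5k simultaneously recorded neurons

# Freeze the core trained on natural movies; only the readout is fine-tuned
# on the new stimulus domain (e.g. band-limited noise).
for p in core.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(readout.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)

def fine_tune_step(noise_frames, responses):
    """One update on noise-domain data: noise_frames (B, 1, 36, 64), responses (B, 5000)."""
    with torch.no_grad():                 # core features stay fixed
        features = core(noise_frames).flatten(1)
    predictions = nn.functional.elu(readout(features)) + 1  # keep predicted rates positive
    loss = loss_fn(predictions, responses)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the readout parameters are passed to the optimizer and the core runs under `torch.no_grad()`, gradient updates touch the final layer alone, which is the essence of the fine-tuning scheme the abstract refers to.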
Issue Date
2018
Language
English
