## Frontmatter

| | |
| --- | --- |
| Authors | [[Dora Gözükara]], [[Djamari Oetringer]], [[Linda Geerligs]], [[Umut Güçlü]] |
| Date | 2023/08 |
| Source | [[Conference on Cognitive Computational Neuroscience]] |
| URL | https://doi.org/10.32470/CCN.2023.1567-0 |
| Citation | Gözükara, D., Oetringer, D., Geerligs, L., & Güçlü, U. (2023). [[Precision brain encoding under naturalistic conditions]]. In _Conference on Cognitive Computational Neuroscience_. [URL](https://doi.org/10.32470/CCN.2023.1567-0). #Conference |

## Abstract

Convolutional Neural Networks (CNNs) are often used as a model of the visual system. Using CNN features to train brain encoding models requires large amounts of data, and conventional modelling practices also require these data to be collected under controlled conditions. By enhancing our models with additional measures, such as eye-tracking and receptive field maps, we can use data from more ecologically valid tasks, such as free movie viewing, while decreasing the amount of data needed. Here, we showcase this by training precision brain encoding models on the StudyForrest dataset. Combining the population receptive field estimate of a voxel with eye-tracking data at each frame, we create subject- and voxel-specific feature time series by sampling only the parts of the CNN feature map, and only the timepoints, that are relevant for a given voxel. We show that our precision encoders outperform conventional models and enable encoding under naturalistic viewing conditions.

## PDF

![[Precision brain encoding under naturalistic conditions.pdf]]
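## Notes

A minimal sketch of the gaze-contingent pRF sampling described in the abstract, assuming hypothetical inputs: one CNN feature map per movie frame, one gaze position per frame from eye-tracking, and a Gaussian population receptive field (centre and size) per voxel, all expressed in feature-map coordinates. Function names, shapes, and the Gaussian pRF form are illustrative assumptions, not the authors' implementation; the timepoint selection mentioned in the abstract is also omitted here.

```python
import numpy as np

def gaussian_prf_weights(h, w, x0, y0, sigma):
    """2D Gaussian pRF over an h x w feature map, centred at (x0, y0)."""
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return weights / weights.sum()

def voxel_feature_timeseries(feature_maps, gaze_xy, prf_center, prf_sigma):
    """Build one voxel's feature time series by gaze-centred pRF sampling.

    feature_maps: (T, C, H, W) CNN activations, one map per movie frame.
    gaze_xy:      (T, 2) gaze position per frame, in feature-map coordinates.
    prf_center:   (2,) pRF centre relative to fixation (gaze-centred).
    prf_sigma:    scalar pRF size, in feature-map units.
    Returns a (T, C) array: one pooled feature vector per frame.
    """
    T, C, H, W = feature_maps.shape
    timeseries = np.empty((T, C))
    for t in range(T):
        # Shift the pRF by the current gaze position so the sampled
        # region follows where the subject is actually looking.
        x0 = gaze_xy[t, 0] + prf_center[0]
        y0 = gaze_xy[t, 1] + prf_center[1]
        w = gaussian_prf_weights(H, W, x0, y0, prf_sigma)
        # Weighted spatial pooling of each channel under the pRF.
        timeseries[t] = (feature_maps[t] * w).sum(axis=(1, 2))
    return timeseries
```

In a full encoding pipeline, the resulting per-voxel feature time series would then be convolved with a haemodynamic response function and regressed against that voxel's BOLD signal, as in standard encoding models.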