## Frontmatter
| | |
| --- | --- |
| Authors | [[Thirza Dado]], [[Paolo Papale]], [[Antonio Lozano]], [[Lynn Le]], [[Marcel van Gerven]], [[Pieter Roelfsema]], [[Yağmur Güçlütürk]], [[Umut Güçlü]] |
| Date | 2023/08 |
| Source | [[Conference on Cognitive Computational Neuroscience]] |
| URL | https://doi.org/10.32470/CCN.2023.1495-0 |
| Citation | Dado, T., Papale, P., Lozano, A., Le, L., van Gerven, M., Roelfsema, P., Güçlütürk, Y., & Güçlü, U. (2023). [[Feature-disentangled reconstruction of perception from multi-unit recording]]. In _Conference on Cognitive Computational Neuroscience_. [[URL](https://doi.org/10.32470/CCN.2023.1495-0)]. #Conference |
## Abstract
Here, we aimed to explain neural representations of perception by analyzing the relationship between multi-unit activity (MUA) recorded from the primate brain and various feature representations of visual stimuli. Our encoding analysis revealed that the $w$-latent representations of feature-disentangled generative adversarial networks (GANs) were the most effective candidate for predicting neural responses to images. Importantly, the use of synthesized yet photorealistic images allowed for superior control over the data, since their underlying latent representations were known a priori rather than approximated post hoc. We leveraged this property for the neural reconstruction of perceived images. Taken together with the fact that the (unsupervised) generative models themselves were never optimized on neural data, these results highlight the importance of feature disentanglement and unsupervised training as driving factors in shaping neural representations.
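The pipeline described in the abstract reduces to two mappings around a frozen generator: an encoding model that predicts MUA from the known $w$-latents of the synthesized stimuli, and a decoding model that predicts $w$-latents from MUA, whose output can be fed through the pretrained generator to reconstruct the perceived image. Below is a minimal sketch of that logic using ridge regression on simulated data; all array shapes, the `alpha` value, and the `generator.synthesis` call are hypothetical stand-ins, not the paper's actual models or parameters.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical shapes: n_stimuli x 512 w-latents (known a priori for the
# synthesized images) and n_stimuli x n_sites multi-unit activity (MUA).
rng = np.random.default_rng(0)
n_stimuli, n_latents, n_sites = 4000, 512, 960
W = rng.standard_normal((n_stimuli, n_latents))    # w-latent codes
Y = W @ rng.standard_normal((n_latents, n_sites))  # simulated MUA responses
Y += 0.5 * rng.standard_normal(Y.shape)            # measurement noise

W_train, W_test, Y_train, Y_test = train_test_split(
    W, Y, test_size=0.2, random_state=0)

# Encoding: predict each recording site's MUA from the w-latents and
# score prediction quality per site on held-out stimuli.
encoder = Ridge(alpha=1.0).fit(W_train, Y_train)
Y_pred = encoder.predict(W_test)
r = [np.corrcoef(Y_test[:, i], Y_pred[:, i])[0, 1] for i in range(n_sites)]
print(f"mean encoding correlation: {np.mean(r):.3f}")

# Decoding: predict w-latents from MUA; passing w_pred through the frozen,
# pretrained generator (never optimized on neural data) would yield the
# reconstructed perceived images.
decoder = Ridge(alpha=1.0).fit(Y_train, W_train)
w_pred = decoder.predict(Y_test)
# reconstructions = generator.synthesis(w_pred)  # hypothetical generator call
```

Because the stimuli are GAN samples, the ground-truth $w$-latents come for free at training time, which is what makes the decoding direction well-posed without any post-hoc latent inference.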
## PDF
![[Feature-disentangled reconstruction of perception from multi-unit recording.pdf]]