## Frontmatter
| | |
| --- | --- |
| Authors | [[Lynn Le]], [[Paolo Papale]], [[Antonio Lozano]], [[Thirza Dado]], [[Feng Wang]], [[Marcel van Gerven]], [[Pieter Roelfsema]], [[Yağmur Güçlütürk]], [[Umut Güçlü]] |
| Date | 2023/08 |
| Source | [[Conference on Cognitive Computational Neuroscience]] |
| URL | https://doi.org/10.32470/CCN.2023.1700-0 |
| Citation | Le, L., Papale, P., Lozano, A., Dado, T., Wang, F., van Gerven, M., Roelfsema, P., Güçlütürk, Y., & Güçlü, U. (2023). [[End-to-end reconstruction of natural images from multi-unit recordings with Brain2Pix]]. In _Conference on Cognitive Computational Neuroscience_. [URL](https://doi.org/10.32470/CCN.2023.1700-0). #Conference |
## Abstract
Reconstructing naturalistic images from brain signals has been a challenging task for scientists, with successful results largely limited to large human fMRI datasets. In this study, we apply the Brain2Pix reconstruction model to multi-unit activity (MUA) data from the macaque brain, providing a novel extension of the model. This approach allows for investigation of information representation in different brain regions and time windows with greater spatial and temporal precision. Our results offer insights into the neural basis of visual perception, showing that V1 neurons represent texture and color, V4 neurons exhibit symmetric representations, and IT neurons reveal concept-like features. We also demonstrate that the model can be used to decode features at different layers of a neural network, with V1 more strongly correlated with initial layers and V4 and IT with deeper layers. Overall, our approach provides a valuable tool for studying brain representations in high temporal and spatial detail.
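The abstract's claim that V1 correlates with early network layers while V4 and IT correlate with deeper ones suggests a layer-by-region comparison. The paper does not specify the method here; one common way to sketch such a comparison is representational similarity analysis (RSA), where each region's MUA responses and each layer's activations are reduced to stimulus-by-stimulus dissimilarity matrices and then correlated. Everything below (array shapes, region/layer names, random data) is hypothetical, purely for illustration:

```python
import numpy as np

def rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix (1 - Pearson r) across stimuli.

    `features` is (n_stimuli, n_units); rows are per-stimulus responses.
    """
    return 1.0 - np.corrcoef(features)

def layer_region_similarity(layer_feats: np.ndarray, region_mua: np.ndarray) -> float:
    """Correlate the upper triangles of the two RDMs (a basic RSA score)."""
    iu = np.triu_indices(layer_feats.shape[0], k=1)  # off-diagonal entries only
    return float(np.corrcoef(rdm(layer_feats)[iu], rdm(region_mua)[iu])[0, 1])

# Hypothetical data: 20 stimuli, 64 MUA channels for a "V1" region,
# and activations from three network layers of 128 features each.
rng = np.random.default_rng(0)
v1 = rng.normal(size=(20, 64))
layers = {f"layer{i}": rng.normal(size=(20, 128)) for i in range(1, 4)}

# A per-layer similarity profile for one region; in the paper's setting one
# would compare such profiles across V1, V4, and IT.
scores = {name: layer_region_similarity(feats, v1) for name, feats in layers.items()}
```

With real recordings, the layer whose RDM best matches a region's RDM would indicate where in the network hierarchy that region's representation sits; the random data above naturally yields near-zero scores.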
## PDF
![[End-to-end reconstruction of natural images from multi-unit recordings with Brain2Pix.pdf]]