## Frontmatter

| | |
| --- | --- |
| Authors | [[Florian Mahner]], [[Lukas Muttenthaler]], [[Umut Güçlü]], [[Martin Hebart]] |
| Date | 2023/08 |
| Source | [[Conference on Cognitive Computational Neuroscience]] |
| URL | https://doi.org/10.32470/CCN.2023.1291-0 |
| Citation | Mahner, F., Muttenthaler, L., Güçlü, U., & Hebart, M. (2023). [[Dimensions that matter - Interpretable object dimensions in humans and deep neural networks]]. In _Conference on Cognitive Computational Neuroscience_. [URL](https://doi.org/10.32470/CCN.2023.1291-0). #Conference |

## Abstract

How do minds and machines represent objects? This question has sparked continued interest in the connected fields of cognitive neuroscience and artificial intelligence. Here we address this question by introducing a novel approach that allows us to compare human and deep neural network (DNN) representations through an interpretable embedding. We achieve this by treating the DNN as an in-silico human observer and asking it to rate the similarities between objects in a triplet task. We (i) find that DNN representations capture meaningful object properties, (ii) demonstrate with multiple in-silico tests that the DNN contains conceptual and perceptual representations, including shape, and (iii) identify similarities and differences in their representational content.

## PDF

![[Dimensions that matter - Interpretable object dimensions in humans and deep neural networks.pdf]]
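The triplet task mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes DNN features are compared with a simple dot-product similarity, and the function names (`odd_one_out`, `dot`) are hypothetical. Given three objects, the pair with the highest feature similarity is treated as the most similar, and the remaining object is the odd one out.

```python
def dot(u, v):
    # Dot-product similarity between two feature vectors
    # (a stand-in for whatever similarity the model actually uses).
    return sum(a * b for a, b in zip(u, v))

def odd_one_out(triplet):
    """Return the index (0, 1, or 2) of the odd one out in a triplet
    of feature vectors: the item NOT in the most similar pair.
    Hypothetical sketch of a triplet judgment, not the authors' code."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    most_similar = max(pairs, key=lambda p: dot(triplet[p[0]], triplet[p[1]]))
    return ({0, 1, 2} - set(most_similar)).pop()

# Example: items 0 and 1 have similar features, so item 2 is the odd one out.
triplet = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
choice = odd_one_out(triplet)
```

Repeating such judgments over many triplets yields the behavioral data from which an interpretable embedding can be learned, analogous to how the human triplet data are used.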