TY - JOUR
T1 - Using deep neural networks to disentangle visual and semantic information in human perception and memory
AU - Shoham, Adva
AU - Grosbard, Idan Daniel
AU - Patashnik, Or
AU - Cohen-Or, Daniel
AU - Yovel, Galit
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Nature Limited 2024.
PY - 2024/4
Y1 - 2024/4
AB - Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks that are trained either on images or on text or by pairing images and text now enable us to disentangle human mental representations into their visual, visual–semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual–semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
UR - http://www.scopus.com/inward/record.url?scp=85184466211&partnerID=8YFLogxK
DO - 10.1038/s41562-024-01816-9
M3 - Article
C2 - 38332339
AN - SCOPUS:85184466211
SN - 2397-3374
VL - 8
SP - 702
EP - 717
JO - Nature Human Behaviour
JF - Nature Human Behaviour
IS - 4
ER -