An Integrated Neural Framework for Dynamic and Static Face Processing

Michal Bernstein*, Yaara Erez, Idan Blank, Galit Yovel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Faces convey rich information, including identity, gender and expression. Current neural models of face processing suggest a dissociation between the processing of invariant facial aspects, such as identity and gender, which engage the fusiform face area (FFA), and the processing of changeable aspects, such as expression and eye gaze, which engage the posterior superior temporal sulcus face area (pSTS-FA). Recent studies report a second dissociation within this network, such that the pSTS-FA, but not the FFA, shows a much stronger response to dynamic than to static faces. The aim of the current study was to test a unified model that accounts for these two functional characteristics of the neural face network. In an fMRI experiment, we presented static and dynamic faces while subjects judged an invariant (gender) or a changeable facial aspect (expression). We found that the pSTS-FA was more engaged in processing dynamic than static faces and changeable than invariant aspects, whereas the occipital face area (OFA) and FFA showed similar responses across all four conditions. These findings support an integrated neural model of face processing in which the ventral areas extract form information from both invariant and changeable facial aspects, whereas the dorsal face areas are sensitive to dynamic and changeable facial aspects.

Original language: English
Article number: 7036
Journal: Scientific Reports
Volume: 8
Issue number: 1
State: Published - 1 Dec 2018
