iCatcher: A neural network approach for automated coding of young children's eye movements

Yotam Erel*, Christine E. Potter, Sagi Jaffe-Dax, Casey Lew-Williams, Amit H. Bermano

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Infants' looking behaviors are often used to measure attention, real-time processing, and learning, frequently on the basis of low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
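The abstract does not describe the network architecture, so the following is only a minimal illustrative sketch, not the published iCatcher model: a small PyTorch convolutional classifier that maps a low-resolution face crop from a video frame to one of three assumed gaze classes (left, right, away). The class names, layer sizes, and input resolution are assumptions made for illustration only.

# Minimal illustrative sketch of a frame-level gaze-direction classifier.
# NOT the published iCatcher architecture; classes and sizes are assumed.
import torch
import torch.nn as nn

class GazeClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # assumed classes: left, right, away
        super().__init__()
        # Convolutional feature extractor over a 3-channel low-resolution face crop.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of face crops, shape (N, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    model = GazeClassifier()
    frames = torch.randn(8, 3, 100, 100)   # dummy batch of low-resolution face crops
    logits = model(frames)                  # shape (8, 3): scores for left / right / away
    predictions = logits.argmax(dim=1)
    print(predictions)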

Original language: English
Pages (from-to): 765-779
Number of pages: 15
Journal: Infancy
Volume: 27
Issue number: 4
DOIs
State: Published - 1 Jul 2022

Funding

Funders and funder numbers:
National Institute of Child Health and Human Development: F32HD093139
Eunice Kennedy Shriver National Institute of Child Health and Human Development: R01HD095912
