TY - JOUR
T1 - iCatcher: A neural network approach for automated coding of young children's eye movements
AU - Erel, Yotam
AU - Potter, Christine E.
AU - Jaffe-Dax, Sagi
AU - Lew-Williams, Casey
AU - Bermano, Amit H.
N1 - Publisher Copyright:
© 2022 The Authors. Infancy published by Wiley Periodicals LLC on behalf of International Congress of Infant Studies.
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Infants' looking behaviors are often used for measuring attention, real-time processing, and learning—often using low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
UR - http://www.scopus.com/inward/record.url?scp=85127943296&partnerID=8YFLogxK
DO - 10.1111/infa.12468
M3 - Article
C2 - 35416378
AN - SCOPUS:85127943296
SN - 1525-0008
VL - 27
SP - 765
EP - 779
JO - Infancy
JF - Infancy
IS - 4
ER -