Automated Testing and Feedback (ATF) systems are widely used in programming courses, providing learners with immediate feedback and facilitating hands-on practice. In Massive Open Online Courses (MOOCs), where students often struggle and instructor assistance is scarce, ATF appears particularly essential. However, the impact of ATF on learning in programming MOOCs remains understudied. This study explores the connections between ATF usage and learning behavior, addressing relevant measures of learning in MOOCs. We extracted data on learners' engagement with the course material, their code submissions, and a self-report questionnaire from a Python programming MOOC with an embedded ATF system, to compile an overall and unique picture of learning behavior. Learners' responses to feedback were determined by sequence analysis of code submissions, identifying resubmissions that improved on the feedback or ignored it. Clusters of learners with common learning behaviors were identified, and their responses to feedback were compared. We believe that our findings, as well as the holistic approach we propose for investigating the impact of ATF, will contribute to research in this field and to the effective integration of ATF systems to maximize the learning experience in programming MOOCs.