Human-in-the-Loop AI Reviewing: Feasibility, Opportunities, and Risks

Iddo Drori, Dov Te’eni

Research output: Contribution to journal › Editorial


Abstract

The promise of AI for academic work is bewitching and easy to envisage, but the risks involved are often hard to detect and usually not readily exposed. In this opinion piece, we explore the feasibility, opportunities, and risks of using large language models (LLMs) for reviewing academic submissions, while keeping the human in the loop. We experiment with GPT-4 in the role of a reviewer to demonstrate the opportunities and risks we encountered and ways to mitigate them. The reviews are structured according to a conference review form with the dual purpose of evaluating submissions for editorial decisions and providing authors with constructive feedback according to predefined criteria, which include contribution, soundness, and presentation. We demonstrate feasibility by evaluating and comparing LLM reviews with human reviews, concluding that current AI-augmented reviewing is sufficiently accurate to alleviate part of the reviewing burden, though not completely and not in all cases. We then enumerate the opportunities of AI-augmented reviewing and present open questions. Next, we identify the risks of AI-augmented reviewing, highlighting bias, value misalignment, and misuse. We conclude with recommendations for managing these risks.
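The workflow the abstract describes (prompting an LLM to fill out a conference review form against predefined criteria, with a human vetting the result) can be illustrated with a short sketch. The following Python example assumes the OpenAI chat completions API; the prompt wording, model name, input file name, and 1-5 scoring scale are illustrative assumptions, not the authors' actual protocol.

    # Minimal sketch of human-in-the-loop LLM reviewing, assuming the
    # OpenAI chat API. The review-form criteria (contribution, soundness,
    # presentation) come from the abstract; everything else here is an
    # illustrative assumption, not the authors' actual setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    REVIEW_FORM = """You are a reviewer for an academic conference.
    Evaluate the submission below on each criterion, scoring 1 (poor)
    to 5 (excellent) and justifying each score with constructive feedback:
    1. Contribution
    2. Soundness
    3. Presentation
    Finish with an overall recommendation (accept / revise / reject)."""

    def draft_review(submission_text: str) -> str:
        """Generate a structured draft review for a human reviewer to vet."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": REVIEW_FORM},
                {"role": "user", "content": submission_text},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        draft = draft_review(open("submission.txt").read())
        print(draft)
        # Human in the loop: an editor reads, corrects, and signs off on
        # the draft before it reaches authors or informs any decision.

The design point that matches the paper's framing is that the model output is only a draft: a human reviewer or editor reads, corrects, and approves it before it is shown to authors or used in an editorial decision.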

Original language: English
Article number: 7
Pages (from-to): 98-109
Number of pages: 12
Journal: Journal of the Association for Information Systems
Volume: 25
Issue number: 1
DOIs
State: Published - 2024

Keywords

  • AI
  • Human
  • Journals
  • LLM
  • Reviewing
  • Risks
