TY - JOUR
T1 - Human-in-the-Loop AI Reviewing
T2 - Feasibility, Opportunities, and Risks
AU - Drori, Iddo
AU - Te’eni, Dov
N1 - Publisher Copyright:
© 2024 by the Association for Information Systems.
PY - 2024
Y1 - 2024
AB - The promise of AI for academic work is bewitching and easy to envisage, but the risks involved are often hard to detect and usually not readily exposed. In this opinion piece, we explore the feasibility, opportunities, and risks of using large language models (LLMs) for reviewing academic submissions, while keeping the human in the loop. We experiment with GPT-4 in the role of a reviewer to demonstrate the opportunities and the risks we experience and ways to mitigate them. The reviews are structured according to a conference review form with the dual purpose of evaluating submissions for editorial decisions and providing authors with constructive feedback according to predefined criteria, which include contribution, soundness, and presentation. We demonstrate feasibility by evaluating and comparing LLM reviews with human reviews, concluding that current AI-augmented reviewing is sufficiently accurate to alleviate the burden of reviewing but not completely and not for all cases. We then enumerate the opportunities of AI-augmented reviewing and present open questions. Next, we identify the risks of AI-augmented reviewing, highlighting bias, value misalignment, and misuse. We conclude with recommendations for managing these risks.
KW - AI
KW - Human
KW - Journals
KW - LLM
KW - Reviewing
KW - Risks
UR - http://www.scopus.com/inward/record.url?scp=85182233859&partnerID=8YFLogxK
U2 - 10.17705/1jais.00867
DO - 10.17705/1jais.00867
M3 - Editorial
AN - SCOPUS:85182233859
SN - 1558-3457
VL - 25
SP - 98
EP - 109
JO - Journal of the Association for Information Systems
JF - Journal of the Association for Information Systems
IS - 1
M1 - 7
ER -