TY - JOUR
T1 - Performance of generative AI across ENT tasks
T2 - A systematic review and meta-analysis
AU - Hack, Sholem
AU - Attal, Rebecca
AU - Farzad, Armin
AU - Alon, Eran E.
AU - Glikson, Eran
AU - Remer, Eric
AU - Saibene, Alberto Maria
AU - Zalzal, Habib G.
N1 - Publisher Copyright:
© 2025
PY - 2025/10
Y1 - 2025/10
AB - Objective: To systematically evaluate the diagnostic accuracy, educational utility, and communication potential of generative AI, particularly large language models (LLMs) such as ChatGPT, in otolaryngology. Data Sources: A comprehensive search of PubMed, Embase, Scopus, Web of Science, and IEEE Xplore identified English-language peer-reviewed studies from January 2022 to March 2025. Review Methods: Eligible studies evaluated text-based generative AI models used in otolaryngology. Two reviewers screened and assessed studies using JBI and QUADAS-2 tools. A random-effects meta-analysis was conducted on diagnostic accuracy outcomes, with subgroup analyses by task type and model version. Results: Ninety-one studies were included; 61 reported quantitative outcomes. Of these, 43 provided diagnostic accuracy data across 59 model-task pairs. Pooled diagnostic accuracy was 72.7% (95% CI: 67.4–77.6%; I² = 93.8%). Accuracy was highest in education (83.0%) and diagnostic imaging tasks (84.9%), and lowest in clinical decision support (CDS; 67.1%). GPT-4 consistently outperformed GPT-3.5 across both the education and CDS domains. Hallucinations and performance variability were noted in complex clinical reasoning tasks. Conclusion: Generative AI performs well in structured otolaryngology tasks, particularly education and communication. However, its inconsistent performance in clinical reasoning tasks limits standalone use. Future research should focus on hallucination mitigation, standardized evaluation, and prospective validation to guide safe clinical integration.
KW - Artificial intelligence
KW - ChatGPT
KW - Generative AI
KW - Large language models
KW - Otolaryngology
KW - Systematic review
UR - https://www.scopus.com/pages/publications/105014826711
U2 - 10.1016/j.anl.2025.08.010
DO - 10.1016/j.anl.2025.08.010
M3 - Systematic review
C2 - 40912131
AN - SCOPUS:105014826711
SN - 0385-8146
VL - 52
SP - 585
EP - 596
JO - Auris Nasus Larynx
JF - Auris Nasus Larynx
IS - 5
ER -