TY - JOUR
T1 - ChatGPT-4 Assistance in Optimizing Emergency Department Radiology Referrals and Imaging Selection
AU - Barash, Yiftach
AU - Klang, Eyal
AU - Konen, Eli
AU - Sorin, Vera
N1 - Publisher Copyright:
© 2023 American College of Radiology
PY - 2023/10
Y1 - 2023/10
N2 - Purpose: The quality of radiology referrals influences patient management and imaging interpretation by radiologists. The aim of this study was to evaluate ChatGPT-4 as a decision support tool for selecting imaging examinations and generating radiology referrals in the emergency department (ED). Methods: Five consecutive clinical notes from the ED were retrospectively extracted for each of the following pathologies: pulmonary embolism, obstructing kidney stones, acute appendicitis, diverticulitis, small bowel obstruction, acute cholecystitis, acute hip fracture, and testicular torsion. A total of 40 cases were included. These notes were entered into ChatGPT-4, requesting recommendations on the most appropriate imaging examinations and protocols. The chatbot was also asked to generate radiology referrals. Two independent radiologists graded the referrals on a scale ranging from 1 to 5 for clarity, clinical relevance, and differential diagnosis. The chatbot's imaging recommendations were compared with the ACR Appropriateness Criteria (AC) and with the examinations performed in the ED. Agreement between readers was assessed using the linear weighted Cohen's κ coefficient. Results: ChatGPT-4's imaging recommendations aligned with the ACR AC and ED examinations in all cases. Protocol discrepancies between ChatGPT and the ACR AC were observed in two cases (5%). ChatGPT-4-generated referrals received mean scores of 4.6 and 4.8 for clarity, 4.5 and 4.4 for clinical relevance, and 4.9 from both reviewers for differential diagnosis. Agreement between readers was moderate for clinical relevance and clarity and substantial for differential diagnosis grading. Conclusions: ChatGPT-4 has shown potential in aiding imaging study selection for select clinical cases. As a complementary tool, large language models may improve radiology referral quality. Radiologists should stay informed about this technology and be mindful of potential challenges and risks.
KW - AI
KW - ChatGPT
KW - Large language models
KW - Radiology
KW - Referrals
UR - http://www.scopus.com/inward/record.url?scp=85168320525&partnerID=8YFLogxK
U2 - 10.1016/j.jacr.2023.06.009
DO - 10.1016/j.jacr.2023.06.009
M3 - Article
C2 - 37423350
AN - SCOPUS:85168320525
SN - 1546-1440
VL - 20
SP - 998
EP - 1003
JO - Journal of the American College of Radiology
JF - Journal of the American College of Radiology
IS - 10
ER -