TY - JOUR
T1 - Diagnostic capabilities of ChatGPT in ophthalmology
AU - Shemer, Asaf
AU - Cohen, Michal
AU - Altarescu, Aya
AU - Atar-Vardi, Maya
AU - Hecht, Idan
AU - Dubinsky-Pertzov, Biana
AU - Shoshany, Nadav
AU - Zmujack, Sigal
AU - Or, Lior
AU - Einan-Lifshitz, Adi
AU - Pras, Eran
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
PY - 2024/7
Y1 - 2024/7
AB - Purpose: To assess the diagnostic accuracy of ChatGPT in ophthalmology. Methods: This retrospective cohort study was conducted at a single academic tertiary medical center. We reviewed the records of patients admitted to the ophthalmology department between June 2022 and January 2023 and created two clinical cases for each patient: the first based on the medical history alone (Hx), and the second including the clinical examination findings as well (Hx and Ex). For each case, we asked ChatGPT, residents, and attendings for the three most likely diagnoses, and we compared the accuracy rates (at least one correct diagnosis) across the groups. We also compared the total time each group took to complete the task. Results: ChatGPT, residents, and attendings evaluated 126 cases derived from 63 patients (history alone, and history plus examination findings, for each patient). With history alone (Hx), ChatGPT achieved a significantly lower diagnostic accuracy rate (54%) than the residents (75%; p < 0.01) and attendings (71%; p < 0.01). After the clinical examination findings were added, ChatGPT's accuracy rate was 68%, whereas the residents' and attendings' rates rose to 94% (p < 0.01) and 86% (p < 0.01), respectively. ChatGPT was 4 to 5 times faster than the attendings and residents. Conclusions and relevance: ChatGPT showed lower diagnostic accuracy than residents and attendings in ophthalmology cases, whether based on patient history alone or with the addition of clinical examination findings. However, ChatGPT completed the task considerably faster than the physicians.
KW - Artificial intelligence
KW - ChatGPT
KW - Diagnosis
KW - Ophthalmology
KW - Residents
UR - http://www.scopus.com/inward/record.url?scp=85181502036&partnerID=8YFLogxK
U2 - 10.1007/s00417-023-06363-z
DO - 10.1007/s00417-023-06363-z
M3 - Article
C2 - 38183467
AN - SCOPUS:85181502036
SN - 0721-832X
VL - 262
SP - 2345
EP - 2352
JO - Graefe's Archive for Clinical and Experimental Ophthalmology
JF - Graefe's Archive for Clinical and Experimental Ophthalmology
IS - 7
ER -