TY - JOUR
T1 - Advantages and pitfalls in utilizing artificial intelligence for crafting medical examinations: a medical education pilot study with GPT-4
AU - Klang, E.
AU - Portugez, S.
AU - Gross, R.
AU - Kassif Lerner, R.
AU - Brenner, A.
AU - Gilboa, M.
AU - Ortal, T.
AU - Ron, S.
AU - Robinzon, V.
AU - Meiri, H.
AU - Segal, G.
N1 - Publisher Copyright:
© 2023, BioMed Central Ltd., part of Springer Nature.
PY - 2023/12
Y1 - 2023/12
AB - Background: Writing multiple-choice question (MCQ) examinations for medical students is complex and time-consuming and requires significant effort from clinical staff and faculty. Applying artificial intelligence algorithms in this field of medical education may therefore be advisable. Methods: During March to April 2023, we used GPT-4, an OpenAI application, to write a 210-question multiple-choice question (MCQ) examination based on an existing exam template. The output was thoroughly reviewed by specialist physicians who were blinded to the source of the questions. Mistakes and inaccuracies identified by the specialists were classified as stemming from age, gender, or geographical insensitivities. Results: After inputting a detailed prompt, GPT-4 produced the test rapidly and effectively. Only 1 question (0.5%) was deemed false; 15% of questions necessitated revisions. Errors in the AI-generated questions included the use of outdated or inaccurate terminology and age-sensitive, gender-sensitive, and geographically sensitive inaccuracies. Questions disqualified on methodological grounds included elimination-based questions and questions that did not integrate knowledge with clinical reasoning. Conclusion: GPT-4 can be used as an adjunctive tool in creating multiple-choice question medical examinations, yet rigorous review by specialist physicians remains pivotal.
KW - Artificial intelligence
KW - ChatGPT
KW - Medical examinations
KW - Multiple choice questions
UR - http://www.scopus.com/inward/record.url?scp=85174460994&partnerID=8YFLogxK
DO - 10.1186/s12909-023-04752-w
M3 - Article
C2 - 37848913
AN - SCOPUS:85174460994
SN - 1472-6920
VL - 23
JO - BMC Medical Education
JF - BMC Medical Education
IS - 1
M1 - 772
ER -