TY - JOUR
T1 - Assessing data gathering of chatbot based symptom checkers - a clinical vignettes study
AU - Ben-Shabat, Niv
AU - Sharvit, Gal
AU - Meimis, Ben
AU - Ben Joya, Daniel
AU - Sloma, Ariel
AU - Kiderman, David
AU - Shabat, Aviv
AU - Tsur, Avishai M.
AU - Watad, Abdulla
AU - Amital, Howard
N1 - Publisher Copyright:
© 2022 The Author(s)
PY - 2022/12
Y1 - 2022/12
AB - Background: The burden on healthcare systems is mounting continuously owing to population growth and aging, overuse of medical services, and the recent COVID-19 pandemic. This overload also reduces healthcare quality and outcomes. One solution gaining momentum is the integration of intelligent self-assessment tools, known as symptom checkers, into healthcare providers’ systems. To the best of our knowledge, no study so far has investigated the data-gathering capabilities of these tools, which are a crucial resource for simulating doctors’ skills in medical interviews. Objectives: The goal of this study was to evaluate the data-gathering function of currently available chatbot symptom checkers. Methods: We evaluated 8 symptom checkers using 28 clinical vignettes from the repository of MSD Manual case studies. The mean number of predefined pertinent findings per case was 31.8 ± 6.8. The vignettes were entered into the platforms by 3 medical students who simulated the role of the patient. For each conversation, we recorded the number of pertinent findings retrieved and the number of questions asked. We then calculated the recall rate of data gathering (pertinent findings retrieved out of all predefined pertinent findings) and the efficiency rate (pertinent findings retrieved out of the number of questions asked) and compared them between the platforms. Results: The overall recall rate for all symptom checkers was 0.32 (2,280/7,112; 95% CI 0.31–0.33) for all pertinent findings, 0.37 (1,110/2,992; 95% CI 0.35–0.39) for present findings, and 0.28 (1,140/4,120; 95% CI 0.26–0.29) for absent findings. Among the symptom checkers, the Kahun platform had the highest recall rate, at 0.51 (450/889; 95% CI 0.47–0.54). Of the 4,877 questions asked overall, 2,280 findings were gathered, yielding an efficiency rate of 0.46 (95% CI 0.45–0.48) across all platforms. Kahun was the most efficient tool, at 0.74 (95% CI 0.70–0.77), without a statistically significant difference from Your.MD, at 0.69 (95% CI 0.65–0.73). Conclusion: The data-gathering performance of currently available symptom checkers is questionable. Among the available tools, Kahun demonstrated the best overall performance.
KW - Artificial intelligence
KW - Chatbots
KW - Computer-assisted diagnosis
KW - Data-gathering
KW - Diagnosis
KW - Medical interview
KW - Symptom checker
KW - Telemedicine
KW - Triage
UR - http://www.scopus.com/inward/record.url?scp=85140333853&partnerID=8YFLogxK
U2 - 10.1016/j.ijmedinf.2022.104897
DO - 10.1016/j.ijmedinf.2022.104897
M3 - Article
C2 - 36306653
AN - SCOPUS:85140333853
SN - 1386-5056
VL - 168
JO - International Journal of Medical Informatics
JF - International Journal of Medical Informatics
M1 - 104897
ER -