TY - JOUR
T1 - It takes one to know one—Machine learning for identifying OBGYN abstracts written by ChatGPT
AU - Levin, Gabriel
AU - Meyer, Raanan
AU - Guigue, Paul Adrien
AU - Brezinov, Yoav
N1 - Publisher Copyright:
© 2024 The Authors. International Journal of Gynecology & Obstetrics published by John Wiley & Sons Ltd on behalf of International Federation of Gynecology and Obstetrics.
PY - 2024/6
Y1 - 2024/6
N2 - Objectives: To use machine learning to optimize the detection of obstetrics and gynecology (OBGYN) Chat Generative Pre-trained Transformer (ChatGPT)-written abstracts of all OBGYN journals. Methods: We used Web of Science to identify all original articles published in all OBGYN journals in 2022. Seventy-five original articles were randomly selected. For each, we prompted ChatGPT to write an abstract based on the title and results of the original abstracts. Each abstract was tested by Grammarly software and reports were inserted into a database. Machine-learning models were trained and examined on the database created. Results: Overall, 75 abstracts from 12 different OBGYN journals were randomly selected. There were seven (58%) Q1 journals, one (8%) Q2 journal, two (17%) Q3 journals, and two (17%) Q4 journals. Use of mixed dialects of English, absence of comma misuse, absence of incorrect verb forms, and improper formatting were important prediction variables of ChatGPT-written abstracts. The deep-learning model had the highest predictive performance of all examined models. This model achieved the following performance: accuracy 0.90, precision 0.92, recall 0.85, area under the curve 0.95. Conclusions: Machine-learning-based tools reach high accuracy in identifying ChatGPT-written OBGYN abstracts.
KW - Artificial intelligence
KW - ChatGPT
KW - Machine learning
KW - Obstetrics and gynecology
KW - Performance
UR - http://www.scopus.com/inward/record.url?scp=85182423028&partnerID=8YFLogxK
U2 - 10.1002/ijgo.15365
DO - 10.1002/ijgo.15365
M3 - Article
C2 - 38234125
AN - SCOPUS:85182423028
SN - 0020-7292
VL - 165
SP - 1257
EP - 1260
JO - International Journal of Gynecology and Obstetrics
JF - International Journal of Gynecology and Obstetrics
IS - 3
ER -