TY - CPAPER
T1 - The Role of Explainability in Collaborative Human-AI Disinformation Detection
AU - Schmitt, Vera
AU - Villa-Arenas, Luis Felipe
AU - Feldhus, Nils
AU - Meyer, Joachim
AU - Spang, Robert P.
AU - Möller, Sebastian
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024
Y1 - 2024/6/3
AB - Manual verification has become very challenging due to the increasing volume of information shared online and the role of generative Artificial Intelligence (AI). Thus, AI systems are used to identify disinformation and deepfakes online. Previous research has shown that superior performance can be achieved by combining AI with human expertise. Moreover, according to the EU AI Act, human oversight is required when AI systems are used in domains where fundamental human rights, such as the right to free expression, might be affected. AI systems therefore need to be transparent and offer sufficient explanations to be comprehensible. Much research has been done on integrating eXplainability (XAI) features to increase the transparency of AI systems; however, these features often lack human-centered evaluation. Additionally, the meaningfulness of explanations varies depending on users' background knowledge and individual factors. This research therefore implements a human-centered evaluation schema to assess different XAI features for the collaborative human-AI disinformation detection task. Objective and subjective evaluation dimensions, such as performance, perceived usefulness, understandability, and trust in the AI system, are used to evaluate the different XAI features. A user study was conducted with a total of 433 participants: 406 crowdworkers and 27 journalists, the latter participating as experts in detecting disinformation. The results show that free-text explanations improve the performance of non-experts but do not influence the performance of experts. The XAI features increase perceived usefulness, understandability, and trust in the AI system, but they can also lead crowdworkers to trust the AI system blindly when its predictions are wrong.
KW - Collaborative disinformation detection
KW - expert and lay people evaluation
KW - transparent AI systems
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85196421791&partnerID=8YFLogxK
U2 - 10.1145/3630106.3659031
DO - 10.1145/3630106.3659031
M3 - Conference contribution
AN - SCOPUS:85196421791
T3 - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
SP - 2157
EP - 2174
BT - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
PB - Association for Computing Machinery, Inc.
T2 - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Y2 - 3 June 2024 through 6 June 2024
ER -