TY - GEN
T1 - Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking
T2 - 3rd ACM International Workshop on Multimedia AI against Disinformation, MAD 2024
AU - Schmitt, Vera
AU - Csomor, Balázs Patrik
AU - Meyer, Joachim
AU - Villa-Arenas, Luis Felipe
AU - Jakob, Charlott
AU - Polzehl, Tim
AU - Möller, Sebastian
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/6/10
Y1 - 2024/6/10
AB - The rapidly increasing amount of online information and the advent of Generative Artificial Intelligence (GenAI) make the manual verification of information impractical. Consequently, AI systems are deployed to detect disinformation and deepfakes. Prior studies have indicated that combining AI and human capabilities yields enhanced performance in detecting disinformation. Furthermore, the European Union (EU) AI Act mandates human supervision for AI applications in areas that affect essential human rights, such as freedom of speech, requiring AI systems to be transparent and to provide adequate explanations so that they remain comprehensible. Extensive research has been conducted on incorporating explainability (XAI) features to increase AI transparency, yet these efforts often lack a human-centric assessment, and the effectiveness of such explanations varies with the user's prior knowledge and personal attributes. Therefore, we developed a framework for validating XAI features for the collaborative human-AI fact-checking task. The framework supports testing XAI features along objective and subjective evaluation dimensions and follows human-centric design principles when displaying information about the AI system to users. The framework was tested in a crowdsourcing experiment on the collaborative disinformation detection task with 433 participants (406 crowdworkers and 27 journalists). The tested XAI features increase the AI system's perceived usefulness, understandability, and trust. With this publication, the XAI evaluation framework is made open source.
KW - Human-centered eXplanations
KW - blind trust in AI systems
KW - objective and subjective evaluation of eXplanations
UR - http://www.scopus.com/inward/record.url?scp=85196375447&partnerID=8YFLogxK
U2 - 10.1145/3643491.3660283
DO - 10.1145/3643491.3660283
M3 - Conference contribution
AN - SCOPUS:85196375447
T3 - ACM International Conference Proceeding Series
SP - 91
EP - 100
BT - MAD 2024 - Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation
PB - Association for Computing Machinery
Y2 - 10 June 2024
ER -