The Role of Explainability in Collaborative Human-AI Disinformation Detection

Vera Schmitt, Luis Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, Sebastian Möller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Manual verification has become increasingly challenging due to the growing volume of information shared online and the role of generative Artificial Intelligence (AI). Thus, AI systems are used to identify disinformation and deep fakes online. Previous research has shown that combining AI and human expertise yields superior performance. Moreover, according to the EU AI Act, human oversight is required when AI systems are used in domains where fundamental human rights, such as the right to free expression, might be affected. AI systems therefore need to be transparent and offer sufficient explanations to be comprehensible. Much research has been done on integrating eXplainability (XAI) features to increase the transparency of AI systems; however, these features often lack human-centered evaluation. Additionally, the meaningfulness of explanations varies depending on users' background knowledge and individual factors. Thus, this research implements a human-centered evaluation schema to assess different XAI features for the collaborative human-AI disinformation detection task. Objective and subjective evaluation dimensions, such as performance, perceived usefulness, understandability, and trust in the AI system, are used to evaluate the different XAI features. A user study was conducted with a total of 433 participants: 406 crowdworkers and 27 journalists, the latter participating as experts in detecting disinformation. The results show that free-text explanations improve non-expert performance but do not influence the performance of experts. The XAI features increase perceived usefulness, understandability, and trust in the AI system, but they can also lead crowdworkers to blindly trust the AI system when its predictions are wrong.

Original language: English
Title of host publication: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Publisher: Association for Computing Machinery, Inc
Pages: 2157-2174
Number of pages: 18
ISBN (Electronic): 9798400704505
DOIs
State: Published - 3 Jun 2024
Event: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 - Rio de Janeiro, Brazil
Duration: 3 Jun 2024 - 6 Jun 2024

Publication series

Name: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024

Conference

Conference: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Country/Territory: Brazil
City: Rio de Janeiro
Period: 3/06/24 - 6/06/24

Funding

Funders: Bundesministerium für Bildung und Forschung
Funder number: 03RU2U151C

Keywords

• Collaborative disinformation detection
• expert and lay people evaluation
• transparent AI systems
• XAI
