A Counterfactual Framework for Learning and Evaluating Explanations for Recommender Systems

Oren Barkan, Veronika Bogina, Liya Gurevitch, Yuval Asher, Noam Koenigstein*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

In the field of recommender systems, explainability remains a pivotal yet challenging aspect. To address this, we introduce the Learning to eXplain Recommendations (LXR) framework, a post-hoc, model-agnostic approach designed to provide counterfactual explanations. LXR is compatible with any differentiable recommender algorithm and scores the relevance of user data in relation to recommended items. A distinctive feature of LXR is its use of novel self-supervised counterfactual loss terms, which effectively highlight the user data most responsible for a specific recommended item. Additionally, we propose several innovative counterfactual evaluation metrics specifically tailored for assessing the quality of explanations in recommender systems. Our code is available in our GitHub repository: https://github.com/DeltaLabTLV/LXR.
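The abstract does not specify LXR's architecture, but its core idea (a learned explainer that attributes a recommendation to parts of the user's history, trained with a self-supervised counterfactual objective) can be sketched. The following PyTorch sketch is a minimal illustration under assumptions: the explainer architecture, the soft-masking scheme, and the exact loss form are hypothetical, not taken from the paper; see the linked repository for the authors' actual implementation.

import torch
import torch.nn as nn

class Explainer(nn.Module):
    # Hypothetical explainer: scores how relevant each item in the user's
    # history is to a given recommended (target) item.
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.scorer = nn.Linear(2 * dim, 1)

    def forward(self, history, target):
        # history: (batch, n) item ids; target: (batch,) recommended item id
        h = self.item_emb(history)                          # (batch, n, dim)
        t = self.item_emb(target).unsqueeze(1).expand_as(h) # (batch, n, dim)
        return self.scorer(torch.cat([h, t], dim=-1)).squeeze(-1)  # (batch, n)

def counterfactual_loss(rec_score, history, target, attributions):
    # Assumed form of a self-supervised counterfactual objective:
    # removing highly attributed history items should lower the recommender's
    # score for the target item, while keeping only those items should
    # preserve it. rec_score(history, mask, target) stands for any
    # differentiable recommender that accepts a soft mask over the history.
    keep = torch.sigmoid(attributions)                    # soft keep-probabilities
    score_with = rec_score(history, keep, target)         # explanation kept
    score_without = rec_score(history, 1.0 - keep, target)  # explanation removed
    return score_without.mean() - score_with.mean()

Training would freeze the recommender and backpropagate this loss through it into the explainer; at explanation time, the top-scoring history items form the counterfactual explanation, and the same removal test doubles as an evaluation metric (how far the target item's score or rank drops when the explanation is deleted).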

Original language: English
Title of host publication: WWW 2024 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Pages: 3723-3733
Number of pages: 11
ISBN (Electronic): 9798400701719
DOIs
State: Published - 13 May 2024
Event: 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore
Duration: 13 May 2024 – 17 May 2024

Publication series

Name: WWW 2024 - Proceedings of the ACM Web Conference

Conference

Conference: 33rd ACM Web Conference, WWW 2024
Country/Territory: Singapore
City: Singapore
Period: 13/05/24 – 17/05/24

Funding

Funders: Israel Science Foundation
Funder number: 2243/20

Keywords

• attributions
• counterfactual explanations
• explainable AI
• explanation evaluation
• interpretability
• recommender systems
