3VL: Using Trees to Improve Vision-Language Models' Interpretability

Nir Yellinek, Leonid Karlinsky, Raja Giryes*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Vision-Language models (VLMs) have proven to be effective at aligning image and text representations, producing superior zero-shot results when transferred to many downstream tasks. However, these representations suffer from some key shortcomings in understanding Compositional Language Concepts (CLC), such as recognizing objects' attributes, states, and relations between different objects. Moreover, VLMs typically have poor interpretability, making it challenging to debug and mitigate compositional-understanding failures. In this work, we introduce the architecture and training technique of Tree-augmented Vision-Language (3VL) model accompanied by our proposed Anchor inference method and Differential Relevance (DiRe) interpretability tool. By expanding the text of an arbitrary image-text pair into a hierarchical tree structure using language analysis tools, 3VL allows the induction of this structure into the visual representation learned by the model, enhancing its interpretability and compositional reasoning. Additionally, we show how Anchor, a simple technique for text unification, can be used to filter nuisance factors while increasing CLC understanding performance, e.g., on the fundamental VL-Checklist benchmark. We also show how DiRe, which performs a differential comparison between VLM relevancy maps, enables us to generate compelling visualizations of the reasons for a model's success or failure.

Original language: English
Pages (from-to): 495-509
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 34
DOIs
State: Published - 2025

Funding

Funders: Tel Aviv University

Keywords

• Convolutional neural networks
• Visual Language models (VLMs)
• compositional reasoning
• explainable AI
