A 2-DIMENSIONAL STATE SPACE LAYER FOR SPATIAL INDUCTIVE BIAS

Ethan Baron, Itamar Zimerman, Lior Wolf

Research output: Contribution to conference › Paper › peer-review

Abstract

A central objective in computer vision is to design models with an appropriate 2-D inductive bias. Desiderata for such a bias include two-dimensional position awareness, dynamic spatial locality, and translation and permutation invariance. To address these goals, we leverage an expressive variation of the multidimensional State Space Model (SSM). Our approach introduces an efficient parameterization, accelerated computation, and a suitable normalization scheme. Empirically, we observe that incorporating our layer at the beginning of each transformer block of Vision Transformers (ViT), as well as replacing the Conv2D filters of ConvNeXT with our proposed layers, significantly enhances performance for multiple backbones and across multiple datasets. The new layer is effective even while adding a negligible number of parameters and negligible inference time. Ablation studies and visualizations demonstrate that the layer has a strong 2-D inductive bias; for example, vision transformers equipped with our layer exhibit effective performance even without positional encoding. Our code is available at this git https URL.
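As a rough illustration of the idea described above, the sketch below shows a simplified, axis-separable diagonal SSM used as a residual spatial-mixing layer over a 2-D patch grid. This is an assumption-laden toy version, not the paper's actual 2-D SSM parameterization or normalization scheme; the names SSM2D and ssm_scan_1d, the state dimension, and the height-then-width scan order are all hypothetical choices made for the example.

```python
# Hypothetical sketch of an axis-separable diagonal SSM as a spatial mixing layer.
# NOT the paper's 2-D SSM; a simplified stand-in to show the integration pattern.
import torch
import torch.nn as nn


def ssm_scan_1d(u, log_a, B, C):
    """Diagonal linear recurrence x_t = a * x_{t-1} + B * u_t, y_t = C . x_t,
    scanned along the length dimension. u has shape (batch, length, channels)."""
    a = torch.exp(-torch.exp(log_a))               # decay in (0, 1) for stability
    batch, length, channels = u.shape
    state = u.new_zeros(batch, channels, a.shape[-1])
    ys = []
    for t in range(length):
        state = a * state + B * u[:, t, :, None]   # (batch, channels, state_dim)
        ys.append((state * C).sum(-1))             # project state back to channels
    return torch.stack(ys, dim=1)                  # (batch, length, channels)


class SSM2D(nn.Module):
    """Axis-wise SSM scan over height then width, with a residual connection."""
    def __init__(self, channels, state_dim=16):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(channels, state_dim))
        self.B = nn.Parameter(torch.randn(channels, state_dim) * 0.1)
        self.C = nn.Parameter(torch.randn(channels, state_dim) * 0.1)

    def forward(self, x):                          # x: (batch, H, W, channels)
        b, h, w, c = x.shape
        # scan along the height axis
        y = ssm_scan_1d(x.permute(0, 2, 1, 3).reshape(b * w, h, c),
                        self.log_a, self.B, self.C)
        y = y.reshape(b, w, h, c).permute(0, 2, 1, 3)
        # scan along the width axis
        y = ssm_scan_1d(y.reshape(b * h, w, c), self.log_a, self.B, self.C)
        y = y.reshape(b, h, w, c)
        return x + y                               # residual spatial mixing


# Example usage on a (batch, H, W, channels) patch grid:
layer = SSM2D(channels=64)
x = torch.randn(2, 14, 14, 64)
out = layer(x)   # same shape; could precede each transformer block's attention
```

A separable height-then-width scan like this captures only part of a genuinely two-dimensional recurrence, so it is weaker than the layer the abstract describes; it is meant only to convey how a lightweight SSM-based spatial-mixing layer could be inserted before each ViT block or in place of ConvNeXT's Conv2D filters.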

Original language: English
State: Published - 2024
Event: 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria
Duration: 7 May 2024 - 11 May 2024

Conference

Conference: 12th International Conference on Learning Representations, ICLR 2024
Country/Territory: Austria
City: Hybrid, Vienna
Period: 7/05/24 - 11/05/24

Funding

Funders and funder numbers:
Blavatnik Family Foundation
Tel Aviv University
Ministry of Innovation, Science & Technology, Israel: 1001576154
Michael J. Fox Foundation for Parkinson's Research: MJFF-022407
