
Transformer Language Models without Positional Encodings Still Learn Positional Information

  • Adi Haviv
  • Ori Ram
  • Ofir Press
  • Peter Izsak
  • Omer Levy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

53 Scopus citations

Abstract

Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings. However, we show that LMs without any explicit positional encoding are still competitive with standard models, and that this phenomenon is robust across different datasets, model sizes, and sequence lengths. Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information. We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position. Our findings indicate that causal LMs might derive positional awareness not only from the explicit positioning mechanism, but also from the effects of the causal mask.
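The conjectured mechanism can be illustrated with a small, self-contained sketch (not taken from the paper's code): under a causal mask, even when every query-key logit is identical, the softmax at position t spreads its mass uniformly over the t+1 visible tokens, so each attention weight equals 1/(t+1). The attention pattern itself therefore carries absolute-position information, with no explicit positional encoding. The function name and setup below are illustrative assumptions.

```python
import math

def causal_softmax_row(logits, t):
    """Softmax over positions 0..t only; the causal mask hides positions > t."""
    visible = logits[: t + 1]
    m = max(visible)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in visible]
    z = sum(exps)
    return [e / z for e in exps] + [0.0] * (len(logits) - t - 1)

seq_len = 4
logits = [0.0] * seq_len  # identical logits: no content or position signal at all
weights = [causal_softmax_row(logits, t) for t in range(seq_len)]
# Row t has t+1 nonzero entries, each equal to 1/(t+1):
# row 0 -> [1.0, 0.0, 0.0, 0.0]
# row 3 -> [0.25, 0.25, 0.25, 0.25]
```

A model can in principle read off its absolute position from this 1/(t+1) scaling, which is consistent with the probing results described in the abstract.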

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: EMNLP 2022
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Publisher: Association for Computational Linguistics (ACL)
Pages: 1382-1390
Number of pages: 9
ISBN (Electronic): 9781959429432
DOIs
State: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Hybrid, Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 – 11 Dec 2022

Publication series

Name: Findings of the Association for Computational Linguistics: EMNLP 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Country/Territory: United Arab Emirates
City: Hybrid, Abu Dhabi
Period: 7/12/22 – 11/12/22

Funding

Funders
Intel Corporation
Meta
