Deep Learning Methods for Improved Decoding of Linear Codes

Eliya Nachmani*, Elad Marciano, Loren Lugosch, Warren J. Gross, David Burshtein, Yair Be'Ery

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
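As a rough illustration of the idea, the following is a minimal NumPy sketch of min-sum decoding with per-edge multiplicative weights on the check-to-variable messages, here for the Hamming(7,4) code. This is not the paper's implementation: with all weights set to one it reduces to plain min-sum, and the paper's contribution is to learn such weights (and tie them across iterations for a recurrent variant) by gradient descent on a differentiable unrolling of the decoder.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code (illustrative choice)
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def weighted_min_sum_decode(llr, H, weights=None, iters=5):
    """Min-sum decoding with per-edge, per-iteration weights on the
    check-to-variable messages. weights=None means all ones, i.e.
    plain min-sum; a neural decoder would learn these weights."""
    m, n = H.shape
    if weights is None:
        weights = np.ones((iters, m, n))
    # Initialize variable-to-check messages with the channel LLRs
    v2c = np.tile(llr, (m, 1)) * H
    for t in range(iters):
        # Check-to-variable update: product of signs, minimum magnitude
        # over the other variables in the check, scaled by the weight
        c2v = np.zeros((m, n))
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            for j in idx:
                others = [k for k in idx if k != j]
                sign = np.prod(np.sign(v2c[i, others]))
                mag = np.min(np.abs(v2c[i, others]))
                c2v[i, j] = weights[t, i, j] * sign * mag
        # Variable-to-check update: channel LLR plus incoming messages,
        # excluding the message from the target check (extrinsic rule)
        total = llr + c2v.sum(axis=0)
        for i in range(m):
            for j in np.nonzero(H[i])[0]:
                v2c[i, j] = total[j] - c2v[i, j]
    return (total < 0).astype(int)  # hard decision on the final LLRs
```

For example, transmitting the all-zeros codeword over a BPSK channel and flipping the sign of one LLR, the decoder recovers the codeword; training would replace the all-ones weights with learned values that damp short cycles in the Tanner graph.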

Original language: English
Pages (from-to): 119-131
Number of pages: 13
Journal: IEEE Journal on Selected Topics in Signal Processing
Issue number: 1
State: Published - Feb 2018


  • Deep learning
  • belief propagation
  • error correcting codes
  • min-sum decoding


