Neural Decoding With Optimization of Node Activations

Eliya Nachmani*, Yair Be'ery

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

The problem of maximum likelihood decoding of error-correcting codes with a neural decoder is considered. It is shown that the neural decoder can be improved with two novel loss terms on the node activations. The first loss term imposes a sparsity constraint on the node activations, whereas the second loss term mimics the node activations of a teacher decoder with better performance. The proposed method has the same run-time complexity and model size as the neural Belief Propagation decoder, while improving the decoding performance by up to 1.1 dB on BCH codes.
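The abstract describes the two auxiliary loss terms only at a high level. The sketch below is one plausible way such terms could be attached to a standard decoding loss; it is not the authors' implementation. The use of an L1 penalty for the sparsity term, an MSE penalty for the teacher-mimicking term, the weighting factors `lambda_sparse` and `lambda_teacher`, and all tensor shapes are assumptions made purely for illustration.

```python
# Hypothetical sketch of the two auxiliary loss terms on node activations,
# assuming an L1 sparsity penalty and an MSE teacher-mimicking penalty.
import torch
import torch.nn.functional as F


def decoding_loss(student_logits, student_activations,
                  teacher_activations, targets,
                  lambda_sparse=1e-4, lambda_teacher=1e-2):
    """Bit-wise decoding loss plus two auxiliary terms on node activations.

    student_activations / teacher_activations: lists of tensors, one per
    decoding iteration, holding each decoder's node activations.
    """
    # Standard bit-wise decoding loss (binary cross-entropy on the logits).
    base = F.binary_cross_entropy_with_logits(student_logits, targets)

    # 1) Sparsity term: L1 penalty pushing node activations toward zero.
    sparse = sum(a.abs().mean() for a in student_activations)

    # 2) Teacher term: make the student's activations mimic those of a
    #    stronger teacher decoder (teacher is not updated, hence detach()).
    teacher = sum(F.mse_loss(s, t.detach())
                  for s, t in zip(student_activations, teacher_activations))

    return base + lambda_sparse * sparse + lambda_teacher * teacher


if __name__ == "__main__":
    # Toy shapes: batch of 8 codewords of length 63, 5 decoding iterations
    # with 120 node activations each (numbers are illustrative only).
    logits = torch.randn(8, 63)
    targets = torch.randint(0, 2, (8, 63)).float()
    student_acts = [torch.randn(8, 120, requires_grad=True) for _ in range(5)]
    teacher_acts = [torch.randn(8, 120) for _ in range(5)]

    loss = decoding_loss(logits, student_acts, teacher_acts, targets)
    loss.backward()
    print(float(loss))
```

Because both terms act only on activations produced during the ordinary decoding passes, a loss of this shape adds no extra run-time cost or parameters at inference, which is consistent with the abstract's claim of unchanged complexity and model size.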

Original language: English
Pages (from-to): 2527-2531
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 26
Issue number: 11
DOIs
State: Published - 1 Nov 2022

Keywords

  • Information theory
  • deep learning
  • error correcting codes
  • neural decoder
