Reinforcement Learning for Channel Coding: Learned Bit-Flipping Decoding

Published

Conference Paper

© 2019 IEEE. In this paper, we use reinforcement learning to find effective decoding strategies for binary linear codes. We start by reviewing several iterative decoding algorithms that involve a decision-making process at each step, including bit-flipping (BF) decoding, residual belief propagation, and anchor decoding. We then illustrate how such algorithms can be mapped to Markov decision processes, allowing for data-driven learning of optimal decision strategies, rather than basing decisions on heuristics or intuition. As a case study, we consider BF decoding for both the binary symmetric and additive white Gaussian noise channels. Our results show that learned BF decoders can offer a range of performance-complexity trade-offs for the considered Reed-Muller and BCH codes, and achieve near-optimal performance in some cases. We also demonstrate learning convergence speed-ups when biasing the learning process towards correct decoding decisions, as opposed to relying only on random exploration and past knowledge.
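
The mapping from iterative decoding to a Markov decision process described in the abstract can be made concrete with a small toy sketch. The following is an illustrative example, not the authors' implementation: it frames BF decoding over a binary symmetric channel as an MDP whose state is the syndrome of the current hard decisions and whose actions are bit-flip positions, and trains a tabular Q-learning agent. The (7,4) Hamming parity-check matrix, crossover probability, and all hyperparameters are assumptions chosen for brevity; the paper considers Reed-Muller and BCH codes.

```python
# Minimal sketch: BF decoding on a BSC framed as an MDP (illustrative only).
# State  = syndrome of the current hard decisions.
# Action = index of the bit to flip.
# Reward = 1 once the syndrome is zero (a codeword has been reached).
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative choice;
# the paper uses Reed-Muller and BCH codes).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
n = H.shape[1]

class BitFlipEnv:
    """BF decoding environment over a binary symmetric channel."""
    def __init__(self, p=0.05, max_steps=10):
        self.p, self.max_steps = p, max_steps

    def reset(self):
        # Transmit the all-zero codeword (sufficient for linear codes on the BSC).
        self.y = (np.random.rand(n) < self.p).astype(np.uint8)
        self.t = 0
        return self._syndrome()

    def _syndrome(self):
        return tuple((H @ self.y) % 2)

    def step(self, action):
        self.y[action] ^= 1                    # flip the chosen bit
        self.t += 1
        s = self._syndrome()
        done = (sum(s) == 0) or (self.t >= self.max_steps)
        reward = 1.0 if sum(s) == 0 else 0.0
        return s, reward, done

# Tabular Q-learning over syndrome states, as a stand-in for a learned BF decoder.
env, Q = BitFlipEnv(), {}
alpha, gamma, eps = 0.1, 0.95, 0.1            # illustrative hyperparameters
for episode in range(20000):
    s, done = env.reset(), False
    while not done:
        q = Q.setdefault(s, np.zeros(n))
        a = np.random.randint(n) if np.random.rand() < eps else int(q.argmax())
        s2, r, done = env.step(a)
        q2 = Q.setdefault(s2, np.zeros(n))
        q[a] += alpha * (r + gamma * (0.0 if done else q2.max()) - q[a])
        s = s2
```

For a short code the syndrome state space is small enough for a tabular policy; longer codes or the AWGN case considered in the paper would instead call for soft channel observations and function approximation, which this sketch does not attempt.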

Cited Authors

  • Carpi, F; Häger, C; Martalò, M; Raheli, R; Pfister, HD

Published Date

  • September 1, 2019

Published In

  • 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019

Start / End Page

  • 922 - 929

Digital Object Identifier (DOI)

  • 10.1109/ALLERTON.2019.8919799

Citation Source

  • Scopus