
While the predominant trend over the past several decades has been to increase error-correcting code lengths in order to approach the Shannon limit, applications that require short block lengths are emerging. As a result, decoding techniques that can achieve near-maximum-likelihood (near-ML) performance are gaining momentum. This overview paper surveys recent progress in this emerging field by reviewing the GRAND algorithm, linear programming decoding, machine-learning-aided decoding, and the recursive projection-aggregation decoding algorithm. For each decoding algorithm, both algorithmic aspects and hardware implementations are considered, and future research directions are outlined.
Linear programming decoding, Maximum-likelihood decoding, Machine-learning-aided decoding, GRAND, Recursive projection-aggregation decoding
