Publikationer från KTH: Bachelor thesis, 2017
ResearchGate Data: Thesis, 2017
Data sources: Datacite

Machine Intelligence in Decoding of Forward Error Correction Codes.

Author: Agrawal, Navneet

Abstract

A deep learning algorithm for improving the performance of Sum-Product Algorithm (SPA) based decoders is investigated. The proposed Neural Network Decoder (NND) [22] generalizes the SPA by assigning weights to the edges of the Tanner graph. We elucidate the particular design, training, and operation of the NND. We analyze the distribution of the edge weights of the trained NND and provide deeper insight into how it works. The training process of the NND learns the edge weights in such a way that the effects of artifacts in the Tanner graph (such as cycles or trapping sets) are mitigated, leading to a significant improvement in performance over the SPA.

We conduct an extensive analysis of the training hyper-parameters affecting the performance of the NND, and present hypotheses for determining their appropriate choices for different families and sizes of codes. Experimental results are used to verify the hypotheses and the rationale presented. Furthermore, we propose a new loss function that improves performance over the standard cross-entropy loss. We also investigate the limitations of the NND in terms of complexity and performance. Although the SPA-based design of the NND enables faster training and reduced complexity, the design constraints prevent the neural network from reaching its maximum potential. Our experiments show that the NND is unable to reach the Maximum Likelihood (ML) performance threshold for any plausible set of hyper-parameters. However, for short-length (n <= 128) High Density Parity Check (HDPC) codes such as Polar or BCH codes, the performance improvement over the SPA is significant.
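
The weighted message passing described in the abstract can be illustrated with a small sketch. The Python code below is not from the thesis; it is a minimal illustration, under stated assumptions, of weighted sum-product decoding in the spirit of the NND of [22]: with all edge weights fixed at 1 it reduces to plain SPA, whereas the NND would instead learn these weights by gradient descent on a cross-entropy style loss. The function name weighted_spa_decode, the parity-check matrix, and the LLR values are illustrative only.

import numpy as np

def weighted_spa_decode(H, llr, weights=None, iters=5):
    # Weighted sum-product decoding on the Tanner graph of H (a sketch, not the thesis code).
    # H       : (m, n) binary parity-check matrix
    # llr     : (n,) channel log-likelihood ratios
    # weights : (m, n) per-edge weights; all-ones (= plain SPA) if None.
    #           In the NND these weights would be trained rather than fixed.
    m, n = H.shape
    if weights is None:
        weights = np.ones((m, n))
    v2c = H * llr  # variable-to-check messages, initialised with the channel LLRs
    c2v = np.zeros((m, n))
    for _ in range(iters):
        # check-to-variable update (tanh rule), restricted to edges of H
        t = np.tanh(np.clip(v2c, -20.0, 20.0) / 2.0)
        t = np.where(H == 1, t, 1.0)                     # neutral element off the graph
        t_safe = np.where(np.abs(t) < 1e-12, 1e-12, t)   # numerical guard for the division
        extrinsic = np.prod(t_safe, axis=1, keepdims=True) / t_safe
        c2v = 2.0 * np.arctanh(np.clip(extrinsic, -0.999999, 0.999999)) * H
        # variable-to-check update: channel LLR plus weighted incoming messages,
        # excluding the message arriving on the target edge (extrinsic principle)
        wc2v = weights * c2v
        total = llr + wc2v.sum(axis=0)
        v2c = (total - wc2v) * H
    total = llr + (weights * c2v).sum(axis=0)            # final marginal LLRs
    return (total < 0).astype(int), total                # hard decisions, soft output

# Tiny usage example on the (7,4) Hamming code with hypothetical received LLRs.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.5, -0.8, 3.1, 1.2, 0.9, 2.2, -1.5])
bits, soft = weighted_spa_decode(H, llr)
print("decoded bits:", bits)

Unrolling a fixed number of such iterations, with one trainable weight per edge and per iteration, yields the feed-forward network that the thesis trains; the sketch above keeps the weights static only to stay self-contained.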

Country: Sweden

Keywords: Electrical Engineering, Electronic Engineering, Information Engineering
