Neural Networks (NNs), although successfully applied to several Artificial Intelligence tasks, are often unnecessarily over-parametrized. In edge/fog computing, this can make their training prohibitive on resource-constrained devices, contrasting with the current trend of decentralizing intelligence from remote data centres to local constrained devices. We therefore investigate the problem of training effective NN models on constrained devices with a fixed, potentially small, memory budget. We target techniques that are both resource-efficient and performance-effective while enabling significant network compression. Our Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training, identifying neurons that contribute only marginally to the model accuracy. DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training. The freed memory is reused by a dynamic batch sizing approach that counterbalances the accuracy degradation caused by the hard pruning strategy, improving its convergence and effectiveness. We assess the performance of DynHP through reproducible experiments on three public datasets, comparing it against reference competitors. Results show that DynHP compresses a NN up to 10 times without significant performance drops (at most 3.5% additional error w.r.t. the competitors), while reducing the training memory occupancy by up to 80%.
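The abstract combines two mechanisms: hard pruning of low-contribution neurons during training, and dynamic batch sizing that reinvests the memory freed by pruning. The paper's exact algorithm is not reproduced here; the following is a minimal PyTorch sketch of the idea, in which the `PrunableMLP` and `hard_prune` helpers, the incoming-weight L2-norm importance criterion, the 10% per-epoch pruning fraction, and the inverse-proportional batch-growth rule are all illustrative assumptions rather than the authors' choices.

```python
import torch
import torch.nn as nn

class PrunableMLP(nn.Module):
    """Small MLP whose hidden neurons can be hard-pruned via a binary mask."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # 1 = neuron kept, 0 = neuron permanently removed (hard pruning)
        self.register_buffer("mask", torch.ones(hidden))

    def forward(self, x):
        h = torch.relu(self.fc1(x)) * self.mask  # pruned neurons output 0
        return self.fc2(h)

def hard_prune(model, fraction=0.1):
    """Permanently remove `fraction` of the still-active hidden neurons with
    the smallest incoming-weight L2 norm (an assumed importance proxy)."""
    with torch.no_grad():
        importance = model.fc1.weight.norm(dim=1) * model.mask
        active = int(model.mask.sum().item())
        k = max(1, int(fraction * active))
        # Push already-pruned neurons to the end of the ranking, then take
        # the k least-important active ones.
        idx = torch.argsort(importance + (1.0 - model.mask) * 1e9)[:k]
        model.mask[idx] = 0.0
        model.fc1.weight[idx] = 0.0
        model.fc1.bias[idx] = 0.0
        model.fc2.weight[:, idx] = 0.0
    return int(model.mask.sum().item())

model = PrunableMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(4096, 784), torch.randint(0, 10, (4096,))  # toy data

base_batch, total = 64, model.mask.numel()
for epoch in range(5):
    active = int(model.mask.sum().item())
    # Dynamic batch sizing: reinvest memory freed by pruning into a
    # proportionally larger batch (an assumed growth rule).
    batch = int(base_batch * total / max(active, 1))
    for i in range(0, len(x), batch):
        xb, yb = x[i:i + batch], y[i:i + batch]
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        with torch.no_grad():  # keep pruned weights at exactly zero
            model.fc1.weight[model.mask == 0] = 0.0
            model.fc2.weight[:, model.mask == 0] = 0.0
    active = hard_prune(model, fraction=0.1)
    print(f"epoch {epoch}: active neurons = {active}, batch size = {batch}")
```

Note that masking keeps the layer shapes fixed for simplicity; realizing actual memory savings on a constrained device would require physically shrinking the layers (and any optimizer state) after each pruning step.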
This work is partially supported by five projects: HumanE AI Network (EU H2020 HumanAI-Net, GA #952026), SoBigData++ (EU H2020 SoBigData++, GA #871042), OK-INSAID (MIUR PON ARS01 00917), H2020 MARVEL (GA #957337), and SAI: Social Explainable AI (EC CHIST-ERA-19-XAI-010).
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG). Keywords: artificial neural networks, compression, pruning, resource-constrained devices.
| Indicator | Description | Value |
|-----------|-------------|-------|
| citations | An alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 5 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
| views | | 11 |
| downloads | | 16 |

Views and downloads provided by UsageCounts.