Neural network pruning allows for impressive theoretical reductions in model size and complexity. However, it usually offers few practical benefits, as it is most often limited to zeroing out weights rather than actually removing the pruned parameters. This prevents models from realizing the actual advantages provided by sparsification methods. We propose Simplify, a PyTorch-compatible library for achieving effective model simplification. Simplified models benefit from both a smaller memory footprint and a lower inference time, making their deployment to embedded or mobile devices much more efficient.
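The distinction the abstract draws can be made concrete with a minimal plain-PyTorch sketch (this is an illustration of the general idea, not the Simplify library's API; the layer sizes are illustrative assumptions): zeroing out filters leaves the parameter count and compute unchanged, whereas building a smaller layer that keeps only the surviving filters actually shrinks the model while preserving its outputs on the kept channels.

```python
import torch
import torch.nn as nn

# A convolution with 8 output filters (illustrative size, not from the paper).
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, bias=False)

# "Pruning" by zeroing: filters 4..7 are set to zero, but all 8 stay in memory.
with torch.no_grad():
    conv.weight[4:] = 0.0
print(sum(p.numel() for p in conv.parameters()))  # 216 parameters, unchanged

# Simplification: build a smaller layer that keeps only the non-zero filters.
keep = [i for i in range(conv.out_channels) if conv.weight[i].abs().sum() > 0]
small = nn.Conv2d(3, len(keep), kernel_size=3, bias=False)
with torch.no_grad():
    small.weight.copy_(conv.weight[keep])
print(sum(p.numel() for p in small.parameters()))  # 108 parameters, halved

# The kept channels produce identical outputs in the smaller layer.
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(conv(x)[:, keep], small(x))
```

In a full network this channel removal must also be propagated to downstream layers (their input channels shrink accordingly), which is the kind of bookkeeping a simplification library automates.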
Optimization, QA76.75-76.765, Deep Learning, PyTorch, Deep learning, Computer software, Simplification, Pruning