Machine learning algorithms are difficult to model on hardware because they require complex design systems that are not easily synthesizable. Over the years, researchers have therefore developed various state-of-the-art techniques, each of which has distinct advantages over the others. In this text, we compare the different techniques for hardware modelling of machine learning (ML) algorithms and their hardware-level performance. This text will be useful for any researcher or system designer who needs to evaluate the optimum techniques for ML hardware design and who can then extend them to further optimize system performance. Our evaluation is based on the three primary parameters of hardware design: area, energy, and delay. Any design technique that balances these three parameters can be termed optimum. This work also recommends improvements for some of the techniques, which can be taken up in further research.
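The area-energy-delay balance the abstract describes can be sketched as a simple ranking exercise. The following is a minimal, hypothetical illustration: the candidate designs, their parameter values, and the product-based figure of merit are all assumptions for demonstration, not results from the surveyed works.

```python
# Hypothetical sketch: rank candidate hardware designs by a composite
# area-energy-delay figure of merit. All names and numbers below are
# illustrative assumptions, not data from the survey.

def figure_of_merit(area_mm2, energy_nj, delay_ns):
    """Smaller is better: the product penalizes imbalance on any one axis."""
    return area_mm2 * energy_nj * delay_ns

# (area in mm^2, energy per inference in nJ, delay in ns) -- made-up values
designs = {
    "systolic_array": (2.0, 1.5, 0.8),
    "bit_serial":     (0.5, 2.5, 3.0),
    "lut_based":      (1.2, 1.8, 1.1),
}

ranked = sorted(designs.items(), key=lambda kv: figure_of_merit(*kv[1]))
for name, params in ranked:
    print(f"{name}: FoM = {figure_of_merit(*params):.2f}")
```

A multiplicative figure of merit is one common choice (analogous to the energy-delay product used in circuit design) because a design cannot compensate for a very poor value on one axis with marginal gains on another; a weighted sum would be an alternative when one parameter dominates the design constraints.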
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | | 4 |
| Downloads | | 10 |

Views and downloads provided by UsageCounts.