Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data movement and energy consumption become key bottlenecks in the design of computing systems, interest has grown significantly in unconventional approaches such as Near-Data Processing (NDP), particularly for machine learning and neural network (NN)-based accelerators. Emerging memory technologies, such as ReRAM and 3D-stacked memory, are promising for efficiently architecting NDP-based accelerators for NNs, because they can serve both as high-density, low-energy storage and as in/near-memory computation and search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend the adoption of NDP architectures in future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
FOS: Computer and information sciences, Computer engineering. Computer hardware, Computer Science - Machine Learning (cs.LG), Computer Science - Hardware Architecture (cs.AR), TK7885-7895, 004, UPC subject areas::Computer science::Computer architecture, Neural networks (Computer science), Machine learning, Deep neural networks, Near-data processing, Near-memory processing, Processing-in-memory, Conventional memory technology, Emerging memory technology, Hardware architecture
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator below. | 12 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
| Views | | 63 |
| Downloads | | 50 |

Views and downloads provided by UsageCounts.