
The sparse linear solver is an important component in many scientific computing applications. For large-scale sparse linear systems, general-purpose processors such as CPUs and GPUs face the challenges of high time complexity and massive data movement between processors and main memory. This work exploits the in-situ analog computing capability of RRAM and builds an RRAM-based accelerator for iterative linear solvers. We first propose a basic principle for mapping iterative solvers onto RRAM-based crossbar arrays; the proposed principle eliminates not only the iterations but also the convergence condition. Based on this principle, we propose a scalable architecture that can solve large-scale sparse matrices in O(1) time complexity. Compared with a massively parallel iterative solver on a GPU, our accelerator achieves 100× higher performance and 1000× lower energy consumption. If the solution obtained by our accelerator is used as the seed for further refinement on a GPU, about 35% of the solving time and energy consumption can be saved compared with a pure GPU solving process.
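To illustrate the hybrid flow described above, the following minimal sketch (not the paper's implementation) mimics a coarse, low-precision "analog" solution and then uses it as the seed for a conventional iterative refinement. The quantization model, the `analog_seed` helper, and all other names are illustrative assumptions standing in for the RRAM crossbar and the GPU refinement stage.

```python
# Minimal sketch of the hybrid solve: a low-precision "analog-style" seed
# solution is refined by a standard iterative solver (conjugate gradient).
# All names and the quantization model are illustrative, not the paper's design.
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import cg

def analog_seed(A, b, bits=6):
    # Crude stand-in for the RRAM crossbar: solve the system with matrix
    # entries quantized to a few bits, mimicking limited analog precision.
    A_dense = A.toarray()
    scale = np.abs(A_dense).max() / (2**bits - 1)
    A_q = np.round(A_dense / scale) * scale
    return np.linalg.solve(A_q, b)

# Build a well-conditioned sparse test system A x = b.
n = 200
A = sparse_random(n, n, density=0.05, format="csr")
A = A @ A.T + n * identity(n)            # symmetric positive definite
b = np.random.rand(n)

x0 = analog_seed(A, b)                   # coarse analog-style seed solution
x, info = cg(A, b, x0=x0, maxiter=1000)  # stand-in for GPU-side refinement
print("refined residual norm:", np.linalg.norm(A @ x - b))
```

Starting the refinement from the seed typically needs far fewer iterations than starting from zero, which is the intuition behind the reported 35% savings in solving time and energy.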
