
doi: 10.21236/ada406427
Abstract: Benchmarks can be useful in estimating the performance of a computer system when it is not possible or practical to test the new system with an actual workload. In the field of high performance computing, some common benchmarks are the various versions of Linpack, the various versions of the NAS benchmarks from the Numerical Aerospace Simulation Systems Division of NASA Ames Research Center, and the STREAMS benchmark, as well as older and less frequently referenced benchmarks such as the Livermore Loops. There are also those who recommend estimating performance based solely on the peak speed of the computer system. Unfortunately, the per-processor levels of performance measured using these benchmarks can vary by one to two orders of magnitude for the same system. Therefore, one has to ask: which benchmark(s) should we be looking at? This report attempts to answer that question by comparing the measured performance of a variety of real-world codes to the measured performance of the standard benchmarks when run on systems of interest to the Department of Defense (DOD) High Performance Computing Modernization Program.
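To make the contrast concrete, a memory-bandwidth benchmark such as STREAM exercises a very different bottleneck than a dense-linear-algebra benchmark such as Linpack, which is one reason the per-processor numbers diverge so widely. The sketch below is an illustrative C version of the STREAM "triad" kernel (a[i] = b[i] + q*c[i]); it is not the official STREAM source, and the array size and timing approach are assumptions chosen only to show the shape of the measurement.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative sketch of a STREAM-style triad measurement (not the
 * official benchmark). N is a placeholder; the real STREAM rules
 * require arrays much larger than the last-level cache. */
#define N (20L * 1000 * 1000)

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double q = 3.0;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t start = clock();
    for (long i = 0; i < N; i++)
        a[i] = b[i] + q * c[i];   /* triad: two loads and one store per iteration */
    double sec = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* Three arrays of 8-byte doubles stream through memory once each. */
    printf("Triad bandwidth: %.2f GB/s\n",
           3.0 * N * sizeof(double) / sec / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```

Because the triad performs only two floating-point operations per 24 bytes moved, a system's triad rate is governed by memory bandwidth rather than peak floating-point speed, whereas Linpack, with its cache-friendly matrix operations, can approach peak. Real application codes fall somewhere between these extremes, which motivates the comparison carried out in the report.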
