This dataset contains the validation data of PeASS. It shows that, of all performance changes documented by developers, 81% can be detected by measuring the performance of unit tests, provided the changed source code is covered by a unit test. If changes not covered by unit tests are also taken into account, PeASS still detects 55% of all changes. While this is far from full coverage, it shows that measuring the performance of unit tests can detect a large share of performance changes without manual effort. The creation of this dataset was funded by a PhD scholarship of the Hanns-Seidel-Stiftung. Resources of the computing centre of Universität Leipzig were used for the calculations.
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources. This is an alternative to the "influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| views | | 4 |
| downloads | | 1 |

Views provided by UsageCounts
Downloads provided by UsageCounts