In this article, we attempt to answer the question of how reproducible research software should be. We do so by defining four levels of reproducibility, suggesting criteria to help you decide which level your research software should be at, and recommending practices for reaching each level. The article is the result of a discussion session at the Software Sustainability Institute Fellows Online Selection Day 2021.
{"references": ["Wilson et al (2017). Good enough practices in scientific computing. doi: 10.1371/journal.pcbi.1005510", "Lee et al (2021). Barely sufficient practices in scientific computing. doi: 10.1016/j.patter.2021.100206", "McArthur (2019). Repeatability, Reproducibility, and Replicability: Tackling the 3R challenge in biointerface science and engineering. doi: 10.1116/1.5093621"]}
Keywords: software sustainability, research software, reproducibility
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources. This is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | | 60 |
| Downloads | | 40 |

Views and downloads provided by UsageCounts.