
Extraction, categorisation, and recommendation of research software quality indicators from seven canonical sources. This work began at the BioHackathon 2024 (BH24) as Project #5, where over 300 indicators from seven sources were extracted. In follow-on calls after BH24 we carried on refining the indicators and deciding which ones should be kept, maybe kept, or discarded; this was informed by duplicate indicators and by indicators that advocated a particular philosophy not universally recognised as necessary for quality research software. We also rated the difficulty of implementing each indicator, i.e. how much effort is required (easy, possible, hard) to ascertain whether software, a service, or project governance satisfies it. The results are made available so that others can use them as a starting point when deciding which software quality indicators to include or take into account in their own projects. There are current gaps around green software indicators, and the FAIR supergroup categorisations are not all present. This exercise did not define new indicators; it set out to categorise existing indicators from various canonical sources, both in the research software space and in the wider software engineering space. A slide about progress at BH24 is available, and further work has been undertaken as part of the ELIXIR Tools Platform WP3 (the Software Best Practices group, also open to those who attended BH24 Project #5), which is part of the ELIXIR Scientific Programme of Work 2024-2028. Some individuals who took part were funded by the EOSC EVERSE and ELIXIR STEERS projects. This is currently a work in progress, so not all indicators have been categorised and some definitions may be paraphrased rather than verbatim.
biohackathon, software, quality, software quality indicators, software quality, elixir, indicators
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | These citations are derived from selected sources. This is an alternative to the "influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| popularity | This indicator reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| influence | This indicator reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | This indicator reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
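The keep/maybe-keep/discard decisions and the easy/possible/hard difficulty ratings described above suggest a simple data model for recording each extracted indicator. The sketch below is purely illustrative: the field names, enum values, and the example indicator are assumptions, not part of the project's actual data format.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    """Outcome of the refinement calls (keep / maybe keep / discard)."""
    KEEP = "keep"
    MAYBE = "maybe keep"
    DISCARD = "discard"


class Difficulty(Enum):
    """Effort needed to check whether software satisfies the indicator."""
    EASY = "easy"
    POSSIBLE = "possible"
    HARD = "hard"


@dataclass
class Indicator:
    """One quality indicator extracted from a canonical source (illustrative)."""
    name: str
    source: str        # which of the seven canonical sources it came from
    definition: str    # may be paraphrased rather than verbatim
    decision: Decision
    difficulty: Difficulty


# Hypothetical example entry; not taken from the real indicator list.
indicators = [
    Indicator(
        name="licence present",
        source="ExampleSource",
        definition="The repository includes a licence file.",
        decision=Decision.KEEP,
        difficulty=Difficulty.EASY,
    ),
]

# Filter down to the indicators the group decided to keep.
kept = [i for i in indicators if i.decision is Decision.KEEP]
```

A structure like this makes it straightforward to filter by decision or difficulty when deciding which indicators to adopt for a given project.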
