
In 1974, Richard Feynman coined the famous phrase “Cargo Cult Science” for research that appears scientific but has neither scholarly contribution nor impact. [1] The current hype around “Research Data Management” is similar in many ways: if only we all properly filled out data management plans, FAIRly published all our data irrespective of their quality, used electronic lab notebooks and all the other fancy new promising tools, and employed enough data stewards and data architects to relieve us of the burden of documenting what we actually did, science could not help but flourish and golden times would dawn. We could not be more wrong. Data is not insight, but at best its prerequisite. Similarly, tools are not solutions. If we really want to advance science, we had better understand that research can be without scholarly contribution, and that data, let alone sharing data, is a highly non-trivial concept resting on many implicit assumptions that are often not fulfilled. [2] Here, we present a series of strategies and tools that, when competently used, help us enhance the quality of our research and eventually contribute to science and scholarship. The journey starts with tools for collecting all relevant information during data acquisition (i.e., data provenance) [3] and electronic lab notebooks. [4] A framework for scientific data analysis that provides a gap-less and complete protocol of each step and relieves the user from actually programming [5] is a huge step forward. Dedicated packages for different spectroscopic methods are available [6, 7] and in active development. [8, 9] On top comes a larger (local) infrastructure consisting of persistent and locally unique identifiers (PIDs, UIDs), a repository for “warm” research data, lab management, and a knowledge base.
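The core idea of such a gap-less analysis protocol can be illustrated with a minimal sketch: every processing step applied to a dataset is automatically recorded together with its parameters and a timestamp, so the history of the dataset itself documents what was done. The class and method names below are illustrative only, not the actual API of ASpecD [5] or any of the derived packages.

```python
import datetime


class Dataset:
    """Minimal dataset that records every processing step it undergoes."""

    def __init__(self, data):
        self.data = data
        self.history = []  # gap-less protocol of all processing steps

    def process(self, step):
        self.data = step.apply(self.data)
        # Record what was done, with which parameters, and when.
        self.history.append({
            "step": type(step).__name__,
            "parameters": vars(step),
            "timestamp": datetime.datetime.now().isoformat(),
        })


class SubtractBaseline:
    """Toy processing step: subtract a constant baseline."""

    def __init__(self, offset=0.0):
        self.offset = offset

    def apply(self, data):
        return [value - self.offset for value in data]


dataset = Dataset([1.2, 1.3, 1.1])
dataset.process(SubtractBaseline(offset=1.0))
for entry in dataset.history:
    print(entry["step"], entry["parameters"])
```

Because the full history travels with the dataset, every result can be traced back to the raw data, and in a full-fledged framework the recorded protocol can even be replayed to reproduce the analysis.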
[10] All these strategies and tools focus on the individual scientists, as only they can potentially ensure the urgently required quality of data and results that scientific insight rests upon. Eventually, we need to teach [11] the students early on what science is all about and why properly handling research data is a prerequisite for scholarly contribution, rather than assuming that they have “caught on by osmosis”. [1] “At stake is the future of scholarship.” [2]

References

[1] R. P. Feynman, Cargo cult science, Eng. Sci. 1974, 37(7), 10.
[2] C. L. Borgman, Big Data, Little Data, No Data, MIT Press, Cambridge, MA, 2015.
[3] B. Paulus, T. Biskup, Towards more reproducible and FAIRer research data: documenting provenance during data acquisition using the Infofile format, Digit. Discov. 2023, 2, 234.
[4] M. Schröder, T. Biskup, LabInform ELN: A lightweight and flexible electronic laboratory notebook for academic research based on the open-source software DokuWiki, ChemRxiv 2023, doi:10.26434/chemrxiv-2023-2tvct.
[5] J. Popp, T. Biskup, ASpecD: A modular framework for the analysis of spectroscopic data focussing on reproducibility and good scientific practice, Chem. Meth. 2022, 2, e202100097.
[6] M. Schröder, T. Biskup, cwepr – a Python package for analysing cw-EPR data focussing on reproducibility and simple usage, J. Magn. Reson. 2022, 335, 107140.
[7] J. Popp, M. Schröder, T. Biskup, trepr Python package, doi:10.5281/zenodo.4897112.
[8] M. Schröder, NMRAspecds Python package, doi:10.5281/zenodo.13293054.
[9] T. Biskup, FitPy Python package, doi:10.5281/zenodo.5920380.
[10] T. Biskup, LabInform: A modular laboratory information system built from open source components, ChemRxiv 2022, doi:10.26434/chemrxiv-2022-vz360.
[11] https://www.till-biskup.de/de/lehre/forschungsdatenmanagement/
