193 Research products, page 1 of 20
- Other research product . Other ORP type . InteractiveResource . 2022 . Open Access . English. Authors: Philip Verhagen; Bjørn P. Bartholdy. Publisher: Zenodo. Country: Netherlands
This is part 4 of the Rchon statistics course. It continues the basics of statistical testing in R. In this tutorial, we will treat the following statistical testing methods: the Mann-Whitney test, the Kruskal-Wallis test and the Kolmogorov-Smirnov test. Follow the instructions in Instructions Tutorial 4.pdf to start the tutorial. This course was originally created for ARCHON Research School of Archaeology by Philip Verhagen (Vrije Universiteit Amsterdam) and Bjørn P. Bartholdy (University of Leiden), and consists of an instruction, a tutorial, a test and two data files. All content is CC BY-NC-SA: it can be freely distributed and modified under the condition of proper attribution and non-commercial use. How to cite: Verhagen, P. & B.P. Bartholdy, 2022. "Rchon statistics course, part 4". Amsterdam, ARCHON Research School of Archaeology. https://doi.org/10.5281/zenodo.7458108
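The three nonparametric tests covered in this tutorial can be sketched as follows. The tutorial itself uses R (where the equivalents are wilcox.test(), kruskal.test() and ks.test()); this illustration uses Python's scipy.stats, and the sample data are invented.

```python
# Sketch of the three nonparametric tests named above, using scipy.stats.
# The group values below are hypothetical measurements for illustration only.
from scipy import stats

site_a = [12.1, 14.3, 11.8, 15.2, 13.9, 12.7]
site_b = [15.8, 16.4, 14.9, 17.1, 15.5, 16.0]
site_c = [13.2, 12.9, 14.1, 13.5, 12.4, 13.8]

# Mann-Whitney U: do two independent samples come from the same distribution?
u, p_u = stats.mannwhitneyu(site_a, site_b)

# Kruskal-Wallis: the extension of the same idea to three or more groups.
h, p_h = stats.kruskal(site_a, site_b, site_c)

# Two-sample Kolmogorov-Smirnov: compares the full empirical distributions.
d, p_d = stats.ks_2samp(site_a, site_b)

print(p_u, p_h, p_d)
```

All three are rank- or distribution-based, so they need no normality assumption, which is why they are grouped together in this part of the course.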
- Other research product . Other ORP type . InteractiveResource . 2022 . Open Access . English. Authors: Philip Verhagen; Bjørn P. Bartholdy. Publisher: ARCHON Research School of Archaeology. Country: Netherlands
This is part 3 of the Rchon statistics course. It continues the basics of statistical testing in R. In this tutorial, we will treat the following statistical testing methods: the chi-square test and Fisher's exact test. Follow the instructions in Instructions Tutorial 3.pdf to start the tutorial. This course was originally created for ARCHON Research School of Archaeology by Philip Verhagen (Vrije Universiteit Amsterdam) and Bjørn P. Bartholdy (University of Leiden), and consists of an instruction, a tutorial, a test and two data files. All content is CC BY-NC-SA: it can be freely distributed and modified under the condition of proper attribution and non-commercial use. How to cite: Verhagen, P. & B.P. Bartholdy, 2022. "Rchon statistics course, part 3". Amsterdam, ARCHON Research School of Archaeology. https://doi.org/10.5281/zenodo.7457698
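The two tests covered in this part both operate on contingency tables of counts. The tutorial itself uses R (chisq.test() and fisher.test()); this sketch uses Python's scipy.stats, and the 2x2 table is invented for illustration.

```python
# Sketch of the two contingency-table tests named above, using scipy.stats.
from scipy import stats

# Hypothetical contingency table: pottery type (rows) by site (columns).
table = [[18, 7],
         [6, 19]]

# Chi-square test of independence. correction=False disables Yates'
# continuity correction, matching R's chisq.test(..., correct = FALSE).
chi2, p_chi, dof, expected = stats.chi2_contingency(table, correction=False)

# Fisher's exact test: preferred when expected cell counts are small,
# since it does not rely on the chi-square approximation.
odds, p_fisher = stats.fisher_exact(table)

print(p_chi, p_fisher)
```

For a 2x2 table the two tests answer the same question; Fisher's exact test is the safer choice when any expected count falls below about 5.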
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: Hollander, Hella; Wright, Holly; Ronzino, Paola; Massara, Flavia; Doorn, P.K.; Flohr, Pascal. Publisher: ARIADNEplus. Country: Netherlands
This final ARIADNEplus project report on Policies and Good Practices for FAIR Archaeological Data Management describes how focused and dedicated support has established structured policies and strategies for creating high-quality archaeological data. A standardised online tool offers archaeologists a Data Management Plan that helps them conduct their research according to standard quality criteria. Preparing a Data Management Plan becomes much faster, because the Domain Protocol for Archaeological Data Management supplies pre-formulated statements as standard replies and a motivation is only required when deviating from them. A related Guide for Archaeological Data Management Planning helps users find good practices in archaeological data management.
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: Lerchi, A.; Krap, T.; Eppenberger, P.; Pedergnana, A. Country: Netherlands
Residue analysis is an established area of expertise focused on detecting traces of substances found on the surface of objects. It is routinely employed in forensic casework and increasingly incorporated into archaeological investigations. In archaeology, sampling and data interpretation have sometimes lacked strict standards, resulting in incorrect residue classifications. In particular, molecular signals of salts of fatty acids identified by FTIR have at times been interpreted as evidence for adipocere, a substance formed as a consequence of the degradation of adipose tissues. This article reviews and discusses the possibilities and limitations of the analytical protocols used in residue analysis in archaeology. The focus is on three main points: (1) reviewing the decomposition processes and the chemical components of adipocere; (2) highlighting potential misidentifications of adipocere while, at the same time, addressing issues related to residue preservation and contamination; and (3) proposing new research avenues to identify adipocere on archaeological objects. (c) 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: Rulkens, C.C.S.; Van Eyghen, Hans; Pear, Rachel; Peels, R.; Bouter, Lex; Stols-Witlox, Maartje; van den Brink, Gijsbert; Meloni, Sabrina; Buijsen, Edwin; van Woudenberg, René. Publisher: Center for Open Science. Country: Netherlands
At the Vrije Universiteit Amsterdam, we have set out to explore the strengths and limitations of replication studies in the humanities in practice. We are doing so by replicating two original studies: one in the field of art history, the other in the field of history of science and religion. In this blog, we outline the design, purposes, and aims of these projects and explore some of the challenges.
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: van Berckel Smit, Floris; Coussement, Alexia. Publisher: ECHER Blog. Countries: Netherlands, Belgium
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: Parrini, I.; Luca, F.; Rao, C.M.; Parise, G.; Micali, L.R.; Musumeci, G.; La Meir, M.; Colivicchi, F.; Gulizia, M.M.; Gelsomino, S. Country: Netherlands
Background and aim. Cancer and atrial fibrillation (AF) may be associated, and anticoagulation, either with vitamin K antagonists (VKAs) or direct oral anticoagulants (DOACs), is necessary to prevent thromboembolic events while limiting the risk of bleeding. The aim of this meta-analysis was to compare the safety and efficacy of VKAs and DOACs in oncologic patients with AF. Methods. A meta-analysis was conducted comparing VKAs to DOACs in terms of thromboembolic events and bleeding. The log incidence rate ratio (IRR) and 95% confidence interval were used as index statistics. Higgins' I² test was adopted to assess statistical inconsistency arising from interstudy variation, with values ranging from 0 to 100%: I² values below 40% indicate very low heterogeneity among the studies, values between 40% and 75% moderate heterogeneity, and values above 75% severe heterogeneity. A meta-regression was conducted to investigate differences in efficacy and safety between four different DOACs, and a sub-analysis was conducted on patients with active cancer only. Results. A total of eight papers were included. The log IRR for thromboembolic events between the two groups was -0.69 (p = 0.9). The log IRR was -0.38 (p = 0.008) for ischemic stroke, -0.43 (p = 0.02) for myocardial infarction, -0.39 (p = 0.45) for arterial embolism, and -1.04 (p = 0.003) for venous thromboembolism. The log IRR for bleeding events was -0.43 (p < 0.005), and the meta-regression revealed no statistical difference between DOACs (p = 0.7). The log IRR of hemorrhagic stroke, major bleeding, and clinically relevant non-major bleeding between the VKA and DOAC groups was -0.51 (p < 0.0001), -0.45 (p = 0.03), and 0.0045 (p = 0.97), respectively. Similar results were found in active-cancer patients for all endpoints except clinically relevant non-major bleeding. Conclusions. DOACs showed better efficacy and safety outcomes than VKAs. No difference was found between types of DOACs.
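The I² heterogeneity bands used in this abstract follow the standard definition I² = max(0, (Q - df) / Q) x 100, where Q is Cochran's Q across k studies and df = k - 1. A minimal sketch (the function names and example values are illustrative, not from the paper):

```python
# Hypothetical sketch of the I² heterogeneity statistic and the bands
# quoted in the abstract (<40% low, 40-75% moderate, >75% severe).
def i_squared(q: float, k: int) -> float:
    """Percentage of total variation due to between-study heterogeneity."""
    df = k - 1
    if q <= df:
        return 0.0  # I² is truncated at zero by convention
    return (q - df) / q * 100.0

def heterogeneity_band(i2: float) -> str:
    if i2 < 40:
        return "low"
    if i2 <= 75:
        return "moderate"
    return "severe"

# Example: Q = 30.0 over k = 8 studies gives I² ≈ 76.7%.
print(heterogeneity_band(i_squared(30.0, 8)))  # prints "severe"
```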
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: van Zundert, Joris J. Country: Netherlands
Code and algorithms are increasingly applied as techniques in textual scholarship, giving rise to new interactions between software engineers and textual scholars. This book argues that much of this process, and its effects on textual scholarship, is still poorly understood and goes unchecked by the otherwise normal processes of quality control in scholarship, such as peer review. The text provides case studies in which some of these interactions become more apparent, as well as the academic challenges and problems that they introduce. The book demonstrates that the space between code creation and conventional scholarship offers many affordances to textual scholarship that until now have remained unexplored. The author argues that it is an intellectual obligation of programmers and textual scholars to examine the properties of digital text and how its existence changes and challenges textual scholarship.
- Other research product . Other ORP type . 2022 . Closed Access . English
From an Ancient Egyptian plague to the Black Death and the Spanish flu, epidemics have often spurred societal transformations. Understanding why can help us create a better world after COVID-19.
- Other research product . Other ORP type . 2022 . Open Access . English. Authors: Ordelman, Roeland; Sanders, Willemien; Zijdeman, Richard; Klein, Rana; Noordegraaf, Julia; van Gorp, Jasmijn; Wigham, Mari; Windhouwer, Menzo; LS Performance, Media and the City; ICON - guest publications; LS Taal en cultuurstudies; ICON - Media and Performance Studies. Country: Netherlands
Online stories, from blog posts to journalistic articles to scientific publications, are commonly illustrated with media (e.g. images, audio clips) or statistical summaries (e.g. tables and graphs). Such “illustrations” are the result of a process of acquiring, parsing, filtering, mining, representing, refining and interacting with data [3]. Unfortunately, such processes are typically taken for granted and seldom mentioned in the story itself. Although a wide variety of interactive data visualisation techniques has recently been developed (see e.g. [6]), in many cases the illustrations in such publications are static; this prevents different audiences from engaging with the data and analyses as they desire. In this paper, we share our experiences with the concept of “data stories”, which tackles both issues, enhancing opportunities for outreach, reporting on scientific inquiry, and FAIR data representation [9]. In journalism, data stories are becoming widely accepted as the output of a process that is in many respects similar to that of a computational scholar: gaining insights by analyzing data sets using (semi-)automated methods and presenting these insights using (interactive) visualizations and other textual outputs based on data [4] [5] [6] [7]. In the context of scientific output, data stories can be regarded as digital “publications enriched with or linking to related research results, such as research data, workflows, software, and possibly connections among them” [1]. However, as infrastructure for (peer-reviewed) enhanced publications is in an early stage of development (see e.g. [2]), scholarly data stories are currently often produced as blog posts discussing a relevant topic. These may be accompanied by illustrations not limited to a single graph or image but characterized by different forms of interactivity: readers can, for instance, change the perspective or zoom level of graphs, or cycle through images or audio clips.
Having experimented successfully with various types and uses of data stories in the CLARIAH project, we are working towards a more generic, stable and sustainable infrastructure to create, publish, and archive data stories. This includes providing environments for the reproduction of data stories and the verification of data via “close reading”. From an infrastructure perspective, this involves the provisioning of services for persistent storage of data (e.g. triple stores), data registration and search (registries), data publication (SPARQL endpoints, search APIs), data visualization, and (versioned) query creation. These services can be used by environments to develop data stories, whether or not they facilitate additional data analysis steps. For data stories that make use of data analysis, for example via Jupyter Notebooks [8], the infrastructure also needs to take computational requirements (load balancing) and restrictions (security) into account. Also, when data sets are restricted for copyright or privacy reasons, an authentication and authorization infrastructure (AAI) is required. The large and rich data sets in (European) heritage archives that are increasingly made interoperable using FAIR principles are eminently suited as fertile ground for data stories. We therefore hope to be able to present our experiences with data stories, share our strategy for a more generic solution and receive feedback on shared challenges.