Community Code Review in the Digital Humanities

Authors: Damerow, Julia; Koeser, Rebecca; Vogl, Malte; Carver, Jeffrey

Abstract

Code review is a widespread technique for improving software and reducing the number of defects. In a code review, a programmer other than the original author(s) examines the source code, asks questions, and suggests improvements. Beyond identifying and eliminating errors, code review can improve overall quality by making the source code more readable and maintainable. Code reviews can be conducted in multiple ways: the whole code base might be reviewed, or only a part of it. They can also focus on different aspects: a reviewer might validate that the code works as expected, or evaluate its readability and maintainability. One thing all types of code review have in common, however, is that they require another developer.

According to a recent survey of people developing software for digital humanities projects (unpublished at the time of writing), about 50% of respondents work in teams of one or two people. This mirrors the RSE International Survey 2022, in which about 55% of respondents reported working in teams of one or two people (Hettrick et al. 2022). Teams this small make it virtually impossible to implement a conventional code review process. Indeed, approximately 46% of respondents to the digital humanities developer survey report that their code is never or only occasionally reviewed, and the main reason given is a lack of peers.

To address this problem, the Code Review Working Group of DHTech has implemented a community code review process in which volunteers from organizations worldwide review code submitted by digital humanities projects. Although reviews were initially conducted asynchronously, the process was refined by adding virtual meetings at both the beginning and end of the review period to promote clearer communication and a greater sense of engagement. Each code review request is assigned two reviewers whose experience matches the submitted code, and a distinct facilitator role has been added to the process to recognize the work it takes to coordinate a review. Reviews are conducted via GitHub pull requests, and completed reviews are listed on the working group's website, crediting code submitters, reviewers, and facilitators. In the first virtual meeting, reviewers and authors also agree on the form of recognition the reviewers will receive for their work. To date, the group has facilitated nine code reviews.

The proposed poster will describe the history, process, and future work of the DHTech Code Review Working Group, as well as the challenges encountered and open questions.

References

Hettrick, Simon, Radovan Bast, Alex Botzki, Jeff Carver, Ian Cosden, Steve Crouch, Florencia D'Andrea, Abhishek Dasgupta, William Godoy, Alejandra Gonzalez-Beltran, Ulf Hamster, Scott Henwood, Patric Holmvall, Stephan Janosch, Thibault Lestang, Nick May, Olivier Philippe, Johan Philips, Nooriyah Poonawala-Lohani, Paul Richmond, Manodeep Sinha, Florian Thiery, Ben van Werkhoven, Claire Wyatt, and Qian Zhang (2022). "RSE Survey 2022", pre-final release for 2022 results (version 2022-v0.9.0). Accessed July 9, 2025. https://softwaresaved.github.io/international-survey-2022/.
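Because the working group's reviews run through GitHub pull requests, the step of assigning two matched reviewers to a review request can be carried out with standard GitHub tooling. Below is a minimal sketch using the GitHub REST API's requested-reviewers endpoint from Python; the repository name, pull request number, and reviewer logins are hypothetical placeholders, not the working group's actual setup.

# Hedged sketch: requesting reviews from two assigned reviewers on a
# review-request pull request via the GitHub REST API.
# REPO, PR_NUMBER, and REVIEWERS are hypothetical placeholders.
import os
import requests

GITHUB_API = "https://api.github.com"
REPO = "dhtech-community/code-review"          # hypothetical repository
PR_NUMBER = 42                                 # hypothetical pull request
REVIEWERS = ["reviewer-one", "reviewer-two"]   # two matched reviewers

def request_reviewers(token: str) -> None:
    """Ask GitHub to request reviews from the two assigned reviewers."""
    url = f"{GITHUB_API}/repos/{REPO}/pulls/{PR_NUMBER}/requested_reviewers"
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        json={"reviewers": REVIEWERS},
    )
    resp.raise_for_status()
    print(f"Review requested from: {', '.join(REVIEWERS)}")

if __name__ == "__main__":
    # Expects a personal access token with repo scope in GITHUB_TOKEN.
    request_reviewers(os.environ["GITHUB_TOKEN"])

In practice the same assignment can be done through the pull request page or the gh command-line client; the API route is shown here only because it makes the two-reviewer assignment step of the process explicit.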
