54 Research products, page 1 of 6

  • Publications
  • 2012-2021
  • Article
  • English
  • DARIAH EU
  • Digital Humanities and Cultural Heritage

Sorted by: Date (most recent), 10 per page
  • Open Access English
    Authors: 
    Sander Münster; Ronja Utescher; Selda Ulutas Aydogan;
    Publisher: Springer Singapore

Abstract: In research and policies, the identification of trends as well as emerging topics and topics in decline is an important source of information for both academic and innovation management. Since at present policy analysis mostly employs qualitative research methods, the following article presents and assesses different approaches – trend analysis based on questionnaires, quantitative bibliometric surveys, the use of computer-linguistic approaches and machine learning, and qualitative investigations. Against this backdrop, this article examines digital applications in cultural heritage and, in particular, built heritage via various investigative frameworks to identify topics of relevance and trendlines, mainly for European Union (EU)-based research and policies. Furthermore, this article exemplifies and assesses the specific opportunities and limitations of the different methodical approaches against the backdrop of data-driven vs. data-guided analytical frameworks. As its major finding, our study shows that both research and policies related to digital applications for cultural heritage are mainly driven by the availability of new technologies. While policies focus on meta-topics such as digitisation, openness or automation, research descriptors are more granular. In general, data-driven approaches are promising for identifying topics and trendlines and even predicting the development of near-future trends. Conversely, qualitative approaches are able to answer “why” questions with regard to whether topics are emerging due to disruptive innovations or new terminologies, or whether topics are becoming obsolete because they are common knowledge, as is the case for the term “internet”.
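The bibliometric, data-driven side of this comparison can be illustrated with a minimal sketch: count a descriptor's yearly frequency in a corpus and read the sign of a least-squares slope as emerging vs. declining. The corpus and terms below are invented for illustration, not the article's data.

```python
from collections import defaultdict

def term_trend(docs, term):
    """Count occurrences of `term` per year and estimate a linear slope.

    docs: iterable of (year, text) pairs. A positive slope suggests an
    emerging topic, a negative one a topic in decline.
    """
    counts = defaultdict(int)
    for year, text in docs:
        counts[year] += text.lower().count(term.lower())
    years = sorted(counts)
    if len(years) < 2:
        return dict(counts), 0.0
    mean_x = sum(years) / len(years)
    mean_y = sum(counts[y] for y in years) / len(years)
    slope = (sum((y - mean_x) * (counts[y] - mean_y) for y in years)
             / sum((y - mean_x) ** 2 for y in years))
    return dict(counts), slope

# Toy corpus: "machine learning" rises while "internet" fades into common usage.
docs = [
    (2012, "internet archives and the internet"),
    (2015, "machine learning for heritage; internet access"),
    (2018, "machine learning and deep learning models"),
    (2021, "machine learning, machine learning everywhere"),
]
_, slope_ml = term_trend(docs, "machine learning")   # positive: emerging
_, slope_net = term_trend(docs, "internet")          # negative: in decline
```

As the abstract notes, the slope alone cannot say *why* a term declines; that remains a question for qualitative follow-up.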

  • Open Access English
    Authors: 
    Anna Foka; Osman Cenk Demiroglu; Elton Barker; Nasrin Mostofian; Kyriaki Konstantinidou; Brady Kiesling; Linda Talatas; Kajsa Palm;
    Publisher: Uppsala universitet, Institutionen för ABM
    Country: Sweden

Abstract: This progress article presents an overview of the potential and challenges of using contemporary Geographic Information System (GIS) applications for the visual rendering and analysis of textual spatial data. The case study is an ancient travelling narrative, Pausanias’s Description of Greece (Periegesis Hellados), which was written in the second century CE. First, we describe the process of converting the volumes to spatial data using a customized version of the open-source digital semantic annotation platform Recogito. Then the focus shifts to the implementation of the collected and organized spatial data in a number of GIS applications: namely Google Maps, DARIAH Geo-Browser, Gephi, Palladio and ArcGIS. Through empirical experimentation with spatial data and their implementation in different platforms, our paper charts the ways in which contemporary GIS applications may be implemented to cast new light on ancient understandings of identity, space, and place.
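The conversion step the authors describe, from annotated toponyms to data a GIS platform can ingest, can be sketched as emitting GeoJSON. The field names below are illustrative, not the actual Recogito export schema.

```python
import json

def annotations_to_geojson(annotations):
    """Turn (toponym, lat, lon, book) tuples - the kind of data a place-
    annotation export yields - into a GeoJSON FeatureCollection that
    tools such as Google Maps, Palladio or ArcGIS can load."""
    features = []
    for toponym, lat, lon, book in annotations:
        features.append({
            "type": "Feature",
            # GeoJSON orders coordinates as [longitude, latitude].
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"toponym": toponym, "book": book},
        })
    return {"type": "FeatureCollection", "features": features}

# Two places from the Periegesis, with modern coordinates, as a demo.
pausanias = [("Athens", 37.9838, 23.7275, 1), ("Corinth", 37.9061, 22.8781, 2)]
gj = annotations_to_geojson(pausanias)
geojson_text = json.dumps(gj)  # ready to save as a .geojson file
```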

  • Open Access English
    Authors: 
    Frank Uiterwaal; Franco Niccolucci; Sheena Bassett; Steven Krauwer; Hella Hollander; Femmy Admiraal; Laurent Romary; George Bruseker; Carlo Meghini; Jennifer Edmond; +1 more
Publisher: Edinburgh University Press for the Association for History and Computing, Edinburgh, United Kingdom
Countries: Italy, France, Netherlands
    Project: EC | PARTHENOS (654119)

This article has been accepted for publication by EUP in the IJHAC: International Journal of Humanities and Arts Computing (https://www.euppublishing.com/loi/ijhac); International audience; Since the first ESFRI roadmap in 2006, multiple humanities Research Infrastructures (RIs) have been set up all over the European continent, supporting archaeologists (ARIADNE), linguists (CLARIN-ERIC), Holocaust researchers (EHRI), cultural heritage specialists (IPERION-CH) and others. These examples only scratch the surface of the breadth of research communities that have benefited from close cooperation in the European Research Area. While each field developed discipline-specific services over the years, common themes can also be distinguished. All humanities RIs address, in varying degrees, questions around research data management, the use of standards and the desired interoperability of data across disciplinary boundaries. This article sheds light on how the cluster project PARTHENOS developed pooled services and shared solutions for its audience of humanities researchers, RI managers and policymakers. At a time when the convergence of existing infrastructure is becoming ever more important – with the construction of a European Open Science Cloud as an audacious, ultimate goal – we hope that our experiences inform future work and provide inspiration on how to exploit synergies in interdisciplinary, transnational, scientific cooperation.

  • Open Access English
    Authors: 
    S. Münster; K. Fritsche; H. Richards-Rissetto; Fabrizio Ivan Apollonio; B. Aehnlich; V. Schwartze; R. Smolarski;
    Publisher: Copernicus Publications
    Country: Italy

Abstract. Digital literacy and technology education have gained much relevance in humanities- and heritage-related disciplines in recent decades. Against this background, the purpose of this article is to examine the current state of educational programs in digital cultural heritage and related disciplines, primarily in Europe with supplemental information from the US. A further aim is to highlight core topics, challenges, and demands, and to show innovative formats and prospects.

  • Open Access English
    Authors: 
    Frank Lehrbass;
    Publisher: Zenodo

The European Markets Infrastructure Regulation (EMIR) allows a clearing obligation to be imposed on non-financial corporations, which formerly did not necessarily clear their business. We give 10 recommendations on how to cope with this obligation. These are motivated by a case study for which we consider a stylized German power producer. For this entity, we derive optimal levels of planned production and forward sales of power using microeconomic theory. Since this results in a significant short position in the German power forward market, we investigate the resulting variation margin call dynamics with a special interest in the ability to forecast worst-case upward price moves. We compare different models for the forward log-returns and their performance in 99% quantile forecasting. A GARCH model with Student-t distribution emerges as the most suitable model. This is used in the case study, which is inspired by data published by the power producer E.ON. Using recent material from the Basel Committee on Banking Supervision, we distill the reliable liquidity buffer from an allegedly rich liquidity position and show how suddenly it can be eroded. We point to feedback loops which make the challenges posed by the clearing obligation even more severe. We also offer some thoughts on how to cope with the crisis caused by the coronavirus pandemic.
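The quantile-forecasting machinery can be sketched in a few lines: a GARCH(1,1) variance recursion followed by a quantile of the conditional distribution. The parameter values are illustrative, not the fitted ones from the paper, and a normal quantile stands in here for the Student-t quantile the paper favours (which would fatten the tail).

```python
from statistics import NormalDist

def garch_step(r, sigma2, omega=1e-6, alpha=0.1, beta=0.85):
    """GARCH(1,1) variance update: sigma2_{t+1} = omega + alpha*r_t^2 + beta*sigma2_t.
    Parameter values here are illustrative defaults."""
    return omega + alpha * r * r + beta * sigma2

def worst_case_up_move(returns, sigma2_0, q=0.99, **params):
    """Forecast the q-quantile of the next log-return, i.e. the worst-case
    up move driving the variation margin call on a short forward position."""
    sigma2 = sigma2_0
    for r in returns:
        sigma2 = garch_step(r, sigma2, **params)
    # The paper fits a Student-t GARCH; NormalDist is a stdlib stand-in
    # (scipy.stats.t.ppf would supply the fat-tailed quantile).
    return NormalDist().inv_cdf(q) * sigma2 ** 0.5

# Demo: a small run of daily log-returns, then the 99% up-move forecast.
move = worst_case_up_move([0.01, -0.02, 0.015], sigma2_0=0.0004)
```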

  • Open Access English
    Authors: 
    Enrico Daga; Luigi Asprino; Rossana Damiano; Marilena Daquino; Belen Diaz Agudo; Aldo Gangemi; Tsvi Kuflik; Antonio Lieto; Mark Maguire; Anna Maria Marras; +5 more
    Country: Italy
    Project: EC | Polifonia (101004746), EC | SPICE (870811)

Digital archives of memory institutions are typically concerned with the cataloguing of artefacts of artistic, historical, and cultural value. Recently, new forms of citizen participation in cultural heritage have emerged, producing a wealth of material spanning from visitors’ experiential feedback on exhibitions and cultural artefacts to digitally mediated interactions like the ones happening on social media platforms. Citizen curation is proposed in the context of the European project SPICE (Social Participation, Cohesion, and Inclusion through Cultural Engagement) as a methodology for producing, collecting, interpreting, and archiving people’s responses to cultural objects, with the aim of favouring the emergence of multiple, sometimes conflicting, viewpoints and motivating users and memory institutions to reflect upon them. We argue that citizen curation urges us to rethink the nature of computational infrastructures supporting data management of memory institutions, bringing novel challenges that include issues of distribution, authoritativeness, interdependence, privacy, and rights management. To approach these issues, we survey relevant literature toward a distributed, Linked Data infrastructure, with a focus on identifying the roles and requirements involved in such an infrastructure. We show how existing research can contribute significantly to facing the challenges raised by citizen curation and discuss challenges and opportunities from the socio-technical standpoint.
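A minimal sketch of the Linked Data shape such an infrastructure implies: a citizen response modelled as a resource with its own triples, linked to the artefact it interprets. The URIs and predicate names below are hypothetical, not the SPICE vocabulary.

```python
def triples_for_response(artefact_uri, user_uri, text):
    """Express one citizen response as subject-predicate-object triples.
    'ex:' predicates are invented for illustration only."""
    response = f"{user_uri}/response/1"
    return [
        (response, "rdf:type", "ex:CitizenResponse"),
        (response, "ex:about", artefact_uri),   # link to the cultural object
        (response, "ex:author", user_uri),      # provenance / rights anchor
        (response, "ex:text", text),
    ]

def responses_about(triples, artefact_uri):
    """Collect the texts of all responses linked to a given artefact -
    the kind of query a museum would run to surface multiple viewpoints."""
    subjects = {s for s, p, o in triples if p == "ex:about" and o == artefact_uri}
    return [o for s, p, o in triples if s in subjects and p == "ex:text"]

store = triples_for_response("ex:MonaLisa", "ex:user42", "Her smile feels guarded.")
```

Because each response is a first-class resource, authorship, privacy and rights metadata can be attached per response rather than per collection, which is one way the distribution and rights-management challenges surface concretely.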

  • Open Access English
    Authors: 
    Stefan Buddenbohm; Maaike A. de Jong; Jean-Luc Minel; Yoann Moranville;
    Publisher: HAL CCSD
    Country: France
    Project: EC | HaS-DARIAH (675570)

Abstract: How can researchers identify suitable research data repositories for the deposit of their research data? Which repository best matches the technical and legal requirements of a specific research project? To this end, and with a humanities perspective, the Data Deposit Recommendation Service (DDRS) has been developed as a prototype. It not only serves as a functional service for selecting humanities research data repositories but is above all a technical demonstrator illustrating the potential of re-using an already existing infrastructure – in this case re3data – and the feasibility of setting up this kind of service for other research disciplines. The documentation and code of this project can be found in the DARIAH GitHub repository: https://dariah-eric.github.io/ddrs/.
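The core matching idea can be sketched as scoring repository records against a project's requirements. The metadata fields mimic, in heavily simplified form, the kind of information re3data exposes; the field names and weights are illustrative assumptions, not the DDRS algorithm.

```python
def rank_repositories(repos, req):
    """Rank repository records by how many project requirements they meet.
    Field names ('disciplines', 'licences', 'pid_systems') and the weights
    are illustrative, not re3data's actual schema."""
    def score(repo):
        s = 0
        if req.get("discipline") in repo.get("disciplines", ()):
            s += 2  # subject fit weighs most
        if req.get("licence") in repo.get("licences", ()):
            s += 1
        if req.get("pid") in repo.get("pid_systems", ()):
            s += 1
        return s
    return sorted(repos, key=score, reverse=True)

repos = [
    {"name": "GenericRepo", "disciplines": ["physics"], "licences": ["CC0"]},
    {"name": "HumRepo", "disciplines": ["history"], "licences": ["CC-BY"],
     "pid_systems": ["DOI"]},
]
ranked = rank_repositories(
    repos, {"discipline": "history", "licence": "CC-BY", "pid": "DOI"})
```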

  • Open Access English
    Authors: 
    Luca Foppiano; Laurent Romary;
    Publisher: HAL CCSD
    Country: France
    Project: EC | HIRMEOS (731102)

International audience; This paper presents an attempt to provide a generic named-entity recognition and disambiguation (NERD) module called entity-fishing as a stable online service, demonstrating the possible delivery of sustainable technical services within DARIAH, the European digital research infrastructure for the arts and humanities. Deployed as part of the national infrastructure Huma-Num in France, this service provides an efficient state-of-the-art implementation coupled with standardised interfaces allowing easy deployment in a variety of potential digital humanities contexts. The topics of accessibility and sustainability have long been discussed in the attempt to provide best practices in the widely fragmented ecosystem of the DARIAH research infrastructure. The history of entity-fishing has been mentioned as an example of good practice: initially developed in the context of the FP7 project CENDARI, it was well received by the user community and continued to be further developed within the H2020 HIRMEOS project, where several open access publishers have integrated the service into their collections of published monographs as a means to enhance retrieval and access. entity-fishing implements entity extraction as well as disambiguation against Wikipedia and Wikidata entries. The service is accessible through a REST API which allows easy and seamless integration, a language-independent and stable convention, and a widely used service-oriented architecture (SOA) design. Input and output data are carried over a query data model with a defined structure, providing flexibility to support the processing of partially annotated text or the repartition of text over several queries. The interface implements a variety of functionalities, like language recognition, sentence segmentation and modules for accessing and looking up concepts in the knowledge base.
The API itself integrates more advanced contextual parametrisation and ranked outputs, allowing for resilient integration in various possible use cases. The entity-fishing API has been used as a concrete use case to draft the experimental stand-off proposal which has been submitted for integration into the TEI guidelines. The representation is also compliant with the Web Annotation Data Model (WADM). In this paper we aim to describe the functionalities of the service as a reference contribution to the subject of web-based NERD services. In order to cover all aspects, the architecture is presented from two complementary viewpoints. First, we discuss the system from the data angle, detailing the workflow from input to output and unpacking each building block in the processing flow. Secondly, with a more academic approach, we provide a transversal schema of the different components, taking into account non-functional requirements in order to facilitate the discovery of bottlenecks, hotspots and weaknesses. The attempt here is to give a description of the tool and, at the same time, a technical software-engineering analysis which will help the reader understand our choices for the resources allocated in the infrastructure. Thanks to the work of millions of volunteers, Wikipedia has today reached a stability and completeness that leave no usable alternative on the market (considering also the licence aspect). The launch of Wikidata in 2012 has completed the picture with a complementary language-independent meta-model which is becoming the scientific reference for many disciplines. After providing an introduction to Wikipedia and Wikidata, we describe the knowledge base: the data organisation, the entity-fishing process to exploit it, and the way it is built from nightly dumps using an offline process. We conclude the paper by presenting our solution for the service deployment: how and which resources were allocated.
The service has been in production since Q3 2017 and has been extensively used by the H2020 HIRMEOS partners during the integration with their publishing platforms. We have strived to provide the best performance with the minimum amount of resources. Thanks to the Huma-Num infrastructure we retain the possibility to scale up as needed, for example to support an increase in demand or a temporary need to process a huge backlog of documents. In the long term, thanks to this sustainable environment, we plan to keep delivering the service far beyond the end of the H2020 HIRMEOS project.
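As a sketch of what calling such a NERD service looks like, the snippet below builds the query data model and posts it to a public entity-fishing instance. The endpoint URL and the `query` form-field name follow the public documentation at the time of writing and should be treated as assumptions; substitute your own deployment.

```python
import json
from urllib import parse, request

# Public demo endpoint - an assumption; point this at your own instance.
NERD_URL = "https://cloud.science-miner.com/nerd/service/disambiguate"

def build_query(text, lang="en"):
    """Minimal query data model: raw text plus a language hint.
    The full model also supports pre-segmented or partially annotated input."""
    return {"text": text, "language": {"lang": lang}}

def disambiguate(text, lang="en", url=NERD_URL):
    """POST the query as a 'query' form field; the JSON response carries
    the recognised entities with their Wikipedia/Wikidata identifiers."""
    body = parse.urlencode(
        {"query": json.dumps(build_query(text, lang))}).encode()
    with request.urlopen(request.Request(url, data=body), timeout=30) as resp:
        return json.loads(resp.read().decode())

query = build_query("Pausanias describes the sanctuary at Delphi.")
```

The same query structure also supports splitting a long document over several requests, which is how the "repartition of text over several queries" mentioned above works in practice.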

  • Publication · Article · Other literature type · Conference object · 2020
    Open Access English
    Authors: 
    Stefan Bornhofen; Marten Düring;
    Publisher: HAL CCSD
    Country: France
    Project: ANR | BLIZAAR (ANR-15-CE23-0002)

Abstract: The paper presents Intergraph, a graph-based visual analytics technical demonstrator for the exploration and study of content in historical document collections. The designed prototype is motivated by a practical use case on a corpus of circa 15,000 digitized resources about European integration since 1945. The corpus allowed generating a dynamic multilayer network which represents different kinds of named entities appearing and co-appearing in the collections. To our knowledge, Intergraph is one of the first interactive tools to visualize dynamic multilayer graphs for collections of digitized historical sources. Graph visualization and interaction methods have been designed based on user requirements for content exploration by non-technical users without a strong background in network science, and to compensate for common flaws in the annotation of named entities. Users work with self-selected subsets of the overall data by interacting with a scene of small graphs which can be added, altered and compared. This allows interest-driven navigation in the corpus and the discovery of the interconnections of its entities across time.
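The data structure behind such a tool can be sketched as edges tagged with a layer (entity type) and a year, from which a user-selected small graph is filtered out. Layers, names and years below are invented examples, not Intergraph's corpus.

```python
from collections import defaultdict

def subgraph(edges, layers, start, end):
    """Select the edges of a dynamic multilayer network - tuples of
    (layer, year, u, v) - falling in the chosen layers and time interval,
    i.e. the kind of self-selected subset shown as one small graph."""
    picked = [(l, y, u, v) for l, y, u, v in edges
              if l in layers and start <= y <= end]
    degree = defaultdict(int)
    for _, _, u, v in picked:
        degree[u] += 1
        degree[v] += 1
    return picked, dict(degree)

# Toy co-occurrence edges across two layers of named entities.
edges = [
    ("person", 1951, "Schuman", "Monnet"),
    ("org",    1957, "EEC", "Euratom"),
    ("person", 1960, "Hallstein", "Adenauer"),
]
picked, degree = subgraph(edges, {"person"}, 1950, 1958)
```

Comparing two such calls with different intervals side by side is essentially what the "scene of small graphs" supports interactively.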

  • Open Access English
    Authors: 
    Reinhard Altenhöner; Ina Blümel; Franziska Boehm; Jens Bove; Katrin Bicher; Christian Bracht; Ortrun Brand; Lisa Dieckmann; Maria Effinger; Malte Hagener; +15 more
    Publisher: Pensoft Publishers
    Country: Germany

    Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level exists in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies. The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects, methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. 
Promoting collaboration within the NFDI, sharing knowledge and technology, and providing extensive support for its users have been the guiding principles of the consortium from the beginning and will be at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, the cultural economy and society as a whole.

Advanced search in
Research products
arrow_drop_down
Searching FieldsTerms
Any field
arrow_drop_down
includes
arrow_drop_down
Include:
54 Research products, page 1 of 6
  • Open Access English
    Authors: 
    Sander Münster; Ronja Utescher; Selda Ulutas Aydogan;
    Publisher: Springer Singapore

    AbstractIn research and policies, the identification of trends as well as emerging topics and topics in decline is an important source of information for both academic and innovation management. Since at present policy analysis mostly employs qualitative research methods, the following article presents and assesses different approaches – trend analysis based on questionnaires, quantitative bibliometric surveys, the use of computer-linguistic approaches and machine learning and qualitative investigations. Against this backdrop, this article examines digital applications in cultural heritage and, in particular, built heritage via various investigative frameworks to identify topics of relevance and trendlines, mainly for European Union (EU)-based research and policies. Furthermore, this article exemplifies and assesses the specific opportunities and limitations of the different methodical approaches against the backdrop of data-driven vs. data-guided analytical frameworks. As its major findings, our study shows that both research and policies related to digital applications for cultural heritage are mainly driven by the availability of new technologies. Since policies focus on meta-topics such as digitisation, openness or automation, the research descriptors are more granular. In general, data-driven approaches are promising for identifying topics and trendlines and even predicting the development of near future trends. Conversely, qualitative approaches are able to answer “why” questions with regard to whether topics are emerging due to disruptive innovations or due to new terminologies or whether topics are becoming obsolete because they are common knowledge, as is the case for the term “internet”.

  • Open Access English
    Authors: 
    Anna Foka; Osman Cenk Demiroglu; Elton Barker; Nasrin Mostofian; Kyriaki Konstantinidou; Brady Kiesling; Linda Talatas; Kajsa Palm;
    Publisher: Uppsala universitet, Institutionen för ABM
    Country: Sweden

    Abstract This progress article focuses on an overview of the potential and challenges of using contemporary Geographic Information System (GIS) applications for the visual rendering and analysis of textual spatial data. The case study is an ancient traveling narrative, Pausanias’s Description of Greece (Periegesis Hellados) which was written in the second century CE. First, we describe the process of converting the volumes to spatial data using a customized version of the open-source digital semantic annotation platform Recogito. Then the focus shifts to the implementation of collected and organized spatial data to a number of GIS applications: namely Google Maps, DARIAH Geo-Browser, Gephi, Palladio and ArcGIS. Through empirical experimentation with spatial data and their implementation in different platforms, our paper charts the ways in which contemporary GIS applications may be implemented to cast new light on ancient understandings of identity, space, and place.

  • Open Access English
    Authors: 
    Frank Uiterwaal; Franco Niccolucci; Sheena Bassett; Steven Krauwer; Hella Hollander; Femmy Admiraal; Laurent Romary; George Bruseker; Carlo Meghini; Jennifer Edmond; +1 more
    Publisher: Edinburgh University Press for the Association for History and Computing,, Edinburgh , Regno Unito
    Countries: Italy, France, Netherlands, France, France, Italy
    Project: EC | PARTHENOS (654119)

    This article has been accepted for publication by EUP in the IJHAC: International Journal of Humanities and Arts Computing (https://www.euppublishing.com/loi/ijhac); International audience; Since the first ESFRI roadmap in 2006, multiple humanities Research Infrastructures (RIs) have been set up all over the European continent, supporting archaeologists (ARIADNE), linguists (CLARIN-ERIC), Holocaust researchers (EHRI), cultural heritage specialists (IPERION-CH) and others. These examples only scratch the surface of the breadth of research communities that have benefited from close cooperation in the European Research Area.While each field developed discipline-specific services over the years, common themes can also be distinguished. All humanities RIs address, in varying degrees, questions around research data management, the use of standards and the desired interoperability of data across disciplinary boundaries.This article sheds light on how cluster project PARTHENOS developed pooled services and shared solutions for its audience of humanities researchers, RI managers and policymakers. In a time where the convergence of existing infrastructure is becoming ever more important – with the construction of a European Open Science Cloud as an audacious, ultimate goal – we hope that our experiences inform future work and provide inspiration on how to exploit synergies in interdisciplinary, transnational, scientific cooperation.

  • Open Access English
    Authors: 
    S. Münster; K. Fritsche; H. Richards-Rissetto; Fabrizio Ivan Apollonio; B. Aehnlich; V. Schwartze; R. Smolarski;
    Publisher: Copernicus Publications
    Country: Italy

    Abstract. Digital literacy and technology education has gained much relevance in humanities and heritage related disciplines during the recent decades. Against this background, the purpose of this article is to examine the current state of educational programs in digital cultural heritage and related disciplines primarily in Europe with supplemental information from the US. A further aim is to highlight core topics, challenges, and demands, and to show innovative formats and prospects.

  • Open Access English
    Authors: 
    Frank Lehrbass;
    Publisher: Zenodo

    The European Markets Infrastructure Regulation (EMIR) allows burdening a clearing obligation on non-financial corporations, which formerly did not necessarily clear their business. We give 10 recommendations on how to cope with this obligation. These are motivated by a case study for which we consider a stylized German power producer. For this entity, we derive optimal levels of planned production and forward sales of power using microeconomic theory. Since this results in a significant short position in the German power forward market, we investigate the resulting variation margin call dynamics with a special interest in the ability to forecast worst-case price up moves. We compare different models for the forward log-returns and their performance in 99% quantile forecasting. A GARCH model with Student-t distribution emerges as the most suitable model. This is used in the case study, which is inspired by data published by the power producer E.ON. Using recent material from the Basel Committee on Banking Supervision we distill the reliable liquidity buffer from an allegedly rich liquidity position and show how suddenly it can be eroded. We point to feedback loops, which make the challenges—posed by the clearing obligation—even more severe. We also spend some thoughts on how to cope with the crisis caused by Corona.

  • Open Access English
    Authors: 
    Enrico Daga; Luigi Asprino; Rossana Damiano; Marilena Daquino; Belen Diaz Agudo; Aldo Gangemi; Tsvi Kuflik; Antonio Lieto; Mark Maguire; Anna Maria Marras; +5 more
    Country: Italy
    Project: EC | Polifonia (101004746), EC | SPICE (870811)

    Digital archives of memory institutions are typically concerned with the cataloguing of artefacts of artistic, historical, and cultural value. Recently, new forms of citizen participation in cultural heritage have emerged, producing a wealth of material spanning from visitors’ experiential feedback on exhibitions and cultural artefacts to digitally mediated interactions like the ones happening on social media platforms. Citizen curation is proposed in the context of the European project SPICE (Social Participation, Cohesion, and Inclusion through Cultural Engagement) as a methodology for producing, collecting, interpreting, and archiving people’s responses to cultural objects, with the aim of favouring the emergence of multiple, sometimes conflicting, viewpoints and motivating users and memory institutions to reflect upon them. We argue that citizen curation urges to rethink the nature of computational infrastructures supporting data management of memory institutions, bringing novel challenges that include issues of distribution, authoritativeness, interdependence, privacy, and rights management. To approach these issues, we survey relevant literature toward a distributed, Linked Data infrastructure, with a focus on identifying the roles and requirements involved in such an infrastructure. We show how existing research can contribute significantly in facing the challenges raised by citizen curation and discuss challenges and opportunities from the socio-technical standpoint.

  • Open Access English
    Authors: 
    Stefan Buddenbohm; Maaike A. de Jong; Jean-Luc Minel; Yoann Moranville;
    Publisher: HAL CCSD
    Country: France
    Project: EC | HaS-DARIAH (675570)

    AbstractHow can researchers identify suitable research data repositories for the deposit of their research data? Which repository matches best the technical and legal requirements of a specific research project? For this end and with a humanities perspective the Data Deposit Recommendation Service (DDRS) has been developed as a prototype. It not only serves as a functional service for selecting humanities research data repositories but it is particularly a technical demonstrator illustrating the potential of re-using an already existing infrastructure - in this case re3data - and the feasibility to set up this kind of service for other research disciplines. The documentation and the code of this project can be found in the DARIAH GitHub repository: https://dariah-eric.github.io/ddrs/.

  • Open Access English
    Authors: 
    Luca Foppiano; Laurent Romary;
    Publisher: HAL CCSD
    Country: France
    Project: EC | HIRMEOS (731102)

    International audience; This paper presents an attempt to provide a generic named-entity recognition and disambiguation module (NERD) called entity-fishing as a stable online service that demonstrates the possible delivery of sustainable technical services within DARIAH, the European digital research infrastructure for the arts and humanities. Deployed as part of the national infrastructure Huma-Num in France, this service provides an efficient state-of-the-art implementation coupled with standardised interfaces allowing an easy deployment on a variety of potential digital humanities contexts. The topics of accessibility and sustainability have been long discussed in the attempt of providing some best practices in the widely fragmented ecosystem of the DARIAH research infrastructure. The history of entity-fishing has been mentioned as an example of good practice: initially developed in the context of the FP9 CENDARI, the project was well received by the user community and continued to be further developed within the H2020 HIRMEOS project where several open access publishers have integrated the service to their collections of published monographs as a means to enhance retrieval and access.entity-fishing implements entity extraction as well as disambiguation against Wikipedia and Wikidata entries. The service is accessible through a REST API which allows easier and seamless integration, language independent and stable convention and a widely used service oriented architecture (SOA) design. Input and output data are carried out over a query data model with a defined structure providing flexibility to support the processing of partially annotated text or the repartition of text over several queries. The interface implements a variety of functionalities, like language recognition, sentence segmentation and modules for accessing and looking up concepts in the knowledge base. 
The API itself integrates more advanced contextual parametrisation or ranked outputs, allowing for the resilient integration in various possible use cases. The entity-fishing API has been used as a concrete use case3 to draft the experimental stand-off proposal, which has been submitted for integration into the TEI guidelines. The representation is also compliant with the Web Annotation Data Model (WADM).In this paper we aim at describing the functionalities of the service as a reference contribution to the subject of web-based NERD services. In order to cover all aspects, the architecture is structured to provide two complementary viewpoints. First, we discuss the system from the data angle, detailing the workflow from input to output and unpacking each building box in the processing flow. Secondly, with a more academic approach, we provide a transversal schema of the different components taking into account non-functional requirements in order to facilitate the discovery of bottlenecks, hotspots and weaknesses. The attempt here is to give a description of the tool and, at the same time, a technical software engineering analysis which will help the reader to understand our choice for the resources allocated in the infrastructure.Thanks to the work of million of volunteers, Wikipedia has reached today stability and completeness that leave no usable alternatives on the market (considering also the licence aspect). The launch of Wikidata in 2010 have completed the picture with a complementary language independent meta-model which is becoming the scientific reference for many disciplines. After providing an introduction to Wikipedia and Wikidata, we describe the knowledge base: the data organisation, the entity-fishing process to exploit it and the way it is built from nightly dumps using an offline process.We conclude the paper by presenting our solution for the service deployment: how and which the resources where allocated. 
The service has been in production since Q3 2017 and was used extensively by the H2020 HIRMEOS partners during its integration with their publishing platforms. We have strived to provide the best performance with a minimum of resources. Thanks to the Huma-Num infrastructure, we retain the possibility to scale up the deployment as needed, for example to support an increase in demand or a temporary need to process a huge backlog of documents. In the long term, thanks to this sustainable environment, we plan to keep delivering the service well beyond the end of the H2020 HIRMEOS project.

  • Publication . Article . Other literature type . Conference object . 2020
    Open Access English
    Authors: 
    Stefan Bornhofen; Marten Düring;
    Publisher: HAL CCSD
    Country: France
    Project: ANR | BLIZAAR (ANR-15-CE23-0002)

    Abstract: The paper presents Intergraph, a graph-based visual analytics technical demonstrator for the exploration and study of content in historical document collections. The prototype is motivated by a practical use case on a corpus of circa 15,000 digitized resources about European integration since 1945. From the corpus we generated a dynamic multilayer network which represents the different kinds of named entities appearing and co-appearing in the collections. To our knowledge, Intergraph is one of the first interactive tools to visualize dynamic multilayer graphs for collections of digitized historical sources. Graph visualization and interaction methods have been designed based on user requirements for content exploration by non-technical users without a strong background in network science, and to compensate for common flaws in the annotation of named entities. Users work with self-selected subsets of the overall data by interacting with a scene of small graphs which can be added, altered and compared. This allows an interest-driven navigation of the corpus and the discovery of the interconnections of its entities across time.
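    The kind of dynamic multilayer network described above can be sketched as follows. This is our own minimal illustration, not the Intergraph implementation: nodes are named entities, layers are pairs of entity types, and edges are time-stamped co-occurrences within a document. The sample documents are invented.

    ```python
    from collections import defaultdict
    from itertools import combinations

    def build_multilayer_network(documents):
        """documents: iterable of (year, [(entity, entity_type), ...]).
        Returns co-occurrence counts keyed by (layer, year, entity_a, entity_b),
        where a layer is the sorted pair of entity types."""
        edges = defaultdict(int)
        for year, mentions in documents:
            for (a, ta), (b, tb) in combinations(sorted(set(mentions)), 2):
                layer = tuple(sorted((ta, tb)))  # e.g. ("org", "person")
                edges[(layer, year, a, b)] += 1
        return edges

    # Invented example documents, for illustration only.
    docs = [
        (1951, [("Robert Schuman", "person"), ("ECSC", "org")]),
        (1957, [("ECSC", "org"), ("EEC", "org"), ("Robert Schuman", "person")]),
    ]
    net = build_multilayer_network(docs)
    ```

    Selecting a subset of layers or a time window then reduces to filtering the edge keys, which mirrors the interest-driven subsetting the abstract describes.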

  • Open Access English
    Authors: 
    Reinhard Altenhöner; Ina Blümel; Franziska Boehm; Jens Bove; Katrin Bicher; Christian Bracht; Ortrun Brand; Lisa Dieckmann; Maria Effinger; Malte Hagener; +15 more
    Publisher: Pensoft Publishers
    Country: Germany

    Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level exists in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies. The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects, methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. 
Promoting collaboration within the NFDI, sharing knowledge and technology, and providing extensive support for its users have been the guiding principles of the consortium from the beginning and will remain at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data, and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, the cultural economy and society as a whole.
