ZENODO
Conference object
Data sources: ZENODO

Advances in deep learning for multimodal XRF/XRD analysis for cultural heritage

Authors: Preisler, Zdenek;


Abstract

Recent advances in multimodal non-invasive imaging methods applied to the study and conservation of cultural heritage have driven rapid development of novel computational methods. Macro X-ray fluorescence (MA-XRF) is often combined with complementary analytical techniques, such as X-ray powder diffraction (XRPD), for comprehensive material characterization. We have developed a deep learning framework for automated multimodal analysis trained on synthetic datasets. Here we introduce three key innovations aimed at native multimodal data analysis.

The first is the adoption of vision transformers as an architectural alternative to the previously employed convolutional networks.[1] This refinement substantially reduces the number of network parameters while maintaining analytical precision; the transformer-based architecture proves more effective at identifying complex elemental distributions and facilitates multimodal feature fusion.

The second advancement mitigates critical measurement-geometry effects through algorithmic corrections for solid-angle variations and air attenuation. These corrections are essential preprocessing steps for subsequent analyses, such as clustering or stitching of XRF imaging data, and they significantly enhance the spatial coherence of elemental distribution maps across large-scale artifacts.

Finally, building on this architectural foundation, we introduce a novel framework for integrated multimodal analysis of MA-XRF and XRPD data. In the case of paintings, the polycrystalline nature of pigments allows XRPD to identify them directly even in complex mixtures, complementing the elemental information from MA-XRF. Our methodology was used to analyse XRPD scans obtained with the MA-XRD/MA-XRF system at the XRAYLab of ISPC-CNR in Catania.
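The geometry corrections mentioned above can be illustrated with a minimal NumPy sketch that rescales an XRF count map for solid-angle and air-attenuation effects. All parameter values (detector position, working distance, attenuation coefficient) are illustrative assumptions, not the authors' implementation; in practice the air attenuation is energy dependent and would be applied per fluorescence line.

```python
import numpy as np

def geometry_correction(counts, xs, ys, det_xy=(0.0, 0.0),
                        det_dist=20.0, mu_air=0.01):
    """Correct a raw XRF count map for solid-angle and air-attenuation effects.

    counts   : 2D array of photon counts per scan position
    xs, ys   : 1D arrays of scan coordinates (mm)
    det_xy   : in-plane detector position (mm) -- illustrative value
    det_dist : detector-to-surface distance at the map centre (mm)
    mu_air   : effective linear attenuation coefficient of air (1/mm),
               assumed constant here although it varies with photon energy
    """
    X, Y = np.meshgrid(xs, ys)
    # distance travelled by fluorescence photons from each pixel to the detector
    r = np.sqrt((X - det_xy[0])**2 + (Y - det_xy[1])**2 + det_dist**2)
    # relative solid angle falls off as 1/r^2, with a cosine factor for the
    # tilt of the line of sight at off-centre pixels
    cos_theta = det_dist / r
    solid_angle = cos_theta / r**2
    solid_angle /= solid_angle.max()            # normalise to the map centre
    # Beer-Lambert attenuation of the fluorescence photons in air
    transmission = np.exp(-mu_air * r)
    return counts / (solid_angle * transmission)
```

Off-centre pixels see a smaller solid angle and a longer air path, so a flat raw map is boosted toward the edges after correction, which is what restores spatial coherence when maps are clustered or stitched.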
In a single scanning session, several tens of thousands of XRPD patterns are recorded, rendering manual analysis impracticable. We therefore developed a neural network trained on synthetic data generated from tabulated diffraction patterns augmented to resemble experimental measurements. This network processes XRF and XRPD inputs simultaneously, enabling a multimodal analysis that substantially improves the identification of pigments and degradation products in painted artifacts. Compelling applications to real cases are presented and discussed.
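The synthetic-training idea can be sketched as follows: a 1D diffraction pattern is rendered from tabulated peak positions and intensities, then augmented with peak jitter, a background, and noise so that it resembles an experimental measurement. The function name, peak values, and augmentation parameters below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def synth_xrpd_pattern(peaks, two_theta, fwhm=0.3, noise=0.02,
                       bg_slope=0.5, rng=None):
    """Render a synthetic XRPD pattern from tabulated (position, intensity) peaks.

    Augmentations -- positional jitter, Gaussian peak broadening, a sloping
    background, and multiplicative noise -- mimic experimental measurements.
    All parameter values are illustrative.
    """
    if rng is None:
        rng = np.random.default_rng()
    sigma = fwhm / 2.355                       # FWHM -> Gaussian sigma
    pattern = np.zeros_like(two_theta)
    for pos, inten in peaks:
        # small random shift of each peak mimics instrumental misalignment
        pos = pos + rng.normal(0.0, 0.02)
        pattern += inten * np.exp(-0.5 * ((two_theta - pos) / sigma) ** 2)
    # smooth, decreasing background plus multiplicative noise
    pattern += bg_slope * (1.0 - two_theta / two_theta.max())
    pattern *= 1.0 + rng.normal(0.0, noise, size=pattern.shape)
    return pattern / pattern.max()             # normalise for network input
```

Drawing many such patterns with randomised mixtures of tabulated phases yields a labelled training set of arbitrary size, which is what makes supervised training feasible when experimental ground truth is scarce.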
