
Despite recent progress in splicing detection, deep learning-based forensic tools remain difficult to deploy in practice due to their high sensitivity to training conditions. Even mild post-processing applied to evaluation images can significantly degrade detector performance, raising concerns about their reliability in operational contexts. In this work, we show that the same deep architecture can react very differently to unseen post-processing depending on the learned weights, despite achieving similar accuracy on in-distribution test data. This variability stems from differences in the latent spaces induced by training, which affect how samples are separated internally. Our experiments reveal a strong correlation between the distribution of latent margins and a detector's ability to generalize to post-processed images. Based on this observation, we propose a practical strategy for building more robust detectors: train several variants of the same model under different conditions, and select the one that maximizes latent margins.
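The selection strategy described in the abstract — train several variants, then keep the one with the largest latent margins — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes a linear classifier head `(w, b)` on top of the latent features, defines a sample's margin as its signed distance to that decision boundary, and ranks models by the median margin over a held-out set.

```python
import numpy as np

def latent_margins(features, labels, w, b):
    """Signed distance of each latent feature vector to the linear
    decision boundary defined by the classifier head (w, b).
    Positive margins mean the sample lies on the correct side."""
    signs = 2 * labels - 1              # map {0, 1} -> {-1, +1}
    return signs * (features @ w + b) / np.linalg.norm(w)

def select_most_robust(models, features_per_model, labels):
    """Pick the index of the model whose held-out samples have the
    largest median latent margin (the selection heuristic above)."""
    scores = [np.median(latent_margins(f, m["w"], m["b"])[0]
              if False else latent_margins(f, labels, m["w"], m["b"]))
              for m, f in zip(models, features_per_model)]
    return int(np.argmax(scores))
```

A model whose latent features sit far from the boundary for both classes scores higher, matching the paper's observation that wider margin distributions correlate with robustness to unseen post-processing.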
Paper in French. GRETSI 2025 - Colloque Francophone de Traitement du Signal et des Images, https://gretsi.fr/colloque2025/, Aug 2025, Strasbourg, France
[INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], FOS: Computer and information sciences, [INFO.INFO-MM] Computer Science [cs]/Multimedia [cs.MM], [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, Cryptography and Security, [INFO.INFO-NE] Computer Science [cs]/Neural and Evolutionary Computing [cs.NE], Digital Forensics, Multimedia Forensics, Splicing Detection, Computer Vision, Deep Learning, Out of distribution generalization, Neural networks, Signal Processing, Machine Learning (cs.LG), Machine Learning, [INFO.INFO-CY] Computer Science [cs]/Computers and Society [cs.CY], Artificial Intelligence (cs.AI), [INFO.INFO-TI] Computer Science [cs]/Image Processing [eess.IV], Artificial Intelligence, Cryptography and Security (cs.CR), [INFO.INFO-CR] Computer Science [cs]/Cryptography and Security [cs.CR]
