Digital Forensics Network (DF-Net)

This is the official model for DF-Net (IMVIP 2023). For method details, please refer to:

@inproceedings{Fischinger2023DFNet,
  title     = {DF-Net: The Digital Forensics Network for Image Forgery Detection},
  author    = {David Fischinger and Martin Boyer},
  booktitle = {The 25th Irish Machine Vision and Image Processing Conference (IMVIP)},
  year      = {2023}
}

The provided zip file DFNet.zip includes:
- Documentation: README.md
- Models: the DF-Net, comprising the sub-models M1 and M2 (in TensorFlow format); see the loading sketch below
- Docker image: Dockerfile and code to create a self-contained Docker container for testing DF-Net and reproducing the results on the benchmark dataset
- Jupyter notebook: an interactive way to test DF-Net and visualize examples
- Test data: a copy of the benchmark dataset CASIA* (due to space restrictions, only a subset of the CASIA V1 benchmark dataset is included)

*) [Dong et al., 2013] Dong, J., Wang, W., and Tan, T. (2013). CASIA image tampering detection evaluation database. In IEEE China Summit Inter. Conf. Signal Info. Proc., pages 422–426. IEEE.

== HowTo: Test DF-Net in a Docker Container ==

Download and extract the file DFNet.zip to an environment supporting Docker, ideally a Linux-based system with GPU support ($ = command prompt):

0 - Install the zip tool (if not already available):
    $ sudo apt install zip
1 - Download DFNet.zip:
    $ wget https://zenodo.org/record/8142658/files/DFNet.zip
2 - Unpack the zip archive:
    $ unzip DFNet.zip
3 - Change directory:
    $ cd IMVIP_Supplementary_Material
4 - Build the Docker image:
    $ ./scripts/build_image.sh
5 - Run the Docker container, with or without GPU:
    $ ./scripts/run_container.sh
    or
    $ ./scripts/run_container_NO_GPU.sh
6 - Open the Jupyter notebook:
    In a browser, open the link that was printed to your console (it should look similar to http://127.0.0.1:8888/?token=73a6fd55dacf4984f616fe60838d64e96abba2087a6cfbda).
    On the web page, click on the folder dfnet and then on the Jupyter notebook IMVIP_Model_Evaluation.ipynb.
7 - Execute the code:
    Run all code blocks (by pressing Shift+Enter for each block). Examples from the CASIA benchmark dataset will be displayed on screen.

== Parameter Setting ==

To control the number of processed images (MAX_EXAMPLES) and define how many images are displayed (SHOW_NTH_RESULT), you can modify the corresponding variables at the bottom of the Jupyter page (see the loop sketch below):

SHOW_NTH_RESULT = 1   # defines how many results are shown: 1...show each result; n...only show every n-th result
MAX_EXAMPLES = 20     # defines the number of images to process: np.infty...all images in the folder are processed

== Folder Structure for Supplementary Material ==

.
├── datasets
│   └── benchmark
│       ├── CASIA
│       └── CASIA_GT
├── models
│   ├── model1
│   └── model2
└── scripts

== Licence ==

The software is made available for academic or non-commercial purposes only. For commercial license pricing and support pricing, please contact:

Martin Boyer
Senior Research Engineer
Center for Digital Safety & Security
AIT Austrian Institute of Technology GmbH
Giefinggasse 4 | 1210 Vienna | Austria
martin.boyer [at] ait.ac.at

== CITATION ==

Please cite: "DF-Net: The Digital Forensics Network for Image Forgery Detection", David Fischinger and Martin Boyer, IMVIP - The 25th Irish Machine Vision and Image Processing Conference, 2023.
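For users who want to apply the models outside the Docker setup, the following is a minimal sketch of how the two TensorFlow sub-models could be loaded and applied to a single image. It assumes that models/model1 and models/model2 (see the folder structure above) are loadable as Keras/SavedModel directories, that the input is RGB scaled to [0, 1] at 256x256, and that the two sub-model outputs are simply averaged; none of these details are documented here, so treat the sketch as illustrative and consult IMVIP_Model_Evaluation.ipynb for the authors' actual evaluation pipeline.

```python
# Minimal sketch (not the authors' pipeline): load the two DF-Net sub-models
# and produce a per-pixel forgery map for one benchmark image.
# Assumptions: SavedModel directories, RGB input scaled to [0, 1], a fixed
# 256x256 input size, and simple averaging of the two sub-model outputs.
import glob

import numpy as np
import tensorflow as tf

m1 = tf.keras.models.load_model("models/model1")
m2 = tf.keras.models.load_model("models/model2")

def predict_forgery_map(image_path, target_size=(256, 256)):
    """Return an averaged forgery map for a single image (illustrative only)."""
    img = tf.keras.utils.load_img(image_path, target_size=target_size)
    x = tf.keras.utils.img_to_array(img) / 255.0   # scale pixels to [0, 1]
    x = np.expand_dims(x, axis=0)                  # add batch dimension
    p1 = m1.predict(x, verbose=0)
    p2 = m2.predict(x, verbose=0)
    return (p1[0] + p2[0]) / 2.0                   # naive fusion of M1 and M2

# Score the first image of the bundled CASIA subset.
image_paths = sorted(glob.glob("datasets/benchmark/CASIA/*"))
forgery_map = predict_forgery_map(image_paths[0])
print(forgery_map.shape, forgery_map.mean())
```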
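The notebook parameters SHOW_NTH_RESULT and MAX_EXAMPLES described above are plain Python variables that bound the evaluation loop. The sketch below shows how such a loop typically uses them; it reuses the hypothetical predict_forgery_map helper from the previous sketch, and the real loop in IMVIP_Model_Evaluation.ipynb may differ in detail.

```python
# Hedged sketch of the parameter semantics; the actual loop lives in
# IMVIP_Model_Evaluation.ipynb and may differ in its details.
import glob
import numpy as np

SHOW_NTH_RESULT = 1   # 1 = report every result, n = report only every n-th
MAX_EXAMPLES = 20     # np.infty = process all images in the folder

image_paths = sorted(glob.glob("datasets/benchmark/CASIA/*"))
for i, path in enumerate(image_paths):
    if i >= MAX_EXAMPLES:
        break
    forgery_map = predict_forgery_map(path)   # hypothetical helper from the sketch above
    if i % SHOW_NTH_RESULT == 0:
        print(f"{path}: mean forgery score = {forgery_map.mean():.3f}")
```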
{"references": ["\"DF-Net: The Digital Forensics Network for Image Forgery Detection\", David Fischinger and Martin Boyer (2023), IMVIP -The 25th Irish Machine Vision and Image Processing conference", "\"Casia image tampering detection evaluation database.\", Dong, J., Wang, W., and Tan, T. (2013). In IEEE China Summit Inter. Conf. Signal Info. Proc., pages 422\u2013426. IEEE."]}
This work was co-funded by the European Union, Project 101083573 — GADMO
Keywords: Image Forgery Detection, Image Manipulation, DF-Net