
Backdoor attacks compromise deep neural networks by injecting covert, malicious behaviors during training, which attackers can later activate at test time. As backdoors become more sophisticated, defenses struggle to catch up. This paper introduces a simple yet effective Backdoor Attack using Inpainting as a Trigger, dubbed BAIT. The trigger relies on a randomly drawn polygonal patch, filled via inpainting with an off-the-shelf generative adversarial network. Using BAIT, we show that several defenses, including common test-time input purification methods, can be bypassed by patch-based backdoors. To counter this, we propose four targeted defense strategies.
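The sketch below illustrates the general idea of such a trigger under stated assumptions: a polygon mask is drawn at a random location and the masked region is filled by inpainting. It is not the paper's implementation; the function name and parameters are hypothetical, and a classical OpenCV inpainting call stands in for the off-the-shelf GAN inpainter used in the paper.

```python
# Minimal, illustrative sketch of an inpainting-based polygonal trigger.
# Assumption: cv2.inpaint stands in for the GAN inpainter described in the paper.
import numpy as np
import cv2

def inpainting_trigger(image, num_vertices=6, patch_scale=0.25, seed=None):
    """Return a poisoned copy of `image` (HxWx3 uint8) with a randomly drawn
    polygonal patch replaced by inpainted content, plus the binary mask used."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Pick a random center and a radius for the polygonal patch.
    cx, cy = rng.integers(0, w), rng.integers(0, h)
    radius = patch_scale * min(h, w) / 2

    # Sample vertices at random angles/distances around the center.
    angles = np.sort(rng.uniform(0, 2 * np.pi, num_vertices))
    dists = rng.uniform(0.5, 1.0, num_vertices) * radius
    pts = np.stack([cx + dists * np.cos(angles),
                    cy + dists * np.sin(angles)], axis=1).astype(np.int32)

    # Rasterize the polygon into a binary mask (255 = region to be inpainted).
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)

    # Fill the masked region; a GAN-based inpainter would be dropped in here.
    poisoned = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    return poisoned, mask
```

In a poisoning pipeline, such a function would be applied to a fraction of the training images, whose labels are then switched to the attacker's target class.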
deep neural networks, backdoor attacks, [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, inpainting, integrity risk, adversarial machine learning
