
# From Threat to Trust: Exploiting Attention Mechanisms for Attacks and Defenses in Cooperative Perception

## Intro

This repo contains the official implementation for paper #592, accepted at USENIX Security 2025 (Cycle 2): "From Threat to Trust: Exploiting Attention Mechanisms for Attacks and Defenses in Cooperative Perception". It includes the attack SOMBRA and the defense LUCIA. Due to the file count limit, the source code is zipped in `SOMBRA_LUCIA.zip`.

## Dataset and Model Download

Please visit the official website of OPV2V for the latest dataset download instructions. Our evaluation is conducted on the test split of the data. Pretrained model weights can be downloaded from the OpenCOOD repo. We include Attentive Fusion, CoAlign, Where2comm, and V2VAM in our evaluation.

## Environment Setup

We use pixi for easier and faster environment setup; more information can be found on the pixi website. We tested on CUDA 11.8 / 12.0. Please edit the `pixi.toml` file to change the `pytorch-cuda` version and `spconv-cu118` version accordingly (e.g., `spconv-cu120`).

Next, with pixi installed, simply run the following command to install the packages (if not yet installed) in the virtual environment and activate it (to deactivate, simply run `exit`):

```shell
pixi shell
```

Finally, set up and build the dependencies for OpenCOOD, the NMS GPU version, and CoMamba (optional) using the following commands:

```shell
pixi run opencood_setup
pixi run nms_gpu_build
```

## Evaluation

For evaluation, use `python cp_attack.py` with the corresponding arguments; use `--help` to show all available arguments. Use `--loss sombra` for our attack SOMBRA, or `--loss pa` for an attack using the loss from prior art. Specify `--defense` for LUCIA and `--robosac` for the ROBOSAC defense. For targeted object removal, you can specify the target object using `--target_id` followed by the corresponding object ID in the dataset, `in`/`out` for a randomly sampled target within/beyond the victim's line of sight, or `random` for a randomly sampled target.
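The `--target_id` keywords described above could be resolved along these lines (an illustrative sketch only, not the repo's actual code; `resolve_target` and its inputs are hypothetical names):

```python
import random

def resolve_target(target_id, scene_objects, victim_los_ids):
    """Resolve the --target_id argument into a concrete object ID.

    target_id is either a dataset object ID or one of the keywords
    'in' / 'out' / 'random'. scene_objects and victim_los_ids are
    assumed inputs: all object IDs in the frame, and the subset
    within the victim's line of sight.
    """
    if target_id == "in":      # random target within the victim's LoS
        return random.choice(sorted(victim_los_ids))
    if target_id == "out":     # random target beyond the victim's LoS
        return random.choice(sorted(set(scene_objects) - set(victim_los_ids)))
    if target_id == "random":  # any randomly sampled target
        return random.choice(sorted(scene_objects))
    return target_id           # explicit dataset object ID
```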
Example:

```shell
python cp_attack.py --model_dir --model AttentiveFusion --data_dir --attack_mode mor --loss sombra (--defense)
```

The detailed attack results will be saved under the same folder as the model weights.

## Case Study

For the traffic jam case study, the dataset is zipped in `traffic_jam_data.zip`. The evaluation is done in two parts to save time on perturbation generation. First, run

```shell
python cp_attack.py --model_dir --model --data_dir --attack_mode mor --loss sombra --save_perturb
```

to save the perturbation generated with knowledge of only the attacker's and victim's features. Next, rename the folder that stores the perturbed attacker feature to `adv_feature`, and run:

```shell
python case_study.py --model_dir --data_dir
```
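The rename step above can also be scripted; here is a minimal sketch (the helper name `promote_perturbation_dir` is hypothetical, and the source folder is whatever `--save_perturb` actually produced):

```python
from pathlib import Path

def promote_perturbation_dir(perturb_dir):
    """Rename the saved-perturbation folder to 'adv_feature' so that
    case_study.py can find it. perturb_dir is the folder written by
    cp_attack.py with --save_perturb (its original name may vary)."""
    src = Path(perturb_dir)
    dst = src.with_name("adv_feature")  # sibling folder, fixed name
    src.rename(dst)
    return dst
```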
