Washington State Uni...
https://dx.doi.org/10.7273/000...
Master thesis, 2024
License: CC BY-SA
Data sources: Datacite

Adversarial Examples in Embedded Systems

Author: Sah, Ramesh

Abstract

Machine learning algorithms are used for inference and decision-making in embedded systems. Sensor data is used to train machine learning models for the smart functions of embedded and cyber-physical systems, with applications ranging from healthcare and emergency response to autonomous vehicles and national security. However, recent studies have shown that machine learning models can be attacked by adding adversarial noise to their inputs; the perturbed inputs are called adversarial examples. Adversarial examples crafted against a machine learning model trained in a source domain are often also effective against a model trained in a target domain, a property known as adversarial transferability. We present Adar, a computational framework for the optimization-driven creation of adversarial examples. We investigate different methods for generating adversarial examples and study the vulnerability of activity recognition models to adversarial examples in both the feature and signal domains. We also study adversarial transferability in wearable systems from four distinct viewpoints: (1) transferability between machine learning models; (2) transferability across users/subjects of the embedded system; (3) transferability across sensor body locations; and (4) transferability across datasets used for model training. Through an extensive analysis based on real sensor data collected from human subjects, we found that simple evasion attacks can decrease accuracy from 95.1% to 3.4% for a deep neural network and from 93.1% to 16.8% for a convolutional neural network. With adversarial training, the robustness of the deep neural network against adversarial examples increased by 49.1% in the worst case, while its accuracy on clean samples decreased by 13.2%. In most cases, we found strong untargeted transferability, whereas targeted attacks were less successful, with success scores ranging from 0% to 80%. The transferability of adversarial examples depends on many factors, such as the inclusion of data from all subjects, sensor body position, the number of samples in the dataset, the type of learning algorithm, and the distribution of the source and target system datasets. Transferability decreases sharply as the data distributions of the source and target systems become more distinct.
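
For illustration, a minimal sketch of one common way such adversarial perturbations can be crafted, the fast gradient sign method (FGSM); the PyTorch model, labels, and epsilon below are assumptions for illustration only, not the Adar framework or the exact attacks evaluated in the thesis:

    # Hypothetical FGSM-style evasion attack against a differentiable classifier.
    # `model`, `x`, `y`, and `epsilon` are illustrative placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.1):
        """Return a perturbed copy of x intended to change the model's prediction."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # loss with respect to the true labels
        loss.backward()                          # gradient of the loss w.r.t. the input
        # Step in the direction that increases the loss, bounded in L-infinity by epsilon.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

Here epsilon bounds the size of the perturbation: larger values degrade accuracy more strongly but make the adversarial noise easier to notice in the sensor signal.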

Keywords

Machine Learning, 000, Adversarial Examples, Sensor Systems, Embedded Systems, 004
