
doi: 10.7273/000005001
Machine learning algorithms are used for inference and decision-making in embedded systems. Sensor data is used to train machine learning models for various smart functions of embedded and cyber-physical systems, ranging from applications in healthcare and emergency response to autonomous vehicles and national security. However, recent studies have shown that machine learning models can be attacked by adding adversarial noise to their inputs. The perturbed inputs are called adversarial examples. Adversarial examples that attack a machine learning model trained in a source domain are often effective against the model trained in a target domain. This property of adversarial examples is called adversarial transferability. We present Adar, a computational framework for optimization-driven creation of adversarial examples. We investigate different methods for generating adversarial examples and study the vulnerability of activity recognition models to adversarial examples in the feature and signal domains. We also present our study of adversarial transferability in wearable systems from four distinct viewpoints: (1) transferability between machine learning models; (2) transferability across users/subjects of the embedded system; (3) transferability across sensor body locations; and (4) transferability across datasets used for model training. Through an extensive analysis based on real sensor data collected with human subjects, we found that simple evasion attacks can decrease the accuracy of a deep neural network from 95.1% to 3.4%, and that of a convolutional neural network from 93.1% to 16.8%. With adversarial training, the robustness of the deep neural network on adversarial examples increased by 49.1% in the worst case, while its accuracy on clean samples decreased by 13.2%. In most cases, we found strong untargeted transferability, whereas targeted attacks were less successful, with success rates ranging from 0% to 80%. The transferability of adversarial examples depends on many factors, such as the inclusion of data from all subjects, sensor body position, number of samples in the dataset, type of learning algorithm, and the distribution of source and target system datasets. The transferability of adversarial examples decreases sharply when the data distributions of the source and target systems become more distinct.
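As a rough illustration of the kind of evasion attack discussed above, the sketch below applies a standard fast-gradient-sign perturbation to a toy feature-domain activity classifier. The model architecture, feature dimensionality, and perturbation budget are placeholders, and the optimization procedure actually used by the Adar framework is described in the paper itself; this is only a minimal, hedged example of adding adversarial noise to sensor features.

```python
# Minimal sketch of an untargeted evasion attack (FGSM-style) on a toy
# activity-recognition classifier. All names, sizes, and data are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "feature-domain" classifier: 60 sensor features -> 6 activity classes.
model = nn.Sequential(nn.Linear(60, 128), nn.ReLU(), nn.Linear(128, 6))
loss_fn = nn.CrossEntropyLoss()

# A batch of (synthetic) clean feature vectors and their true activity labels.
x_clean = torch.randn(8, 60)
y_true = torch.randint(0, 6, (8,))

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return adversarial examples x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Untargeted attack: step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x_adv = fgsm_attack(model, x_clean, y_true, epsilon=0.1)

clean_acc = (model(x_clean).argmax(1) == y_true).float().mean().item()
adv_acc = (model(x_adv).argmax(1) == y_true).float().mean().item()
print(f"accuracy on clean features: {clean_acc:.2f}, "
      f"on adversarial features: {adv_acc:.2f}")
```

A transferability experiment of the kind the paper studies would then evaluate `x_adv` against a second model trained on a different subject, sensor location, or dataset, rather than against the source model itself.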
Keywords: Machine Learning, Adversarial Examples, Sensor Systems, Embedded Systems
