Machine learning models are vulnerable to adversarial inputs that induce seemingly unjustifiable errors. As automated classifiers are increasingly used in industrial control systems and machinery, these adversarial errors could grow into a serious problem. Despite numerous studies over the past few years, the field of adversarial ML is still considered alchemy, with no practical unbroken defenses demonstrated to date, leaving PHM practitioners with few meaningful ways of addressing the problem. We introduce turbidity detection as a practical superset of the adversarial input detection problem, coping with adversarial campaigns rather than statistically invisible one-offs. This perspective is coupled with ROC-theoretic design guidance that prescribes an inexpensive domain adaptation layer at the output of a deep learning model during an attack campaign. The result aims to approximate the Bayes optimal mitigation that ameliorates the detection model's degraded health. A proactively reactive type of prognostics is achieved via Monte Carlo simulation of various adversarial campaign scenarios: by sampling from the model's own turbidity distribution, the correct mitigation can be quickly deployed during a real-world campaign.
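The abstract gives no implementation details, but the Monte Carlo idea can be illustrated with a minimal, purely hypothetical Python sketch: simulate classifier scores for clean and "turbid" (adversarial-campaign) inputs from invented distributions, then sweep a decision threshold at the model's output to minimize expected misclassifications. The distributions, sample sizes, and threshold sweep below are all illustrative assumptions, not the paper's method; they stand in for sampling from a learned turbidity distribution and approximating the Bayes optimal output-layer mitigation.

```python
import random

def simulate_campaign(n_clean=1000, n_turbid=200, seed=0):
    """Monte Carlo draw of classifier scores for one hypothetical campaign.

    Assumed score model: clean inputs score low (mean 0.2), turbid
    (adversarial) inputs score high (mean 0.7). Scores are clipped to [0, 1].
    """
    rng = random.Random(seed)
    clip = lambda s: min(max(s, 0.0), 1.0)
    clean = [clip(rng.gauss(0.2, 0.10)) for _ in range(n_clean)]
    turbid = [clip(rng.gauss(0.7, 0.15)) for _ in range(n_turbid)]
    return clean, turbid

def best_threshold(clean, turbid, grid=100):
    """Sweep candidate output thresholds and keep the one minimizing the
    total misclassification count -- a crude stand-in for the Bayes-optimal
    output-layer adjustment the abstract describes."""
    best_t, best_err = 0.5, float("inf")
    for i in range(grid + 1):
        t = i / grid
        # False alarms: clean scores at/above t; misses: turbid scores below t.
        err = sum(s >= t for s in clean) + sum(s < t for s in turbid)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

clean, turbid = simulate_campaign()
threshold, errors = best_threshold(clean, turbid)
```

In this toy setup the chosen threshold lands between the two score populations; repeating the simulation across many campaign scenarios (varying attack intensity or score drift) would yield a lookup of pre-computed mitigations, which is the "proactively reactive" flavor of prognostics the abstract sketches.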
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Cryptography and Security, deep convolution neural network, Computer Vision and Pattern Recognition (cs.CV), adversarial, Computer Science - Computer Vision and Pattern Recognition, Machine Learning (stat.ML), TA213-215, binary classifier, Systems engineering, Machine Learning (cs.LG), Engineering machinery, tools, and implements, TA168, asset health management, Statistics - Machine Learning, Cryptography and Security (cs.CR)
Citation-network indicators:
citations (total citation count): 0
popularity ("current" attention/hype in the research community): Average
influence (overall/total impact, measured diachronically): Average
impulse (initial momentum directly after publication): Average