
A major trend in Artificial Intelligence is the deployment of Machine Learning models even on highly constrained platforms such as low-power 32-bit microcontrollers. However, the security of embedded Machine Learning systems is one of the main obstacles to this massive deployment, particularly for deep neural network-based systems. The difficulty comes from a complex, twofold attack surface. First, an impressive body of work demonstrates algorithmic flaws targeting a model's integrity (e.g., adversarial examples) or the confidentiality and privacy of data and models (e.g., membership inference, model inversion); however, few of these works take into account the specificities of embedded models (e.g., quantization, pruning). Second, physical attacks (side-channel and fault injection analysis) represent emerging and highly critical threats. Today, these two types of threats are considered separately.

For the first time, the PICTURE project proposes to jointly analyze the algorithmic and physical threats in order to develop protection schemes bridging these two worlds and to promote a set of good practices enabling the design, development and deployment of more robust models. PICTURE gathers CEA Tech (LETI) and Ecole des Mines de Saint-Etienne (MSE, Centre de Microélectronique de Provence) as academic partners, and IDEMIA and STMicroelectronics as industrial partners who will bring real, complete and critical use cases, with a particular focus on Facial Recognition. To achieve its objectives, the PICTURE consortium will precisely describe the different threat models targeting the integrity and confidentiality of software implementations of neural network models on hardware targets ranging from 32-bit microcontrollers (Cortex-M) and dual Cortex-M/Cortex-A architectures to GPU platforms dedicated to embedded systems. PICTURE then aims at demonstrating and analyzing, for the first time, complex attacks combining algorithmic and physical techniques: on the one hand, integrity-based threats (i.e., fooling a model's predictions) that combine the principles of adversarial example attacks with fault injection approaches; on the other hand, confidentiality-based threats that exploit side-channel leakage (side-channel analysis), or even fault injection analysis, together with theoretical approaches to reverse-engineer a model (model inversion) or to extract training data (membership inference).

The development of new protection schemes will build on an analysis of the relevance of state-of-the-art countermeasures against physical attacks, an analysis that has never been carried out at this scale. PICTURE will propose protections placed at different positions within the traditional Machine Learning pipeline, in particular training-based approaches that yield more robust models. Finally, PICTURE will present new evaluation methods in order to promote its results to academic and industrial actors. PICTURE aims at facilitating a shift in the way ML models are considered, by putting security at the core of the development and deployment strategy, and at anticipating as well as influencing future certification strategies.
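To make the algorithmic side of the integrity threat concrete, here is a minimal NumPy sketch of the fast gradient sign method (FGSM), the canonical adversarial example attack, applied to a toy linear softmax classifier. This is an illustration of the general technique, not an artifact of the project: the model, its dimensions, and the perturbation budget are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier (illustrative only, not a real embedded model).
n_classes, n_features = 4, 16
W = rng.normal(size=(n_classes, n_features)).astype(np.float32)
x = rng.normal(size=n_features).astype(np.float32)

def softmax(z):
    z = z - z.max()  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

p = softmax(W @ x)
y = int(p.argmax())  # treat the clean prediction as the ground-truth label

# FGSM: perturb the input along the sign of the loss gradient w.r.t. x.
# For cross-entropy on a linear model, dL/dx = W^T (p - onehot(y)).
onehot = np.eye(n_classes, dtype=np.float32)[y]
grad_x = W.T @ (p - onehot)

eps = 0.25  # L-infinity perturbation budget; increase it if the toy prediction resists
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:", y)
print("adversarial prediction:", int(softmax(W @ x_adv).argmax()))
```

The prediction changes once eps exceeds the decision margin of the toy model; on real embedded classifiers, attacks of this family search for the smallest perturbation that still fools the network.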

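On the physical side, fault injection can be abstracted as corrupting stored parameters. The sketch below, again on a toy int8-quantized linear model, simulates a single bit flip on a weight's two's-complement sign bit, the kind of corruption that parameter-level attacks such as the Bit-Flip Attack search for. Whether the prediction actually flips depends on the targeted weight; the heuristic used here (the weight paired with the largest input magnitude) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a linear classifier with symmetric int8 post-training quantization.
n_classes, n_features = 4, 16
w_fp = rng.normal(size=(n_classes, n_features)).astype(np.float32)
scale = np.abs(w_fp).max() / 127.0
w_q = np.clip(np.round(w_fp / scale), -127, 127).astype(np.int8)

x = rng.normal(size=n_features).astype(np.float32)

def predict(weights_q: np.ndarray) -> int:
    """Dequantize the int8 weights and return the arg-max class."""
    logits = (weights_q.astype(np.float32) * scale) @ x
    return int(logits.argmax())

clean = predict(w_q)

# Simulate the fault: flip bit 7 (the two's-complement sign bit) of the weight
# of the winning class that multiplies the largest-magnitude input feature.
j = int(np.abs(x).argmax())
w_faulty = w_q.copy()
w_faulty.view(np.uint8)[clean, j] ^= 0x80  # reinterpret bits, flip the MSB

print(f"prediction before fault: {clean}, after one bit flip: {predict(w_faulty)}")
```

A single sign-bit flip shifts the dequantized weight by 128 quantization steps, which is why such faults on quantized networks can be so damaging compared to single bit flips in float32 mantissas.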
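For the confidentiality side, membership inference reduces, in its simplest form, to thresholding the model's confidence: overfitted models tend to be more confident on their training data, and that gap is the leakage the attack exploits. The sketch below uses synthetic confidence scores (not drawn from any real model) to show how such a threshold translates into an attacker's advantage over random guessing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in confidence scores (illustrative, not from a real model): training-set
# members are drawn from a higher-confidence distribution than non-members.
conf_members = np.clip(rng.normal(0.92, 0.05, 1000), 0.0, 1.0)
conf_nonmembers = np.clip(rng.normal(0.75, 0.15, 1000), 0.0, 1.0)

# Simplest membership inference: guess "member" when confidence >= threshold.
threshold = 0.85
tpr = (conf_members >= threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers >= threshold).mean()  # non-members wrongly flagged
advantage = tpr - fpr                        # gain over random guessing

print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  membership advantage={advantage:.2f}")
```

Stronger variants train shadow models to calibrate this threshold per class or per sample; side-channel or fault injection analysis, as studied in PICTURE, can supply the attacker with the model internals that make such calibrated attacks practical.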