
IDEMIA ISF

IDEMIA IDENTITY & SECURITY FRANCE
Country: France
25 Projects (first 5 shown below)
  • Funder: French National Research Agency (ANR) Project Code: ANR-20-CE23-0028
    Funder Contribution: 562,689 EUR

    The huge increase in collected data, storage capacity and computing power has promoted Artificial Intelligence (AI) to the status of a panacea for all problems. Indeed, neural networks have improved results in fields that were previously challenging for handcrafted algorithms. However, there is always a price to pay: a number of drawbacks remain unaddressed. In the real world, an AI-based decision system can receive an input that is unlike anything it has seen during training, which can lead to unpredictable behavior. Can we trust the output of such a system for a particular input? The LIMPID project addresses this issue of confidence in AI outputs in the context of face recognition and face quality estimation in images. LIMPID concentrates on the challenge of estimating the confidence of any response of an AI algorithm, an approach that can be used in a wide range of applications. LIMPID also proposes analyses of the image features that contribute most to the AI algorithm's decision.
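
    The abstract does not detail LIMPID's method, so the following is only a generic sketch of per-input confidence scoring, using normalized softmax entropy as the uncertainty signal; the model, logits and values are hypothetical.

    ```python
    import numpy as np

    def softmax(logits):
        """Numerically stable softmax over the last axis."""
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def confidence_score(logits):
        """Confidence as one minus normalized predictive entropy: a flat
        softmax (high entropy) may signal an input unlike the training
        data, so the score drops toward 0."""
        p = softmax(logits)
        entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
        return 1.0 - entropy / np.log(p.shape[-1])

    # Hypothetical logits: a confident prediction, then an ambiguous one.
    logits = np.array([[9.0, 0.5, 0.2], [1.1, 1.0, 0.9]])
    print(confidence_score(logits))  # high score first, near-zero second
    ```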

  • Funder: French National Research Agency (ANR) Project Code: ANR-22-CE39-0016
    Funder Contribution: 559,680 EUR

    The rise in the manipulation of images and voice represents a potential threat for crimes including disinformation campaigns, security fraud, extortion, online crimes against children, cryptojacking and illicit markets. Deepfake techniques are openly described and widely available; generating deepfakes is easy, and their quality has improved considerably, making them challenging to detect by mere visual analysis. As a consequence, there is an increasing need for deepfake detection tools. While several methods achieve good error rates under controlled scenarios, no dedicated tools are available to criminalistics experts. The goal of APATE is to deliver state-of-the-art methods to detect deepfakes. Instead of a "one size fits all" tool, the project aims to provide a toolbox of complementary techniques based on the audio or visual parts of the video, exploiting either low-level or semantic information, or combining them in a multimodal manner. Each tool will address a different family of deepfakes and will come with documentation detailing the use cases, the known biases, the validation framework and how the results can be interpreted. The consortium includes criminalistics experts from the French National Scientific Police Service (SNPS), ensuring that the proposed toolbox is usable, properly described, and efficiently processes actual deepfakes found in criminal cases. In addition, the literature on deepfake generation will be continuously reviewed and analysed, to ensure that datasets corresponding to the latest deepfake generation techniques are available to the partners; special care will be taken to avoid overfitting to the learning databases. The consortium includes three research laboratories (Centre Borelli at ENS Paris-Saclay, EPITA, LIX at Ecole Polytechnique), the SNPS, and IDEMIA, a world leader in biometric recognition.
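
    APATE's toolbox combines complementary detectors, possibly in a multimodal manner; the snippet below is a minimal late-fusion sketch, not the project's actual method, and the tool names, scores and weights are hypothetical.

    ```python
    from statistics import fmean

    def fuse_scores(tool_scores, weights=None):
        """Late fusion of per-tool deepfake scores in [0, 1] into one verdict."""
        if weights is None:
            return fmean(tool_scores.values())
        total = sum(weights[t] for t in tool_scores)
        return sum(weights[t] * s for t, s in tool_scores.items()) / total

    # Hypothetical per-tool outputs for one video (1.0 = certainly fake).
    scores = {"audio_artifacts": 0.91, "visual_lowlevel": 0.34, "semantic": 0.72}
    print(fuse_scores(scores))                         # plain average
    print(fuse_scores(scores, weights={"audio_artifacts": 2.0,
                                       "visual_lowlevel": 1.0,
                                       "semantic": 1.0}))  # trust audio more
    ```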

  • Funder: French National Research Agency (ANR) Project Code: ANR-24-CE23-0921
    Funder Contribution: 784,918 EUR

    The FAR-SEE project studies the issues of sampling bias, fairness, uncertainty and explainability in Artificial Intelligence (AI)-based face recognition systems, with the aim of improving existing algorithms, revealing 'optimal' performance/fairness/explainability trade-offs and thereby formulating principles for the operational regulation of these systems. The objectives are therefore manifold: 1) develop methods for detecting and correcting selection biases during learning (described not only by 'sensitive' variables such as gender or age, but also by the physiognomy of individuals and by image characteristics, e.g. brightness), for ensuring fairness (an acceptable level of performance disparity between 'sensitive' groups) without degrading performance, and for assessing the uncertainty involved in measuring performance and fairness metrics; 2) explain the nature of the sampling biases found, the uncertainty inherent in performance/fairness measurement, and the level of unfairness measured, so as to improve the methods developed for objective 1); 3) in the light of the trade-offs between uncertainty, performance, fairness and explainability, describe acceptable and operational regulatory constraints that reconcile the competing requirements facial recognition systems must meet. The project brings together three complementary partners with long-standing collaborative experience. It will draw on the expertise of IDEMIA's R&D team in facial recognition technologies and its knowledge of regulatory issues, the skills of the LTCI laboratory at Télécom Paris in the field of trustworthy AI, and those of the I3 laboratory in questions of ethics and operational regulation of AI.
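
    A central quantity in FAR-SEE is the performance disparity between sensitive groups. As a minimal illustration (the project's actual metrics are not given in the abstract), this sketch computes a per-group false match rate and a max/min disparity ratio on synthetic data; all names and numbers are hypothetical.

    ```python
    import numpy as np

    def fmr_by_group(scores, labels, groups, threshold):
        """False match rate (impostor comparisons accepted) per sensitive group."""
        fmr = {}
        for g in np.unique(groups):
            impostors = (groups == g) & (labels == 0)
            fmr[str(g)] = float((scores[impostors] >= threshold).mean())
        return fmr

    def disparity(per_group):
        """Max/min FMR ratio across groups: 1.0 is perfectly fair."""
        vals = list(per_group.values())
        return max(vals) / max(min(vals), 1e-12)

    # Synthetic comparison scores, genuine (1) / impostor (0) labels, group tags.
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=2000)
    labels = rng.integers(0, 2, size=2000)
    groups = rng.choice(["A", "B"], size=2000)
    per_group = fmr_by_group(scores, labels, groups, threshold=0.9)
    print(per_group, disparity(per_group))
    ```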

  • Funder: French National Research Agency (ANR) Project Code: ANR-20-CE39-0013
    Funder Contribution: 675,563 EUR

    A major trend in Artificial Intelligence is the deployment of Machine Learning models on highly constrained platforms such as low-power 32-bit microcontrollers. However, the security of embedded Machine Learning systems is one of the most important obstacles to this massive deployment, particularly for deep neural network-based systems. The difficulty comes from a complex, twofold attack surface. First, an impressive number of works demonstrates algorithmic flaws targeting a model's integrity (e.g., adversarial examples) or the confidentiality and privacy of data and models (e.g., membership inference, model inversion); however, few works take into consideration the specificities of embedded models (e.g., quantization, pruning). Second, physical attacks (side-channel and fault injection analysis) represent upcoming and highly critical threats. Today, these two types of threats are considered separately. For the first time, the PICTURE project proposes to analyze the algorithmic and physical threats jointly, in order to develop protection schemes bridging these two worlds and to promote a set of good practices enabling the design, development and deployment of more robust models. PICTURE gathers CEA Tech (LETI) and Ecole des Mines de Saint-Etienne (MSE, Centre de Microélectronique de Provence) as academic partners, and IDEMIA and STMicroelectronics as industrial partners that will bring real, complete and critical use cases, particularly focused on facial recognition. To achieve its objectives, the PICTURE consortium will precisely describe the different threat models targeting the integrity and the confidentiality of software implementations of neural network models on hardware targets ranging from 32-bit microcontrollers (Cortex-M) and dual Cortex-M/Cortex-A architectures to GPU platforms dedicated to embedded systems. PICTURE then aims to demonstrate and analyze, for the first time, complex attacks combining algorithmic and physical approaches: on the one hand, for integrity-based threats (i.e., fooling a model's prediction), by combining the principles of adversarial-example attacks with fault injection; on the other hand, by studying how the exploitation of side-channel leakage (side-channel analysis), or even fault injection, can be associated with theoretical approaches to reverse-engineer a model (model inversion) or to extract training data (membership inference). The development of new protection schemes will build on an analysis of the relevance of state-of-the-art countermeasures against physical attacks (such an analysis has never been carried out at this scale). PICTURE will propose protections at different positions within the traditional Machine Learning pipeline, and more particularly training-based approaches that yield more robust models. Finally, PICTURE will present new evaluation methods to promote its results to academic and industrial actors. PICTURE aims to facilitate a shift in the way ML models are considered, by putting security at the core of the development and deployment strategy, and to anticipate as well as influence future certification strategies.
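
    PICTURE targets fault injection against quantized embedded models; as a toy illustration of why a single hardware fault matters (this is not the project's attack methodology), the sketch below flips one bit of an int8-quantized weight and shows the effect on a dequantized dot product. The weights, input and scale are made up.

    ```python
    import numpy as np

    def flip_bit(weights_int8, index, bit):
        """Simulate a fault injection: flip one bit of one int8 weight."""
        faulty = weights_int8.copy()
        view = faulty.view(np.uint8)     # reinterpret bytes to avoid overflow
        view[index] ^= np.uint8(1 << bit)
        return faulty

    # Made-up int8-quantized weights of a tiny linear layer.
    w = np.array([23, -41, 7, 110], dtype=np.int8)
    x = np.array([1.0, 0.5, -1.0, 0.25])
    scale = 0.02                         # dequantization scale

    clean = (w * scale) @ x
    faulty = (flip_bit(w, index=3, bit=7) * scale) @ x  # flip the sign bit
    print(clean, faulty)  # a single bit-flip changes the activation's sign
    ```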

  • Funder: French National Research Agency (ANR) Project Code: ANR-19-FLJO-0003
    Funder Contribution: 483,988 EUR

    During the Paris 2024 Olympic Games, France will face a major security challenge: a series of sports events, broadcast around the world, involving public figures and large crowds. The history of the Games, and of sport in general, has unfortunately already been marked by painful events, which we have a responsibility not to let happen again. On the basis of a sociological study of these risks, and in order to meet this challenge, the GIRAFE project proposes to develop algorithmic crowd-monitoring solutions based on video streams covering all or part of the public areas. These algorithms will in particular be able to alert the authorities to areas where crowding is becoming a concern, to monitor crowd flows and to anticipate possible congestion; but also to identify abnormal situations occurring within such crowds, such as an individual loitering suspiciously, a chase, or the transport and abandonment of a piece of baggage, and to track the perpetrators until a possible police intervention. The tools created by the project are intended to facilitate the intervention of law enforcement and to optimize the use of security personnel, whose resources are limited and critical for an event of this magnitude. The human operator will remain the sole decision-maker for the actions to be taken when an alert is raised. The legal and societal aspects of this video processing will be studied and taken into account, to ensure compliance with the French legal framework and the GDPR and to preserve the festive spirit of the Games. The project's various innovative algorithms use complementary approaches to detect abnormal events and manage crowd movements, so as to maximize the detection of risk situations and to raise alerts as quickly as possible. The project's research rests on three main pillars:
    - crowd movements, to manage flows and abnormal behaviors within a very dense crowd (specific approach for very dense zones);
    - the detection of abnormal behaviors, where learning the scenes of the 2024 Olympic and Paralympic Games will make it possible to identify out-of-the-ordinary cases (generic approach);
    - the detection of pedestrians and baggage, transverse to the two previous axes: in addition to abandoned parcels (specific approach for sparsely populated areas), this axis will ensure the identification and tracking of a suspect individual until apprehension by the police.
    The algorithms resulting from these thematic axes will be integrated into a demonstrator that optimizes their real-time processing, is ergonomic for end users, and prioritizes the feeds of video-surveillance cameras classified as at-risk or unusual. The demonstrator will also be connected to a Command and Control system in order to present crowd-movement situations to the operator in a clear and global way. The whole will be tested under pre-operational conditions and will reach a sufficient level of maturity (TRL 6) to allow its deployment and evaluation on various Olympic sites, or in various places in the city of Paris, by the end of the project.
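
    The abstract describes density-based congestion alerts feeding a command-and-control view; the sketch below is only a schematic of such per-zone alerting, with hypothetical zone names, areas and threshold, and pedestrian counts assumed to come from an upstream detector.

    ```python
    # Hypothetical zone areas (m^2) taken from a site plan.
    ZONE_AREA_M2 = {"gate_A": 50.0, "concourse": 400.0, "stand_3": 120.0}
    ALERT_DENSITY = 4.0  # people per m^2, an illustrative alert threshold

    def congestion_alerts(counts):
        """Flag zones whose crowd density exceeds the alert threshold.
        Only warnings are raised; the human operator stays the decision-maker."""
        alerts = [(zone, n / ZONE_AREA_M2[zone]) for zone, n in counts.items()
                  if n / ZONE_AREA_M2[zone] >= ALERT_DENSITY]
        return sorted(alerts, key=lambda a: -a[1])

    # Counts for one video frame, e.g. from a pedestrian detector.
    print(congestion_alerts({"gate_A": 230, "concourse": 350, "stand_3": 100}))
    # -> [('gate_A', 4.6)]
    ```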
