7 Projects, page 1 of 1


  • Funder: CHIST-ERA Project Code: AMIS
    Partners: UL, DEUSTO, Université d'Avignon LIA, AGH University of Science and Technology, Department of Telecommunications

    With the growth of information across media such as TV programmes and the internet, a new issue arises: how can a user access information expressed in a foreign language? The idea of the project is to develop a multilingual comprehension aid that works without any human intervention. We would like to help people understand broadcast news presented in a foreign language and compare it with the corresponding coverage available in their mother tongue. In this project, understanding is approached by giving access to information regardless of the language in which it is presented. With the development of the internet and satellite TV, tens of thousands of shows and news broadcasts are available in different languages, yet even highly educated people rarely speak more than two or three languages, and the majority speak only one, which leaves this huge amount of information inaccessible. Consequently, most TV and radio programmes, as well as information on the internet, are out of reach for most people. And yet one would like to listen to the news in one's own language and compare it with what has been said on the same topic in another language. For instance, how is the topic of AIDS presented in Saudi Arabia and in the USA? What is the opinion of The Jerusalem Post about Yasser Arafat, and how is he presented in Al-Quds? To give access to varied, and sometimes opposing, information, we propose to develop AMIS (Access to Multilingual Information and Opinions). AMIS will thus make it possible to see the other side of the story of an event. The understanding process is taken here to be the comprehension of the main ideas of a video; the best way to achieve this is to summarize the video so as to give access to the essential information.
AMIS will therefore focus on the most relevant information by summarizing it and, if necessary, translating it for the user. Another aspect of AMIS is to compare two summaries produced by the system from two languages on the same topic, whatever their medium (video, audio or text), and to present the differences between their contents in terms of information, sentiments, opinions, etc. Furthermore, coverage on the web and social media will be exploited to strengthen or weaken the retrieved opinions. AMIS could be embedded in a TV remote control or provided as software associated with any internet browser. In conclusion, AMIS will address the following research points: text, audio and video summarization; automatic speech recognition (ASR); machine translation; cross-lingual sentiment analysis; and achieving a successful synergy between these research topics.
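The summarization step at the heart of AMIS can be illustrated with a toy extractive baseline: score each sentence by the average frequency of its words and keep the highest-scoring ones. This is only an illustrative sketch, not the project's actual system (which works on multilingual ASR transcripts); the function name and scoring scheme are our own.

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Toy extractive summarizer: keep the n sentences whose words
    are most frequent in the document, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower()))
        / max(len(s.split()), 1),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    # Preserve the original sentence order in the summary.
    return ' '.join(s for s in sentences if s in top)
```

A real cross-lingual pipeline would run such a summarizer after speech recognition and before (or after) machine translation, so that only the essential content needs translating.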

  • Funder: CHIST-ERA Project Code: DYPOSIT
    Partners: University College Cork (Department of Computer Science), Lancaster University (Security Lancaster Research Centre), Katholieke Universiteit Leuven (DistriNet- iMINDS)

    The DYPOSIT project tackles the problem of large, shared CPS infrastructures under attack. In particular, the project responds to the critical need to formulate and adapt security policies dynamically, rapidly and on demand, in the face of unfolding attacks on a shared CPS fabric integrating multiple applications run by a variety of stakeholders. DYPOSIT tackles this fundamental research problem through a novel dynamic-policies approach rooted in a socio-technical understanding of the complexity and dynamics of shared CPS fabrics under attack. DYPOSIT’s approach is unique and transformative in taking an interdisciplinary view of reasoning about the security state of a CPS and formulating responses when a CPS comes under attack. This is in sharp contrast to other approaches, which remain largely focused on technical security measures or on solutions that cater for the resource-constrained nature of the devices employed in a CPS. Furthermore, DYPOSIT’s approach to dynamic policies offers a new perspective on the role of policies in large-scale CPS settings – transforming policies from simply a means to enforce pre-defined security properties into living, evolving objects that play a central role in reasoning about the security state of such a CPS and in responding to unfolding attacks. Managing the complexity of formulating and adapting policies dynamically in such a setting, while resolving conflicts, is a fundamental advance towards resilient shared CPS fabrics. DYPOSIT’s scientific advances are validated in a realistic testbed that provides application scenarios of CPS under attack across a spectrum: from highly managed CPS, such as those found in industrial control systems or future factories, through to dynamically aggregated CPS, as in smart cities, large manufacturing plants or intelligent transportation systems.

  • Funder: CHIST-ERA Project Code: SECODE
    Partners: Sabanci University / Faculty of Engineering and Natural Sciences, Paris 8 University, INRIA, Université Catholique de Louvain / Cryptogroup, Institut Mines-Télécom/Télécom SudParis

    In this project, we specify and design error-correcting codes suitable for efficient protection of sensitive information in the context of the Internet of Things (IoT) and connected objects. Such codes mitigate passive attacks, like memory disclosure, and active attacks, like stack smashing. The innovation of this project is to leverage these codes to protect against both cyber and physical attacks. The main advantage is 360° coverage of attacks on connected embedded systems, which are considered both as smart connected devices and as physical devices. The outcome of the project is, first, a method to generate and execute cyber-resilient software, and second, a means to protect data and its manipulation from physical threats such as side-channel attacks. These results are demonstrated using a smart-sensor application with hardened embedded firmware and a tamper-proof hardware platform.
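To illustrate the kind of error-correcting code involved, here is a textbook Hamming(7,4) encoder/decoder, which corrects any single flipped bit in a 7-bit codeword. This is a classic example for intuition only, not SECODE's actual construction, which is designed to resist both cyber and physical attacks.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Parity bits sit at positions 1, 2 and 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the bit the syndrome points at
    return [c[2], c[4], c[5], c[6]]
```

The same redundancy principle, with codes chosen for the threat model, lets firmware detect or undo bit-level tampering such as fault injection.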

  • Funder: CHIST-ERA Project Code: IGLU
    Partners: Inria Bordeaux Sud-Ouest / Flowers Team, University of Mons / Numediart Research Institute, University of Zaragoza, Université de Sherbrooke, Université de Lille 1, KTH Royal Institute of Technology

    Language is an ability that develops in young children through joint interaction with their caretakers and their physical environment. At this level, human language understanding can be described as interpreting and expressing semantic concepts (e.g. objects, actions and relations) through what can be perceived (or inferred) from the current context in the environment. Previous work in artificial intelligence has failed to address the acquisition of such perceptually grounded knowledge in virtual agents (avatars), mainly because of the lack of physical embodiment (the ability to interact physically) and of dialogue and communication skills (the ability to interact verbally). We believe that robotic agents are more appropriate for this task, and that interaction is such an important aspect of human language learning and understanding that pragmatic knowledge (identifying or conveying intention) must be present to complement semantic knowledge. Through a developmental approach, in which knowledge grows in complexity driven by multimodal experience and language interaction with a human, we propose an agent that will incorporate models of dialogue, human emotions and intentions as part of its decision-making process. This will enable anticipation and reaction based not only on its internal state (its own goals and intentions, its perception of the environment) but also on the perceived state and intention of the human interactant. This will be made possible by developing advanced machine learning methods (combining developmental, deep and reinforcement learning) to handle large-scale multimodal inputs, while leveraging the state-of-the-art technological components of a language-based dialogue system available within the consortium. Evaluations of the learned skills and knowledge will be performed using an integrated architecture in a culinary use case, and novel databases enabling research in grounded human language understanding will be released.
IGLU will gather an interdisciplinary consortium of committed and experienced researchers in machine learning, neuroscience and cognitive science, developmental robotics, speech and language technologies, and multimodal/multimedia signal processing. We expect key impacts on the development of more interactive and adaptable systems sharing our environment in everyday life.

  • Funder: CHIST-ERA Project Code: COPES
    Partners: INRIA Grenoble Rhône-Alpes, Imperial College London/Dept. of Electrical and Electronic Eng., KTH Royal Institute of Technology/School of Electrical Engineering, ETH Zurich/Information Technology and Electrical Engineering

    Smart meters can measure and record consumption data at high time resolution and communicate these data to the energy provider. This provides the opportunity to better monitor and control the power grid and to enable demand response at the residential level, which not only improves the reliability of grid operations but is also a key enabler for integrating variable renewable generation such as wind or solar. However, the communication of high-resolution consumption data also poses privacy risks, since such data allow the utility, or a third party, to derive detailed information about consumer behavior. Hence, the main research objective of COPES is to develop new technologies to protect consumer privacy without sacrificing the “smartness”, i.e., the advanced control and monitoring functionalities. The core idea is to overlay the original consumption pattern with additional physical consumption or generation, thereby hiding the privacy-sensitive consumption. The means to achieve this include storage, small-scale distributed generation, and/or elastic energy consumption. COPES thus proposes and develops a radically new approach that alters the physical energy flow, instead of relying purely on encryption of meter readings, which protects against third-party intruders but does not prevent the use of these data by the energy provider. To hide consumption information efficiently, intelligent decisions and strategies on when to charge or discharge the storage, and which energy source to tap into, need to be made in real time. The project will therefore develop algorithms, based on and extending differential privacy and information- and detection-theoretic first principles, that allow efficient use of the physical capabilities to alter the overall consumption measured by the smart meters.
Since these resources can also be used to minimize the electricity bill or to increase the integration of renewables, the trade-offs between these objectives and privacy will be studied and combined into a holistic, privacy-guaranteeing home energy management system. Implementations on several small test systems will serve as a proof of concept of the proposed methods.
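The physical load-hiding idea can be sketched as a naive battery policy: at each time slot, charge or discharge the storage so that the grid-visible load approaches a constant target, within the battery's state-of-charge limits. This is a simple illustration of the masking principle, not COPES's privacy-guaranteeing algorithms; all names and units below are hypothetical.

```python
def mask_load(demand, target, capacity, soc=0.0):
    """Best-effort flattening of a household load profile with a battery.

    At each step the battery charges (delta > 0) or discharges (delta < 0)
    so that the grid sees `target` instead of the true `demand`, subject
    to its state of charge (0 <= soc <= capacity). Units are arbitrary,
    e.g. kWh per time slot.
    """
    grid = []
    for d in demand:
        desired = target - d  # surplus to absorb, or deficit to supply
        # Clip to what the battery can actually absorb or deliver.
        delta = max(-soc, min(desired, capacity - soc))
        soc += delta
        grid.append(d + delta)  # what the smart meter records
    return grid, soc
```

Note that when the battery is empty or full, the meter reading falls back to the true demand; a real scheme must reason about exactly such leakage, which is where the differential-privacy and detection-theoretic analysis comes in.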

  • Funder: CHIST-ERA Project Code: ATLANTIS
    Partners: IBE-UPF, VUB AI Lab, OFAI, LATTICE-CNRS, Sony (France)

    ATLANTIS attempts to understand and model the very first stages of grounded language learning, as seen in children up to the age of three: how pointing and other symbolic gestures emerge from the ontogenetic ritualization of instrumental actions, how words are learned very fast in contextualized language games, and how the first grammatical constructions emerge from concrete sentences. This requires a global, computational theory of symbolic development that informs us about what forces motivate language development, what strategies are exploited in learner and caregiver interactions to arrive at more complex compositional meanings, how new grammatical structures and novel interaction patterns are formed, and how the multitude of developmental pathways observed in humans leads to a full system of multi-modal communication skills. This ambitious aim is feasible because there have recently been very significant advances in humanoid robotics and in the development of sensory-motor competence, and the time is ripe to push all this to a higher level of symbolic intelligence, going beyond simple sensory-motor loops or pattern-based intelligence towards grounded semantics and incremental, long-term, autonomous language learning.

  • Funder: CHIST-ERA Project Code: MUSTER
    Partners: EHU, KUL, ETH Zurich, University Pierre et Marie Curie (UPMC) Paris

    The MUSTER project is a fundamental pilot research project that introduces a new multi-modal framework for the machine-readable representation of meaning. The focus of MUSTER lies on exploiting visual and perceptual input, in the form of images and videos coupled with the textual modality, to build structured multi-modal semantic representations for the recognition of objects and actions and of their spatial and temporal relations. The project will investigate whether such novel multi-modal representations improve the performance of automated human language understanding (HLU). MUSTER starts from the current state-of-the-art approach to human language representation learning, known as text embeddings, but introduces the visual modality to provide the contextual world knowledge which text-only models lack but humans possess when understanding language. MUSTER will propose a new pilot framework for joint representation learning from text and vision data, tailored for spatial and temporal language processing. The constructed framework will be evaluated on a series of HLU tasks (e.g., semantic textual similarity and disambiguation, spatial role labeling, zero-shot learning, temporal action ordering) which closely mimic the processes of human language acquisition and understanding. MUSTER will rely on recent advances in multiple research disciplines spanning natural language processing, computer vision, machine learning, representation learning, and human language technologies, working together to build structured, machine-readable multi-modal representations of spatial and temporal language phenomena.
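A minimal sketch of the joint text-vision representation idea: project each modality into a shared space and compare the results with cosine similarity. In MUSTER the projections would be learned from paired data; here they are random placeholders, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: 300-d text embeddings, 512-d image features.
# In a real system these matrices are learned so that matching text-image
# pairs land close together in the shared space; here they are random.
W_text = rng.standard_normal((128, 300)) * 0.01
W_img = rng.standard_normal((128, 512)) * 0.01

def joint_embed(text_vec, img_vec):
    """Map both modalities into a shared 128-d space, L2-normalised."""
    t = W_text @ text_vec
    v = W_img @ img_vec
    return t / np.linalg.norm(t), v / np.linalg.norm(v)

def similarity(text_vec, img_vec):
    """Cosine similarity between the two projections."""
    t, v = joint_embed(text_vec, img_vec)
    return float(t @ v)
```

With learned projections, such a score can rank images for a sentence (or sentences for an image), which is the basic operation behind zero-shot and cross-modal retrieval tasks.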
