3,389,575 Projects

Filter: OA Publications Mandate: No

  • Funder: National Science Foundation Project Code: 9626375
  • Funder: National Science Foundation Project Code: 1801166
  • Funder: Swiss National Science Foundation Project Code: 143355
    Funder Contribution: 445,000
  • Funder: UK Research and Innovation Project Code: 10032716
    Funder Contribution: 299,749 GBP

    Vivacity is already a leading provider of sensor and signal control systems for transport infrastructure. This project will expand the art of the possible in simulation, unlocking deeper integration between simulation and sensor data, and between simulation and control systems.

  • Funder: UK Research and Innovation Project Code: EP/X020207/1
    Funder Contribution: 77,704 GBP

    Machine learning (ML), deep learning (DL) and artificial intelligence (AI) have contributed notably to the development of recommender systems (playlist generators for video and music content, content recommenders for social media and web service platforms, etc.), several types of recognition (e.g. face, image, speech) and self-driving cars, among many other applications. Using deep neural networks (DNNs), researchers have achieved higher accuracy than human participants in image recognition, and have predicted the biomolecular target of a drug and which environmental chemicals are of serious concern to human health, winning the Merck Molecular Activity Challenge and the 2014 Tox21 data challenge, respectively.

    Despite these successes across several fields, there have been recent cases where such approaches failed drastically: for example, Uber's self-driving car that killed a pedestrian, or IBM's Watson for Oncology, which gave potentially fatal cancer treatment recommendations. Understanding what went wrong is not an easy task, as explainability remains a core challenge in AI. The lack of explainability becomes especially critical whenever AI is used, e.g. by governments or the public and private sectors, to make decisions affecting human behaviour in general, since wrong or misleading decisions, or the inability to understand their mechanisms, may have dramatic consequences in many areas (medical treatment, retail and product supply, etc.). To make the results produced by powerful AI tools more interpretable, reliable and accountable, these tools should explain how and why a particular decision was made, e.g. which attributes were important in the decision and with what confidence. Several efforts aim to improve the explainability of AI, most of them focusing on enhancing the explainability and transparency of DNNs; see, e.g., the Royal Society policy briefing "Explainable AI: the basics" (https://royalsociety.org/ai-interpretability).

    This project contributes to this effort from a different perspective. Our goal is to perform AI-informed decision making driven by Decision Field Theory (DFT), proposing a new class of what we call AI-informed, DFT-driven decision-making models. Such models integrate human behaviour with AI by combining stochastic processes from DFT with ML tools, and have the distinctive feature of interpretable parameters. On the one hand, we will generalise the class of DFT models to reproduce characteristics and behaviours of interest, and apply ML and inferential approaches (mainly likelihood-free methods) to estimate the underlying interpretable DFT model parameters. On the other hand, we will use black-box DNN models as proxy (i.e. approximating) models of the interpretable DFT models (with a role reversed with respect to Table 1 of the policy briefing above), and use them to learn the processes of interest and make informed predictions (i.e. decisions) driven by DFT. Hence, by using AI to learn these processes, estimate their parameters and make predictions, we will help explain to the end user why and how a particular decision was made, a crucial feature of interpretable AI-informed decision-making models. (A toy illustration of the simulation-plus-likelihood-free-estimation idea appears after this results list.)

  • Funder: UK Research and Innovation Project Code: 1972034

    This project will build on the work of the C14-BIC project and associated research on biofilms and alkaliphilic microorganisms, extending it to biofilm formation on a range of other materials, such as the concretes and cements relevant to the nuclear fuel cycle and radioactive waste disposal. These materials are important because they are common construction materials and are used extensively to immobilise radioactive wastes. More specifically, the project will focus on the ability of biofilms to modify surface chemistries, retard the release of contaminants, and survive in microsites and cracks within these materials.

  • Funder: Fundação para a Ciência e a Tecnologia, I.P. Project Code: PTDC/SAU-BEB/099954/2008
    Funder Contribution: 199,367 EUR
  • Funder: National Institutes of Health Project Code: 5R01DA021394-05
    Funder Contribution: 282,620 USD
  • Funder: National Institutes of Health Project Code: 5R01CA080288-05
    Funder Contribution: 398,999 USD
  • Funder: National Institutes of Health Project Code: 5R01EY004900-27
    Funder Contribution: 248,091 USD
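The EP/X020207/1 abstract above describes two concrete ingredients: stochastic DFT-style models with interpretable parameters, and likelihood-free estimation of those parameters. The Python sketch below is a toy illustration of that general idea under stated assumptions, not the project's actual models or code: a simple drift-accumulator stands in for a DFT preference process, and rejection ABC (a basic likelihood-free method) recovers its drift parameter from simulated choices. All function names, prior ranges and tolerances here are illustrative.

    # Toy illustration only: a minimal drift-accumulator stand-in for a DFT
    # preference process, with rejection ABC to recover its drift parameter.
    # Names and values are assumptions, not the project's actual models.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_choices(drift, n_trials=200, noise=1.0, threshold=2.0, n_steps=100):
        # Preference accumulates as P(t+1) = P(t) + drift + noise * eps_t;
        # option A is chosen at the first crossing of +threshold, option B at -threshold.
        steps = drift + noise * rng.normal(size=(n_trials, n_steps))
        paths = np.cumsum(steps, axis=1)
        hit_a = paths >= threshold
        hit_b = paths <= -threshold
        # First crossing step for each bound; n_steps encodes "never crossed".
        t_a = np.where(hit_a.any(axis=1), hit_a.argmax(axis=1), n_steps)
        t_b = np.where(hit_b.any(axis=1), hit_b.argmax(axis=1), n_steps)
        choice = (t_a < t_b).astype(int)          # 1 = option A, 0 = option B
        timeout = (t_a == n_steps) & (t_b == n_steps)
        choice[timeout] = (paths[timeout, -1] > 0).astype(int)  # sign fallback
        return choice

    # "Observed" choices generated from a true drift we pretend not to know.
    observed = simulate_choices(drift=0.15, n_trials=500)
    obs_rate = observed.mean()   # summary statistic: proportion choosing A

    # Rejection ABC: sample candidate drifts from a uniform prior and keep those
    # whose simulated choice rate is close to the observed one; the accepted
    # sample approximates the posterior over the interpretable drift parameter.
    candidates = rng.uniform(-0.5, 0.5, size=1000)
    accepted = [d for d in candidates
                if abs(simulate_choices(d).mean() - obs_rate) < 0.03]
    print(f"true drift = 0.15, ABC posterior mean = {np.mean(accepted):.3f} "
          f"({len(accepted)} candidates accepted)")

In the project's framing, a black-box DNN could then be trained as a fast surrogate of such a simulator to amortise estimation and prediction; that surrogate step is not shown in this sketch.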