
INESC-ID

Instituto de Engenharia de Sistemas e Computadores Investigação e Desenvolvimento
3 Projects
  • Funder: French National Research Agency (ANR)
    Project Code: ANR-21-CHR4-0005
    Funder Contribution: 296,842 EUR

    Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on increased transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability). Explainability, by contrast, considers how AI can be understood by human users. The understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods. CIMPLE will draw on models of human creativity, both in manipulating and in understanding information, to design more understandable, reconfigurable and personalisable explanations. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. Knowledge Graphs offer significant potential to better structure the core of AI models and to use semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
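
    As an illustration of the kind of semantic layer described above, here is a minimal, hypothetical sketch (not CIMPLE's actual system): a tiny rdflib knowledge graph records why a claim was flagged, and a templated, human-readable explanation is generated from the graph facts rather than a bare true/false label. The namespace and the predicates (contradictedBy, publishedBy, sourceReliability) are illustrative assumptions.

```python
# Hypothetical sketch of a knowledge-graph-backed explanation.
# The namespace and predicates below are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/cimple/")  # assumed namespace

g = Graph()
g.add((EX.claim42, RDF.type, EX.Claim))
g.add((EX.claim42, EX.text, Literal("Drinking bleach cures flu.")))
g.add((EX.claim42, EX.contradictedBy, EX.whoFactSheet))
g.add((EX.whoFactSheet, EX.publishedBy, Literal("World Health Organization")))
g.add((EX.whoFactSheet, EX.sourceReliability, Literal(0.97)))

def explain(claim):
    """Turn graph facts about a flagged claim into a templated
    explanation instead of an opaque true/false verdict."""
    lines = []
    for _, _, evidence in g.triples((claim, EX.contradictedBy, None)):
        publisher = g.value(evidence, EX.publishedBy)
        reliability = g.value(evidence, EX.sourceReliability)
        lines.append(
            f"This claim is contradicted by material published by "
            f"{publisher} (source reliability {reliability})."
        )
    return " ".join(lines) or "No evidence about this claim is in the graph."

print(explain(EX.claim42))
```

    Because every statement in the explanation corresponds to an explicit triple, the explanation can be reconfigured or personalised by querying different parts of the graph, which is the property the abstract attributes to the semantic layer.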

  • Funder: French National Research Agency (ANR)
    Project Code: ANR-23-MRS2-0002
    Funder Contribution: 35,000 EUR

    The Espace-Dev laboratory of the University of Guiana wishes to set up a Twinning project with three objectives: to increase skills in the intelligent management of the solar resource using artificial intelligence, to build a European scientific network, and to improve support for the management of European projects. French Guiana, by virtue of its geographical location, has strong solar potential. Several aspects are essential for the proper management of this potential: estimation of sunshine, prediction of the availability of the resource, integration of the energy produced into the energy distribution system, and so on. The research team has recognised the value of exploiting this resource, particularly in order to advance the energy transition and the energy independence of the territory. Research is therefore conducted in the laboratory not only on the estimation of irradiance from satellite images, which makes it possible to obtain the irradiance at any location covered by the satellite, but also on the prediction of irradiance and of the energy production of photovoltaic power plants using artificial-intelligence methods. Setting up a Twinning project will enable the university to network with leading European institutions. Three European institutions have been targeted for their fit with the laboratory's project and for their complementarity: the research centre for energy, environment and technology, a research organisation in Spain, will provide expertise in the estimation of irradiance from satellite images and the processing of time series; the University of Freiburg will be involved in the field of artificial intelligence; and the Institute of Systems Engineering and Computers: Research and Development (INESC-ID) in Lisbon will collaborate on the prediction of the production of photovoltaic plants. Networking with these institutions will help the research team improve its skills in the selected themes and increase its competitiveness at the European level. In addition, the project will enable the development of research-management skills within the university's newly created European project support unit. The Twinning project will have scientific impacts at the local and regional level. Recognition of the laboratory by companies working in solar energy will enable the construction of CIFRE theses and expert missions. Moreover, the territory will gain visibility across the Guiana Shield and become a reference in the themes addressed by the project.
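
    As a hedged illustration of the time-series side of this work (not the laboratory's actual pipeline), the sketch below predicts next-hour irradiance from lagged measurements with a standard scikit-learn regressor. The synthetic data, column names, and lag choices are assumptions made for the example.

```python
# Hypothetical sketch: forecasting hourly irradiance from lagged values.
# Synthetic data stands in for real measurements; all names are assumed.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=24 * 90, freq="h")
# Synthetic stand-in for measured global horizontal irradiance (W/m^2):
# a daytime sine bump plus noise, clipped at zero for night hours.
ghi = np.clip(800 * np.sin(np.pi * (hours.hour - 6) / 12), 0, None)
ghi = np.clip(ghi + rng.normal(0, 40, len(hours)), 0, None)
df = pd.DataFrame({"ghi": ghi}, index=hours)

# Lagged features: irradiance 1, 2, 3 and 24 hours earlier.
for lag in (1, 2, 3, 24):
    df[f"ghi_lag{lag}"] = df["ghi"].shift(lag)
df = df.dropna()

X, y = df.drop(columns="ghi"), df["ghi"]
split = int(len(df) * 0.8)  # chronological split: no shuffling for time series
model = GradientBoostingRegressor().fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print(f"MAE on held-out hours: {mean_absolute_error(y.iloc[split:], pred):.1f} W/m^2")
```

    A real deployment would add exogenous inputs such as satellite-derived cloud indices, which is precisely where the partner institutions' expertise in satellite image processing would come in.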

  • Funder: CHIST-ERA
    Project Code: CHIST-ERA-19-XAI-003

    Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on increased transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions. The understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods, to design more understandable, reconfigurable and personalisable explanations. Knowledge Graphs offer significant potential to better structure the core of AI models and to use semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
