Powered by OpenAIRE graph

WLT

WEBLYZARD TECHNOLOGY GMBH
Country: Austria
10 Projects, page 1 of 2
  • Funder: CHIST-ERA · Project Code: CHIST-ERA-19-XAI-003

    Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on the increased transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions. The understandability of such explanations, and their suitability to particular users and application domains, has so far received very little attention. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods, to design more understandable, reconfigurable and personalisable explanations. Knowledge graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing XAI technical explainability methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.

    Views: 72 · Downloads: 65 (Powered by Usage counts)
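The CIMPLE abstract proposes grounding explanations of AI decisions in knowledge-graph triples rather than raw model statistics. A minimal sketch of that idea, assuming a toy triple store and a simple prose template (the entity names, claim structure, and template below are illustrative assumptions, not part of the actual CIMPLE system):

```python
# Hypothetical sketch: rendering a misinformation-detection decision as a
# human-readable explanation backed by knowledge-graph triples.

def explain(claim, triples):
    """Collect triples about the claim's subject and render them as prose."""
    relevant = [t for t in triples if t[0] == claim["subject"]]
    if not relevant:
        return f"No background knowledge found for {claim['subject']}."
    facts = "; ".join(f"{s} {p} {o}" for s, p, o in relevant)
    return (f"The claim '{claim['text']}' was rated {claim['label']} "
            f"because the knowledge graph records: {facts}.")

# Illustrative toy data, not real project content.
triples = [
    ("VaccineX", "approved_by", "EMA"),
    ("VaccineX", "trial_phase", "3"),
]
claim = {"subject": "VaccineX", "text": "VaccineX was never tested",
         "label": "false"}
print(explain(claim, triples))
```

The point of the sketch is the "semantic layer" the abstract describes: the explanation cites structured background facts instead of opaque model scores.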
  • Funder: European Commission · Project Code: 101070305
    Overall Budget: 3,991,270 EUR · Funder Contribution: 3,991,270 EUR

    Explainable Artificial Intelligence (AI) is key to achieving a human-centred and ethical development of digital and industrial solutions. ENEXA builds upon novel and promising results in knowledge representation and machine learning to develop scalable, transparent and explainable machine learning algorithms for knowledge graphs. The project focuses on knowledge graphs because of their critical role as an enabler of new solutions across domains and industries in Europe. Some of the existing machine learning approaches for knowledge graphs are known to already provide guarantees with respect to their completeness and correctness. However, they are still impossible or impractical to deploy on real-world data due to the scale, incompleteness and inconsistency of knowledge graphs in the wild. We devise approaches that maintain formal guarantees pertaining to completeness and correctness while being able to exploit different representations of knowledge graphs in a concurrent fashion. With our new methods, we plan to achieve significant advances in the efficiency and scalability of machine learning, especially on knowledge graphs. A supplementary innovation of ENEXA lies in its approach to explainability. Here, we focus on devising human-centred explainability techniques based on the concept of co-construction, where human and machine enter a conversation to jointly produce human-understandable explanations. Three use cases on business software services, geospatial intelligence and data-driven brand communication have been chosen to apply and validate this new approach. Given their expected growth rates, these sectors will play a major role in future European data value chains.

    Views: 14 · Downloads: 3
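The ENEXA abstract refers to machine learning approaches for knowledge graphs that carry guarantees on completeness and correctness. One common way to make such guarantees concrete is to evaluate a learned rule against the graph. A minimal sketch, assuming a toy property-map representation of a graph and an illustrative rule (none of the entities or labels below come from the project itself):

```python
# Hypothetical sketch: measuring the correctness (precision) and
# completeness (recall) of a learned rule over a toy knowledge graph.

def rule_quality(kg, rule, target):
    """Compare entities selected by `rule` with entities carrying `target`."""
    predicted = {e for e in kg if rule(kg[e])}
    actual = {e for e in kg if target in kg[e].get("labels", ())}
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Illustrative toy graph: entities with properties and gold labels.
kg = {
    "e1": {"type": "Person", "hasChild": True, "labels": ("Parent",)},
    "e2": {"type": "Person", "hasChild": False, "labels": ()},
    "e3": {"type": "Person", "hasChild": True, "labels": ("Parent",)},
}
rule = lambda props: props.get("hasChild", False)
precision, recall = rule_quality(kg, rule, "Parent")
print(precision, recall)  # 1.0 1.0
```

A rule with precision 1.0 is "correct" (it never selects a non-Parent) and with recall 1.0 is "complete" (it misses no Parent) on this graph, which is the flavour of guarantee the abstract mentions.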
  • Funder: European Commission · Project Code: 619706
    Views: 7 · Downloads: 11
  • Funder: European Commission · Project Code: 780656
    Overall Budget: 3,489,650 EUR · Funder Contribution: 3,489,650 EUR

    Re-purposing and re-using digital content is of vital importance to broadcasters and other stakeholders in European media value chains. High initial production or acquisition costs need to be recouped, but the abundance of online channels creates a thin viewer market for original content, especially on niche topics. Live and on-demand viewing is now spread across Smart TVs, Web and mobile applications, social media and other emerging platforms (referred to here as "vectors"). This introduces an important challenge: how should broadcasters decide when, in what form and on which vector(s) to deliver which content? We propose the Trans-Vector Platform (TVP) to address this challenge and help media companies gain a competitive advantage through guided content re-purposing and re-publication, on the fly and across vectors. The TVP requires novel methods to extract metadata, predict patterns in the topic-vector-audience matrix, and apply these patterns to enhance and re-purpose content across vectors and according to predicted audience interests. Thus, ReTV will advance the state of the art in video analysis, video augmentation and annotation, content and audience metrics, prediction and recommendation models, and visual analytics. The results of this extensive research will be tested and validated together with a regional public broadcaster (RBB), a national TV archive (NISV) and an OTT TV distributor operating in multiple EU markets (Zattoo). ReTV will offer them a better match between content and viewers across vectors, time and five EU languages (English, German, French, Spanish, Dutch). Impact will be measured via cross-vector deployment and viewer engagement. Automated re-purposing and more accurate targeting in terms of relevance and appropriate representation, tailored to upcoming events and specific vector audiences, will drive growth, lower costs, and increase the competitiveness of broadcasters and other professional stakeholders in European media value chains.

    Views: 7,252 · Downloads: 1,998
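The ReTV abstract centres on a "topic-vector-audience matrix": observed audience response per topic and delivery vector, used to decide where to publish next. A minimal sketch of that decision step, assuming a dict-of-dicts matrix of past engagement scores (topic names, vector names, and scores below are illustrative assumptions, not ReTV's actual model):

```python
# Hypothetical sketch: picking the delivery vector on which an item about
# a given topic is expected to perform best, from past engagement data.

engagement = {  # average past engagement per (topic, vector) pair
    "regional-news": {"smart-tv": 0.42, "mobile-app": 0.31, "social": 0.18},
    "live-sport":    {"smart-tv": 0.55, "mobile-app": 0.22, "social": 0.47},
}

def best_vector(topic):
    """Return the highest-scoring vector for `topic`, or None if unknown."""
    scores = engagement.get(topic, {})
    return max(scores, key=scores.get) if scores else None

print(best_vector("regional-news"))  # smart-tv
```

The real platform predicts these scores (and their evolution over time) rather than looking them up, but the lookup makes the matrix structure and the cross-vector decision concrete.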
  • Funder: European Commission · Project Code: 687786
    Overall Budget: 3,765,710 EUR · Funder Contribution: 3,115,740 EUR

    In video veritas, to adapt the old Latin saying: in video, there is truth! The digital media revolution and the convergence of social media with broadband wired and wireless connectivity are bringing breaking news to online video platforms; and news organisations delivering information by Web streams and TV broadcast often rely on user-generated recordings of breaking and developing news events shared via social media to illustrate the story. However, in video there is also deception. Access to increasingly sophisticated editing and content-management tools, and the ease with which fake information spreads in electronic networks, require reputable news outlets to carefully verify third-party content before publishing it, reducing their ability to break news quickly while increasing costs in times of tight budgets. InVID will build a platform providing services to detect, authenticate and check the reliability and accuracy of newsworthy video files and video content spread via social media. This platform will enable novel newsroom applications for broadcasters, news agencies, web pure-players, newspapers and publishers to integrate social media content into their news output without struggling to know if they can trust the material or how they can reach the user to ask permission for re-use. It will ensure that verified and rights-cleared video content is readily available for integration into breaking and developing news reports. Validated by real customer pilots, InVID will help protect the news industry from distributing fakes, falsehoods, lost reputation and ... lawsuits. The InVID platform and applications will be validated and qualified through several development and validation cycles. They will be pilot-tested by three leading institutions in the European news industry ecosystem: AFP (the French News Agency), DW (Deutsche Welle), and APA (the Austria Press Agency), and will create new exploitation possibilities for all consortium members.

    Views: 14,447 · Downloads: 14,183
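The InVID abstract describes detecting and authenticating user-generated video. One common building block of such verification pipelines is near-duplicate detection: a re-uploaded, recompressed copy of known footage yields a frame signature only a few bits away from the original. A minimal sketch of that comparison step, assuming precomputed binary signatures and an illustrative threshold (this is a generic technique, not InVID's actual algorithm):

```python
# Hypothetical sketch: flagging a candidate video as a likely re-upload by
# comparing compact binary frame signatures via Hamming distance.

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def likely_duplicate(sig_a, sig_b, max_bits=4):
    """Signatures within `max_bits` differing bits count as near-duplicates."""
    return hamming(sig_a, sig_b) <= max_bits

# Illustrative 16-bit signatures; real systems use 64+ bits per frame.
original   = "1011000111010010"
reuploaded = "1011000111010110"  # recompressed copy: one bit flipped
unrelated  = "0100111000101101"

print(likely_duplicate(original, reuploaded))  # True
print(likely_duplicate(original, unrelated))   # False
```

A match against an older, already-published signature is evidence that "breaking" footage is actually recycled, which is one of the cheapest and most effective verification signals a newsroom tool can surface.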
