
The main objective of the lorAI project is to develop the Kempelen Institute of Intelligent Technologies (KInIT) into a leading R&I institution in low resource artificial intelligence (LRAI) in Slovakia and Europe (with the help of advanced partners – the top European AI research institutes ADAPT, DFKI and CERTH). The core of lorAI’s impact stands on three pillars: 1) nurturing talent and personal capacities, 2) pursuing research excellence in LRAI, and 3) innovation and technology transfer with elaborated outreach programmes. LRAI represents a global research challenge that is also highly relevant to the EU region: developing efficient AI that operates with limited resources increases the availability of AI (for society and industry) and improves environmental sustainability. Besides tackling the complexity and computational costs of AI, the Centre of Excellence (CoE) will explore AI applications in NLP and the Green environment (i.e., domains of energy usage optimization and anomaly detection). The activities of the CoE are grouped into ten focused Excellence programmes, which, in synergy, contribute to the expected impacts of the call and destination by increasing research capacities and competence, funding acquisition capacities, excellent research with societal and economic impact, engagement of industry and the wider R&I ecosystem, and internationalization. Aligned with Slovak and EU strategies (e.g., the Slovak Smart Specialization Strategy, the Coordinated Plan on AI, the Green Deal), the lorAI project builds on the unique opportunity presented by ongoing programmes (e.g., the EU’s Recovery and Resilience Facility) and the growing importance of AI and environmental challenges to multiply the effects of these measures, improving Slovak R&I culture and stimulating reforms. The CoE governance model guarantees autonomy, while the diversified budget reduces the risk of unexpected disruptions and, together with cross-cutting themes (international, interdisciplinary, cross-sectoral, transfer-oriented), ensures long-term sustainability.
The aim of the DisAI project is to enhance the scientific excellence of KInIT and the consortium partners in trustworthy AI, multimodal natural language processing, and multilingual language technologies to combat disinformation. The spread of disinformation threatens European democratic values, and tackling it is a European-level priority. As the amount of disinformation grows, AI, and language technologies in particular, play a crucial role in detecting it. To meet the European goals, it is increasingly important to develop tools and methods for combating disinformation in low resource languages as well. KInIT, the widening partner from Slovakia, has taken a leadership role in shaping the R&I landscape of Slovakia by boosting cross-sectoral, interdisciplinary, and international collaboration. The ambition of KInIT is to strengthen AI for combating disinformation, an area that, especially in the Slovak digital space, contributes to the development of AI technologies and tools for low resource languages and to growing awareness of online disinformation. To achieve scientific excellence at KInIT, the activities of the project aim to increase the research capacity and excellence of scientists at different career levels in multilingual and multimodal language technologies and trustworthy AI. Together with the leading partners (the German Research Center for Artificial Intelligence, the Centre for Research and Technology Hellas, and Copenhagen University), a joint research project on claim matching as well as networking and mobility activities will be organised. Importantly, the capacity building programme will include transfer of know-how on innovation and creativity to facilitate industry-academia collaboration and to boost the skills of research managers and administrative staff.
Europe is implementing an AI strategy that seeks to create a research environment characterised by scientific excellence and consistent with the fundamental ethical values of its citizens. Part of this strategy foresees the consolidation of ongoing research activities through the creation and maintenance of an AI on-demand Platform that will act as a common resource for the research community, facilitating experimentation, knowledge sharing, and the development of state-of-the-art solutions and technologies. AI4Europe builds on the work of AI4EU and multiple supporting projects (ICT-48/ICT-49), creating an open, impartial, and collaborative Platform, built by the European research community according to its needs. Equipped with the necessary hardware, the Platform will offer interoperable services, data, and tools from several related communities and provide solutions that facilitate research productivity, reproducibility, and collaboration. AI4Europe will establish and support mechanisms to foster exchange between academia and industry and ensure that the Platform reaches out to and engages with the next generation of researchers and those in widening countries. The project will develop and implement a business model that ensures the technical and financial structures needed to sustain the Platform beyond the lifetime of the project. AI4Europe will support the community in creating a tool that will help position Europe as the place where the very best AI research is conducted.
Gaze is an important communication channel that can be captured remotely and works even without language. It thus holds great potential for universal inclusive technologies. Eyes for Information, Communication, and Understanding (Eyes4ICU) explores novel forms of gaze interaction that rely on current psychological theories and findings, computational modeling, and expertise in highly promising application domains. Its approach of developing inclusive technology by tracing gaze interaction back to its cognitive and affective foundations (a psychological challenge) results in better models to predict user behavior (a computational challenge). By integrating these insights in application fields, gaze-based interaction can be employed in the wild. Accordingly, the proposed research is divided into three work packages, namely Understanding Users (WP1), Gaze Communication (WP2), and In The Wild (WP3). All three work packages are pursued from three different perspectives: a psychological empirical perspective, a computational modeling perspective, and an application perspective, ensuring unified and aligned progress and a coherent concept. Along these lines, training is also divided into three packages: Empirical Research Methods (WP4), Computational Modeling (WP5), and Transferable Skills (WP6). Consequently, the consortium is composed of groups working in psychological, computing, and application fields. All Beneficiaries are experts in using eye tracking in their respective areas, ensuring best practices and optimal facilities for research and training. A variety of Associated Partners from the whole chain of eye tracking services ensures applicability, practical relevance, and career opportunities by contributing to supervision, training, and research. This will advance eye-tracking-based communication as a field and result in European standards for gaze-based communication in a variety of domains, disseminated through research and application.
The media sector is exposed to and undergoing continuous innovation that occurs at a pace never seen before and has a non-negligible impact on citizens, democracy, and society as a whole. Generative Artificial Intelligence has become a significant booster: it already plays, and will continue to play, a critical role (in both a positive and a negative sense) in creating and spreading information. Especially in next-generation social media, which refers to the anticipated evolution towards more AI-based, decentralised, and immersive virtual environments (such as fediverses and metaverses), generative AI can become the most prominent enabler of disinformation growth, accompanied by a lack of trusted information. Media professionals, however, are not currently well equipped with the supporting tools or the knowledge to operate in such already emerging environments. As a result, there is a tremendous need for innovative (AI-based) solutions that ensure media freedom and pluralism, deliver credible and truthful information, and combat highly disinformative content. The main goal of the AI-CODE project is to evolve state-of-the-art research results (tools, technologies, and know-how) from past and ongoing EU-funded research projects focused on disinformation into a novel ecosystem of services that will proactively support media professionals in trusted information production through AI. First, the project aims to identify, analyse, and understand future developments of next-generation social media in the context of the rapid development of generative Artificial Intelligence and how such a combination can impact the (dis)information space. Second, the project aims to provide media professionals with novel AI-based services that coach them on how to work in emerging digital environments and how to utilise generative AI effectively and credibly, to detect new forms of content manipulation, and to assess the reputation and credibility of sources and their content.