
Healthcare is the fastest-growing area of EU27 expenditure. Personalised medicine, comprising tailored approaches for prevention, diagnosis, monitoring and treatment, is essential to reduce the burden of disease and improve quality of life. Integration of multiple data types (multimodal data) into artificial intelligence models is required for the development of accurate and personalised interventions. This is particularly true for the inclusion of genomic data, which is information-rich, individual-specific, and more routinely available as the cost of sequencing continues to fall. Multimodal data integration is complex due to privacy & governance requirements, the presence of multiple standards, distinct data formats, and underlying data complexity and volume. NextGen tools will remove barriers to data integration across several cardiovascular use cases. NextGen deliverables will include tooling for multimodal data integration and research portability; extension of secure federated analytics to genomic computation; more effective federated learning over distributed infrastructures; more effective and accessible tools for genomic data analysis; improved clinical efficiency of variant prioritisation; scalable genomic data curation; and improved data discoverability and data management. A comprehensive gap analysis of the existing landscape, factoring in ongoing initiatives, will ensure NextGen deliverables are forward-looking and complementary. NextGen's embedded governance framework and robust regulatory processes will ensure secure multi-jurisdictional, multiomic, multimodal data access aligned with initiatives including “1+ Million Genomes” and the European Health Data Space. Several real-world pilots will demonstrate the effectiveness of NextGen tools and will be integrated into the NextGen Pathfinder network of five collaborating clinical sites as a self-contained data ecosystem and comprehensive proof of concept.
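To make the federated-analytics deliverable concrete, the sketch below shows the general pattern of computing a cross-site genomic statistic without moving individual-level data: each site releases only aggregate counts, and a coordinator combines them. All names (`Site`, `local_allele_counts`, `aggregate_frequency`) are hypothetical illustrations, not NextGen's actual API; a real deployment would add secure aggregation, authentication and governance checks.

```python
# Minimal sketch of federated analytics over genomic data, assuming each
# clinical site keeps genotypes local and only shares summary counts.

from dataclasses import dataclass


@dataclass
class Site:
    """One clinical site holding genotypes (0/1/2 alt-allele counts) it never shares."""
    name: str
    genotypes: list[int]  # per-individual count of the alternate allele at one variant

    def local_allele_counts(self) -> tuple[int, int]:
        # Only these two summary counts leave the site, not individual genotypes.
        alt = sum(self.genotypes)
        total = 2 * len(self.genotypes)  # diploid: two alleles per individual
        return alt, total


def aggregate_frequency(sites: list[Site]) -> float:
    """Coordinator combines per-site summaries into a global allele frequency."""
    alt = sum(s.local_allele_counts()[0] for s in sites)
    total = sum(s.local_allele_counts()[1] for s in sites)
    return alt / total


if __name__ == "__main__":
    cohort = [
        Site("hospital_a", [0, 1, 2, 0, 1]),
        Site("hospital_b", [1, 1, 0, 2]),
    ]
    print(f"Federated allele frequency: {aggregate_frequency(cohort):.3f}")
```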
Edge computing offers many technical advantages, e.g., reduced latency, secure decentralized processing and storage, scalability at lower complexity, versatility to adapt to changes in resources and applications, and increased reliability. Edge computing can dramatically boost services and applications by supporting artificial intelligence (AI) natively, instead of relying on the cloud. Edge computing supporting AI is the only technology that will enable many long-awaited game changers: Industry 4.0 and smart manufacturing, 5G, IoT, self-driving vehicles, remote robotics for healthcare, and machine vision, among others. The BRAINE project’s overall aim is to boost the development of the Edge framework, focusing on energy-efficient hardware and AI-empowered software capable of processing Big Data at the Edge while supporting security, data privacy, and sovereignty. The approach is to build a seamless Edge MicroDataCenter (EMDC) interlinked with AI-enabled network interface cards. BRAINE employs novel methods for edge resource management and network-edge workload distribution: predicting resource availability and workload demand, identifying trends, and taking proactive action. The impact of BRAINE encompasses advances in the European video distribution ecosystem, improved data processing at the network edge, and proven integrated AI for applications. These will lead to unprecedented gains in performance and energy efficiency: 2x performance/Watt and 50% energy savings; 71% latency reduction for an acceleration-centric EMDC; 80% space and maintenance reduction; 99.999% fault tolerance with level 5 autonomy (autonomous driving, robotics, mission-critical systems); and significantly faster infrastructure installation and deployment. BRAINE boosts the EU’s position in intelligent edge computing and enables growth across many sectors, e.g., manufacturing, smart healthcare, surveillance, and satellite navigation.
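The sketch below illustrates the predict-then-act pattern the paragraph describes: forecast near-term workload from recent telemetry and provision edge capacity before demand arrives. The moving-average forecaster, thresholds and class names are illustrative assumptions, not BRAINE's actual resource-management algorithms.

```python
# Minimal sketch of proactive edge resource management: predict the next
# interval's load and decide how many EMDC nodes to provision ahead of time.

import math
from collections import deque


class ProactiveScaler:
    def __init__(self, window: int = 5, capacity_per_node: float = 100.0):
        self.history = deque(maxlen=window)   # recent load samples (req/s)
        self.capacity_per_node = capacity_per_node

    def observe(self, load: float) -> None:
        self.history.append(load)

    def forecast(self) -> float:
        """Predict next-interval load; here a simple trend-adjusted moving average."""
        if not self.history:
            return 0.0
        if len(self.history) < 2:
            return self.history[-1]
        avg = sum(self.history) / len(self.history)
        trend = self.history[-1] - self.history[0]
        return max(0.0, avg + trend / len(self.history))

    def nodes_needed(self) -> int:
        """Proactive action: provision enough nodes for the predicted load."""
        return max(1, math.ceil(self.forecast() / self.capacity_per_node))


if __name__ == "__main__":
    scaler = ProactiveScaler()
    for load in [40, 55, 70, 90, 120]:   # rising demand
        scaler.observe(load)
    print(f"Predicted load: {scaler.forecast():.0f} req/s, "
          f"nodes to provision: {scaler.nodes_needed()}")
```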
VERGE will tackle the evolution of edge computing from three perspectives: “Edge for AI”, “AI for Edge”, and security, privacy and trustworthiness of AI for Edge. “Edge for AI” defines a flexible, modular and converged Edge platform ready to support distributed AI at the edge. This is achieved by unifying lifecycle management and closed-loop automation for cloud-native applications, MEC and network services, while fully exploiting multi-core and multi-accelerator capabilities for ultra-high computational performance. “AI for Edge” enables dynamic function placement by managing and orchestrating the underlying physical, network, and compute resources. Application-specific network and computational KPIs will be assured in an efficient and collision-free manner, taking Edge resource constraints into account. Security, privacy and trustworthiness of AI for Edge are addressed to ensure security of AI-based models against adversarial attacks, privacy of data and models, and transparency in training and execution by providing explanations for model decisions, improving trust in the models. VERGE will verify the three perspectives through delivery of 7 demonstrations across two use cases: XR-driven, Edge-enabled industrial B5G applications across two separate Arçelik sites in Turkey, and Edge-assisted Autonomous Tram operation in Florence. VERGE will disseminate results to academia, industry and the wider stakeholder community through liaisons and contributions to relevant standardization bodies and open-source communities, a series of demonstrations showing progression through TRLs, and by creating an open data space enabling public access to the datasets generated by the project.
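To make the "AI for Edge" function-placement idea tangible, the sketch below assigns functions to edge nodes while respecting per-application latency KPIs and CPU limits. The greedy strategy and data model are illustrative assumptions, not VERGE's orchestration algorithm, which would additionally handle network KPIs, collisions and dynamic re-placement.

```python
# Minimal sketch of KPI-aware function placement on resource-constrained edge nodes.

from dataclasses import dataclass


@dataclass
class EdgeNode:
    name: str
    cpu_free: float        # available vCPUs
    latency_ms: float      # network latency to the application


@dataclass
class Function:
    name: str
    cpu_req: float          # required vCPUs
    max_latency_ms: float   # application KPI


def place(functions: list[Function], nodes: list[EdgeNode]) -> dict[str, str]:
    """Greedy placement: for each function, pick the lowest-latency feasible node."""
    placement: dict[str, str] = {}
    for fn in sorted(functions, key=lambda f: f.max_latency_ms):   # tightest KPI first
        feasible = [n for n in nodes
                    if n.cpu_free >= fn.cpu_req and n.latency_ms <= fn.max_latency_ms]
        if not feasible:
            raise RuntimeError(f"No node satisfies the KPIs of {fn.name}")
        best = min(feasible, key=lambda n: n.latency_ms)
        best.cpu_free -= fn.cpu_req         # reserve resources on the chosen node
        placement[fn.name] = best.name
    return placement


if __name__ == "__main__":
    nodes = [EdgeNode("edge-a", cpu_free=4, latency_ms=5),
             EdgeNode("edge-b", cpu_free=8, latency_ms=20)]
    fns = [Function("xr-render", cpu_req=3, max_latency_ms=10),
           Function("analytics", cpu_req=4, max_latency_ms=50)]
    print(place(fns, nodes))   # {'xr-render': 'edge-a', 'analytics': 'edge-b'}
```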
The increasing need for cloud services at the edge (edge services) is driven by the rapidly growing quantity and capabilities of connected, interacting edge devices exchanging vast amounts of data. This poses several challenges for cloud computing architectures at the edge: i) providing end-to-end transaction resiliency for applications decomposed into distributed microservices; ii) ensuring reliability and stability of automation in cloud management under increasing complexity; iii) secure and timely handling of the increasing, latency-sensitive (east-west) flow of sensitive data and applications; and iv) the need for explainable AI and transparency of the increasing automation in edge-service platforms for operators, software developers and end users. ACES will address these challenges by infusing autopoiesis and cognition into different levels of cloud management, empowering functionalities such as workload placement, service and resource management, and data and policy management with AI. ACES key outcomes will be: i) an autopoietic cognitive cloud-edge framework; ii) awareness tools and AI/ML agents for workload placement, service and resource management, data and policy management, telemetry and monitoring; iii) agents safeguarding stability in situations of extreme load and complexity; iv) a swarm-technology-based methodology and implementation for orchestration of resources at the edge; v) an edge-wide workload placement and optimization service; and vi) an app store for classification, storage, sharing and rating of AI models used in ACES. ACES will be demonstrated and validated in 3 scenarios demanding support for highly decentralised computing and the ability to take autonomic decisions, reducing the costs of cloud-edge management and increasing its efficiency, thus reducing the impact on the environment. To foster the uptake of ACES outcomes beyond its lifespan, a range of activities is foreseen to drive adoption by a wider network of stakeholders in key sectors.
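The sketch below illustrates the swarm-style, decentralised placement idea behind outcomes ii) and iv): each node acts as an autonomous agent that bids on an incoming workload using only its local view, and the lowest bid wins. The bid formula and the `NodeAgent`/`swarm_place` interface are illustrative assumptions, not ACES components.

```python
# Minimal sketch of decentralised, auction-style workload placement:
# no central model of the cluster, only local bids from autonomous agents.

import random
from dataclasses import dataclass


@dataclass
class NodeAgent:
    name: str
    utilisation: float      # 0.0 (idle) .. 1.0 (saturated)
    energy_cost: float      # relative energy price at this node

    def bid(self, workload_size: float) -> float:
        """Local decision only: cheaper bids come from idle, energy-efficient nodes."""
        if self.utilisation + workload_size > 1.0:
            return float("inf")                     # cannot host the workload
        return self.utilisation + self.energy_cost * workload_size


def swarm_place(agents: list[NodeAgent], workload_size: float) -> str:
    """Collect bids and pick the minimum; the winner updates its own state."""
    winner = min(agents, key=lambda a: a.bid(workload_size))
    if winner.bid(workload_size) == float("inf"):
        raise RuntimeError("No agent can host the workload")
    winner.utilisation += workload_size
    return winner.name


if __name__ == "__main__":
    random.seed(0)
    agents = [NodeAgent(f"node-{i}", random.uniform(0.1, 0.8), random.uniform(0.5, 1.5))
              for i in range(4)]
    for _ in range(3):
        print("placed on", swarm_place(agents, workload_size=0.2))
```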
CAPE (European Open Compute Architecture for Powerful Edge) aims to redefine the landscape of edge-cloud computing infrastructures by developing the EdgeMicroDataCenters (EMDCs) and eHPS as a 'new unit of computing' for data-dense edge environments. The project designs and showcases an innovative, open hardware platform that is dynamically composable via CXL to answer end-user needs. The EMDC and eHPS provide an open, high-density platform for heterogeneous computing units (XPUs) and RISC-V architectures, all based on the industry-standard COM-HPC form factor, which is supported by a robust ecosystem of Original Equipment Manufacturers (OEMs) within Europe, ensuring wide accessibility and adoption. To allow end users to be digitally sovereign, e.g. to manage the governance of data, AI models and applications deployed across an ‘edge-first’ edge-to-cloud continuum, CAPE will employ a cloud-agnostic overlay known as Infrastructure from Code (IfC). This innovative approach abstracts the complexities inherent in diverse cloud computing infrastructures and services, empowering software developers to deploy applications effortlessly across the edge-to-cloud continuum. This is achieved without requiring extensive knowledge of the underlying cloud infrastructure, enabling deployments across on-premise and off-premise, public and private cloud environments with minimal complexity. CAPE's solution will be validated in 3 use cases: the management of intelligent electric energy microgrids, edge AI, and satellite communications. All use cases will evaluate RISC-V (EPI) and CXL solutions. Each use case will be evaluated on technical, economic and sustainability aspects and benchmarked against legacy hardware, comparing local clouds with edge-optimized data centers.
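The sketch below illustrates the Infrastructure-from-Code idea in its simplest form: the developer declares infrastructure needs next to the application code, and a generic planner maps each service to whichever target (on-premise EMDC, public or private cloud) satisfies it. The decorator name, descriptor schema and target list are illustrative assumptions, not CAPE's actual IfC overlay.

```python
# Minimal sketch of Infrastructure from Code: requirements live beside the code,
# and a cloud-agnostic planner derives a deployment plan from them.

from typing import Callable

_REGISTRY: list[dict] = []


def service(name: str, cpu: float, memory_gb: float, needs_gpu: bool = False) -> Callable:
    """Declare infrastructure needs next to the code that needs them."""
    def wrap(fn: Callable) -> Callable:
        _REGISTRY.append({"name": name, "cpu": cpu, "memory_gb": memory_gb,
                          "needs_gpu": needs_gpu, "handler": fn.__qualname__})
        return fn
    return wrap


@service("grid-forecaster", cpu=2, memory_gb=4, needs_gpu=True)
def forecast_microgrid_load(telemetry: list[float]) -> float:
    return sum(telemetry) / len(telemetry)


def plan_deployment(targets: list[dict]) -> list[dict]:
    """Map each declared service to the first target that satisfies it,
    regardless of whether that target is on-premise, public or private cloud."""
    plan = []
    for svc in _REGISTRY:
        for tgt in targets:
            if (tgt["cpu"] >= svc["cpu"] and tgt["memory_gb"] >= svc["memory_gb"]
                    and (tgt["has_gpu"] or not svc["needs_gpu"])):
                plan.append({"service": svc["name"], "target": tgt["name"]})
                break
        else:
            raise RuntimeError(f"No target can host {svc['name']}")
    return plan


if __name__ == "__main__":
    targets = [{"name": "emdc-onprem", "cpu": 8, "memory_gb": 32, "has_gpu": True},
               {"name": "public-cloud", "cpu": 64, "memory_gb": 256, "has_gpu": True}]
    print(plan_deployment(targets))   # [{'service': 'grid-forecaster', 'target': 'emdc-onprem'}]
```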