

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-ACAI-003
    Partners: UCD, T J Seebeck Department of Electronics, Tallinn University of Technology, IEMN

    Wireless biomedical sensors promise to dramatically reduce the costs and risks associated with personal health care, and they are increasingly exploited by telemedicine and e-health systems. However, the large power consumption of continuous wireless transmission shortens the sensors' battery life in long-term use. Sub-Nyquist continuous-time discrete-amplitude (CTDA) sampling approaches using level-crossing analog-to-digital converters (ADCs) have been developed to reduce the sampling rate and energy consumption of the sensors. However, traditional machine learning techniques and architectures are not compatible with the non-uniformly sampled data obtained from level-crossing ADCs. This project aims to develop analog algorithms, circuits, and systems for the implementation of machine learning techniques on CTDA-sampled data in wireless biomedical sensors. This “near-sensor computing” approach will help reduce the wireless transmission rate and therefore the power consumption of the sensor. The output rate of the CTDA is directly proportional to the activity of the analog signal at the input of the sensor. Therefore, artificial intelligence hardware that processes CTDA data should consume significantly less energy. For demonstration purposes, a prototype biomedical sensor for the detection and classification of sleep apnea will be developed using integrated circuit prototypes and a commercially available analog front-end interface. The sensor will acquire electrocardiogram and bioimpedance signals from the subject and will use data fusion and machine learning techniques to achieve high accuracy.
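The level-crossing sampling idea behind CTDA conversion can be illustrated with a short software sketch. This is a toy model, not the project's hardware: the test signal, the threshold `delta`, and the function name are illustrative assumptions.

```python
import numpy as np

def level_crossing_sample(signal, t, delta=0.1):
    """Emit a (time, value) pair only when the input has moved by at
    least `delta` since the last emitted sample (CTDA-style sampling)."""
    samples = [(t[0], signal[0])]
    last = signal[0]
    for ti, x in zip(t[1:], signal[1:]):
        if abs(x - last) >= delta:
            samples.append((ti, x))
            last = x
    return samples

# A slowly varying signal yields far fewer (non-uniformly spaced) samples
# than uniform Nyquist-rate sampling would.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * t)
samples = level_crossing_sample(x, t, delta=0.1)
print(len(samples), "samples instead of", len(t))
```

The output rate tracks signal activity: a flat input produces no new samples at all, which is why downstream hardware consuming CTDA data can idle and save energy.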

  • Funder: CHIST-ERA Project Code: CHIST-ERA-19-XAI-012
    Partners: Halmstad University, Jagiellonian University, Centre d'Enseignement de Recherche et d'Innovation Systèmes Numériques, INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência

    The XPM project aims to integrate explanations into Artificial Intelligence (AI) solutions within the area of Predictive Maintenance (PM). Real-world applications of PM are increasingly complex, with intricate interactions of many components. AI solutions are very popular in this domain, and black-box models based on deep learning approaches in particular are showing very promising results in terms of predictive accuracy and capability of modelling complex systems. However, the decisions made by these black-box models are often difficult for human experts to understand – and therefore to act upon. The complete repair plan and maintenance actions that must be performed based on the detected symptoms of damage and wear often require a complex reasoning and planning process, involving many actors and balancing different priorities. It is not realistic to expect this complete solution to be created automatically – there is too much context that needs to be taken into account. Therefore, operators, technicians and managers require insights to understand what is happening, why it is happening, and how to react. Today’s mostly black-box AI does not provide these insights, nor does it support experts in making maintenance decisions based on the deviations it detects. The effectiveness of a PM system depends much less on the accuracy of the alarms the AI raises than on the relevance of the actions operators perform based on these alarms. In the XPM project, we will develop several different types of explanations (from visual analytics through prototypical examples to deductive argumentative systems) and demonstrate their usefulness in four selected case studies: electric vehicles, metro trains, a steel plant and wind farms.
In each of them, we will demonstrate how the right explanations of decisions made by AI systems lead to better results across several dimensions, including identifying the component or part of the process where the problem has occurred; understanding the severity and future consequences of detected deviations; choosing the optimal repair and maintenance plan from several alternatives created based on different priorities; and understanding the reasons why the problem has occurred in the first place as a way to improve system design for the future.
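One of the explanation types mentioned above, prototypical examples, can be sketched in a few lines: a prediction is explained by retrieving the most similar past case with the same label. The data, labels, and function name below are illustrative assumptions, not XPM's actual method.

```python
import numpy as np

def explain_by_prototype(x, X_train, y_train, predicted_label):
    """Return the training example ("prototype") of the predicted class
    that is closest to x -- a simple example-based explanation."""
    candidates = X_train[y_train == predicted_label]
    dists = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(dists)]

# Toy fault-detection data: two sensor readings per observation.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [4.8, 5.2]])
y_train = np.array(["healthy", "healthy", "worn bearing", "worn bearing"])
proto = explain_by_prototype(np.array([4.9, 5.0]), X_train, y_train, "worn bearing")
print("most similar past case:", proto)
```

An operator shown "this alarm resembles this earlier worn-bearing case" can judge the alarm's relevance far more easily than from a raw anomaly score.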

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-005
    Partners: LABORATOIRE INFORMATIQUE IMAGE INTERACTION, university of La Rochelle, Concordia University, SU, Université du Québec à Montréal

    Since the emergence of Cloud Computing and the associated Over-The-Top (OTT) value-added service providers more than a decade ago, the architecture of the communication infrastructure – namely the Internet and the (mobile) telecommunication infrastructure – has kept improving, with computing, caching and networking services becoming more tightly coupled. OTTs are moving from being purely cloud-based to being more distributed and residing close to the edge, a concept known as “Fog Computing”. Network operators and telecom vendors advertise the “Mobile Edge Computing (MEC)” capabilities they may offer within their 5G Radio-Access and Core Networks. Lately, the GAFAM companies (Google, Apple, Facebook, Amazon and Microsoft) have come into play as well, offering smart speakers (Amazon Echo, Apple HomePod and Google Home), which can also serve as IoT hubs with “Mist/Skin Computing” capabilities. While these have an important influence on the performance of the underlying network, such computing paradigms are still loosely coupled with each other and with the underlying communication and data storage infrastructures, even in the forthcoming 5G systems. It is expected that a tight coupling of computing platforms with the networking infrastructure will be required in post-5G networks, so that a large number of distributed and heterogeneous devices belonging to different stakeholders can communicate and cooperate with each other in order to execute services or store data in exchange for a reward. This is what we call here the smart collaborative computing, caching and networking paradigm.
    The objective of the SCORING project is to develop and analyse this new paradigm by targeting the following research challenges, which are split into five strata:

      • Computing stratum: proactive placement of computing services, taking into account user mobility as well as per-node battery status and computing load;
      • Storage stratum: proactive placement of stores and optimal caching of contents/functions, taking into account the joint networking and computing constraints;
      • Software stratum: efficient management of micro-services in such a multi-tenant distributed realm, exploiting Information-Centric Networking principles to support both name and compute-function resolution;
      • Networking stratum: enforcement of dynamic routing policies, using Software-Defined Networking (SDN), to satisfy the distributed end-user computation requirements and their Quality of Experience (QoE);
      • Resource-management stratum: design of new network-economic models to support service offering in an optimal way, considering the multi-stakeholder nature of the collaborative computing, caching and networking paradigm proposed in this project.

    Smartness will be brought by using adequate mathematical tools in combination for the design of each of the five strata: machine learning (proactive placement problems); multi-objective optimization, graph theory and complex networks (information-centric design of content and micro-service caching); and game theory (network-economic models). Demonstrating the feasibility of the proposed strata on a realistic and integrated test-bed, as well as on an integrated simulation platform (based on available open-source network-simulation toolkits), will be one of the main goals of the project. The test-bed will be built by exploiting different virtualization (VM/container) technologies to deploy compute and storage functions within a genuine networking architecture.
Last but not least, all building blocks forming the realistic and integrated test-bed, on the one hand, and the integrated simulation platform, on the other hand, will be made available to the research community at the end of the project as open source software.
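The computing-stratum challenge, proactive placement that weighs user mobility, battery status and compute load, can be sketched as a toy greedy score. All node data, weights and names here are invented for illustration; the project's actual mechanisms are learned, not hand-tuned.

```python
def place_service(nodes, predicted_user_pos):
    """Greedy proactive placement: pick the node minimizing a weighted sum
    of predicted network distance, current load, and a low-battery penalty."""
    def score(n):
        dist = abs(n["pos"] - predicted_user_pos)
        battery_penalty = 2.0 if n["battery"] < 0.2 else 0.0
        return dist + 0.5 * n["load"] + battery_penalty
    return min(nodes, key=score)

# Toy topology: two edge nodes near the user, one distant cloud node.
nodes = [
    {"name": "edge-a", "pos": 1.0,  "load": 0.8, "battery": 0.9},
    {"name": "edge-b", "pos": 3.0,  "load": 0.1, "battery": 0.5},
    {"name": "cloud",  "pos": 10.0, "load": 0.2, "battery": 1.0},
]
best = place_service(nodes, predicted_user_pos=2.0)
print("place on:", best["name"])
```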

  • Funder: CHIST-ERA Project Code: CHIST-ERA-19-CES-004
    Partners: ETH Zurich, TLÜ, UAB, FCiências.ID

    The urgency to cut energy-related greenhouse gas emissions is recognised by EU policy. Efforts to do so, however, are hindered by the limitations of software used to generate and assess national energy transition pathways. These tools generally overlook social issues and environmental sustainability in favour of a techno-economic worldview, where an optimal solution is determined by cost minimisation. Yet when it comes to the practical on-the-ground implementation of such pathways, real-world concerns come to the forefront. Such concerns are both environmental (e.g. land and resource use) and social (e.g. what trade-offs are important to local stakeholders). No workable solutions to integrate both of these into techno-economic energy system modelling software exist. We address this by developing and testing a novel digital workflow to integrate humans into scenario design while accurately modelling the relevant technical, economic and environmental constraints. With this project, we plant the seeds for locally desirable, environmentally friendly and implementable energy transition pathways. We will (1) develop an automated approach that generates a scenario space: a wide range of alternatives that go beyond an economically optimal solution; (2) integrate the computation of social and environmental constraints into these alternatives; and (3) build an interactive system where experts and members of the public can feed their preferences into the generation of alternatives, co-creating and interactively visualising results. We hypothesise that this will permit the generation of clean energy pathways that embrace social and environmental sustainability, therefore engendering broader public support. Doing so involves methodological development, software implementation, and experimentation with pilot studies in Portugal. This work is done in a focussed consortium of three partners. 
ETHZ is a leading centre of high-resolution energy system modelling, ICTA-UAB is a leading centre in the development of integrated sustainability assessment methods, and FC.ID contributes its unique expertise combining modelling with participatory action research.
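Generating a scenario space of alternatives beyond the cost optimum is commonly done with "modelling to generate alternatives" (MGA)-style slack on the cost objective. A minimal sketch, with an invented two-technology toy system and invented costs:

```python
import numpy as np

# Toy energy system: two technologies must jointly meet 100 units of demand.
cost = np.array([3.0, 5.0])          # cost per unit: wind, solar (assumed)
demand = 100.0

# Enumerate candidate mixes on a grid (a real model would optimize).
wind = np.linspace(0, demand, 101)
solar = demand - wind
mixes = np.column_stack([wind, solar])
costs = mixes @ cost

optimal_cost = costs.min()
# Scenario space: keep every mix within 10% of the cost optimum, so that
# social and environmental criteria can choose among them later.
alternatives = mixes[costs <= 1.10 * optimal_cost]
print(f"{len(alternatives)} near-optimal mixes out of {len(mixes)}")
```

Downstream constraints (land use, stakeholder preferences) then filter `alternatives` rather than being forced into the cost objective itself.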

  • Funder: CHIST-ERA Project Code: CHIST-ERA-17-BDSI-003
    Partners: Research and Education Laboratory in Information Technologies (Athens Information Technology), University of Oulu, LUT, Seat (Spain), Trinity College DublinTCD, CTTC

    The Internet of Things (IoT) is creating a new structure of awareness – a cybernetic one – upon physical processes. Industries of different kinds are expected to join this revolution soon, leading to the so-called Factories of the Future, or Industry 4.0. The IoT-based industrial cyber-physical system (CPS) considered here works in three generic steps: 1) Large-scale data acquisition/dissemination: a physical process is monitored by sensors that pre-process the (assumed large) collected data and send the processed information to an intelligent node (e.g. an aggregator or central controller); 2) Big data fusion: the intelligent node uses artificial intelligence (e.g. machine learning, data clustering, pattern recognition, neural networks) to convert the received ("big") data into useful information to guide short-term operational decisions related to the physical process; 3) Big data analytics: the physical process, together with the acquisition and fusion steps, can be virtualized, building a cyber-physical process whose dynamic performance can be analysed and optimized through visualization (if human intervention is available), artificial intelligence (if the decisions are automatic), or a combination thereof. We will focus on how to optimize the prediction and detection of, and the respective interventions for, rare events in industrial processes based on these three steps. Our proposed general framework, which relies on an IoT network, aims at ultra-reliable detection/prevention of rare events related to a pre-determined industrial physical process (modelled by a particular signal). The framework will be process-independent, but the actual solution will be designed case by case. We will consider the CPS as a complex system, so that these three steps, which operate with relative autonomy, are strongly interrelated. For example, the way the sensors measure the signal related to the physical process will affect which data fusion algorithm is best, which in turn will generate a certain awareness of the physical process that will form the basis of the proposed data analytics procedure. As proof of concept, our approach will be applied to predictive maintenance in an automotive industrial plant from SEAT in Spain, in the Nokia base-station factory at Oulu, and in the LUT laboratory of control engineering and digital systems.
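The three generic steps, acquisition, fusion and analytics, can be sketched end-to-end on synthetic data. The signal shape, noise levels and the simple 3-sigma rule are illustrative assumptions, not the project's actual detectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Acquisition: three sensors observe the same process; each pre-processes
#    its raw stream into windowed averages before transmission.
process = np.zeros(500)
process[400:410] = 5.0                        # a rare anomalous burst
readings = [process + rng.normal(0, 1.0, 500) for _ in range(3)]
windowed = [r.reshape(50, 10).mean(axis=1) for r in readings]

# 2) Fusion: the intelligent node averages the sensors to suppress noise.
fused = np.mean(windowed, axis=0)

# 3) Analytics: flag windows that deviate strongly from the typical level.
threshold = fused.mean() + 3 * fused.std()
rare_windows = np.where(fused > threshold)[0]
print("rare event detected in window(s):", rare_windows)
```

The coupling between steps is visible even here: the window length chosen at acquisition directly changes what the fusion and analytics stages can resolve.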

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-002
    Partners: UPC, FORTH, Queen’s University Belfast, WUT, IRISA

    The DiPET project investigates models and techniques that enable distributed stream processing applications to seamlessly span and redistribute across fog and edge computing systems. The goal is to utilize devices dispersed through the network that are geographically closer to users, to reduce network latency and to increase the available network bandwidth. However, the network that user devices are connected to is dynamic. For example, mobile devices connect to different base stations as they roam, and fog devices may be intermittently unavailable for computing. In order to maximally leverage the heterogeneous compute and network resources present in these dynamic networks, the DiPET project pursues a bold approach based on transprecise computing. Transprecise computing holds that computation need not always be exact and proposes a disciplined trade-off of precision against accuracy, which impacts computational effort, energy efficiency, memory usage, and communication bandwidth and latency. It allows the precision of computation to be adapted dynamically depending on the context and available resources. This creates new dimensions to the problem of scheduling distributed stream applications in fog and edge computing environments and will lead to schedules with superior performance, energy efficiency and user experience. The DiPET project will demonstrate the feasibility of this unique approach by developing a transprecise stream processing application framework and transprecision-aware middleware. Use cases in video analytics and network intrusion detection will guide the research and underpin technology demonstrators.
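The precision-for-resources trade-off at the heart of transprecise computing can be illustrated with a simple uniform quantizer: fewer bits mean more error but a smaller payload to stream. The bit widths and test signal are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize values in [-1, 1] to the given bit width."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1, 1)

rng = np.random.default_rng(1)
stream = np.clip(rng.normal(0, 0.3, 10_000), -1, 1)

# Trading precision for bandwidth: fewer bits -> more error, fewer bytes.
for bits in (16, 8, 4, 2):
    err = np.abs(quantize(stream, bits) - stream).mean()
    print(f"{bits:2d} bits: mean abs error {err:.5f}, "
          f"payload {bits * len(stream) // 8} bytes")
```

A transprecision-aware scheduler can pick a point on this curve per operator and per link, depending on the accuracy the downstream consumer actually needs.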

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-ACAI-004
    Partners: University of Southampton, University of Seville, University of Zurich and ETH Zurich, Graz University of Technology, IBM Research - Zurich

    Contemporary AI applications often rely on deep learning, which implies heavy computational loads with current technology. However, there is a growing demand for low-power, autonomously learning AI systems that are employed “in the field”. In this project we will investigate options for learning in low-power unconventional hardware based on spiking neural networks (SNNs) implemented in analog neuromorphic hardware combined with nano-scale memristive synaptic devices. The envisioned computational paradigm thus combines the three most promising avenues for minimizing energy consumption in hardware: (1) analog neuromorphic computation, (2) spike-based communication, and (3) memristive analog memory. Experts in each of these fields will collaborate on the development of a functional prototype system. We will in particular consider recurrent SNNs (RSNNs), as their internal recurrent dynamics render them more suitable for real-world AI applications that have temporal input and demand some form of short-term memory. We will adapt a recently developed training algorithm so that it can be used to optimize SNNs in neuromorphic hardware with memristive synapses. “In the field” applications often demand online adaptation of such systems, which typically necessitates hardware-averse training procedures. To overcome this problem, we will investigate the applicability of “learning to learn” (L2L) to spiking memristive neuromorphic hardware. In an initial optimization, the hardware is trained to become a good learner for the target application. Here, arbitrarily complex learning algorithms can be used on a host system with the hardware “in the loop”. In the application itself, simpler algorithms – which can be easily implemented in neuromorphic hardware – provide adaptation of the hardware RSNNs. In summary, the goal of this project is to build versatile and adaptive, small, low-power neuromorphic AI machinery based on SNNs with memristive synapses using L2L.
We will deliver an experimental system in a real-world robotics environment to provide a proof of concept.
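The L2L structure, an outer loop that tunes how well an inner learner adapts across a family of tasks, can be sketched with a scalar toy problem. The outer parameter here is just a learning rate; the real project optimizes neuromorphic hardware, not this.

```python
import numpy as np

rng = np.random.default_rng(2)

def inner_learn(target, lr, steps=20):
    """Inner loop: a simple on-device learner adapts a single weight w
    toward a task-specific target with plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - target)       # gradient of (w - target)^2
    return (w - target) ** 2             # final loss on this task

def outer_loss(lr, targets):
    """Outer loop objective: average final loss across a family of tasks."""
    return np.mean([inner_learn(t, lr) for t in targets])

# "Learning to learn": search the outer parameter (here a learning rate)
# so the inner learner adapts well on average across tasks. Too small a
# rate adapts too slowly; too large a rate overshoots and oscillates.
tasks = rng.uniform(-1, 1, 50)
candidates = np.linspace(0.05, 0.95, 19)
best_lr = min(candidates, key=lambda lr: outer_loss(lr, tasks))
print("meta-learned learning rate:", round(float(best_lr), 3))
```

The outer search can be arbitrarily expensive on a host machine, while the inner update stays simple enough to run on the device itself, which mirrors the hardware-in-the-loop scheme described above.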

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-004
    Partners: University of Oulu / Information Technology and Electrical Engineering, EURECOM/ Communications Systems, Universitat Polytechnica de Catalunya / Computer Architecture, Huawei / Math. And Algorithmic Sciences Lab, Paris, StreamOwl, AUEB-RC/Dept. of Informatics

    The LeadingEdge project will deliver a novel and holistic framework to efficiently cope with unresolved challenges in edge computing ecosystems regarding dynamic resource provisioning to multiple coexisting services amidst unknown service- and system-level dynamics. The project approach is three-faceted: it will optimize intra-service resource provisioning, inter-service resource coordination, and user-perceived quality of experience (QoE). First, at the service level, we will develop a framework, grounded on first principles, for opportunistic use of edge and cloud computation, bandwidth and cache resources according to instantaneous resource availability, mobility, connectivity, service resource requirements and service demand. Our approach will rely on solid online-learning theories such as online convex optimization (OCO) and transfer learning, and it will sidestep our inherent inability to predict demand, mobility, and other dynamic processes that affect resource allocation. It will also use extreme-value theory and stochastic optimization towards a full-fledged study of the latency-reliability trade-off that is fundamental for mission-critical services. Proof-of-concept (PoC) validation will be provided through (i) a real-time image recognition tool as part of a video analytics procedure, and (ii) two alternative video quality assessment solutions with different degrees of complexity and different configurations of edge/client or cloud resources. After service-level optimization, at a second level, we will develop a system-level AI-empowered service orchestrator based on reinforcement learning and context awareness for service orchestration in terms of network slicing and service chain placement, such that instantaneous service-level requirements are fulfilled.
    The OpenAirInterface (OAI) and related software platforms will be used as real-time experimentation environments with full 4G/5G functionalities for service orchestration, to place services, direct traffic from users to servers, and measure latency and other QoE metrics. Finally, at the user level, we will leverage a community-network infrastructure as an edge network to deploy services at scale in a controlled manner and to directly measure their impact on user QoE. The outcome of these latter user-level studies will be continually fed back to and guide the service- and system-level optimization. The project results are envisioned to be transformational for edge computing and to create durable impact by enabling game-changing services. This ambitious objective will be pursued by a balanced consortium of complementary expertise, consisting of 3 universities, a research centre, an SME, and a large industrial partner, overall spanning 4 countries.
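Online convex optimization, one of the online-learning theories named above, can be sketched with projected online gradient descent on a toy resource-allocation loss. The demand model and step-size schedule are illustrative assumptions, not the project's algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)

def online_gradient_descent(loss_grads, x0, radius=1.0):
    """Online convex optimization: after each round's loss is revealed,
    take a gradient step and project back onto a feasible ball."""
    x = np.array(x0, dtype=float)
    decisions = []
    for t, grad in enumerate(loss_grads, start=1):
        decisions.append(x.copy())
        x = x - grad(x) / np.sqrt(t)           # diminishing step size
        norm = np.linalg.norm(x)
        if norm > radius:                       # projection step
            x *= radius / norm
    return decisions

# Toy per-round loss: demand d_t drifts, cost of allocating x is (x - d_t)^2.
demands = 0.5 + 0.1 * rng.standard_normal(200)
grads = [(lambda x, d=d: 2 * (x - d)) for d in demands]
decisions = online_gradient_descent(grads, x0=[0.0])
print("final allocation ~", decisions[-1].item())
```

The appeal for edge provisioning is exactly the property the abstract highlights: OGD comes with regret guarantees that hold without any demand forecast.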

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-003
    Partners: UCL, ÉTS, Queen’s University Belfast, INRIA, NTUA

    The potential offered by the abundance of sensors, actuators and communications in the IoT era is hindered by the limited computational capacity of local nodes, making the distribution of computing in time and space a necessity. Several key challenges need to be addressed in order to optimally and jointly exploit the network, computing, and storage resources, while guaranteeing feasibility for time-critical and mission-critical tasks. Our research takes on these challenges by dynamically distributing resources when the demand is rapidly time-varying. We first propose an analytic mathematical dynamical model of the resources, offered workload, and networking environment that incorporates phenomena encountered in wireless communications, mobile edge computing data centres, and network topologies. We also propose a new set of estimators for the time-varying workload and resource profiles that continuously update the model parameters. Building on this framework, we aim to develop novel resource allocation mechanisms that explicitly take into account service differentiation and context-awareness and, most importantly, provide formal guarantees for well-defined QoS/QoE metrics. Our research also goes well beyond the state of the art in the design of control algorithms for cyber-physical systems (CPS), by incorporating resource allocation mechanisms into the decision strategy itself. We propose a new generation of controllers driven by a co-design philosophy in both network and computing resource utilization. This paradigm has the potential to cause a quantum leap in crucial fields of engineering, e.g., Industry 4.0, collaborative robotics, logistics, and multi-agent systems. To achieve these breakthroughs, we utilize and combine tools from automata and graph theory, machine learning, modern control theory and network theory, fields where the consortium has internationally leading expertise.
Although researchers from Computer and Network Science, Control Engineering and Applied Mathematics have proposed various approaches to tackle the above challenges, our research constitutes the first truly holistic, multidisciplinary approach that combines and extends recent, albeit fragmented results from all aforementioned fields, thus bridging the gap between efforts of different communities. Our developed theory will be extensively tested on available experimental testbed infrastructures of the participating entities. The efficiency of the overall proposed framework will be tested and evaluated under three complex use cases involving mobile autonomous agents in IoT environments: (i) distributed remote path planning of a group of mobile robots with complex specifications, (ii) rapid deployment of mobile agents for distributed computing purposes in disaster scenarios and (iii) mobility-aware resource allocation for crowded areas with pre-defined performance indicators to reach.
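A continuously updated estimator of a time-varying workload profile, in the spirit of (though far simpler than) the estimators proposed above, can be sketched as an exponentially weighted filter. The trace and smoothing factor are invented for illustration.

```python
class WorkloadEstimator:
    """Exponentially weighted estimator that continuously updates a
    time-varying workload level from streaming observations."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smoothing factor: higher = faster tracking
        self.level = None

    def update(self, observation):
        if self.level is None:
            self.level = observation
        else:
            self.level += self.alpha * (observation - self.level)
        return self.level

est = WorkloadEstimator(alpha=0.3)
trace = [10, 12, 11, 30, 32, 31, 29]   # demand jumps mid-trace
estimates = [est.update(x) for x in trace]
print([round(e, 1) for e in estimates])
```

The estimate lags the jump by a few rounds, which is the basic tension a resource allocator with formal QoS guarantees has to account for.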

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-ACAI-001
    Partners: Lodz University of Technology / Institute of Applied Computer Science, VTT, LNE : Laboratoire national de métrologie et d’essais

    The AIR project targets a reconfigurable and programmable integrated circuit, based on an analogue neural network, for the analysis of short- and mid-range radar (24 GHz/60 GHz) signals in new applications for remote and privacy-preserving monitoring of vital signs (heartbeat, respiration). The project will also develop and publicly release an evaluation kit (scenarios, metrics, protocols, datasets) to objectively evaluate the performance of the developed analogue and digital artificial intelligence (AI) systems featuring movement detection methods. The project constitutes a holistic approach to the assessment of analogue computing in a novel application of AI to radar-based monitoring, which includes: development of an AI algorithm suitable for analogue hardware implementation, as well as software counterparts optimized to run on various digital platforms (CPU, GPU, FPGA) for comprehensive comparison; design of an analogue neural-network-based circuit to efficiently perform the analysis and classification of the radar signal, and its implementation in a CMOS chip; and evaluation and benchmarking of the chip's performance against the software/digital realizations of the algorithm. The project will be conducted by a consortium of three partners with complementary expertise. The team at the Institute of Applied Computer Science of the Lodz University of Technology, Poland, has extensive experience in the development of AI and machine learning algorithms and their application to signal processing and classification in real-life scenarios. The research group at the VTT Technical Research Centre of Finland has experience in the development of millimetre-wave radar systems and their application to vital-sign monitoring, as well as in the development of ultra-low-power circuits for bio-inspired signal processing and their implementation in both standard CMOS and emerging nanotechnologies.
    The team at the Laboratoire national de métrologie et d’essais has unique experience in the evaluation of AI-based ICT systems and specializes in the design of experimental plans and evaluation metrics, and in the specification and qualification of reference data.
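Separating heartbeat from respiration in a radar displacement signal is, at its core, a frequency-domain problem. A digital toy sketch (the analogue chip operates very differently), with invented frequencies and amplitudes:

```python
import numpy as np

# Simulated radar chest-displacement signal: respiration at 0.25 Hz plus a
# much smaller heartbeat component at 1.2 Hz (both amplitudes assumed).
fs = 50.0
t = np.arange(0, 20, 1 / fs)
signal = 2.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)

# Separate the two vital signs in the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

band = (freqs > 0.8) & (freqs < 3.0)           # plausible heart-rate band
heart_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {heart_hz * 60:.0f} bpm")
```

Only the dominant frequency in each band leaves the device, never the raw signal, which is what makes radar-based monitoring privacy-preserving by construction.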
