Powered by OpenAIRE graph


Country: United Kingdom


9 Projects, page 1 of 2
  • Funder: EC · Project Code: 645212
    Overall Budget: 3,053,640 EUR · Funder Contribution: 3,053,640 EUR

    Datacentre traffic is experiencing double-digit growth, challenging the scalability of current network architectures. The new concept of disaggregation exacerbates bandwidth and latency demands, while emerging cloud business opportunities call for reliable inter-datacentre networking. PROJECT will develop an end-to-end solution extending from the datacentre architecture and optical subsystem design to the overlaying control plane and application interfaces. The PROJECT hybrid electronic-optical network architecture scales linearly with the number of datacentre hosts, offers Ethernet granularity, and saves up to 94% power and 30% cost. It consolidates compute and storage networks over a single optical Ethernet TDMA network. Low latency, hardware-level dynamic reconfigurability and quasi-deterministic QoS are supported in view of disaggregated datacentre deployment scenarios. A fully functional control-plane overlay will be developed, comprising an SDN controller along with its interfaces. The southbound interface abstracts the physical-layer infrastructure and allows dynamic hardware-level network reconfigurability. The northbound interface links the SDN controller with application requirements through an Application Programming Interface. The PROJECT innovative control plane enables Application Defined Networking and merges hardware and software virtualization over the hybrid optical infrastructure. It also integrates SDN modules and functions for inter-datacentre connectivity, enabling dynamic bandwidth allocation based on the needs of migrating VMs as well as on existing Service Level Agreements, for transparent networking across telecom and datacentre operators' domains. Fully functional network subsystems will be prototyped: a 400 Gb/s hybrid Top-of-Rack switch, a 50 Gb/s electronic-optical smart Network Interface Card and a fast optical pod switch.
    The PROJECT concept will be demonstrated in the lab and in its operational environment for both intra- and inter-datacentre scenarios.
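The abstract describes a northbound API through which applications request network resources from the SDN controller, e.g. bandwidth for a migrating VM under an existing SLA. A minimal sketch of what such an Application Defined Networking request might look like is below; every endpoint name, field and value is a hypothetical illustration, not the project's actual API:

```python
import json

def build_bandwidth_request(vm_id, src_rack, dst_rack, gbps, sla_class):
    """Assemble a hypothetical northbound-API payload asking the SDN
    controller to reserve bandwidth for a migrating VM.
    All field names here are illustrative assumptions."""
    return {
        "vm": vm_id,
        "path": {"source": src_rack, "destination": dst_rack},
        "bandwidth_gbps": gbps,
        "sla": sla_class,  # class drawn from an existing Service Level Agreement
    }

# An application or orchestrator would send this to the controller's
# northbound endpoint; here we only serialise it for illustration.
request = build_bandwidth_request("vm-42", "rack-a3", "rack-b7", 25, "gold")
payload = json.dumps(request)
print(payload)
```

The controller would translate such a request, via its southbound interface, into hardware-level reconfiguration of the hybrid optical network.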

  • Funder: EC · Project Code: 801101
    Overall Budget: 3,989,490 EUR · Funder Contribution: 3,989,490 EUR

    Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack. Though HPC and HPDA applications pose a broad variety of efficiency challenges, it is fair to say that the performance of both has become dominated by data movement through the memory and storage systems, rather than by floating-point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments to modern HPC software: 1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built through decades of a different situation, when the cost of performing floating-point operations (FLOPS) was paramount. Several decades of technical evolution produced a software stack and programming models highly fit for optimising floating-point operations but lacking in basic data-handling functionality. We characterise this set of technical issues as missing data-awareness. 2. Software rightfully insulates users from hardware details, especially higher up the software stack. But HPC applications, programming environments and systems software cannot make key data-movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions, which can often leave performance on the table. We characterise this issue as missing memory-awareness. Maestro proposes a middleware framework that enables memory- and data-awareness.
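To make "memory-awareness" concrete, the sketch below shows the kind of decision a memory-aware middleware could take: choosing a tier for a buffer from a declared hierarchy instead of making a hardware-neutral allocation. The tier names, capacities and selection rule are illustrative assumptions, not Maestro's actual interface:

```python
# Hypothetical memory hierarchy, ordered fastest-first; sizes in bytes.
TIERS = [
    {"name": "HBM",   "free": 2 * 2**30},
    {"name": "DRAM",  "free": 128 * 2**30},
    {"name": "NVRAM", "free": 900 * 2**30},
]

def place_buffer(size, hot):
    """Pick the fastest tier with enough free space; cold data skips
    straight past the scarce HBM tier.  This is an illustrative
    heuristic, not Maestro's real placement policy."""
    candidates = TIERS if hot else TIERS[1:]
    for tier in candidates:
        if tier["free"] >= size:
            return tier["name"]
    return None  # nothing fits; caller must spill or split

print(place_buffer(1 * 2**30, hot=True))   # fits in HBM
print(place_buffer(64 * 2**30, hot=True))  # too big for HBM, falls to DRAM
```

A hardware-neutral allocator cannot make this distinction; exposing the hierarchy to the middleware is precisely the "memory-awareness" the abstract argues for.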

  • Funder: EC · Project Code: 800999
    Overall Budget: 3,997,120 EUR · Funder Contribution: 3,997,120 EUR

    The landscape of High Performance Computing is changing with the proliferation of enormous volumes of data created by scientific instruments and sensors, in addition to data from simulations. This data needs to be stored, processed and analysed, and existing storage-system technologies in the realm of extreme computing need to be adapted to achieve reasonable efficiency and higher scientific throughput. We started on the journey to address this problem with the SAGE project. The HPC use cases and the technology ecosystem are now further evolving, and new requirements and innovations are coming to the forefront. It is critical to address them today without "reinventing the wheel", leveraging existing initiatives and know-how to build the pieces of the Exascale puzzle as quickly and efficiently as we can. The SAGE paradigm already provides a basic framework for addressing the extreme-scale data aspects of High Performance Computing on the path to Exascale. Sage2 (Percipient StorAGe for Exascale Data Centric Computing 2) intends to validate a next-generation storage system, building on top of the existing SAGE platform, to address new use-case requirements in the areas of extreme-scale scientific workflows and AI/deep learning, leveraging the latest developments in storage infrastructure software and the storage technology ecosystem. Sage2 aims to provide significantly enhanced scientific throughput, improved scalability, and reduced time and energy to solution for the use cases at scale. Sage2 will also dramatically increase the productivity of developers and users of these systems. This proposal is aligned to FETHPC-02-2017, part (c). Sage2 provides a highly performant, resilient, QoS-capable multi-tiered storage system, with data layouts across the tiers managed by the Mero object store, which is capable of handling in-transit/in-situ processing of data within the storage system, accessible through the Clovis API.
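The QoS-capable, multi-tiered design can be pictured as objects whose layout is chosen by QoS class rather than by the application. The toy sketch below illustrates that idea under assumed tier names and class names; it is not the Mero or Clovis interface:

```python
# Assumed mapping of QoS classes to storage tiers (illustrative only).
QOS_TIERS = {
    "latency-critical": "NVMe",
    "throughput":       "SSD",
    "capacity":         "disk",
    "archive":          "tape",
}

class TieredStore:
    """Toy multi-tier object store: each write lands on the tier implied
    by the object's QoS class, mimicking HSM-style layout management."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data, qos="capacity"):
        tier = QOS_TIERS[qos]
        self.objects[key] = {"tier": tier, "data": data}
        return tier  # report where the layout placed the object

    def get(self, key):
        return self.objects[key]["data"]

store = TieredStore()
print(store.put("checkpoint-001", b"...", qos="latency-critical"))  # NVMe
print(store.get("checkpoint-001"))
```

In Sage2 itself these layout decisions are the object store's job, so applications get tier-appropriate placement without hard-coding device knowledge.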

  • Funder: EC · Project Code: 955811
    Overall Budget: 7,995,950 EUR · Funder Contribution: 3,997,980 EUR

    IO-SEA aims to provide a novel data management and storage platform for exascale computing, based on hierarchical storage management (HSM) and on-demand provisioning of storage services. The platform will make efficient use of storage tiers spanning NVMe and NVRAM at the top all the way down to tape-based technologies. System requirements are driven by data-intensive use cases, in a very strict co-design approach. The concepts of ephemeral data nodes and data accessors are introduced, allowing users to operate the system flexibly through various well-known data-access paradigms, such as POSIX namespaces, S3/Swift interfaces, MPI-IO and other data formats and protocols. These ephemeral resources eliminate the problem of treating storage resources as static and unchanging system components, which is not a tenable proposition for data-intensive exascale environments. The methods and techniques are applicable to exascale-class data-intensive applications and workflows that need to be deployed in highly heterogeneous computing environments. Critical aspects of intelligent data placement are considered for extreme volumes of data. This ensures that the right resources among the storage tiers are used, and are accessed by data nodes as close as possible to compute nodes, optimising performance, cost and energy at extreme scale. Advanced I/O instrumentation and monitoring features will be developed to that effect, leveraging the latest advancements in AI and machine learning to systematically analyse telemetry records and make smart decisions on data placement. These ideas, coupled with in-storage computation, remove unnecessary data movements within the system. The IO-SEA project (EuroHPC-2019-1, topic b) has connections to the DEEP-SEA (topic d) and RED-SEA (topic c) projects. It leverages technologies developed by the SAGE, Sage2 and NextGEN-IO projects, and strengthens the TRL of the developed products and technologies.
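The intelligent-placement idea, using telemetry to decide which tier of the HSM hierarchy should hold which data, can be sketched as a simple rule over observed access frequency and object size. The thresholds and tier names below are assumptions for illustration only, not IO-SEA's actual policy engine (which the abstract says will use AI/ML over telemetry records):

```python
def recommend_tier(accesses_per_day, size_gib):
    """Illustrative telemetry-driven placement rule: frequently accessed
    data stays on fast media near the compute nodes, while cold bulk
    data drains down to tape.  Thresholds are arbitrary assumptions."""
    if accesses_per_day >= 100 and size_gib <= 64:
        return "NVMe"   # hot working set, kept close to compute
    if accesses_per_day >= 1:
        return "disk"   # warm data on the capacity tier
    return "tape"       # cold data archived on the cheapest tier

print(recommend_tier(500, 8))    # hot and small: NVMe
print(recommend_tier(10, 500))   # warm bulk data: disk
print(recommend_tier(0, 2000))   # untouched archive: tape
```

A learned policy would replace these fixed thresholds with decisions inferred from the monitored telemetry, but the input/output shape of the decision is the same.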

  • Funder: EC · Project Code: 642963
    Overall Budget: 3,803,410 EUR · Funder Contribution: 3,803,410 EUR

    The consortium of this European Training Network (ETN), "BigStorage: Storage-based Convergence between HPC and Cloud to handle Big Data", will train future data scientists to apply holistic and interdisciplinary approaches for taking advantage of a data-overwhelmed world. This requires HPC and Cloud infrastructures, with a redefinition of the storage architectures underpinning them, focused on meeting highly ambitious performance and energy-usage objectives. There has been an explosion of digital data, which is changing our knowledge about the world. This huge data collection, which cannot be managed by current data management systems, is known as Big Data. Techniques to address it are gradually combining with what has traditionally been known as High Performance Computing. This ETN will therefore focus on the convergence of Big Data, HPC, and Cloud data storage, its management and analysis. To gain value from Big Data, it must be addressed from many different angles: (i) applications, which can exploit this data; (ii) middleware, operating in Cloud and HPC environments; and (iii) infrastructure, which provides the storage and computing capable of handling it. Big Data can only be effectively exploited if techniques and algorithms are available which help to understand its content, so that it can be processed by decision-making models. This is the main goal of Data Science. We claim that this ETN project will be the ideal means to educate new researchers on the different facets of Data Science (across storage hardware and software architectures, large-scale distributed systems, data management services, data analysis, machine learning and decision making). Such multifaceted expertise is mandatory to enable researchers to propose appropriate answers to application requirements, while leveraging advanced data storage solutions unifying Cloud and HPC storage facilities.

