NVIDIA Limited (UK)

14 Projects, page 1 of 3
  • Funder: UK Research and Innovation
    Project Code: EP/V001310/1
    Funder Contribution: 284,103 GBP

    Advances in Artificial Intelligence (AI) and Machine Learning (ML) have enabled the scientific community to advance the frontiers of knowledge by learning from complex, large-scale experimental datasets. With the scientific community generating huge amounts of data, from observatories to large-scale experimental facilities, AI for Science at exascale is on the horizon. However, in the absence of systematic approaches for evaluating AI models and algorithms at exascale, the AI for Science community, and indeed the general AI community, face a major barrier. This proposal aims to set up a working group whose overarching goal is to identify the scope of, and plans for, developing AI benchmarks that enable AI for Science at exascale in ExCALIBUR - Phase II. Although AI benchmarking is becoming a well-explored topic, a number of issues remain to be addressed, including, but not limited to: a) there are no efforts aimed at AI benchmarking at exascale, particularly for science; b) a range of scientific problems involving real-world, large-scale scientific datasets, such as those from experimental facilities or observatories, are largely ignored in benchmarking; and c) benchmarks are worth having as a catalogue of techniques offering template solutions to different types of scientific problems. In scoping the development of an AI benchmark suite, this proposal will aim to address these issues. In developing a vision, scope and plan for this significant challenge, the working group will not only engage with scientists from a number of disciplines and with industry, but will also engineer a scalable and functional AI benchmark, so as to embed the practical lessons of building one into that vision, scope and plan. The exemplar benchmark will focus on removing noise from images, a problem common to multiple disciplines including the life sciences, materials science and astronomy. The specific problems from these disciplines are, respectively: removing noise from cryogenic electron microscopy (cryo-EM) datasets, denoising X-ray tomographic images, and minimising the noise in weak-lensing images.
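    To make the denoising benchmark concrete, below is a minimal sketch of how such a suite might score a model against reference images. It is illustrative only: the Gaussian noise model, the synthetic data and the simple baseline denoisers are assumptions, not part of the proposal.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def psnr(clean, estimate, peak=1.0):
        # Peak signal-to-noise ratio in dB; higher means better denoising.
        mse = np.mean((clean - estimate) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def run_benchmark(denoise, clean_images, noise_sigma=0.1, seed=0):
        # Add synthetic Gaussian noise to each reference image, denoise it,
        # and report the mean PSNR over the set.
        rng = np.random.default_rng(seed)
        scores = [psnr(c, denoise(c + rng.normal(0.0, noise_sigma, c.shape)))
                  for c in clean_images]
        return float(np.mean(scores))

    # Synthetic stand-in data; a real suite would load cryo-EM, X-ray
    # tomography or weak-lensing images instead.
    clean_images = list(np.random.default_rng(1).random((4, 64, 64)))
    print("no denoising:", run_benchmark(lambda x: x, clean_images))
    print("3x3 box blur:", run_benchmark(lambda x: uniform_filter(x, size=3), clean_images))

    A model under test would replace the lambda; fixing the reference images and the noise model is what keeps scores comparable across submissions.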

  • Funder: UK Research and Innovation
    Project Code: EP/P020259/1
    Funder Contribution: 5,000,210 GBP

    The Peta-5 proposal from the University of Cambridge brings together 15 world-leading HPC system and application experts from 10 different institutions to lead the creation of a breakthrough HPC and data analytics capability that will deliver significant national impact to the UK research, industry and health sectors. Peta-5 aims to make a significant contribution towards the establishment and sustainability of a new EPSRC Tier 2 HPC network. The Cambridge Tier 2 Centre, working in collaboration with other Tier 1, Tier 2 and Tier 3 stakeholders, aims to form a coherent, coordinated and productive National e-Infrastructure (Ne-I) ecosystem. This greatly strengthened computational research support capability will enable a significant increase in computational and data-centric research outputs, driving growth in both academic research discovery and the wider UK knowledge economy.

    The Peta-5 system will be one of the largest heterogeneous data-intensive HPC systems available to EPSRC research in the UK. To create the critical mass in system capability and capacity needed to make an impact at national level, Cambridge has pooled funding and equipment resources from the University, STFC DiRAC and this EPSRC Tier 2 proposal, creating a total capital equipment value of £11.5M; the request to EPSRC is £5M. The University will guarantee to cover all operational costs of the system for 4 years from the service start date, with the option to run for a fifth year to be discussed. Cambridge will ensure that 80% of the EPSRC-funded element of Peta-5 is deployed on EPSRC research projects, with 65% of the EPSRC-funded element being made available to any UK EPSRC-funded project free of charge through a lightweight resource allocation committee, 15% going to Cambridge EPSRC research and 20% being sold to UK industry to drive the UK knowledge economy.

    The Peta-5 system will be the most capable HPC system in operation in the UK when it enters service in May 2017. In total, Peta-5 will provide 3 petaflops (PF) of sustained performance derived from 3 heterogeneous compute elements, 1PF Intel x86, 1PF Intel KNL and 1PF NVIDIA Pascal GPU (Peta-1), connected via a Pb/s HPC fabric (Peta-2) to an extreme-I/O solid-state storage pool (Peta-3), a petascale data analytics (machine learning + Hadoop) pool (Peta-4) and a large 15 PB tiered storage solution (Peta-5), all under a single execution environment. This creates a new HPC capability in the UK specifically designed to meet the requirements of both affordable petascale simulation and data-intensive workloads combined with complex data analytics. It is the combination of these features that unlocks a new generation of computational science research.

    The core science justification for the Peta-5 service is based on three broad science themes: Materials Science and Computational Chemistry; Computational Engineering and Smart Cities; and Health Informatics. These themes were chosen because they represent significant EPSRC research areas that demonstrate large benefit from the data-intensive HPC capability of Peta-5. The service will clearly be valuable for many other areas of heterogeneous computing and data-intensive science, so a fourth, horizontal thematic of "Heterogeneous - Data Intensive Science" is included. Initial theme allocation in the RAC will be: Materials 30%, Engineering 30%, Health 20%, Heterogeneous - Data Intensive 20%.

    The Peta-5 facility will drive research discovery and impact at national level, creating the largest and most cost-effective petascale HPC resource in the UK and bringing petascale simulation within the reach of a wide range of research projects and UK companies. Peta-5 is also the first UK HPC system specifically designed for large-scale machine learning and data analytics, combining the areas of HPC and Big Data and promising to unlock both knowledge and economic benefit from the Big Data revolution.
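    As a quick check on the quoted percentages, the sketch below makes the allocation arithmetic explicit; treating every figure as a fraction of the EPSRC-funded element is an interpretive assumption, not a statement from the proposal.

    # Access split of the EPSRC-funded element (interpretive assumption:
    # all three figures are fractions of that element).
    access_split = {
        "free to any UK EPSRC project via the RAC": 0.65,
        "Cambridge EPSRC research": 0.15,
        "sold to UK industry": 0.20,
    }
    # Initial RAC allocation of the free share across science themes.
    theme_split = {
        "Materials": 0.30,
        "Engineering": 0.30,
        "Health": 0.20,
        "Heterogeneous - Data Intensive": 0.20,
    }

    # 65% free access + 15% Cambridge research reproduces the quoted 80%
    # deployed on EPSRC research projects.
    epsrc_research = (access_split["free to any UK EPSRC project via the RAC"]
                      + access_split["Cambridge EPSRC research"])
    assert abs(epsrc_research - 0.80) < 1e-9
    assert abs(sum(access_split.values()) - 1.0) < 1e-9
    assert abs(sum(theme_split.values()) - 1.0) < 1e-9

    for theme, frac in theme_split.items():
        print(f"{theme}: {frac:.0%} of RAC-allocated time")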

  • Funder: UK Research and Innovation
    Project Code: EP/V026607/1
    Funder Contribution: 2,671,810 GBP

    How can we trust autonomous computer-based systems? Autonomous means "independent and having the power to make your own decisions". This proposal tackles the issue of trusting autonomous systems (AS) by building experience of regulatory structure and practice, notions of cause, responsibility and liability, and tools for creating evidence of trustworthiness into modern development practice. Modern development practice includes continuous integration and continuous delivery. These practices allow continuous gathering of operational experience, its amplification through the use of simulators, and the folding of that experience into development decisions. This, combined with notions of anticipatory regulation and incremental trust-building, forms the basis for new practice in the development of autonomous systems, where regulation, systems and evidence of dependable behaviour co-evolve incrementally to support our trust in systems. This proposal is delivered by a multi-disciplinary consortium from Edinburgh, Heriot-Watt, Glasgow, KCL, Nottingham and Sussex, bringing together computer science and AI specialists, legal scholars, AI ethicists, and experts in science and technology studies and design ethnography. Together, we present a novel software engineering and governance methodology that includes:
    1) New frameworks that help bridge gaps between legal and ethical principles (including emerging questions around privacy, fairness, accountability and transparency) and an autonomous systems design process that entails rapid iterations driven by emerging technologies (including, e.g., machine-learning-in-the-loop decision-making systems);
    2) New tools for an ecosystem of regulators, developers and trusted third parties to address not only functionality or correctness (the focus of many other Nodes) but also questions of how systems fail, and how the evidence associated with such failures can be managed to facilitate better governance (a sketch of what such tooling might look like follows below);
    3) An evidence base from full-cycle case studies of taking AS through regulatory processes, as experienced by our partners, to facilitate policy discussion regarding reflexive regulation practices.
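    As an illustration of the evidence-management tooling mentioned in point 2, here is a minimal sketch of a tamper-evident, hash-chained log of development and operational events; the EvidenceLog class, its record fields and the example events are hypothetical, not the consortium's actual tools.

    import hashlib
    import json
    import time

    class EvidenceLog:
        # Append-only, hash-chained record of events (tests, simulator runs,
        # deployments) so regulators and trusted third parties can later
        # verify that nothing was altered or removed.
        def __init__(self):
            self.entries = []

        def append(self, event):
            prev = self.entries[-1]["digest"] if self.entries else "genesis"
            record = {"timestamp": time.time(), "event": event, "prev": prev}
            payload = json.dumps(record, sort_keys=True).encode()
            record["digest"] = hashlib.sha256(payload).hexdigest()
            self.entries.append(record)
            return record["digest"]

        def verify(self):
            prev = "genesis"
            for record in self.entries:
                body = {k: record[k] for k in ("timestamp", "event", "prev")}
                payload = json.dumps(body, sort_keys=True).encode()
                if record["prev"] != prev or record["digest"] != hashlib.sha256(payload).hexdigest():
                    return False
                prev = record["digest"]
            return True

    log = EvidenceLog()
    log.append({"kind": "simulation", "scenario": "pedestrian-crossing", "passed": True})
    log.append({"kind": "deployment", "version": "0.3.1"})
    print("chain intact:", log.verify())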

  • Funder: UK Research and Innovation
    Project Code: EP/V062077/1
    Funder Contribution: 5,086,410 GBP

    Powered by data, Industrial Digital Technologies (IDTs) such as artificial intelligence and autonomous robots can be used to improve all aspects of the manufacture and supply of products along supply chains to the customer. Many companies are embracing these technologies, but uptake within the pharmaceutical sector has not been as rapid. The Medicines Made Smarter Data Centre (MMSDC) looks to address the key challenges that are slowing digitalisation and the adoption of IDTs that can transform processes to deliver medicines tailored to patient needs. Work will be carried out across five integrated platforms designed by academic and industrial researcher teams: 1) the Data Platform, 2) the Autonomous MicroScale Manufacturing Platform, 3) the Digital Quality Control Platform, 4) the Adaptive Digital Supply Platform, and 5) the MMSDC Network & Skills Platform.

    Platform 1 addresses one of the sector's core digitalisation challenges: a lack of large data sets and of ways to access such data. The MMSDC data platform will store and analyse data from across the MMSDC project, making it accessible, searchable and reusable for the medicines manufacturing community. New approaches for ensuring consistently high-quality data, such as good practice guides and standards, will be developed alongside data science activities that will identify which data are most important and how best to use them with IDTs in practice.

    Platform 2 will accelerate the development of medicine products and manufacturing processes by creating agile, small-scale production facilities that rapidly generate large data sets and drive research. Robotic technologies will be assembled into a unique small-scale medicine manufacturing and testing system to select drug formulations and processes that produce stable products with the desired in-vitro performance. Integrating several IDTs will accelerate drug product manufacture, significantly reducing experiments and dramatically reducing development time, raw materials and associated costs.

    Platform 3 focusses on digitalising the Quality Control (QC) aspects of medicines development, which are important for ensuring a medicine's compliance with regulatory standards and patient safety requirements. Currently, QC checks are carried out after a process has been completed, potentially spotting problems only after they have occurred. This approach is inefficient, fragmented, costly (>20% of total production costs) and time-consuming. The digital QC platform will research how to transform QC by utilising rich data from IDTs to confirm product and process compliance in real time (a minimal sketch of such a real-time check is given below).

    Platform 4 will generate new understanding of the future supply chain needs of medicines to support the adoption of adaptive digital supply chains for patient-centric supply. IDTs make smaller-scale, autonomous factory concepts viable, supporting more flexible and distributed manufacture and supply. Supply flexibility and agility extend to scale, product variety and shorter lead times (from months to days), offering a responsive, patient-centric or rapid-replenishment operating model. Finally, technology developments closer to the patient, such as diagnostics, provide visibility of patient-specific needs.

    Platform 5 will establish the MMSDC Network & Skills Platform. This Network will lead engagement and collaboration across the key stakeholder groups involved in medicines manufacturing and investments. The Network brings together the IDT-using community and other relevant academic and industrial groups to share developments across the pharmaceutical and broader digital manufacturing sectors, ensuring cross-sector diffusion of MMSDC research. Existing strategic networks will support MMSDC and act as gateways for IDT dissemination and uptake. The lack of appropriate skills in the workforce has been highlighted as a key barrier to IDT adoption; an MMSDC priority is to identify skills needs and, with partners, develop and deliver training to over 100 users.
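    To illustrate the kind of real-time compliance check Platform 3 points towards, here is a minimal sketch of a Shewhart-style control chart applied to a stream of in-process measurements; the measurement, the limits and the fault at the end are illustrative assumptions, not MMSDC specifications.

    import numpy as np

    def control_limits(baseline, k=3.0):
        # Derive lower/upper control limits from in-spec baseline runs
        # (mean +/- k standard deviations, the classic Shewhart rule).
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        return mu - k * sigma, mu + k * sigma

    def monitor(stream, lcl, ucl):
        # Flag each measurement as it arrives, rather than waiting for an
        # end-of-batch QC test.
        for i, x in enumerate(stream):
            status = "OK" if lcl <= x <= ucl else "OUT OF CONTROL"
            print(f"sample {i:3d}: {x:7.3f}  {status}")

    rng = np.random.default_rng(42)
    baseline = rng.normal(100.0, 0.5, 50)   # e.g. tablet mass in mg, in spec
    lcl, ucl = control_limits(baseline)
    live = np.concatenate([rng.normal(100.0, 0.5, 8), [103.5]])  # fault at the end
    monitor(live, lcl, ucl)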

  • Funder: UK Research and Innovation
    Project Code: MR/X011585/1
    Funder Contribution: 400,618 GBP

    In the early years of computational pathology (CPath), algorithms focussed mainly on the segmentation and identification of objects such as nuclei, glands, ducts, vessels and other patterns of interest to pathologists in everyday clinical practice. The concept was to assist pathologists in identifying patterns that are difficult to eyeball across the huge landscape of cancer tissue in a whole slide image (WSI). With the advent of modern CPath algorithms based on deep learning (DL), it became clear that there are hidden features which humans usually miss due to inattentional blindness. CPath has therefore moved beyond the identification and classification of individual patterns within a WSI towards WSI-level or case-level diagnosis and the prediction of mutations and therapeutic response, discovering new morphological patterns and tissue phenotypes and even surpassing pathologist performance in some cases. On the other hand, DL algorithms are usually considered a black box because the learnt features lack interpretability, which makes it difficult to understand the biology of different diseases. One reason is our inability to analyse the huge landscapes of the tumour microenvironment (TME): WSIs are divided into small patches before analysis, owing to hardware limitations and the complex DL architectures required to analyse images from different stains and modalities. The challenge is the gigapixel size of WSIs containing the landscape of cancer, which compels exploration but poses technological obstacles. Due to tumour heterogeneity, these small patches are usually not representative of the WSI as a whole.

    We therefore need to develop techniques that can analyse WSIs without dividing them into smaller patches, keeping the spatial information intact. This not only allows us to overcome the limitations imposed by tumour heterogeneity but also helps identify heterogeneous regions and embedded spatial relationships linked to patient outcome and other clinical variables. These techniques should overcome the practical limitations of the hardware, be invariant to the input stains, and aid interpretability and biological understanding of the TME. The algorithmic limitations are currently tackled with WSI-level weakly supervised labels or compressed representations. These approaches have major drawbacks: for example, they discard during compression the essential spatial information required to incorporate cell-to-cell interactions in clinically significant regions, and they focus mostly on the identification or classification of disease into sub-categories, with the DL model treated as a black box. Analysis of the TME at the cellular level is important for understanding mechanisms in cancer where tumour heterogeneity plays a significant role. Multiplexed immunofluorescence (MxIF) images provide additional data for subtyping individual cells on the same tissue section, which is not currently possible with existing brightfield approaches, and recent advances in whole slide fluorescence imaging allow WSIs to be scanned with multiple markers. We therefore need stain- and modality-agnostic approaches that can analyse WSIs without losing spatial information at the cellular level, so that these rich data can be mined for a better understanding of cancer. We propose to build on existing technology and utilise the extracted information to understand TME interactions at the whole slide image level.

    In this project, we will develop stain-agnostic techniques to analyse and identify patterns in WSIs by creating HistoMaps, which can be related directly to biologically meaningful and clinically relevant parameters, i.e. mutations, survival and response to therapy. Linking histology landscapes to clinical variables in this way will deepen our understanding of cancer, help oncologists make informed decisions on therapeutic interventions, and assist pharma in developing new targets.
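    As a minimal sketch of the patch-free, spatially aware representation argued for above, the example below links detected cell centroids into a whole-slide proximity graph, preserving cell-to-cell neighbourhood structure in global slide coordinates; the synthetic centroids and the linking radius are assumptions for illustration, not the project's HistoMap method.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_cell_graph(centroids, radius):
        # Link every pair of cells closer than radius in slide coordinates,
        # keeping global spatial structure rather than per-patch context.
        tree = cKDTree(centroids)
        return sorted(tree.query_pairs(r=radius))

    # Synthetic stand-in for nuclei detected across a gigapixel slide; a real
    # pipeline would take centroids (and cell subtypes, e.g. from MxIF) from
    # a cell-detection model instead.
    rng = np.random.default_rng(0)
    centroids = rng.uniform(0, 10_000, size=(500, 2))   # (x, y) in pixels
    edges = build_cell_graph(centroids, radius=150.0)
    degrees = np.bincount(np.array(edges).ravel(), minlength=len(centroids))
    print(f"{len(edges)} edges; mean cell degree {degrees.mean():.2f}")

    Graph representations like this keep memory proportional to the number of cells rather than the number of pixels, which is one way to sidestep the hardware limits that force patching.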
