
The DART project aims to pioneer a ground-breaking capability to enhance the performance and energy efficiency of reconfigurable hardware accelerators for next-generation computing systems. This capability will be achieved through a novel foundation for a transformation engine, based on heterogeneous graphs, for design optimisation and diagnosis. While hardware designers are familiar with transformations based on Boolean algebra, the proposed research promotes a design-by-transformation style by providing, for the first time, tools that facilitate experimentation with design transformations and their regulation by meta-programming. These tools will cover design space exploration based on machine learning, and end-to-end tool chains that map designs captured in multiple source languages to heterogeneous reconfigurable devices targeting cloud computing, the Internet of Things and supercomputing. The proposed approach will be evaluated through a variety of hardware-acceleration benchmarks, and by codifying strategies for automating the search for neural architectures whose hardware implementations offer both high accuracy and high efficiency.
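To make the design-by-transformation idea concrete, the sketch below is a minimal, hypothetical illustration in Python (the Design class, the fuse_multiply_add rule, the apply_strategy meta-program and the node kinds are all invented for this example and do not represent DART's actual tools): a design is held as a heterogeneous graph of typed operator nodes and typed wires, one optimisation is expressed as a graph-rewrite rule, and a simple meta-program applies the available rules until none fires.

from dataclasses import dataclass, field

@dataclass
class Design:
    nodes: dict = field(default_factory=dict)   # name -> operator kind, e.g. "mul", "add"
    edges: list = field(default_factory=list)   # (source, destination, wire kind)

def fuse_multiply_add(design: Design) -> bool:
    """One rewrite rule: a 'mul' node feeding an 'add' node becomes a fused 'mac' node."""
    for (src, dst, _) in list(design.edges):
        if design.nodes.get(src) == "mul" and design.nodes.get(dst) == "add":
            design.nodes[dst] = "mac"           # the adder absorbs the multiplier
            del design.nodes[src]
            # Redirect the multiplier's inputs to the fused node and drop the mul->add wire.
            design.edges = [(s, dst if d == src else d, k)
                            for (s, d, k) in design.edges if (s, d) != (src, dst)]
            return True
    return False

def apply_strategy(design: Design, rules, max_passes=10):
    """A toy 'meta-program': keep applying rules until no rule fires or the pass budget runs out."""
    for _ in range(max_passes):
        if not any(rule(design) for rule in rules):
            break
    return design

# A toy design: two inputs are multiplied and the product is then accumulated.
d = Design(nodes={"x": "in", "y": "in", "m0": "mul", "a0": "add", "o": "out"},
           edges=[("x", "m0", "data"), ("y", "m0", "data"),
                  ("m0", "a0", "data"), ("a0", "o", "data")])
apply_strategy(d, [fuse_multiply_add])
print(d.nodes)   # {'x': 'in', 'y': 'in', 'a0': 'mac', 'o': 'out'}

In the engine proposed above, such rules would not be hand-applied in this way: their selection and ordering would be regulated by meta-programming and steered by machine-learning-based design space exploration.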
Bayesian inference is a process which allows us to extract information from data, using prior knowledge articulated as statistical models for the data. We are focused on developing a transformational solution to Data Science problems that can be posed as such Bayesian inference tasks. An existing family of algorithms, Markov chain Monte Carlo (MCMC), offers impressive accuracy but demands a significant computational load. For a significant subset of the Data Science users we interact with, while the accuracy offered by MCMC is recognised as potentially transformational, the computational load is simply too great for MCMC to be a practical alternative to existing approaches. These users include academics working in science (e.g., Physics, Chemistry, Biology and the social sciences) as well as government and industry (e.g., the pharmaceutical, defence and manufacturing sectors). The problem, then, is how to make the accuracy offered by MCMC accessible at a fraction of the computational cost.

The solution we propose is to replace MCMC with a more recently developed family of algorithms, Sequential Monte Carlo (SMC) samplers. While MCMC, at its heart, manipulates a single sampling process, SMC samplers are inherently population-based algorithms that manipulate a population of samples. This makes SMC samplers well suited to implementations that exploit parallel computational resources, so emerging hardware (e.g., Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs) and Intel's Xeon Phis, as well as High Performance Computing (HPC) clusters) can be used to make SMC samplers run faster. Indeed, our recent work, which first had to remove some algorithmic bottlenecks, has shown that SMC samplers can offer accuracy similar to MCMC with implementations that are better suited to such emerging hardware.

The benefits of using an SMC sampler in place of MCMC go beyond those made possible by simply posing a (tough) parallel computing challenge. The parameters of an MCMC algorithm necessarily differ from those of an SMC sampler, and these differences offer opportunities to develop SMC samplers in directions that are not possible with MCMC. For example, SMC samplers, in contrast to MCMC algorithms, can be configured to exploit a memory of their historic behaviour and can be designed to transition smoothly between problems. By exploiting such opportunities, it seems likely that we will produce SMC samplers that outperform MCMC by even more than parallelised implementations alone make possible.

Our interactions with users, our experience of parallelising SMC samplers and the preliminary results we have obtained when comparing SMC samplers with MCMC make us excited about the potential that SMC samplers offer as a "New Approach for Data Science". Our current work has only begun to explore that potential, and we perceive that significant benefit could result from a larger programme of work that helps us understand the extent to which users will benefit from replacing MCMC with SMC samplers. We therefore propose a programme of work that combines a focus on users' problems with a systematic investigation into the opportunities offered by SMC samplers. Our strategy for achieving impact comprises multiple tactics.
Specifically, we will: use identified users to act as "evangelists" in each of their domains; work with our hardware-oriented partners to produce high-performance reference implementations; engage with the developer team for Stan (the most widely-used generic MCMC implementation); work with the Industrial Mathematics Knowledge Transfer Network and the Alan Turing Institute to engage with both users and other algorithmic developers.
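To make the contrast with MCMC concrete, the following sketch is a minimal, illustrative tempered SMC sampler in Python/NumPy (a toy example under our own assumptions, not one of the reference implementations proposed above; the target density, initial density and parameter values are stand-ins). The point to note is that the reweighting, resampling and move steps each act on the whole population of samples at once, which is what makes SMC samplers amenable to GPUs, FPGAs and HPC clusters, whereas a single MCMC chain must advance one correlated step at a time.

import numpy as np

rng = np.random.default_rng(0)

def log_q(x):        # easy initial density: N(0, 3^2), up to a constant
    return -0.5 * (x / 3.0) ** 2

def log_pi(x):       # stand-in for a Bayesian posterior: N(0, 1), up to a constant
    return -0.5 * x ** 2

def smc_sampler(n=2000, betas=np.linspace(0.0, 1.0, 21), step=0.8):
    x = rng.normal(0.0, 3.0, size=n)   # population of samples drawn from q
    logw = np.zeros(n)                 # log importance weights
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # Reweight the whole population for the change of target from pi_b0 to pi_b1,
        # where pi_b is proportional to q^(1-b) * pi^b.
        logw += (b1 - b0) * (log_pi(x) - log_q(x))
        # Resample (population-wide) when the effective sample size collapses.
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n / 2:
            x, logw = x[rng.choice(n, size=n, p=w)], np.zeros(n)
        # Move: one random-walk Metropolis step per sample, targeting pi_b1,
        # applied to all samples at once.
        prop = x + step * rng.normal(size=n)
        log_ratio = (1 - b1) * (log_q(prop) - log_q(x)) + b1 * (log_pi(prop) - log_pi(x))
        x = np.where(np.log(rng.uniform(size=n)) < log_ratio, prop, x)
    w = np.exp(logw - logw.max()); w /= w.sum()
    return x, w

samples, weights = smc_sampler()
print("weighted posterior mean:", float(np.sum(weights * samples)))

Every array operation above is data-parallel across the population, which is precisely the structure that the emerging hardware discussed above can exploit.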
Vascular disease is the most common precursor to ischaemic heart disease and stroke, two of the leading causes of death worldwide. Advances in endovascular intervention in recent years have transformed patient survival rates and post-surgical quality of life. Compared to open surgery, endovascular intervention offers faster recovery, a reduced need for general anaesthesia, reduced blood loss and significantly lower mortality. However, it involves complex manoeuvring of pre-shaped catheters to reach target areas in the vasculature, and some endovascular tasks can be challenging for even highly skilled operators. Robot-assisted endovascular intervention aims to address some of these difficulties, with the added benefit of allowing the operator to remotely control and manipulate devices, thus avoiding exposure to X-ray radiation. The purpose of this work is to develop a new robot-assisted endovascular platform, incorporating novel device designs with improved human-robot control. It builds on our strong partnership with industry, with the aim of developing the next generation of robots that are safe, effective, and accessible to the general NHS population.
The arrival of exascale computers will open new frontiers in our ability to simulate highly complex engineered and natural systems. This will create new opportunities for the design and optimisation of new, highly integrated engineered systems for the future. It will also allow the development of 'digital twins' of complex natural systems, such as the human body and coastal/river regions, that will allow us to explore and manage engineering-led interventions in personalised healthcare and in the management of the natural environment. The exascale computers of the future will be highly parallel, with hundreds of thousands, or even millions, of processes working collectively. Exploiting this remarkable level of parallelism will require dramatic advances in the mathematics, numerical methods, software engineering and software tools that underpin simulation, and will depend on experts in each of these areas coming together. The simulation of the different but tightly coupled physical processes that characterise complex engineered and natural systems poses the additional challenge of coordinating the simulation of multiple processes, such as the noise created by airflow around a moving structure under the influence of a magnetic field, or the fluid, solid, electrical and chemical interactions within a human body. This project brings together a working group of experts from computer science, mathematics and engineering to address the challenge of how to simulate coupled physical processes at a system level on future exascale systems. It will also address how to integrate into the simulation process the vast quantity of data that can be collected from real systems, how to assess uncertainties, and how to interpret the vast quantities of data that exascale simulations will generate. The working group will formulate roadmaps for the enabling research required for exascale computing, and support the training of research software engineers in exascale-ready software skills.
Lung cancer is a challenging disease to diagnose and treat, and is the most common cause of cancer death in both men and women worldwide. Five-year survival rates remain poor at 9.0%, and on a global basis the 2012 statistics suggest that lung cancer was responsible for 1.59 million deaths. A particular difficulty is that most lung cancers are diagnosed at a late stage, with about 75% of patients having advanced disease at the time of diagnosis. Identification of patients with lung cancer at an earlier stage is therefore vital if outcomes are to be improved. CT screening can identify possible cancerous nodules in the lung, but biopsy and histology, in which a tissue sample is examined under a microscope, are then required for diagnosis. The standard procedure to extract the tissue sample is trans-thoracic biopsy, in which a needle is inserted through the chest wall, typically under CT image guidance. This provides good diagnostic results, but is associated with complications, especially pneumothorax (collapsed lung), which occurs in 15% of cases. More recently, technical advances have allowed biopsy to be performed through a bronchoscope, reducing the risk of complications and allowing the procedure to be performed during routine examination sessions. However, success is highly operator-dependent and, for remote, small nodules, the diagnostic rate (the yield) is poor. This is due to a number of factors, including the complexity of the bronchial tree, patient motion due to breathing (particularly at distal segments), poor ergonomics, and the large diameter of bronchoscopes, which prohibits access beyond fourth-generation bronchial segments (the fourth level of 'splitting' in the bronchial tree). The purpose of the REBOT project is to develop a robot-guided endobronchial probe that will allow access to the deepest reaches of the lung. It will be introduced through a working channel of a bronchoscope, making it highly compatible with current procedures. The probe will have integrated optical coherence tomography (OCT) and fluorescence imaging to allow multi-modal visualisation of the morphological and cellular details of the airways. Optical coherence tomography will provide 3D images to a depth of 1-2 mm into the tissue, while fluorescence imaging will provide high-resolution surface imaging. These real-time imaging techniques will be used to help navigate the probe to the correct location for extraction of the biopsy tissue sample, increasing the chances of a successful diagnosis.