
The current 'gold standard' for diagnosing and grading many diseases (including most solid tumours) is largely based on an expert histopathologist's visual microscopic assessment of an extremely thin section (only a few micrometres thick) of the suspicious tissue specimen, mounted on a glass slide. This practice has remained more or less unchanged for several decades, and results in subjective and variable diagnoses. However, the recent uptake of digital slide scanners by some diagnostic pathology laboratories in the UK marks a revolution in pathology practice across NHS trusts, with our local NHS trust being the first in the country to use digitally scanned images of tissue slides for routine diagnostics. A digital slide scanner produces a multi-gigapixel whole-slide image (WSI) for each histology slide, with each image containing rich information about tens of thousands of cells of different kinds and their spatial relationships with each other. This project aims to introduce a novel paradigm for analytics and computerised profiling of the tissue microenvironment. We will develop sophisticated image-analytics tools to reveal spatial trends and patterns associated with disease sub-groups (for example, patient groups whose cancer is likely to advance more aggressively) and deploy those tools for clinical validation at our local NHS trust. This will be made possible by further advancing recent developments in our group, such as those allowing us to recognise individual cells of different kinds in WSIs, which in turn enables us to paint a colourful picture of the tissue microenvironment that we term the 'histology landscape'. Understanding and analysing the tissue microenvironment is not only crucial to assessing the grade and aggressiveness of disease and to predicting its course; it can also help us better understand how genomic alterations manifest themselves as structural changes in the tissue microenvironment.
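The kind of cell-level spatial profiling described above can be illustrated with a toy sketch. This is a minimal illustration, not the project's actual method: the label values, tile size and grid size are all hypothetical, and a real multi-gigapixel WSI would be read with a dedicated slide-reading library rather than held in a single array. Given a per-pixel map of cell-type labels (the output of a cell classifier), we aggregate counts per tile to form a coarse spatial profile of the tissue, in the spirit of the 'histology landscape'.

```python
import numpy as np

# Toy stand-in for a cell-type label map produced by a cell classifier:
# 0 = background, 1 = tumour cell, 2 = lymphocyte, 3 = stromal cell.
# (Labels, sizes and names here are illustrative only.)
rng = np.random.default_rng(0)
label_map = rng.integers(0, 4, size=(1024, 1024))  # real WSIs are far larger

TILE = 256      # tile side in pixels (hypothetical choice)
N_TYPES = 4     # number of cell-type labels

def histology_landscape(labels, tile, n_types):
    """Per-tile cell-type composition, shape (rows, cols, n_types)."""
    rows, cols = labels.shape[0] // tile, labels.shape[1] // tile
    landscape = np.zeros((rows, cols, n_types))
    for r in range(rows):
        for c in range(cols):
            patch = labels[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            counts = np.bincount(patch.ravel(), minlength=n_types)
            landscape[r, c] = counts / counts.sum()  # fraction of each type
    return landscape

landscape = histology_landscape(label_map, TILE, N_TYPES)
print(landscape.shape)  # (4, 4, 4): a coarse spatial profile of the tissue
```

Each tile's composition vector could then feed downstream analyses of spatial trends, such as comparing tumour-lymphocyte mixing between patient sub-groups.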
We will develop tools and techniques to extract patterns and trends found in the spatial structure and the 'social' interplay of different cells, or colonies of cells, found in these complex histology landscapes. Our goal is to establish the effective use of image analytics for understanding the histology landscape in a quantitative and systematic manner, facilitating the discovery of image-based markers of disease progression and survival that are intuitive, biologically meaningful and clinically relevant, eventually leading to the optimal selection of treatment option(s) customised to individual patients. This project will analyse real image data and associated clinical and genomic data from patient cohorts, using colorectal cancer as a case study. The research staff on this project will work closely with clinical collaborators to ensure the biological significance and clinical relevance of spatial trends and patterns found in the data. In collaboration with our industrial partner Intel, we will test and demonstrate the effectiveness of our methods in a clinical setting, potentially leading to better healthcare provision for patients and cost savings for the NHS.
We will improve patient health and medical research by maximising the use of the vast amounts of human data being generated in the NHS. There are two obstacles: (i) inter-related clinical and research datasets are dispersed across numerous computer systems, making them hard to integrate; (ii) there is a serious shortage of computational expertise as applied to clinical research. As part of the UK's healthcare strategy to overcome these limitations, we have assembled a world-class consortium of institutions and scientists, including UCL Partners (comprising NHS Trusts treating >6 million patients), the Francis Crick Institute, the Sanger Institute and the European Bioinformatics Institute. Close links with the NHS (through Farr and Genomics England) will allow information exchange on health and disease progression. We have also engaged leading companies such as GSK and Intel. We will use the MRC funds for two purposes:

1. Create a powerful eMedLab data centre. We will build a computer cluster that allows us to store, integrate and analyse genetic, patient and electronic health records. By co-locating data in a single centre, we eliminate the delays and security risks that occur when information is transmitted. Research Technologists supplied by the partners will install and maintain the infrastructure and software environment.

2. Expand scientific and technical expertise in UK Medical Bioinformatics through a Research & Training Academy. Basic and clinical scientists and bioinformaticians will be trained to perform world-leading computational biomedical science. We will train in the whole range of skills involved in medical bioinformatics research through taught courses, seminars, workshops and informal discussion. To coordinate research activities across partners, we will establish Academy Labs: flexible, semi-overlapping groupings of academic and industrial researchers who share insights and plan activities in areas of common analytical challenge.
The Academy will provide a mechanism for information and skills exchange across the traditional boundaries of disease types. It will enhance existing projects in three disease domains in which we have unique strengths: rare diseases, cardiovascular diseases and cancer.

Rare diseases: We house 31 of the 70 Nationally Commissioned Highly Specialised Services; ~0.5M of our 6M patients have a rare disease, including >50% of those treated at Great Ormond Street Hospital. More than 200 research teams generate large quantities of genetic, imaging (e.g., 3D facial reconstructions) and clinical information (e.g., patient records).

Cardiovascular: We also lead genomic, imaging and health informatics programmes in cardiovascular disease, with contributions to projects such as UK10K, and host multiple national cardiovascular registries through the National Institute for Cardiovascular Outcomes Research. These are linked to primary and hospital clinical care records through Farr@UCLP, with current cohort sizes of ~2M people.

Cancer: We have particular clinical expertise in some of the most difficult-to-treat cancer types and host major international data resources. These include individuals recruited to the TRACERx study of lung cancer, 8,500 women with abnormal cervical smears in whom methylation patterns of the HPV16 genome predict progression to high-grade precursor disease, and one of the largest sarcoma biobanks in the world.

Ultimately, this bid will allow us to use new computational approaches to (i) link patient records and research data in order to understand the pathogenesis of disease, (ii) use genomic, imaging and clinical data to identify diagnostic, prognostic and predictive biomarkers to guide therapy, predict outcome and increase recruitment to clinical trials based on stratified populations, and (iii) translate new IP through engagement with the pharmaceutical industry.
Bringing advanced computer vision to edge devices such as robots, consumer electronics or sensor networks is challenging because of the constraints on power, size and communication bandwidth under which they often operate. We propose vertically integrated research into the paradigm of on-sensor computer vision, where sensing and processing are unified on a single chip that produces abstract, information-rich output rather than images. We aim to demonstrate that on-sensor computer vision can be far more powerful and general than previous research has shown, and that the right hardware design, software framework and algorithm choices permit switchable or even simultaneous computation of a broad set of vision competences (such as motion estimation, segmentation and scene classification) on a single device. We propose to work on the design of on-sensor computer vision systems through a programme of work spanning pixel-processing architecture design and microelectronic hardware implementation, software platform development, and unified algorithm design and experimentation to determine how to use this hardware in a full application. This will enable camera devices that not only capture images but have a powerful built-in vision capability to understand what they are looking at, and ultimately can go from light to decision on a single sensor/processor chip, with unprecedented speed, low power consumption and small footprint. We hope to open up a new class of edge applications in which cameras can be far more efficient and independent, and in which smart cameras can be used in ways never previously considered.
The water, energy and food (WEF) systems of the planet are under strain, a situation sometimes described as the "perfect storm". They are intrinsically linked and inter-dependent (the nexus), and humanity needs to plot a course that ensures sustainability and, ideally, equity of access to resources. The WEFWEBs project will examine the data and evidence for the water, energy and food systems and their interactions and dependencies within the local, regional and national environment. We need to maintain a balance between the three sometimes opposing directions in which our primary systems are moving, to ensure that we safeguard our ecosystems while still being able to live sustainably in a world where demands are increasing. To study these systems and their dependencies and interactions, we need to bring together a multitude of disciplines: the physical, environmental, computational and mathematical sciences, together with economics, social science, psychology and policy. Each of the three systems needs to be studied through the data that exist on their flows, resources and impacts, but also through individual and civic understanding of the systems. We will collect, synthesise and assimilate existing data and models, together with new data collected using new sensing technology and social media. We will examine each of the multiple dimensions of the nexus in three place-based studies, where we can explore and examine the outputs from data analysis, process and network models, and studies of social perceptions. The project will deliver multiple dynamic WEF nexus maps at spatial scales spanning the dimensions of the problem, reflecting current status and changes, and the interactions of the primary systems in space and time.
There is currently no critically systemic, participatory, multi-stakeholder mapping of the entire multi-scale WEF nexus for the UK. This project offers innovation in its multi-disciplinarity and in its variety of methods, including systemic intervention, data analytics and crowd-sourcing techniques, for mapping the WEF nexus. Ultimately, WEFWEBs will give citizens and policy makers alike a better understanding of the effects of the choices and decisions to be made.
The achievements of modern research, and their rapid progress from theory to application, are increasingly underpinned by computation. Computational approaches are often hailed as a third pillar of science, alongside empirical and theoretical work. While its breadth makes computation almost as ubiquitous as mathematics as a key tool in science and engineering, it is a much younger discipline and stands to benefit enormously from increased capacity and from greater efforts towards integration, standardisation and professionalism. The development of new ideas and techniques in computing is extremely rapid, the progress enabled by these breakthroughs is enormous, and their impact on society is substantial: modern technologies such as the Airbus A380, MRI scanners and smartphone CPUs could not have been developed without computer simulation; progress on major scientific questions, from climate change to astronomy, is driven by the results of computational models; and major investment decisions are underwritten by computational modelling. Furthermore, simulation modelling is emerging as a key tool in domains experiencing a data revolution, such as biomedicine and finance. This progress has been enabled by the rapid increase of computational power, which in the past came from increases in the rate at which a processor can execute instructions. However, this clock rate cannot be increased much further, and in recent computational architectures (such as GPUs and the Intel Xeon Phi) additional computational power is instead provided by placing on the order of hundreds of computational cores in the same unit. This opens up the potential for new order-of-magnitude performance improvements, but requires additional specialist training in parallel programming and computational methods to tap into and exploit this opportunity.
Computational advances are enabled by new hardware; by innovations in algorithms, numerical methods and simulation techniques; and by the application of best practice in scientific computational modelling. The most effective progress and the highest impact are obtained by combining, linking and simultaneously exploiting step changes in hardware, software, methods and skills. However, good computational science training is scarce, especially at postgraduate level. The Centre for Doctoral Training in Next Generation Computational Modelling will develop more than 55 graduate students to address this skills gap. Trained as future leaders in computational modelling, they will form the core of a community of computational modellers crossing disciplinary boundaries, constantly working to transfer the latest computational advances to related fields. By tackling cutting-edge research in fields such as Computational Engineering, Advanced Materials, Autonomous Systems and Health, whilst communicating their advances and working together with a world-leading group of academic and industrial computational modellers, the students will be perfectly equipped to drive advanced computing over the coming decades.