
Understanding and characterising the behaviour of fluids is fundamental to numerous industrial and environmental challenges with wide-ranging societal impact. The CDT in Fluid Dynamics at Leeds will provide the next generation of highly trained graduates with the technical and professional skills and knowledge needed to tackle such problems. Fluid processes are critical both to economic productivity and to the health and environmental systems that affect our daily lives. For example, at the microscale, the flow of liquid through the nozzle of an ink-jet printer controls the quality of the printed product, whilst the flow of a coolant around a microprocessor determines whether or not the components will overheat. At the large scale, the atmospheric conditions of the Earth depend upon the flow of gases in the atmosphere and their interaction with the land and oceans. Understanding these processes allows short-term weather forecasting and long-term climate prediction; both are crucial if industry, government and society are to plan and adapt. Fluid flows, and their interactions with structures, are also important to the performance of an array of processes and products that we take for granted in our everyday lives: gas and water supply to our homes, generation of electricity, fuel efficiency of vehicles, the comfort of our workplaces, the diagnosis and treatment of diseases, and the manufacture of most of the goods that we buy. Understanding, predicting and controlling Fluid Dynamics is key to reducing costs, increasing performance and enhancing the reliability of all of these processes and products.

Our CDT draws on the substantial breadth and depth of Fluid Dynamics research expertise at the University of Leeds. We will deliver an integrated MSc/PhD programme in collaboration with external partners spanning multiple sectors, including energy, transport, environment, manufacturing, consultancy, defence, computing and healthcare, who highlight their need for skilled Fluid Dynamicists. Through a combination of taught courses, team projects, professional skills training, external engagement and an in-depth PhD research project, we will develop broad and deep technical expertise plus the team-working and problem-solving skills needed to tackle challenges in a trans-disciplinary manner. We will recruit and mentor a diverse cohort from a range of science and engineering backgrounds and provide a vibrant and cohesive training environment to facilitate peer-to-peer support. We will build strengths in mathematical modelling, computational simulation and experimental measurement, and, through multi-disciplinary projects co-supervised by academics from different Schools, we will enable students to undertake a PhD project that both strengthens and moves them beyond their undergraduate discipline. Our students will be outward facing, with opportunities to undertake placements with industry partners or research organisations overseas, to participate in summer schools and study challenges, and to lead outreach activities, becoming ambassadors for Fluid Dynamics.

Industry and external engagement will be at the heart of the CDT: all MSc team projects will be challenges set and mentored by industry (with placements embedded); each student will have the opportunity for user engagement in their PhD project (from sponsorship, external supervision and access to facilities, to mentoring); and our partners will be actively involved in overseeing our strategic direction, management and professional training.
Many components will be provided by or with our partners, including research software engineering, responsible innovation, commercial awareness and leadership.
How can we accurately describe real-world processes such as fluid flows, or the chemical reactions used to manufacture industrial products? What mathematical formalism enables practitioners to guarantee a specific physical behaviour or motion of a fluid, or to maximise the yield of a particular substance? The answer lies in the important scientific field of PDE-constrained optimisation. Partial differential equations (PDEs) are mathematical tools that enable us to model and predict the behaviour of a wide range of real-world physical systems. From the optimisation point of view, a particularly important class of such problems is those in which the dynamics may be controlled in some desirable way, for instance by applying forces to a domain in which fluid flow takes place, or by inserting chemical reactants at certain rates. By influencing a system in this way, we are able to steer a real-world process towards an optimised outcome. It is hence essential to study and understand PDE-constrained optimisation problems.

The possibilities offered by such problems are immense, influencing groundbreaking research in applied mathematics, engineering, and the experimental sciences. Crucial real-world applications arise in fluid dynamics, chemical and biological mechanisms, weather forecasting, image processing (including medical imaging), financial markets and option pricing, and many others. Although a great deal of theoretical work has been undertaken on such problems, only in the past decade or so has a focus been placed on solving them accurately and robustly on a computer, by tackling the matrix systems of equations which result. Much of the research underpinning this proposal involves constructing powerful iterative methods accelerated by 'preconditioners': accurate approximations of the relevant matrix which are much cheaper to apply than solving the matrix system itself. Applying our methodology can then open the door to scientific challenges which were previously out of reach, since we only store and work with matrices that are tiny compared to the overall systems being solved.

Recently, PDE-constrained optimisation problems have found important applications in data analysis. This is due to the vast computing power that is available today, meaning that there exists the potential to store and work with huge-scale datasets arising from commercial records, online news sites, or health databases, for example. In turn, this has led to a number of data-driven processes being successfully modelled by optimisation problems constrained by PDEs. It is essential that algorithms for solving problems from these applications of data science can keep pace with the explosion of data which arises from real-world processes. Our novel numerical methods for solving the resulting huge-scale matrix systems aim to do exactly this. In this project, we will examine PDE-constrained optimisation problems in the presence of uncertain data, image processing problems, bioinformatics applications, and deep learning processes. For each problem, we will devise state-of-the-art mathematical models to describe the process, for which we will then construct potent iterative solvers and preconditioners to tackle the resulting matrix systems.
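To fix ideas, a standard textbook model problem, given here purely for orientation and not as this proposal's specific formulation, is the distributed control of Poisson's equation with a tracking-type objective:

    \min_{y,\,u} \ \tfrac{1}{2}\,\|y - y_d\|_{L^2(\Omega)}^2 + \tfrac{\beta}{2}\,\|u\|_{L^2(\Omega)}^2
    \quad \text{subject to} \quad -\nabla^2 y = u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega,

where y_d is a desired state and \beta > 0 a regularisation parameter. Discretising the first-order optimality conditions yields a large, sparse saddle-point system

    \begin{pmatrix} M & 0 & K^{\top} \\ 0 & \beta M & -M \\ K & -M & 0 \end{pmatrix}
    \begin{pmatrix} y \\ u \\ p \end{pmatrix}
    = \begin{pmatrix} M\,y_d \\ 0 \\ 0 \end{pmatrix},

in which M and K are mass and stiffness matrices and p is the adjoint variable; matrices with exactly this saddle-point structure are the targets of the preconditioners described above.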
Our new algorithms will be validated theoretically and numerically, after which we will release an open-source code library to maximise their applicability and impact on modern optimisation and data science problems.
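As a concrete, if greatly simplified, illustration of the preconditioning strategy, the following Python sketch (a toy construction for this summary: the 1D discretisation, parameter values and Schur-complement approximation are illustrative assumptions, not the project's software) assembles the saddle-point system above on a one-dimensional domain and solves it with MINRES, accelerated by a block-diagonal preconditioner:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, beta = 200, 1e-4                   # illustrative mesh size and regularisation
    h = 1.0 / (n + 1)
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h  # 1D stiffness
    M = sp.identity(n, format="csc") * h                                         # lumped mass
    y_d = np.sin(np.pi * np.arange(1, n + 1) * h)                                # desired state

    # KKT saddle-point system in the unknowns (y, u, p) = (state, control, adjoint)
    Z = sp.csc_matrix((n, n))
    A = sp.bmat([[M, Z,        K.T],
                 [Z, beta * M, -M ],
                 [K, -M,       Z  ]], format="csc")
    rhs = np.concatenate([M @ y_d, np.zeros(n), np.zeros(n)])

    # Block-diagonal preconditioner diag(M, beta*M, S), approximating the Schur
    # complement K M^{-1} K^T + (1/beta) M by its dominant term K M^{-1} K^T.
    S = (K @ spla.inv(M) @ K.T).tocsc()
    solve_M, solve_S = spla.factorized(M), spla.factorized(S)

    def apply_preconditioner(r):
        return np.concatenate([solve_M(r[:n]),
                               solve_M(r[n:2 * n]) / beta,
                               solve_S(r[2 * n:])])

    P = spla.LinearOperator(A.shape, matvec=apply_preconditioner, dtype=float)
    x, info = spla.minres(A, rhs, M=P)
    print("MINRES flag:", info, "| state-target distance:", np.linalg.norm(x[:n] - y_d))

The point of such preconditioners is that each application involves only cheap solves with the small blocks M and S, yet the preconditioned iteration converges in a number of steps largely insensitive to the mesh size; the research described above pursues the same effect for far harder, huge-scale systems.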
The achievements of modern research and their rapid progress from theory to application are increasingly underpinned by computation. Computational approaches are often hailed as a new third pillar of science, alongside empirical and theoretical work. While its breadth makes computation almost as ubiquitous as mathematics as a key tool in science and engineering, it is a much younger discipline and stands to benefit enormously from increased capacity and from greater integration, standardisation, and professionalism.

The development of new ideas and techniques in computing is extremely rapid, the progress enabled by these breakthroughs is enormous, and their impact on society is substantial: modern technologies such as the Airbus A380, MRI scanners and smartphone CPUs could not have been developed without computer simulation; progress on major scientific questions, from climate change to astronomy, is driven by the results of computational models; and major investment decisions are underwritten by computational modelling. Furthermore, simulation modelling is emerging as a key tool within domains experiencing a data revolution, such as biomedicine and finance.

This progress has been enabled by the rapid increase of computational power, which in the past came from increasing the rate at which a processor carries out its instructions. However, this clock rate cannot be increased much further, and recent computational architectures (such as GPUs and the Intel Xeon Phi) instead provide additional computational power through hundreds of computational cores in the same unit. This opens up the potential for new order-of-magnitude performance improvements, but exploiting that opportunity requires additional specialist training in parallel programming and computational methods. Computational advances are enabled by new hardware, by innovations in algorithms, numerical methods and simulation techniques, and by the application of best practice in scientific computational modelling. The most effective progress and the highest impact are obtained by combining, linking and simultaneously exploiting step changes in hardware, software, methods and skills. However, good computational science training is scarce, especially at postgraduate level.

The Centre for Doctoral Training in Next Generation Computational Modelling will train 55+ graduate students to address this skills gap. Trained as future leaders in Computational Modelling, they will form the core of a community of computational modellers crossing disciplinary boundaries, constantly working to transfer the latest computational advances to related fields. By tackling cutting-edge research from fields such as Computational Engineering, Advanced Materials, Autonomous Systems and Health, whilst communicating their advances and working together with a world-leading group of academic and industrial computational modellers, the students will be perfectly equipped to drive advanced computing over the coming decades.
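As a minimal illustration of the many-core point (a toy sketch in Python with arbitrarily chosen worker and sample counts, far simpler than the GPU and message-passing techniques such training covers), an embarrassingly parallel Monte Carlo estimate of pi can be spread across processor cores using only the standard library:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def hits(n_samples: int) -> int:
        """Count random points in the unit square landing inside the quarter disc."""
        rng = random.Random()  # fresh per-process generator, so forked workers differ
        return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                   for _ in range(n_samples))

    if __name__ == "__main__":
        n_workers, n_per_worker = 8, 1_000_000
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            total = sum(pool.map(hits, [n_per_worker] * n_workers))
        print("pi is approximately", 4 * total / (n_workers * n_per_worker))

Each worker runs independently on its own core, so wall-clock time falls roughly in proportion to the core count; realistic simulations, which require communication and synchronisation between cores, are precisely where the specialist training described above is needed.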
Given a Fortran program which numerically evaluates a scalar output y = f(x) from a vector x of input values, we are frequently interested in evaluating the gradient vector g = f'(x), whose components are the derivatives (sensitivities) dy/dx. Automatic Differentiation is a set of techniques for automatically transforming the program for evaluating f into a program for evaluating f'. In particular, the adjoint, or reverse, mode of Automatic Differentiation can produce numerical values for all components of the gradient g at a computational cost of about three evaluations of f, even if there are millions of components in x and g. This is done by using the chain rule from calculus (but applied to floating-point numerical values, rather than to symbolic expressions) to evaluate numerically the sensitivity of the output with respect to each floating-point calculation performed. However, doing this requires making the program run backwards, since these sensitivities must be evaluated starting with dy/dy = 1 and ending with dy/dx = g, which is the reverse order to the original calculation. It also requires the intermediate values calculated by f to be either stored on the forward pass or recomputed on the reverse pass by the adjoint program.

Phase II of the CompAD project has already produced the first industrial-strength Fortran compiler in the world able to perform this adjoint transformation (and reverse program flow) automatically. Previous Automatic Differentiation tools used either overloading (which was hard to optimise) or source transformation (which could not directly utilise low-level compiler facilities). The adjoint Fortran compiler produced by Phase II is perfectly adequate for small to medium-sized problems (up to a few hundred input variables), and meets the objectives of the second phase of the project. However, even moderately large problems (many thousands of input variables) require the systematic use and placement of checkpoints, in order to manage efficiently the trade-off between storage on the way forward and recomputation on the way back. With the present prototype, the user must place and manage these checkpoints explicitly. This is almost acceptable for experienced users with very large problems which they already understand well, but it is limiting and time-consuming for users without previous experience of Automatic Differentiation, and represents a barrier to the uptake of numerical methods based upon it.

The objective of Phase III of the CompAD project is to automate the process of trading off storage and recomputation in a way which is close to optimal. Finding a trade-off which is actually optimal is known to be an NP-hard problem, so we are seeking solutions which are almost optimal in a particular sense. Higher-order derivatives (e.g. directional Hessians) can be generated automatically by feeding parts of the compiler's own output back into it during the compilation process. We intend to improve the code transformation techniques used in the compiler to the point where almost optimally efficient higher-order derivative code can be generated automatically in this way. A primary purpose of this project is to explore alternative algorithms and representations for program analysis and code transformation in order to solve certain hard problems and lay the groundwork for future progress with others.
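To make the reverse-mode mechanics concrete, here is a minimal Python sketch of a 'gradient tape' (an illustration of the underlying idea only; CompAD itself works by compile-time transformation of Fortran, not by runtime taping, and all names here are invented):

    import math

    class Var:
        """A floating-point value that records every operation on a global tape."""
        tape = []

        def __init__(self, value):
            self.value, self.grad = value, 0.0

        def __add__(self, other):
            out = Var(self.value + other.value)
            Var.tape.append((out, [(self, 1.0), (other, 1.0)]))
            return out

        def __mul__(self, other):
            out = Var(self.value * other.value)
            # local partials: d(out)/d(self) = other.value, d(out)/d(other) = self.value
            Var.tape.append((out, [(self, other.value), (other, self.value)]))
            return out

        def sin(self):
            out = Var(math.sin(self.value))
            Var.tape.append((out, [(self, math.cos(self.value))]))
            return out

    def gradient(y):
        """Reverse pass: walk the tape backwards, from dy/dy = 1 down to dy/dx."""
        y.grad = 1.0
        for out, parents in reversed(Var.tape):
            for parent, local_partial in parents:
                parent.grad += local_partial * out.grad

    # y = sin(x1 * x2) + x1; both sensitivities from a single reverse sweep
    x1, x2 = Var(0.5), Var(3.0)
    y = (x1 * x2).sin() + x1
    gradient(y)
    print(x1.grad, x2.grad)  # matches cos(x1*x2)*x2 + 1 and cos(x1*x2)*x1

Note that every intermediate value is stored on the tape during the forward pass: it is exactly this storage that the checkpointing described above trades against recomputation.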
At the same time, we will use some hard, leading-edge numerical applications from our industrial partners to guide and prove the new technology we develop, and the Fortran compiler resulting from this phase of the project is designed to be of widespread direct use in Scientific Computing.
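The storage/recomputation trade-off that Phase III aims to automate can likewise be sketched in miniature. In the hypothetical Python toy below, a chain of n forward steps is reversed while permanently storing only every fiftieth state; uniform spacing stands in for the near-optimal schedules the project targets, and the step functions are arbitrary:

    import math

    def step(state, i):
        """One stage of the forward calculation (an arbitrary illustrative map)."""
        return math.sin(state) + 0.1 * i

    def d_step(state, i):
        """Derivative of step with respect to its input state."""
        return math.cos(state)

    def reverse_with_checkpoints(x0, n, spacing):
        """Return d(final state)/d(x0), storing only every `spacing`-th state."""
        checkpoints = {0: x0}
        state = x0
        for i in range(n):                    # forward pass with sparse storage
            state = step(state, i)
            if (i + 1) % spacing == 0:
                checkpoints[i + 1] = state
        grad = 1.0                            # dy/dy = 1
        for start in reversed(range(0, n, spacing)):
            end = min(start + spacing, n)
            seg = [checkpoints[start]]        # recompute this segment's states...
            for j in range(start, end - 1):
                seg.append(step(seg[-1], j))
            for i in reversed(range(start, end)):
                grad *= d_step(seg[i - start], i)   # ...then reverse through it
        return grad

    print(reverse_with_checkpoints(0.3, n=1000, spacing=50))

Storage falls from n intermediate states to about n/spacing + spacing, at the price of roughly one extra forward sweep; placing such checkpoints automatically and near-optimally for general programs, where finding the truly optimal schedule is NP-hard, is exactly what Phase III sets out to do.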
Lancaster University (LU) proposes a Centre for Doctoral Training (CDT) to develop international research leaders in statistics and operational research (STOR) through a programme in which cutting-edge industrial challenge is the catalyst for methodological advance. Our proposal addresses the priority area 'Statistics for the 21st Century' through research training in cutting-edge modelling and inference for large, complex and novel data structures. It crucially recognises that many contemporary challenges in statistics, including those arising from industry, also engage with constraint, optimisation and decision. The proposal brings together LU's academic strength in STOR (>50 FTE) with a distinguished array of highly committed industrial and international academic partners. Our shared vision is a CDT that produces graduates capable of the highest-quality research with impact, equipped with the array of leadership and other skills needed for rapid career progression in academia or industry.

The proposal builds on the strengths of an existing EPSRC-funded CDT that has helped change the culture of doctoral training in STOR through an unprecedented level of engagement with industry. The proposal takes the scale and scientific ambition of the Centre to a new level by:
* Recruiting and training 70 students, across 5 cohorts, within a programme drawing on industrial challenge as the catalyst for research of the highest quality;
* Ensuring all students undertake research in partnership with industry: 80% will work on doctoral projects jointly supervised and co-funded by industry; all others will undertake industrial research internships;
* Promoting a culture of reproducible research under the mentorship and guidance of a dedicated Research Software Engineer (industry funded);
* Developing cross-cohort research clusters to support collaboration on ambitious challenges related to major research programmes;
* Enabling students to participate in flagship research activities at LU and our international academic partners.

The substantial growth in data-driven business and industrial decision-making in recent years has signalled a step change in the demand for doctoral-level STOR expertise and has widened the skills gap further. The current CDT has shown that a cohort-based, industrially engaged programme attracts a diverse range of the very ablest mathematically trained students. Without STOR-i, many of these students would not have considered doctoral study in STOR. We believe that the new CDT will continue to play a pivotal role in meeting the skills gap.

Our training programme is designed to do more than solve a numbers problem. There is an issue of quality as much as there is one of quantity. Our goal is to develop research leaders who can innovate responsibly and secure impact for their work across academic, scientific and industrial boundaries, and who can work alongside others with different skill sets and communicate effectively. An integral component of this is our championing of ED&I. Our external partners are strongly motivated to join us in achieving these outcomes through STOR-i's cohort-based programme. We have little doubt that our graduates will be in great demand across a wide range of sectors, both industrial and academic. Industry will play a key role in the CDT.
Our partners are helping to co-design the programme and will (i) co-fund and co-supervise doctoral projects, (ii) lead a programme of industrial problem-solving days and (iii) play a major role in leadership development and a range of bespoke training. The CDT benefits from the substantial support of 10 new partners (including Morgan Stanley, ONS Data Science Campus, Rolls Royce, Royal Mail, Tesco) and continued support from 5 existing partners (including ATASS, BT, NAG, Shell), with many others expected to contribute.