###### 1,173 Projects, page 1 of 118

- Project · 2013–2016 · Funder: UKRI · Project Code: EP/K009850/1 · Funder Contribution: 158,970 GBP · Partners: University of Oxford
We are in the midst of an information revolution, where advances in science and technology, as well as the day-to-day operation of successful organisations and businesses, are increasingly reliant on the analyses of data. Driving these advances is a deluge of data, which is far outstripping the increase in computational power available. The importance of managing, analysing, and deriving useful understanding from such large scale data is highlighted by high-profile reports by McKinsey and The Economist as well as other outlets, and by the EPSRC's recent ICT priority of "Towards an Intelligent Information Infrastructure". Bayesian analysis is one of the most successful families of methods for analysing data, and one now widely adopted in the statistical sciences as well as in AI technologies like machine learning. The Bayesian approach offers a number of attractive advantages over other methods: flexibility in constructing complex models from simple parts; fully coherent inferences from data; natural incorporation of prior knowledge; explicit modelling assumptions; precise reasoning about uncertainties over model order and parameters; and protection against overfitting. On the other hand, there is a general perception that Bayesian methods can be too slow to be practically useful on big data sets. This is because exact Bayesian computations are typically intractable, so a range of more practical approximate algorithms are needed, including variational approximations, sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). MCMC methods arguably form the most popular class of Bayesian computational techniques, due to their flexibility, general applicability and asymptotic exactness. Unfortunately, MCMC methods do not scale well to big data sets, since they require many iterations to reduce Monte Carlo noise, and each iteration already involves an expensive sweep through the whole data set.
In this project we propose to develop the theoretical foundations for a new class of MCMC inference procedures that can scale to billions of data items, thus unlocking the strengths of Bayesian methods for big data. The basic idea is to use a small subset of the data during each parameter update iteration of the algorithm, so that many iterations can be performed cheaply. This introduces excess stochasticity in the algorithm, which can be controlled by annealing the update step sizes towards zero as the number of iterations increases. The resulting algorithm is a cross between an MCMC and a stochastic optimization algorithm. We recently began an initial exploration of this procedure, which we call stochastic gradient Langevin dynamics (SGLD) (Welling and Teh, ICML 2011). Our proposal is to lay the mathematical foundations for understanding the theoretical properties of such stochastic MCMC algorithms, and to build on these foundations to develop more sophisticated algorithms. We aim to understand the conditions under which the algorithm is guaranteed to converge, and the type and speed of convergence. Using this understanding, we aim to develop algorithmic extensions and generalizations with better convergence properties, including preconditioning, adaptive and Riemannian methods, Hamiltonian Monte Carlo methods, online Bayesian learning methods, and approximate methods with large step sizes. These algorithms will be empirically validated on real world problems, including large scale data analysis problems for text processing and collaborative filtering which are standard problems in machine learning, and large scale data from ID Analytics, a partner company interested in detecting identity theft and fraud.
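The update sketched above (a minibatch stochastic-gradient step plus injected Gaussian noise, with step sizes annealed towards zero) can be illustrated on a toy problem. In this minimal sketch the model, the constants and the step-size schedule are our own illustrative choices, not the project's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: posterior over the mean theta of N = 100,000 Gaussian points
# (known unit variance, broad N(0, 100) prior). All choices are illustrative.
N = 100_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def stoch_grad_log_post(theta, minibatch):
    """Unbiased minibatch estimate of the gradient of the log posterior."""
    grad_prior = -theta / 100.0
    grad_lik = (N / len(minibatch)) * np.sum(minibatch - theta)
    return grad_prior + grad_lik

theta, samples = 0.0, []
for t in range(1, 2001):
    eps = 1e-5 / t**0.55                        # step size annealed to zero
    batch = data[rng.integers(0, N, size=100)]  # cheap 100-point "sweep"
    theta += (0.5 * eps * stoch_grad_log_post(theta, batch)
              + rng.normal(scale=np.sqrt(eps)))  # injected Langevin noise
    samples.append(theta)

print(np.mean(samples[500:]))  # close to the true mean, 2.0
```

With the step size decaying like t^(-0.55), early iterations behave like stochastic optimisation and later iterations like an approximate Langevin sampler, which is the cross-over between MCMC and stochastic optimization described above.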

- Project · 2015–2016 · Funder: UKRI · Project Code: EP/M014134/1 · Funder Contribution: 97,162 GBP · Partners: NTU
Asphalt pavements are the most common type of road pavement in the UK. Preserving them in a proper state fundamentally affects the economy and quality of life. However, their surveillance and maintenance are cost and time intensive, and asphalt concrete still has to be replaced after 15 years of use. Applying induction heating to the road could make pavements last much longer by stimulating the asphalt's property of self-healing. Experimental results have shown that a crack can be fully induction-healed at least five times. The efficiency of self-healing, however, depends on the temperature of the material, and the temperature should be concentrated in the cracks alone. Thus, the challenge of this research is to discover how to apply energy only locally into the cracks without dispersing energy into undesired spaces. With this purpose, experimental and mathematical models of asphalt concrete self-healing under induction heating will be developed. This research will serve to understand the relationships between induction heating, the particles used to heat the mixture, the heat flow through asphalt concrete and its effect on asphalt self-healing. We will discover the type of particles, intensities and frequencies of induction heating which are most appropriate for healing, how to concentrate the heat in the damaged areas and the relationship between the amount of energy induced and the healing of asphalt concrete.

- Project · 2012–2016 · Funder: UKRI · Project Code: EP/K003445/1 · Funder Contribution: 100,347 GBP · Partners: QUB
The transformation of communication and computing technologies in terms of accessibility, ubiquity, mobility and coverage, has enabled new opportunities for personalised on-demand communication (e.g. Facebook, Twitter). This is in addition to new market places for e-commerce and e-businesses, personalised platforms for e-governments and a vast range of new user-centric applications and services. The number of mobile apps (iPhone, Android) taking advantage of the cloud infrastructure has risen beyond several hundred thousand, reshaping the way we communicate and socialise. This shift in communication technology and services has also led to the emergence of unforeseen types of security and privacy threats with social, economic and political incentives, resulting in major research challenges in terms of the protection and security of information assets in storage and transmission. Therefore, Digital Security is vital in ensuring the UK is a safe place to do business, can act as a source of competitive advantage for foreign direct investment and provide a platform for SMEs and large corporations alike to develop products that use or supply this security market. Recent years have seen a massive growth in malware, fuelled by the evolution of the Internet and the migration from malware written by hobbyists to professionally devised malware developed by rogue corporations and organized criminals, primarily targeted for financial or political gain. In 2010, Symantec identified more than 240 million new malicious programs, although many of these are variants of existing malware. Another report suggests that the actual malware family count is between 1,000 and 3,000. The detection of malware is a major and ongoing problem.
The battle against malware has escalated over the past decade as malware has evolved from simple programs that had little ability to evade detection, the main objective of which was to cause havoc, to more complex programs that target profit and deploy sophisticated evasion techniques. The focus of the NIMBUS network of researchers is to act as a catalyst to develop a balanced programme of both blue skies research and near term applied research that will assist in the fight against cyber crime in the UK. Malware related cyber threats are global in nature, hence it is essential that an international approach is taken to address these issues. Only a global network of centres of excellence is expected to provide the essential breadth and depth of know-how and the necessary critical mass of specialist competencies for resolving major Cyber Security challenges. NIMBUS will act as the UK's interface to international engagements with research networks in Europe, the US and Asia.

- Project · 2013–2016 · Funder: UKRI · Project Code: EP/I004130/2 · Funder Contribution: 322,634 GBP · Partners: University of Edinburgh
In homotopy theory, topological spaces (i.e. shapes) are regarded as being the same if we can deform continuously from one to the other. Algebraic varieties are spaces defined by polynomial equations, often over the complex numbers; studying their homotopy theory means trying to tell which topological spaces can be deformed continuously to get algebraic varieties, or when a continuous map between algebraic varieties can be continuously deformed to a map defined by polynomials. If the polynomials defining a variety are rational numbers (i.e. fractions), this automatically gives the complex variety a group of symmetries, called the Galois group. Although these symmetries are not continuous (i.e. nearby points can be sent far apart), they preserve something called the etale topology. This is an abstract concept which looks somewhat unnatural, but behaves well enough to preserve many of the topological features of the variety. Part of my project will involve investigating how the Galois group interacts with the etale topology. I also study algebraic varieties in finite and mixed characteristics. Finite characteristics are universes in which the rules of arithmetic are modified by choosing a prime number p, and setting it to zero. For instance, in characteristic 3 the equation 1+1+1=0 holds. In mixed characteristic, p need not be 0, but the sequence 1, p, p^2, p^3, ... converges to 0. Although classical geometry of varieties does not make sense in finite and mixed characteristics, the etale topology provides a suitable alternative, allowing us to gain much valuable insight into the behaviour of the Galois group. This is an area which I find fascinating, as much topological intuition still works in contexts far removed from real and complex geometry.
Indeed, many results in complex geometry have been motivated by phenomena observed in finite characteristic. Moduli spaces parametrise classes of geometric objects, and can themselves often be given geometric structures, similar to those of algebraic varieties. This structure tends to misbehave at points parametrising objects with a lot of symmetry. To obviate this difficulty, algebraic geometers work with moduli stacks, which parametrise the symmetries as well as the objects. Sometimes the symmetries can themselves have symmetries and so on, giving rise to infinity stacks. Usually, the dimension of a moduli stack can be calculated by naively counting the degrees of freedom in defining the geometric object it parametrises. However, the space usually contains singularities (points where the space is not smooth), and regions of different dimensions. Partially inspired by ideas from theoretical physics, it has been conjectured that every moduli stack can be extended to a derived moduli stack, which would have the expected dimension, but with some of the dimensions only virtual. Extending to these virtual dimensions also removes the singularities, a phenomenon known as hidden smoothness. Different classification problems can give rise to the same moduli stack, but different derived moduli stacks. Much of my work will be to try to construct derived moduli stacks for a large class of problems. This has important applications in algebraic geometry, as there are many problems for which the moduli stacks are unmanageable, but which should become accessible using derived moduli stacks. I will also seek to investigate the geometry and behaviour of derived stacks themselves. A common thread through the various aspects of my project will be to find ways of applying powerful ideas and techniques from a branch of topology, namely homotopy theory, in contexts where they would not, at first sight, appear to be relevant.
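The arithmetic of finite and mixed characteristic mentioned above can be made concrete in a few lines. This is a minimal sketch: the helper `padic_abs` is our own illustrative function, not standard library code.

```python
# Characteristic 3: arithmetic in the integers mod p, where 1+1+1 = 0.
p = 3
print((1 + 1 + 1) % p)  # 0

# Mixed characteristic has a p-adic flavour: size is measured by
# divisibility by p, so |p^k| = p^(-k) and 1, p, p^2, ... tends to 0.
def padic_abs(n, p):
    """p-adic absolute value |n|_p = p^(-v), v = multiplicity of p in n."""
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return float(p) ** (-v)

sizes = [padic_abs(p**k, p) for k in range(5)]
print(sizes)  # strictly decreasing towards 0
```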

- Project · 2013–2016 · Funder: UKRI · Project Code: EP/K023349/1 · Funder Contribution: 1,780,200 GBP · Partners: University of Liverpool, Lancashire Teaching Hospitals NHS Trust, STFC - Laboratories
This proposal brings together a critical mass of scientists from the Universities of Cardiff, Lancaster, Liverpool and Manchester and clinicians from the Christie, Lancaster and Liverpool NHS Hospital Trusts with the complementary experience and expertise to advance the understanding, diagnosis and treatment of cervical, oesophageal and prostate cancers. Cervical and prostate cancers are very common, and the incidence of oesophageal cancer is rising rapidly. There are cytology, biopsy and endoscopy techniques for extracting tissue from individuals who are at risk of developing these diseases. However, the analysis of tissue by the standard techniques is problematic and subjective. There is clearly a national and international need to develop more accurate diagnostics for these diseases and that is a primary aim of this proposal. Experiments will be conducted on specimens from all three diseases using four different infrared based techniques which have complementary strengths and weaknesses: hyperspectral imaging, Raman spectroscopy, a new instrument to be developed by combining atomic force microscopy with infrared spectroscopy and a scanning near field microscope recently installed on the free electron laser on the ALICE accelerator at Daresbury. The latter instrument has recently been shown to have considerable potential for the study of oesophageal cancer yielding images which show the chemical composition with unprecedented spatial resolution (0.1 microns) while hyperspectral imaging and Raman spectroscopy have been shown by members of the team to provide high resolution spectra that provide insight into the nature of cervical and prostate cancers. The new instrument will be installed on the free electron laser at Daresbury and will yield images on the nanoscale.
This combination of techniques will allow the team to probe the physical and chemical structure of these three cancers with unprecedented accuracy and this should reveal important information about their character and the chemical processes that underlie their malignant behaviour. The results of the research will be of interest to the study of cancer generally, particularly if it reveals features common to all three cancers. The infrared techniques have considerable medical potential and to differing extents are on the verge of finding practical applications. Newer terahertz techniques also have significant potential in this field and may be cheaper to implement. Unfortunately the development of cheap portable terahertz diagnostic instruments is being impeded by the weakness of existing sources of terahertz radiation. By exploiting the terahertz radiation from the ALICE accelerator, which is seven orders of magnitude more intense than conventional sources, the team will advance the design of two different terahertz instruments and assess their performance against the more developed infrared techniques in cancer diagnosis. However before any of these techniques can be used by medical professionals it is essential that their strengths and limitations are fully understood. This is one of the objectives of the proposal and it will be realised by comparing the results of each technique in studies of specimens from the three cancers that are the primary focus of the research. This will be accompanied by developing databases and algorithms for the automated analysis of spectral and imaging data, thus removing subjectivity from the diagnostic procedure. Finally the team will explore a new approach to monitoring the interactions between pathogens, pharmaceuticals and relevant cells or tissues at the cellular and subcellular level using the instruments deployed on the free electron laser at Daresbury together with Raman microscopy.
If this is successful, it will be important in the longer term in developing new treatments for cancer and other diseases.

- Project · 2012–2016 · Funder: UKRI · Project Code: EP/K006010/1 · Funder Contribution: 39,395 GBP · Partners: Royal Holloway University of London
This proposal addresses the challenge "How do we make better security decisions?". Specifically we propose to develop new approaches to decision support based on mathematical game theory. Our work will support professionals who are designing secure systems and also those charged with determining if systems have an appropriate level of security -- in particular, systems administrators. We will develop techniques to support human decision making and techniques which enable well-founded security design decisions to be made. We recognise that the emerging trend away from corporate IT systems towards a Bring-Your-Own-Device (BYOD) culture will bring new challenges and changes to the role of systems administrator. However, even in this brave new world, companies will continue to have core assets such as the network infrastructure and the corporate database which will need the same kind of protection. It is certainly to be expected that some of the attacks will now originate from inside the corporate firewall rather than from outside. Our team will include researchers from the Imperial College Business School who will help us to ensure that our models are properly reflecting these new threats. Whilst others have used game theoretic approaches to answer these questions, much of the previous work has been more or less ad hoc. As such the resulting security decisions may be based on unsound principles. In particular, it is common to use abstractions without giving much consideration to the relationship between properties of the abstract model and the real system. We will develop a new game theoretic framework which enables a precise analysis of these relationships and hence provides a more robust decision support tool.
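As a deliberately simplified illustration of game-theoretic decision support: a defender dividing protection between two assets against a rational attacker can compute a minimax (worst-case-optimal) mixed strategy. The payoff numbers below are hypothetical, and the grid-search computation is a generic textbook method, not the framework this project proposes:

```python
import numpy as np

# Hypothetical 2x2 zero-sum security game. Rows: defender hardens asset A
# or asset B; columns: attacker targets A or B. Entries: defender's loss.
loss = np.array([[0.0, 4.0],
                 [5.0, 1.0]])

# Defender hardens A with probability q. A rational attacker then picks
# the column maximising expected loss, so the defender minimises that
# worst case over columns (the minimax strategy), here by grid search.
qs = np.linspace(0.0, 1.0, 10001)
worst = np.array([max(q * loss[0, c] + (1 - q) * loss[1, c] for c in (0, 1))
                  for q in qs])
q_star = qs[np.argmin(worst)]
print(q_star, worst.min())  # ~0.5 and ~2.5 for these illustrative numbers
```

For these numbers the defender should randomise evenly between the two assets, capping expected loss at 2.5 whatever the attacker does; a pure strategy would be exploited for a loss of 4 or 5.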

- Project · 2015–2016 · Funder: UKRI · Project Code: EP/M003620/1 · Funder Contribution: 161,541 GBP · Partners: University of Warwick
The Warwick EPSRC mathematics symposium is organised annually by the University of Warwick with the support of the EPSRC for the benefit of the mathematical sciences community in the UK. It brings leading national and international experts together with UK researchers in a year-long programme of research activities focused on an emerging theme in the mathematical sciences. The proposed symposium for the 2015-16 academic year will concentrate on the theme of "Fluctuation-driven phenomena and large deviations". In very general terms, the symposium will constitute an interdisciplinary focus on understanding the consequences of the interplay between stochasticity and nonlinearity, a recurrent challenge in many areas of the mathematical sciences, engineering and industry. Stochastic processes play a fundamental role in the mathematical sciences, both as tools for constructing models and as abstract mathematical structures in their own right. When nonlinear interactions between stochastic processes are introduced, however, the rigorous understanding of the resulting equations in terms of stochastic analysis becomes very challenging. Mean field theories are useful heuristics which are commonly employed outside of mathematics for dealing with this problem. Mean field theories in one way or another usually involve replacing random variables by their mean and assuming that fluctuations about the mean are approximately Gaussian distributed. In some cases, such models provide a good description of the original system and can be rigorously justified. In many cases they do not. Understanding the latter case, where mean-field models fail, is the central challenge of this symposium. We use "fluctuation driven phenomena" as a generic term to describe the kinds of effects which are observed when mean field theories fail. 
The challenges stem from the fact that the rich phenomenology of deterministic nonlinear dynamics (singularities, nonlinear resonance, chaos and so forth) is reflected in the stochastic context by a variety of interesting and sometimes unintuitive behaviours: long range correlations, strongly non-Gaussian statistics, coherent structures, absorbing state phase transitions, heavy-tailed probability distributions and enhanced probabilities of large deviations. Such phenomena are found throughout mathematics, both pure and applied, the physical, biological and engineering sciences as well as presenting particular problems to industrialists and policymakers. Contemporary problems such as the forecasting of extreme weather events, the design of marine infrastructure to withstand so-called "rogue waves", quantifying the probability of fluctuation driven transitions or "tipping points" in the climate system or estimating the redundancy required to ensure that infrastructure systems are resilient to shocks all require a step change in our ability to model and predict such fluctuation-driven phenomena. The programme of research activities constituting this symposium will therefore range from the very theoretical to the very applied. At the theoretical end we have random matrix theory which has recently emerged as a powerful tool for analysing the statistics of stochastic processes which are strongly non-Gaussian without the need to go via perturbative techniques developed in the physical sciences such as the renormalisation group. At the applied end we have questions of existential importance to the insurance industry such as how to cost the risk of extreme natural disasters and quantify their interaction with risks inherent in human-built systems. 
In between we have research on the connections between large deviation theory and nonequilibrium statistical mechanics, extreme events in the Earth sciences, randomness in the biological sciences and the latest numerical algorithms for computing rare events, a topic which has seen strong growth in recent years.
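The exponential suppression of large deviations discussed above can be made concrete with the simplest possible example, coin flips. This minimal sketch compares a Monte Carlo tail estimate with the Chernoff bound coming from Cramér's rate function (the choices of n, threshold and trial count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tail of the mean of n fair coin flips: P(mean >= a) for a above 1/2.
n, a, trials = 50, 0.7, 200_000
means = rng.binomial(n, 0.5, size=trials) / n
p_hat = np.mean(means >= a)

# Cramér's rate function for Bernoulli(1/2) gives the decay exponent:
# I(a) = a*log(2a) + (1-a)*log(2(1-a)), and Chernoff's bound states
# P(mean >= a) <= exp(-n * I(a)) for every n.
rate = a * np.log(2 * a) + (1 - a) * np.log(2 * (1 - a))
print(p_hat, np.exp(-n * rate))  # empirical tail vs exponential bound
```

The empirical tail probability sits below exp(-n I(a)), and -log(p)/n approaches I(a) as n grows, which is the large deviation principle in its simplest form.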

- Project · 2013–2016 · Funder: UKRI · Project Code: EP/J019844/1 · Funder Contribution: 263,385 GBP · Partners: KCL
Organic molecular monolayers at surfaces often constitute the central working component in nanotechnologies such as sensors, molecular electronics, smart coatings, organic solar cells, catalysts, medical devices, etc. A central challenge in the field is to achieve controlled creation of desired 2D molecular architectures at surfaces. Within this context, the past decade has witnessed a real and significant step-change in the 'bottom-up' self-organisation of 2D molecular assemblies at surfaces. The enormous variety and abundance of molecular structures formed via self-organisation have now critically tipped the argument strongly in favour of a 'bottom-up' construction strategy, which harnesses two powerful attributes of nanometre-precision (inaccessible to top-down methods) and highly parallel fabrication (impossible with atomic/molecular manipulation). Thus, bottom-up molecular assembly at surfaces holds the real possibility of becoming a dominating synthesis protocol in 21st century nanotechnologies. Uniquely, the scope and versatility of these molecular architectures at 2D surfaces have been directly captured at the nanoscale via imaging with scanning probe microscopies and advanced surface spectroscopies. At present, however, the field is largely restricted to a 'make and see' approach and there is scarce understanding of any of the parameters that ultimately control molecular surface assembly.
For example: (1) molecular assemblies at surfaces show highly polymorphic behaviour, and a priori control of assembly is practically non-existent; (2) little is understood of the influence and balance of the many interactions that drive molecular recognition and assembly (molecule-molecule interactions including dispersion, directional H-bonding and strong electrostatic and covalent interactions); (3) the role of surface-molecule interactions is largely uncharted even though they play a significant role in the diffusion of molecules and their subsequent assembly; (4) there is ample evidence that the kinetics of self-assembly is the major factor in determining the final structure, often driving polymorphic behaviour and leading to widely varied outcomes, depending on the conditions of formation; (5) a gamut of additional surface phenomena also influences assembly, e.g. chemical reactions between molecules, thermally activated internal degrees of freedom of molecules, surface reconstructions and co-assembly via coordinating surface atoms. The main objective of this project is to advance from experimental phenomena-reporting to knowledge-based design, and its central goal is to identify the role played by thermodynamic, entropic, kinetic and chemical factors in dictating molecular organisation at surfaces under given experimental conditions. To address this challenge requires a two-pronged approach in which ambitious and comprehensive theory development is undertaken alongside powerful imaging and spectroscopic tools applied to the same systems. This synergy of experiment and theory is absolutely essential to develop a fundamental understanding, which would enable a roadmap for controlled and engineered self-assembly at surfaces to be proposed that would, ultimately, allow one to 'dial up' a required structure at will.
Four important and qualitatively different classes of assembly at surfaces will be studied: Molecular Self-Assembly; Hierarchical Self-Assembly; Metal-Organic Self-Assembly; and On-Surface Covalent Assembly.

- Project · 2012–2016 · Funder: UKRI · Project Code: EP/J009733/1 · Funder Contribution: 406,787 GBP · Partners: University of Rome III (Tre), UNIME, BU, University of Glasgow, National University Paris ENS
The peculiar behaviour of liquid and supercooled water has been baffling science for at least 236 years and is still seen as a major challenge facing chemistry today (Whitesides & Deutch, Nature 469, 21 (2011)). It was suggested that such strange behaviour might be caused by thermodynamic transitions, possibly even a second critical point. This second critical point would terminate a coexistence line between low- and high-density amorphous phases of water. Unfortunately, this second critical point (if it exists) and the associated polyamorphic liquid-liquid transition is difficult to study as it is thought to lie below the homogeneous nucleation temperature in a region known as "no man's land" (Angell, Science 319, 582 (2008)). In recent preliminary femtosecond optical Kerr-effect spectroscopy experiments, we have shown that water in concentrated eutectic solutions forms nanometre scale pools in which it retains many if not most of its bulk liquid characteristics. Most importantly, such solutions can be cooled to below 200 K without crystallisation (typically forming a glass at lower temperatures) allowing one to explore "no man's land" in detail for the first time. Preliminary experiments combining femtosecond spectroscopy with NMR diffusion measurements have shown that water in these pools undergoes a liquid-liquid transition as predicted for bulk water. Hence, it is proposed to use such nanopools as nanometre scale laboratories for the study of liquid and glassy water. A wide-ranging international collaboration has been set up to be able to study different critical aspects of the structure and dynamics of water. This includes cryogenic viscosity measurements, large dynamic-range (femtosecond to millisecond) optical Kerr-effect experiments, pulsed field gradient NMR, dielectric relaxation spectroscopy, terahertz time-domain spectroscopy, infrared pump-probe spectroscopy, and two-dimensional infrared spectroscopy. 
To ensure maximum impact of the experimental work, it is critical to have strong ties with experts in the theory and simulation of water and its thermodynamic behaviour. We have arranged collaboration with two international theory groups covering different aspects of the proposed work. Although the proposed research is relatively fundamental in nature, it will have impact as described in more detail elsewhere. The research addresses EPSRC priorities in nanoscience (supramolecular structures in liquids), energy (proton transport and liquid structuring in electrolytes for batteries and fuel cells), life sciences (the role of water in and on biomolecules), and the chemistry-chemical engineering interface (the role of the structuring of water in crystal nucleation). Our strong links with theory collaborators will ensure that fundamental insights will indeed propagate to the 'users' of such information. The close working relationship between the PI and CI has made Glasgow a centre of excellence in advanced femtosecond spectroscopy. This project exploits this expertise and international collaborations to immerse PDRAs and PGRs in internationally leading research using state-of-the-art, previously funded equipment.

- Project · 2010–2016 · Funder: UKRI · Project Code: TS/I002170/1 · Funder Contribution: 477,743 GBP · Partners: University of London
This project develops an approach, genomic selection, to increase the rate at which varieties of Spring barley are developed. This is a very important crop in national agriculture, particularly for the malting, brewing and distilling industries. It is important that the rate with which improved varieties are created is increased so that more effort can be placed by breeders on improving disease resistance while maintaining or increasing grain yield and grain quality, which remain of greatest importance to growers and end users. Genomic selection represents a way of predicting traits purely from genetic markers rather than by direct measurement. These predictions require that a set of plants is first measured for the target traits so that the effect of each marker can be estimated. However, after that, selection can occur for several generations purely on markers. Direct measurement of many traits can take much longer than a single growing season: seed must first be bulked up over several generations to provide a sufficient quantity for yield trials. In contrast, marker data can be collected within the generation time of any crop and is therefore much faster than conventional selection. Other approaches to plant breeding using genetic molecular markers have been in use for many years. In these, a very small number of markers with strong evidence of an effect on a trait are first identified. These are then tracked through the breeding programme. Genomic selection differs in that all available markers are used to predict traits: the more markers the better. The inclusion of all markers gives more accurate prediction of overall trait values even though the precise involvement of each marker is known with less certainty. Our study has four themes. Firstly, throughout the life of the project, we shall develop new statistical methods to establish relationships between very high numbers of genetic markers and traits.
The methods we develop will be more focussed on the problems of plant breeding: most methods to date have been targeted at animal breeding. Secondly, we shall test methods which are available now using historical data from an existing Spring barley scheme. Results will be used immediately to make selections within this scheme. We expect to register new varieties from these selections within the five year life of the project. Next, we shall use results from the analysis of the historical data together with any early methodological developments we make to create crosses specifically to exploit genomic selection. These crosses may not necessarily be the typical crosses between two parents which are commonly used by breeders but may involve more complicated crossing schemes involving, for example, four parents. Within the life of the project, we shall test whether this approach gives a greater response to selection than that achieved by more conventional breeding, but there will be insufficient time to register a new variety. Finally, we shall integrate results and methods from the first three phases to completely redesign the breeding programme to get the greatest advantage out of genomic selection. In short, we plan to develop a new approach to Spring barley breeding. Genomic selection could result in a fundamental change to the way crops are bred and enable targets for increased food production and environmental sustainability to be met. Compared to other temperate crops, Spring barley has a short generation time which makes it well suited to develop and test these ideas, which may also be applicable to other crops.
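The core computation of genomic selection (predicting a trait from all markers at once with heavy shrinkage, rather than pre-selecting a few significant markers) can be sketched with ridge regression on simulated data. The population sizes, marker counts and effect sizes below are our own illustrative choices, not figures from the barley programme:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy simulation: 300 training plants and 100 test plants, 500 markers
# coded 0/1/2, every marker carrying a small true effect.
n_train, n_test, n_markers = 300, 100, 500
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
effects = rng.normal(0.0, 0.1, size=n_markers)   # many small true effects
y = X @ effects + rng.normal(0.0, 1.0, size=n_train + n_test)

Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

# Ridge regression: every marker is kept but shrunk towards zero, the
# "use all markers" principle of genomic selection. Solved in the n x n
# dual form, convenient when markers outnumber plants.
lam = 100.0  # noise variance / per-marker effect variance in this simulation
K = Xtr @ Xtr.T
alpha = np.linalg.solve(K + lam * np.eye(n_train), ytr - ytr.mean())
pred = Xte @ Xtr.T @ alpha + ytr.mean()

r = np.corrcoef(pred, yte)[0, 1]
print(f"predictive correlation on unseen plants: {r:.2f}")
```

Even though no single marker effect is estimated precisely, pooling all of them yields a useful predictive correlation on unseen plants, which is exactly the trade-off described above.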

In this project we propose to develop the theoretical foundations for a new class of MCMC inference procedures that can scale to billions of data items, thus unlocking the strengths of Bayesian methods for big data. The basic idea is to use a small subset of the data during each parameter update iteration of the algorithm, so that many iterations can be performed cheaply. This introduces excess stochasticity in the algorithm, which can be controlled by annealing the update step sizes towards zero as the number of iterations increases. The resulting algorithm is a cross between an MCMC and a stochastic optimization algorithm. An initial exploration of this procedure, which we call stochastic gradient Langevin dynamics (SGLD), was initiated by us recently (Welling and Teh, ICML 2011). Our proposal is to lay the mathematical foundations for understanding the theoretical properties of such stochastic MCMC algorithms, and to build on these foundations to develop more sophisticated algorithms. We aim to understand the conditions under which the algorithm is guaranteed to converge, and the type and speed of convergence. Using this understanding, we aim to develop algorithmic extensions and generalizations with better convergence properties, including preconditioning, adaptive and Riemannian methods, Hamiltonian Monte Carlo methods, online Bayesian learning methods, and approximate methods with large step sizes. These algorithms will be empirically validated on real world problems, including large scale data analysis problems for text processing and collaborative filtering which are standard problems in machine learning, and large scale data from ID Analytics, a partner company interested in detecting identity theft and fraud.
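
The SGLD update described above can be sketched in a few lines. The toy example below (synthetic data, made-up step-size schedule, batch size and prior) infers the mean of a Gaussian from a large synthetic data set; each iteration touches only a 100-point mini-batch rather than all 100,000 observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "big" data set: N observations from N(mean=2, sd=1).
N = 100_000
data = rng.normal(2.0, 1.0, size=N)

def grad_log_prior(theta):
    # Gaussian prior N(0, 10^2) on theta: d/dtheta log p(theta) = -theta/100.
    return -theta / 100.0

def grad_log_lik(theta, batch):
    # Sum over the mini-batch of d/dtheta log N(x | theta, 1) = (x - theta).
    return np.sum(batch - theta)

theta, n = 0.0, 100                     # initial state, mini-batch size
samples = []
for t in range(1, 5001):
    eps = 0.5 / (N * t**0.55)           # step sizes annealed towards zero
    batch = data[rng.integers(0, N, size=n)]
    # Mini-batch gradient, rescaled by N/n to estimate the full-data gradient.
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    # Langevin step: half the step size times the gradient, plus injected noise.
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

posterior_mean = float(np.mean(samples[2500:]))   # discard burn-in
print(posterior_mean)                             # close to the true mean 2.0
```

Because the variance of the injected noise matches the step size, the iterates approximate samples from the posterior rather than collapsing to a point estimate, which is what distinguishes SGLD from plain stochastic gradient optimization.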

- Project, 2015-2016. Funder: UKRI. Project Code: EP/M014134/1. Funder Contribution: 97,162 GBP. Partners: NTU
Asphalt pavements are the most common type of road pavement in the UK. Preserving them in a proper state fundamentally affects the economy and quality of life. However, their surveillance and maintenance are cost- and time-intensive, and asphalt concrete still has to be replaced after 15 years of use. Applying induction heating to the road could make pavements last much longer by stimulating asphalt's property of self-healing. Experimental results have shown that a crack can be fully induction-healed at least 5 times. The efficiency of self-healing, however, depends on the temperature of the material, and the heat should be concentrated in the cracks alone. Thus, the challenge of this research is to discover how to apply energy only locally into the cracks without dispersing energy into undesired spaces. To this end, experimental and mathematical models of asphalt concrete self-healing under induction heating will be developed. This research will serve to understand the relationships between induction heating, the particles used to heat the mixture, the heat flow through asphalt concrete and its effect on asphalt self-healing. We will discover the types of particles and the intensities and frequencies of induction heating which are most appropriate for healing, how to concentrate the heat in the damaged areas, and the relationship between the amount of energy induced and the healing of asphalt concrete.
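
The heat-localisation problem can be caricatured with a one-dimensional diffusion model. This is only an illustrative sketch, not the project's model: the diffusivity, heating rate and geometry below are rough, made-up values chosen to show a source concentrated at a crack while the rest of the slab stays cool.

```python
import numpy as np

# 1D slab of asphalt with an induction source depositing energy near a crack.
L, nx = 0.1, 101                      # 10 cm slab, uniform grid
dx = L / (nx - 1)
alpha = 1e-6                          # thermal diffusivity (m^2/s), rough figure
dt = 0.4 * dx**2 / alpha              # explicit scheme: stable for dt <= dx^2/(2*alpha)
x = np.linspace(0.0, L, nx)

T = np.full(nx, 20.0)                 # initial temperature, deg C
crack = np.abs(x - 0.05) <= 0.005 + 1e-9   # ~1 cm zone around the crack at x = 5 cm
q = np.where(crack, 0.1, 0.0)         # localised heating rate, K/s (made-up value)

for _ in range(2000):                 # simulate 2000 * dt = 800 s of heating
    # Explicit update of dT/dt = alpha * d2T/dx2 + q, slab ends held at 20 deg C.
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2]) + dt * q[1:-1]

print(T.max(), T[0])                  # hottest at the crack; the ends stay at 20
```

Even in this crude sketch the temperature peak sits over the crack, which is the qualitative behaviour the project aims to engineer and optimise in real asphalt mixtures.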

- Project, 2012-2016. Funder: UKRI. Project Code: EP/K003445/1. Funder Contribution: 100,347 GBP. Partners: QUB
The transformation of communication and computing technologies in terms of accessibility, ubiquity, mobility and coverage has enabled new opportunities for personalised on-demand communication (e.g. Facebook, Twitter). This is in addition to new marketplaces for e-commerce and e-businesses, personalised platforms for e-governments and a vast range of new user-centric applications and services. The number of mobile apps (iPhone, Android) taking advantage of the cloud infrastructure has risen beyond several hundred thousand, reshaping the way we communicate and socialise. This shift in communication technology and services has also led to the emergence of unforeseen types of security and privacy threats with social, economic and political incentives, resulting in major research challenges in terms of the protection and security of information assets in storage and transmission. Therefore, digital security is vital to ensuring that the UK is a safe place to do business; it can act as a source of competitive advantage for foreign direct investment and provide a platform for SMEs and large corporations alike to develop products that use or supply this security market. Recent years have seen a massive growth in malware, fuelled by the evolution of the Internet and the migration from malware written by hobbyists to professionally devised malware developed by rogue corporations and organised criminals, primarily targeted for financial or political gain. In 2010, Symantec identified more than 240 million new malicious programs, albeit many of these are variants of existing malware. Another report suggests that the actual malware family count is between 1,000 and 3,000. The detection of malware is a major and ongoing problem.
The battle against malware has escalated over the past decade as malware has evolved from simple programs that had little ability to evade detection, the main objective of which was to cause havoc, to more complex programs that target profit and deploy sophisticated evasion techniques. The focus of the NIMBUS network of researchers is to act as a catalyst to develop a balanced programme of both blue-skies research and near-term applied research that will assist in the fight against cyber crime in the UK. Malware-related cyber threats are global in nature; hence it is essential that an international approach is taken to address these issues. Only a global network of centres of excellence is expected to provide the essential breadth and depth of know-how and the necessary critical mass of specialist competencies for resolving major cyber security challenges. NIMBUS will act as the UK's interface to international engagements with research networks in Europe, the US and Asia.

- Project, 2013-2016. Funder: UKRI. Project Code: EP/I004130/2. Funder Contribution: 322,634 GBP. Partners: University of Edinburgh
In homotopy theory, topological spaces (i.e. shapes) are regarded as being the same if we can deform continuously from one to the other. Algebraic varieties are spaces defined by polynomial equations, often over the complex numbers; studying their homotopy theory means trying to tell which topological spaces can be deformed continuously to get algebraic varieties, or when a continuous map between algebraic varieties can be continuously deformed to a map defined by polynomials.

If the polynomials defining a variety are rational numbers (i.e. fractions), this automatically gives the complex variety a group of symmetries, called the Galois group. Although these symmetries are not continuous (i.e. nearby points can be sent far apart), they preserve something called the etale topology. This is an abstract concept which looks somewhat unnatural, but behaves well enough to preserve many of the topological features of the variety. Part of my project will involve investigating how the Galois group interacts with the etale topology. I also study algebraic varieties in finite and mixed characteristics. Finite characteristics are universes in which the rules of arithmetic are modified by choosing a prime number p and setting it to zero. For instance, in characteristic 3 the equation 1+1+1=0 holds. In mixed characteristic, p need not be 0, but the sequence 1, p, p^2, p^3, ... converges to 0.

Although classical geometry of varieties does not make sense in finite and mixed characteristics, the etale topology provides a suitable alternative, allowing us to gain much valuable insight into the behaviour of the Galois group. This is an area which I find fascinating, as much topological intuition still works in contexts far removed from real and complex geometry.
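
The characteristic-p arithmetic described above is easy to illustrate concretely. This toy snippet shows only the arithmetic, nothing of the etale machinery: working modulo p gives characteristic p, and the p-adic absolute value makes the sequence 1, p, p^2, ... tend to zero.

```python
p = 3

# Characteristic 3: arithmetic is done modulo 3, so 1 + 1 + 1 = 0.
print((1 + 1 + 1) % p)  # 0

def vp(n, p):
    """p-adic valuation: the exponent of the highest power of p dividing n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(n, p):
    # p-adic absolute value |n|_p = p**(-v_p(n)): divisibility by high
    # powers of p makes a number p-adically small.
    return p ** -vp(n, p)

# Mixed characteristic: the sequence 1, p, p^2, p^3, ... tends to 0 p-adically.
sizes = [p_adic_abs(p**k, p) for k in range(6)]
print(sizes)  # the sizes shrink: 1, 1/3, 1/9, ...
```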
Indeed, many results in complex geometry have been motivated by phenomena observed in finite characteristic.

Moduli spaces parametrise classes of geometric objects, and can themselves often be given geometric structures, similar to those of algebraic varieties. This structure tends to misbehave at points parametrising objects with a lot of symmetry. To obviate this difficulty, algebraic geometers work with moduli stacks, which parametrise the symmetries as well as the objects. Sometimes the symmetries can themselves have symmetries and so on, giving rise to infinity stacks.

Usually, the dimension of a moduli stack can be calculated by naively counting the degrees of freedom in defining the geometric object it parametrises. However, the space usually contains singularities (points where the space is not smooth), and regions of different dimensions. Partially inspired by ideas from theoretical physics, it has been conjectured that every moduli stack can be extended to a derived moduli stack, which would have the expected dimension, but with some of the dimensions only virtual. Extending to these virtual dimensions also removes the singularities, a phenomenon known as hidden smoothness. Different classification problems can give rise to the same moduli stack, but different derived moduli stacks. Much of my work will be to try to construct derived moduli stacks for a large class of problems. This has important applications in algebraic geometry, as there are many problems for which the moduli stacks are unmanageable, but which should become accessible using derived moduli stacks. I will also seek to investigate the geometry and behaviour of derived stacks themselves.

A common thread through the various aspects of my project will be to find ways of applying powerful ideas and techniques from a branch of topology, namely homotopy theory, in contexts where they would not, at first sight, appear to be relevant.

- Project, 2013-2016. Funder: UKRI. Project Code: EP/K023349/1. Funder Contribution: 1,780,200 GBP. Partners: University of Liverpool, Lancashire Teaching Hospitals NHS Trust, STFC - Laboratories
This proposal brings together a critical mass of scientists from the Universities of Cardiff, Lancaster, Liverpool and Manchester and clinicians from the Christie, Lancaster and Liverpool NHS Hospital Trusts with the complementary experience and expertise to advance the understanding, diagnosis and treatment of cervical, oesophageal and prostate cancers. Cervical and prostate cancers are very common, and the incidence of oesophageal cancer is rising rapidly. There are cytology, biopsy and endoscopy techniques for extracting tissue from individuals who are at risk of developing these diseases. However, the analysis of tissue by the standard techniques is problematic and subjective. There is clearly a national and international need to develop more accurate diagnostics for these diseases and that is a primary aim of this proposal. Experiments will be conducted on specimens from all three diseases using four different infrared based techniques which have complementary strengths and weaknesses: hyperspectral imaging, Raman spectroscopy, a new instrument to be developed by combining atomic force microscopy with infrared spectroscopy and a scanning near field microscope recently installed on the free electron laser on the ALICE accelerator at Daresbury. The latter instrument has recently been shown to have considerable potential for the study of oesophageal cancer yielding images which show the chemical composition with unprecedented spatial resolution (0.1 microns) while hyperspectral imaging and Raman spectroscopy have been shown by members of the team to provide high resolution spectra that provide insight into the nature of cervical and prostate cancers. The new instrument will be installed on the free electron laser at Daresbury and will yield images on the nanoscale.
This combination of techniques will allow the team to probe the physical and chemical structure of these three cancers with unprecedented accuracy and this should reveal important information about their character and the chemical processes that underlie their malignant behaviour. The results of the research will be of interest to the study of cancer generally, particularly if it reveals features common to all three cancers. The infrared techniques have considerable medical potential and to differing extents are on the verge of finding practical applications. Newer terahertz techniques also have significant potential in this field and may be cheaper to implement. Unfortunately the development of cheap portable terahertz diagnostic instruments is being impeded by the weakness of existing sources of terahertz radiation. By exploiting the terahertz radiation from the ALICE accelerator, which is seven orders of magnitude more intense than conventional sources, the team will advance the design of two different terahertz instruments and assess their performance against the more developed infrared techniques in cancer diagnosis. However, before any of these techniques can be used by medical professionals it is essential that their strengths and limitations are fully understood. This is one of the objectives of the proposal and it will be realised by comparing the results of each technique in studies of specimens from the three cancers that are the primary focus of the research. This will be accompanied by developing databases and algorithms for the automated analysis of spectral and imaging data, thus removing subjectivity from the diagnostic procedure. Finally the team will explore a new approach to monitoring the interactions between pathogens, pharmaceuticals and relevant cells or tissues at the cellular and subcellular level using the instruments deployed on the free electron laser at Daresbury together with Raman microscopy.
If this is successful, it will be important in the longer term in developing new treatments for cancer and other diseases.

- Project, 2012-2016. Funder: UKRI. Project Code: EP/K006010/1. Funder Contribution: 39,395 GBP. Partners: Royal Holloway University of London
This proposal addresses the challenge "How do we make better security decisions?". Specifically we propose to develop new approaches to decision support based on mathematical game theory. Our work will support professionals who are designing secure systems and also those charged with determining if systems have an appropriate level of security -- in particular, systems administrators. We will develop techniques to support human decision making and techniques which enable well-founded security design decisions to be made. We recognise that the emerging trend away from corporate IT systems towards a Bring-Your-Own-Device (BYOD) culture will bring new challenges and changes to the role of systems administrator. However, even in this brave new world, companies will continue to have core assets such as the network infrastructure and the corporate database which will need the same kind of protection. It is certainly to be expected that some of the attacks will now originate from inside the corporate firewall rather than from outside. Our team will include researchers from the Imperial College Business School who will help us to ensure that our models are properly reflecting these new threats. Whilst others have used game theoretic approaches to answer these questions, much of the previous work has been more or less ad hoc. As such the resulting security decisions may be based on unsound principles. In particular, it is common to use abstractions without giving much consideration to the relationship between properties of the abstract model and the real system. We will develop a new game theoretic framework which enables a precise analysis of these relationships and hence provides a more robust decision support tool.
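
A minimal game-theoretic security-decision model of the kind alluded to above can be sketched as follows. The payoff numbers are invented purely for illustration and are not from the project: a defender randomises protection over two assets so that an attacker gains nothing by favouring either target, the classic minimax reasoning for a 2x2 zero-sum game.

```python
# Toy two-asset security game (illustrative only; not the project's framework).
# Entries are the defender's expected loss for (defended asset, attacked asset);
# the loss values are made up.
loss = {
    ("A", "A"): 1.0, ("A", "B"): 10.0,   # defending A
    ("B", "A"): 5.0, ("B", "B"): 2.0,    # defending B
}

# Minimax mixed strategy: defend A with probability q chosen so the attacker
# is indifferent between attacking A and attacking B:
#   q*L(A,A) + (1-q)*L(B,A) = q*L(A,B) + (1-q)*L(B,B)
num = loss[("B", "A")] - loss[("B", "B")]
den = num + (loss[("A", "B")] - loss[("A", "A")])
q = num / den

# Guaranteed expected loss (the value of the game for the defender).
value = q * loss[("A", "A")] + (1 - q) * loss[("B", "A")]
print(q, value)  # q = 0.25, expected loss 4.0 for these made-up numbers
```

The point of such models for decision support is exactly the one the proposal makes: the recommendation (here, a randomised defence) is only as sound as the relationship between the abstract payoff matrix and the real system it abstracts.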

- Project, 2015-2016. Funder: UKRI. Project Code: EP/M003620/1. Funder Contribution: 161,541 GBP. Partners: University of Warwick
The Warwick EPSRC mathematics symposium is organised annually by the University of Warwick with the support of the EPSRC for the benefit of the mathematical sciences community in the UK. It brings leading national and international experts together with UK researchers in a year-long programme of research activities focused on an emerging theme in the mathematical sciences. The proposed symposium for the 2015-16 academic year will concentrate on the theme of "Fluctuation-driven phenomena and large deviations". In very general terms, the symposium will constitute an interdisciplinary focus on understanding the consequences of the interplay between stochasticity and nonlinearity, a recurrent challenge in many areas of the mathematical sciences, engineering and industry. Stochastic processes play a fundamental role in the mathematical sciences, both as tools for constructing models and as abstract mathematical structures in their own right. When nonlinear interactions between stochastic processes are introduced, however, the rigorous understanding of the resulting equations in terms of stochastic analysis becomes very challenging. Mean field theories are useful heuristics which are commonly employed outside of mathematics for dealing with this problem. Mean field theories in one way or another usually involve replacing random variables by their mean and assuming that fluctuations about the mean are approximately Gaussian distributed. In some cases, such models provide a good description of the original system and can be rigorously justified. In many cases they do not. Understanding the latter case, where mean-field models fail, is the central challenge of this symposium. We use "fluctuation driven phenomena" as a generic term to describe the kinds of effects which are observed when mean field theories fail. 
The challenges stem from the fact that the rich phenomenology of deterministic nonlinear dynamics (singularities, nonlinear resonance, chaos and so forth) is reflected in the stochastic context by a variety of interesting and sometimes unintuitive behaviours: long range correlations, strongly non-Gaussian statistics, coherent structures, absorbing state phase transitions, heavy-tailed probability distributions and enhanced probabilities of large deviations. Such phenomena are found throughout mathematics, both pure and applied, the physical, biological and engineering sciences as well as presenting particular problems to industrialists and policymakers. Contemporary problems such as the forecasting of extreme weather events, the design of marine infrastructure to withstand so-called "rogue waves", quantifying the probability of fluctuation driven transitions or "tipping points" in the climate system or estimating the redundancy required to ensure that infrastructure systems are resilient to shocks all require a step change in our ability to model and predict such fluctuation-driven phenomena. The programme of research activities constituting this symposium will therefore range from the very theoretical to the very applied. At the theoretical end we have random matrix theory which has recently emerged as a powerful tool for analysing the statistics of stochastic processes which are strongly non-Gaussian without the need to go via perturbative techniques developed in the physical sciences such as the renormalisation group. At the applied end we have questions of existential importance to the insurance industry such as how to cost the risk of extreme natural disasters and quantify their interaction with risks inherent in human-built systems. 
In between we have research on the connections between large deviation theory and nonequilibrium statistical mechanics, extreme events in the Earth sciences, randomness in the biological sciences and the latest numerical algorithms for computing rare events, a topic which has seen strong growth in recent years.
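
A standard toy example shows why Gaussian (mean-field-style) approximations fail in the far tail while large deviation theory does not. For the average of n fair coin flips, Cramér's theorem gives the tail probability a decay rate I(a) that differs from the rate implied by the central limit theorem; the numbers below are illustrative only.

```python
import math

# Tail probability P(S_n/n >= a) for n fair coin flips, three ways:
# the exact binomial tail, Cramer's large-deviation rate, and a naive
# Gaussian rate. We compare decay rates, i.e. -log(P)/n.
n, a = 100, 0.9

# Exact tail of Binomial(n, 1/2).
exact = sum(math.comb(n, k) for k in range(int(a * n), n + 1)) / 2**n
empirical_rate = -math.log(exact) / n

# Cramer rate function for a fair coin: I(a) = a*log(2a) + (1-a)*log(2(1-a)).
cramer_rate = a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))

# Rate implied by a Gaussian approximation N(1/2, 1/(4n)) to the sample mean:
# (a - 1/2)^2 / (2 * 1/4).
gaussian_rate = 2 * (a - 0.5) ** 2

print(empirical_rate, cramer_rate, gaussian_rate)
```

Here the Gaussian rate understates the true decay rate, so the Gaussian approximation gets the exponential order of the rare-event probability wrong, while the Cramér rate matches it up to sub-exponential corrections. This mismatch is precisely the regime the symposium's "fluctuation-driven phenomena" label refers to.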

- Project, 2013-2016. Funder: UKRI. Project Code: EP/J019844/1. Funder Contribution: 263,385 GBP. Partners: KCL
Organic molecular monolayers at surfaces often constitute the central working component in nanotechnologies such as sensors, molecular electronics, smart coatings, organic solar cells, catalysts, medical devices, etc. A central challenge in the field is to achieve controlled creation of desired 2D molecular architectures at surfaces. Within this context, the past decade has witnessed a real and significant step-change in the 'bottom-up' self-organisation of 2D molecular assemblies at surfaces. The enormous variety and abundance of molecular structures formed via self-organisation has now critically tipped the argument strongly in favour of a 'bottom-up' construction strategy, which harnesses two powerful attributes of nanometer-precision (inaccessible to top-down methods) and highly parallel fabrication (impossible with atomic/molecular manipulation). Thus, bottom-up molecular assembly at surfaces holds the real possibility of becoming a dominating synthesis protocol in 21st century nanotechnologies. Uniquely, the scope and versatility of these molecular architectures at 2D surfaces have been directly captured at the nanoscale via imaging with scanning probe microscopies and advanced surface spectroscopies. At present, however, the field is largely restricted to a 'make and see' approach and there is scarce understanding of any of the parameters that ultimately control molecular surface assembly.
For example: (1) molecular assemblies at surfaces show highly polymorphic behaviour, and a priori control of assembly is practically non-existent; (2) little is understood of the influence and balance of the many interactions that drive molecular recognition and assembly (molecule-molecule interactions including dispersion, directional H-bonding and strong electrostatic and covalent interactions); (3) the role of surface-molecule interactions is largely uncharted even though they play a significant role in the diffusion of molecules and their subsequent assembly; (4) there is ample evidence that the kinetics of self-assembly is the major factor in determining the final structure, often driving polymorphic behaviour and leading to widely varied outcomes, depending on the conditions of formation; (5) a gamut of additional surface phenomena also influence assembly, e.g. chemical reactions between molecules, thermally activated internal degrees of freedom of molecules, surface reconstructions and co-assembly via coordinating surface atoms. The main objective of this project is to advance from experimental phenomena-reporting to knowledge-based design, and its central goal is to identify the role played by thermodynamic, entropic, kinetic and chemical factors in dictating molecular organisation at surfaces under given experimental conditions. To address this challenge requires a two-pronged approach in which ambitious and comprehensive theory development is undertaken alongside powerful imaging and spectroscopic tools applied to the same systems. This synergy of experiment and theory is absolutely essential to develop a fundamental understanding, which would enable a roadmap for controlled and engineered self-assembly at surfaces to be proposed that would, ultimately, allow one to 'dial up' a required structure at will.
Four important and qualitatively different classes of assembly at surfaces will be studied: Molecular Self-Assembly; Hierarchical Self-Assembly; Metal-Organic Self Assembly; and, on-surface Covalent Assembly.

- Project, 2012-2016. Funder: UKRI. Project Code: EP/J009733/1. Funder Contribution: 406,787 GBP. Partners: University of Rome III (Tre), UNIME, BU, University of Glasgow, National University Paris ENS
The peculiar behaviour of liquid and supercooled water has been baffling science for at least 236 years and is still seen as a major challenge facing chemistry today (Whitesides & Deutch, Nature 469, 21 (2011)). It was suggested that such strange behaviour might be caused by thermodynamic transitions, possibly even a second critical point. This second critical point would terminate a coexistence line between low- and high-density amorphous phases of water. Unfortunately, this second critical point (if it exists) and the associated polyamorphic liquid-liquid transition is difficult to study as it is thought to lie below the homogeneous nucleation temperature in a region known as "no man's land" (Angell, Science 319, 582 (2008)). In recent preliminary femtosecond optical Kerr-effect spectroscopy experiments, we have shown that water in concentrated eutectic solutions forms nanometre scale pools in which it retains many if not most of its bulk liquid characteristics. Most importantly, such solutions can be cooled to below 200 K without crystallisation (typically forming a glass at lower temperatures) allowing one to explore "no man's land" in detail for the first time. Preliminary experiments combining femtosecond spectroscopy with NMR diffusion measurements have shown that water in these pools undergoes a liquid-liquid transition as predicted for bulk water. Hence, it is proposed to use such nanopools as nanometre scale laboratories for the study of liquid and glassy water. A wide-ranging international collaboration has been set up to be able to study different critical aspects of the structure and dynamics of water. This includes cryogenic viscosity measurements, large dynamic-range (femtosecond to millisecond) optical Kerr-effect experiments, pulsed field gradient NMR, dielectric relaxation spectroscopy, terahertz time-domain spectroscopy, infrared pump-probe spectroscopy, and two-dimensional infrared spectroscopy. 
To ensure maximum impact of the experimental work, it is critical to have strong ties with experts in the theory and simulation of water and its thermodynamic behaviour. We have arranged collaboration with two international theory groups covering different aspects of the proposed work. Although the proposed research is relatively fundamental in nature, it will have impact as described in more detail elsewhere. The research addresses EPSRC priorities in nanoscience (supramolecular structures in liquids), energy (proton transport and liquid structuring in electrolytes for batteries and fuel cells), life sciences (the role of water in and on biomolecules), and the chemistry-chemical engineering interface (the role of the structuring of water in crystal nucleation). Our strong links with theory collaborators will ensure that fundamental insights will indeed propagate to the 'users' of such information. The close working relationship between the PI and CI has made Glasgow a centre of excellence in advanced femtosecond spectroscopy. This project exploits this expertise and international collaborations to immerse PDRAs and PGRSs in internationally leading research using state-of-the-art previously funded equipment.

- Project, 2010-2016. Funder: UKRI. Project Code: TS/I002170/1. Funder Contribution: 477,743 GBP. Partners: University of London
This project develops an approach, genomic selection, to increase the rate at which varieties of Spring barley are developed. This is a very important crop in national agriculture, particularly for the malting, brewing and distilling industries. It is important that the rate at which improved varieties are created is increased so that more effort can be placed by breeders on improving disease resistance while maintaining or increasing grain yield and grain quality, which remain of greatest importance to growers and end users.

Genomic selection represents a way of predicting traits purely from genetic markers rather than by direct measurement. These predictions require that a set of plants is first measured for the target traits so that the effect of each marker can be estimated. However, after that, selection can occur for several generations purely on markers.

Direct measurement of many traits can take much longer than a single growing season: seed must first be bulked up over several generations to provide a sufficient quantity for yield trials. In contrast, marker data can be collected within the generation time of any crop and is therefore much faster than conventional selection.

Other approaches to plant breeding using genetic molecular markers have been in use for many years. In these, a very small number of markers with strong evidence of an effect on a trait are first identified. These are then tracked through the breeding programme. Genomic selection differs in that all available markers are used to predict traits: the more markers the better. The inclusion of all markers gives more accurate prediction of overall trait values even though the precise involvement of each marker is known with less certainty.

Our study has four themes. Firstly, throughout the life of the project, we shall develop new statistical methods to establish relationships between very high numbers of genetic markers and traits.
The methods we develop will be more focussed on the problems of plant breeding: most methods to date have been targeted at animal breeding. Secondly, we shall test methods which are available now using historical data from an existing Spring barley scheme. Results will be used immediately to make selections within this scheme. We expect to register new varieties from these selections within the five year life of the project.

Next, we shall use results from the analysis of the historical data together with any early methodological developments we make to create crosses specifically to exploit genomic selection. These crosses may not necessarily be the typical crosses between two parents which are commonly used by breeders but may involve more complicated crossing schemes involving, for example, four parents. Within the life of the project, we shall test whether this approach gives a greater response to selection than that achieved by more conventional breeding, but there will be insufficient time to register a new variety.

Finally, we shall integrate results and methods from the first three phases to completely redesign the breeding programme to get the greatest advantage out of genomic selection. In short, we plan to develop a new approach to Spring barley breeding.

Genomic selection could result in a fundamental change to the way crops are bred and enable targets for increased food production and environmental sustainability to be met. Compared to other temperate crops, Spring barley has a short generation time which makes it well suited to develop and test these ideas, which may also be applicable to other crops.
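
The all-markers-at-once prediction idea can be sketched with ridge regression, one standard way (often called RR-BLUP in this literature) of fitting every marker with shrinkage rather than selecting a few significant ones. Everything below, including population sizes, marker effects and the penalty, is simulated and made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy genomic-selection sketch: 200 phenotyped training plants, 100 candidates
# selected purely on markers, 1000 markers (all sizes are invented).
n_train, n_select, n_markers = 200, 100, 1000

# Marker genotypes coded 0/1/2 (copies of one allele).
X = rng.integers(0, 3, size=(n_train + n_select, n_markers)).astype(float)

# Every marker gets a small true effect: "the more markers the better".
beta_true = rng.normal(0.0, 0.05, size=n_markers)
y = X @ beta_true + rng.normal(0.0, 1.0, size=n_train + n_select)

# Fit ridge regression on the phenotyped plants only (centred data, no intercept).
X_tr, y_tr = X[:n_train], y[:n_train]
mu_x, mu_y = X_tr.mean(axis=0), y_tr.mean()
lam = 400.0   # penalty = noise variance / per-marker effect variance (1 / 0.05**2)
beta_hat = np.linalg.solve(
    (X_tr - mu_x).T @ (X_tr - mu_x) + lam * np.eye(n_markers),
    (X_tr - mu_x).T @ (y_tr - mu_y),
)

# Rank unphenotyped candidates purely on predicted trait values.
pred = (X[n_train:] - mu_x) @ beta_hat
merit = X[n_train:] @ beta_true            # true genetic merit, known here only
accuracy = np.corrcoef(pred, merit)[0, 1]  # correlation of prediction with merit
print(accuracy)
```

Each individual marker effect is estimated very imprecisely, yet the aggregate prediction correlates usefully with true genetic merit, which is exactly the trade-off the abstract describes.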