377 Projects
Filters: 2013-2022 · UK Research and Innovation · UKRI|EPSRC · 2013 · 2016

  • Funder: UKRI Project Code: EP/K023349/1
    Funder Contribution: 1,780,200 GBP

    This proposal brings together a critical mass of scientists from the Universities of Cardiff, Lancaster, Liverpool and Manchester and clinicians from the Christie, Lancaster and Liverpool NHS Hospital Trusts with the complementary experience and expertise to advance the understanding, diagnosis and treatment of cervical, oesophageal and prostate cancers. Cervical and prostate cancer are very common, and the incidence of oesophageal cancer is rising rapidly. There are cytology, biopsy and endoscopy techniques for extracting tissue from individuals who are at risk of developing these diseases. However, the analysis of tissue by the standard techniques is problematic and subjective. There is clearly a national and international need to develop more accurate diagnostics for these diseases, and that is a primary aim of this proposal. Experiments will be conducted on specimens from all three diseases using four different infrared-based techniques which have complementary strengths and weaknesses: hyperspectral imaging, Raman spectroscopy, a new instrument to be developed by combining atomic force microscopy with infrared spectroscopy, and a scanning near-field microscope recently installed on the free electron laser on the ALICE accelerator at Daresbury. The latter instrument has recently been shown to have considerable potential for the study of oesophageal cancer, yielding images which show the chemical composition with unprecedented spatial resolution (0.1 microns), while hyperspectral imaging and Raman spectroscopy have been shown by members of the team to provide high resolution spectra that give insight into the nature of cervical and prostate cancers. The new instrument will be installed on the free electron laser at Daresbury and will yield images on the nanoscale. This combination of techniques will allow the team to probe the physical and chemical structure of these three cancers with unprecedented accuracy, and this should reveal important information about their character and the chemical processes that underlie their malignant behaviour. The results of the research will be of interest to the study of cancer generally, particularly if they reveal features common to all three cancers. The infrared techniques have considerable medical potential and, to differing extents, are on the verge of finding practical applications. Newer terahertz techniques also have significant potential in this field and may be cheaper to implement. Unfortunately, the development of cheap portable terahertz diagnostic instruments is being impeded by the weakness of existing sources of terahertz radiation. By exploiting the terahertz radiation from the ALICE accelerator, which is seven orders of magnitude more intense than conventional sources, the team will advance the design of two different terahertz instruments and assess their performance against the more developed infrared techniques in cancer diagnosis. However, before any of these techniques can be used by medical professionals it is essential that their strengths and limitations are fully understood. This is one of the objectives of the proposal, and it will be realised by comparing the results of each technique in studies of specimens from the three cancers that are the primary focus of the research. This will be accompanied by developing databases and algorithms for the automated analysis of spectral and imaging data, thus removing subjectivity from the diagnostic procedure.
    Finally, the team will explore a new approach to monitoring the interactions between pathogens, pharmaceuticals and relevant cells or tissues at the cellular and subcellular level, using the instruments deployed on the free electron laser at Daresbury together with Raman microscopy. If this is successful, it will be important in the longer term in developing new treatments for cancer and other diseases.

    Views: 118 · Downloads: 525
  • Funder: UKRI Project Code: EP/J010790/1
    Funder Contribution: 613,852 GBP

    String theory is believed to be a theory capable of describing all the known forces of nature, and it provides a solution to the venerable problem of finding a theory of gravity consistent with quantum mechanics. To a first approximation, the world we observe corresponds to a vacuum of this theory. String theory admits many such vacuum states, and the class that is most likely to describe the observed world are the so-called `heterotic vacua'. Analysing these vacua requires the application of sophisticated tools drawn from mathematics, particularly from algebraic geometry. If history is any guide, the synthesis of these mathematical tools with observations drawn from physics will lead not only to significant progress in physics, but also to important advances in mathematics. An example of such a major insight in mathematics that arose from string theory is mirror symmetry. This is the observation that, within a restricted class of string vacua, the vacua arise in `mirror pairs'. This has the consequence that certain mathematical quantities, which are both important and otherwise mysterious, can be calculated in a straightforward manner. The heterotic vacua of interest here form a wider class of vacua, and an important question is to what extent mirror symmetry generalises and how it acts on this wider class. In a more precise description, the space of heterotic vacua is the parameter space of pairs (X,V), where X is a Calabi-Yau manifold and V is a stable holomorphic vector bundle on X. This space is a major object of study in algebra and geometry. String theory tells us that it is subject to quantum corrections. To understand the nature of these corrections is the key research problem in this proposal, and any advance in our understanding will have an important impact in both mathematics and physics. By now it is widely understood that string theory and geometry are intimately related, with much to be learned from each other, yet this relationship is relatively unexplored for the heterotic string. This fact, together with recent developments that indicate that longstanding problems have recently become tractable, means that the time is right to revisit the geometry of heterotic vacua.

  • Funder: UKRI Project Code: EP/J019844/1
    Funder Contribution: 263,385 GBP

    Organic molecular monolayers at surfaces often constitute the central working component in nanotechnologies such as sensors, molecular electronics, smart coatings, organic solar cells, catalysts, medical devices, etc. A central challenge in the field is to achieve controlled creation of desired 2D molecular architectures at surfaces. Within this context, the past decade has witnessed a real and significant step-change in the 'bottom-up' self-organisation of 2D molecular assemblies at surfaces. The enormous variety and abundance of molecular structures formed via self-organisation has now critically tipped the argument strongly in favour of a 'bottom-up' construction strategy, which harnesses two powerful attributes: nanometre precision (inaccessible to top-down methods) and highly parallel fabrication (impossible with atomic/molecular manipulation). Thus, bottom-up molecular assembly at surfaces holds the real possibility of becoming a dominant synthesis protocol in 21st century nanotechnologies. Uniquely, the scope and versatility of these molecular architectures at 2D surfaces have been directly captured at the nanoscale via imaging with scanning probe microscopies and advanced surface spectroscopies. At present, however, the field is largely restricted to a 'make and see' approach and there is scarce understanding of any of the parameters that ultimately control molecular surface assembly. For example: (1) molecular assemblies at surfaces show highly polymorphic behaviour, and a priori control of assembly is practically non-existent; (2) little is understood of the influence and balance of the many interactions that drive molecular recognition and assembly (molecule-molecule interactions including dispersion, directional H-bonding and strong electrostatic and covalent interactions); (3) the role of surface-molecule interactions is largely uncharted, even though they play a significant role in the diffusion of molecules and their subsequent assembly; (4) there is ample evidence that the kinetics of self-assembly is the major factor in determining the final structure, often driving polymorphic behaviour and leading to widely varied outcomes depending on the conditions of formation; (5) a gamut of additional surface phenomena also influence assembly, e.g. chemical reactions between molecules, thermally activated internal degrees of freedom of molecules, surface reconstructions and co-assembly via coordinating surface atoms. The main objective of this project is to advance from experimental phenomena-reporting to knowledge-based design, and its central goal is to identify the role played by thermodynamic, entropic, kinetic and chemical factors in dictating molecular organisation at surfaces under given experimental conditions. To address this challenge requires a two-pronged approach in which ambitious and comprehensive theory development is undertaken alongside powerful imaging and spectroscopic tools applied to the same systems. This synergy of experiment and theory is absolutely essential to develop a fundamental understanding, which would enable a roadmap for controlled and engineered self-assembly at surfaces to be proposed that would, ultimately, allow one to 'dial up' a required structure at will. Four important and qualitatively different classes of assembly at surfaces will be studied: Molecular Self-Assembly; Hierarchical Self-Assembly; Metal-Organic Self-Assembly; and on-surface Covalent Assembly.

    Views: 33 · Downloads: 35
  • Funder: UKRI Project Code: EP/K011693/1
    Funder Contribution: 300,568 GBP

    It is reported that the total energy consumed by the ICT infrastructure of wireless and wired networks now accounts for over 3 percent of worldwide electricity consumption and generates around 2 percent of worldwide CO2 emissions. It is predicted that, in future, a major portion of expanding traffic volumes will be on the wireless side. Furthermore, future wireless network systems (e.g., 4G/B4G) are increasingly required to be broadband and high-speed, tailored to support reliable Quality of Service (QoS) for numerous multimedia applications. With the explosive growth of high-rate multimedia applications (e.g. HDTV and 3DTV), more and more energy will be consumed in wireless networks to meet these QoS requirements. Specifically, it is predicted that the footprint of mobile wireless communications could almost triple from 2007 to 2020, corresponding to more than one-third of the present annual emissions of the whole UK. Energy-efficient green wireless communications are therefore receiving increasing attention, given the limited energy resources and the environment-friendly transmission requirements worldwide. The aim of this project is to improve the joint spectrum and energy efficiency of future wireless network systems using cognitive radio technology, along with innovative game-theoretic resource scheduling methods, efficient cross-layer designs and contemporary clinical findings. We plan to take health and environmental concerns into account and introduce power-efficient resource scheduling designs that intelligently exploit the available wireless resources in next-generation systems. Our efforts will leverage applications of cognitive radio techniques for situational awareness of the communications system, with adaptive power control and dynamic spectrum allocation. This project will underpin UK green communication technology by designing environment-friendly, jointly power- and spectrum-efficient wireless communication systems.

    Views: 48 · Downloads: 482
  • Funder: UKRI Project Code: EP/K011588/1
    Funder Contribution: 255,173 GBP

    Differential geometry is the study of "smooth shapes", e.g. curved surfaces that have no rough edges or sharp bends. A surface is a 2-dimensional object, and one can similarly imagine smooth shapes that are 1-dimensional, such as a line, or curve, or circle. What is much harder to imagine, but can nonetheless be described in precise mathematical terms, is a smooth shape in an arbitrary number of dimensions: these objects are called "manifolds". A specific example of a 2-dimensional manifold is a disk, i.e. the region inside a circle, and its "boundary" is a 1-dimensional manifold, namely the circle. Similarly, for any positive integer n, an n-dimensional manifold may have a boundary which is an (n-1)-dimensional manifold. All the 3-dimensional manifolds that we can easily picture are of this type: e.g. if we imagine any surface in 3-dimensional space, such as a sphere or a "torus" (the shape of the surface of a doughnut), then the region inside that surface is a 3-dimensional manifold whose boundary is the surface. We can now ask one of the most basic questions concerning manifolds: given an n-dimensional manifold, is it the boundary of something? This is actually not just a geometric question, but really a question of "topology", which is a certain way of studying the "overall shape" of geometric objects. As in the example given above, most 2-dimensional manifolds that we can easily imagine are boundaries of the 3-dimensional regions they enclose. But for a more interesting example, we can try to imagine a "Klein bottle": this is a surface formed by taking an ordinary bottle and bending its opening around and through the glass into the inside, then connecting the opening to the floor of the bottle by curving the floor upward. The result is a surface that is not a boundary of anything, as its inside is not distinct from its outside; like a Moebius strip, but closed in on itself. The subject of this proposal concerns a more elaborate version of the above question about boundaries: we deal with a particular type of manifold in an even number of dimensions, called "symplectic" manifolds, and their odd-dimensional boundaries are called "contact" manifolds. The idea of a symplectic manifold comes originally from physics: a century ago, symplectic manifolds were understood to be the natural geometric setting in which to study Hamilton's 19th century reformulation of Newton's classical mechanics. Today symplectic manifolds are considered interesting in their own right, and they retain a connection to physics, but of a very different and non-classical sort: by studying certain special surfaces in symplectic manifolds with contact boundary, one can define a so-called "Symplectic Field Theory" (or "SFT" for short), which bears a strong but mysterious resemblance to some of the theories that modern physics uses to describe elementary particles and their interactions. Unlike those theories, SFT does not help us to predict what will happen in a particle accelerator, but it can help us answer a basic question in the area of "Symplectic and Contact Topology": given a contact manifold, is it the boundary of any symplectic manifold? More generally, one way to study contact manifolds themselves is to consider the following relation: we say that two such manifolds are "symplectically cobordant" if they form two separate pieces of the boundary of a symplectic manifold. 
The question of whether two given contact manifolds are cobordant helps us understand what kinds of contact manifolds can exist in the first place, and Symplectic Field Theory is one of the most powerful methods we have for studying this. The goal of this project is thus to use this and related tools to learn as much as we can about the symplectic cobordism relation on contact manifolds. Since most previous results on this subject have focused on 4-dimensional manifolds with 3-dimensional boundaries, we aim especially to gain new insights in higher dimensions.

  • Funder: UKRI Project Code: EP/K018868/1
    Funder Contribution: 556,849 GBP

    We are all familiar with the language of classical logic, which is normally used for both mathematical and informal arguments, but there are other important and useful logics. Some nonclassical logics, for example, can be associated with programming languages to help control the behaviour of their programs, for instance via type systems. In order to define the proofs of a logic we need a proof system consisting of a formal language and some inference rules. We normally design proof systems following the prescriptions of some known formalism that ensures that we obtain desirable mathematical properties. In any case, we must make sure that proofs can be checked for validity with a computational effort that does not exceed certain limits. In other words, we want checking correctness to be relatively easy, also because this property facilitates the design of algorithms for the automatic discovery of proofs. However, there is a tension between the ease by which proofs can be checked and their size. If a proof is too small, checking it is difficult. Conversely, formalisms that make it very easy to check and to search for proofs create big bureaucratic unnatural proofs. All traditional proof systems suffer to various extents from this problem, because of the rigidity of all traditional formalisms, which impose an excess of structure on proof systems. We intend to design a formalism, provisionally called Formalism B, in which arbitrary logics can be defined and their proofs described in a way that is at the same time efficient and natural. Formalism B will ideally lie at the boundary between the class of proof systems and that of systems containing proto-proofs that are small and natural, but are too difficult to check. In other words, we want to maximise naturality by retaining as much efficiency as possible in proof representation. A driving force in this effort will be the use of existing algebraic theories that seem to capture some of the structure needed by the new formalism. There are two main reasons for doing this. One is theoretical: the problem is compelling, and tackling it fits well into a research effort in the theory of computation that tries to define proofs as more abstract mathematical objects than just intricate pieces of syntax. Suffice to say that we are at present unable to decide by an algorithm when two seemingly different proofs of the same statement use the same ideas and so are equivalent, or not. This is a problem that dates back to Hilbert and that requires more abstract ways to describe proofs than traditional syntax provides. The second reason is practical: we need formal proofs to verify the correctness of complex computer systems. The more powerful computer systems become, the more we need to ensure that they do what they are supposed to do. Formal verification is increasingly adopted as a viable instrument for this, but still much needs to be done in order to make it an everyday tool. We need to invent proof systems that simplify the interaction with proof assistants, and that could represent in some canonical way proofs that are essentially the same, so that no duplication occurs in the search for and storing of proofs in proof libraries. This project intends to contribute by exploiting proof-theoretic advances of the past ten years. We have developed a new design method for proof systems that reduces the size of inference steps to their minimal terms, in a theory called `deep inference'. 
    The finer scale of rules allows us to associate proofs with certain purely geometric objects that faithfully abstract away proof structure, and that are natural guides for the design of proof systems whose proofs would not suffer from several forms of bureaucracy. In short, after a decade in which we broke proofs into their smallest pieces while retaining their properties, we are now reshaping them in such a way that they still retain all the features we need but do not keep the undesirable ones.

  • Funder: UKRI Project Code: EP/K009850/1
    Funder Contribution: 158,970 GBP

    We are in the midst of an information revolution, where advances in science and technology, as well as the day-to-day operation of successful organisations and businesses, are increasingly reliant on the analysis of data. Driving these advances is a deluge of data, which is far outstripping the increase in computational power available. The importance of managing, analysing, and deriving useful understanding from such large scale data is highlighted by high-profile reports by McKinsey and The Economist as well as other outlets, and by the EPSRC's recent ICT priority of "Towards an Intelligent Information Infrastructure". Bayesian analysis is one of the most successful families of methods for analysing data, and one now widely adopted in the statistical sciences as well as in AI technologies like machine learning. The Bayesian approach offers a number of attractive advantages over other methods: flexibility in constructing complex models from simple parts; fully coherent inferences from data; natural incorporation of prior knowledge; explicit modelling assumptions; precise reasoning about uncertainties in model order and parameters; and protection against overfitting. On the other hand, there is a general perception that Bayesian methods can be too slow to be practically useful on big data sets. This is because exact Bayesian computations are typically intractable, so a range of more practical approximate algorithms are needed, including variational approximations, sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). MCMC methods arguably form the most popular class of Bayesian computational techniques, due to their flexibility, general applicability and asymptotic exactness. Unfortunately, MCMC methods do not scale well to big data sets, since they require many iterations to reduce Monte Carlo noise, and each iteration already involves an expensive sweep through the whole data set. In this project we propose to develop the theoretical foundations for a new class of MCMC inference procedures that can scale to billions of data items, thus unlocking the strengths of Bayesian methods for big data. The basic idea is to use a small subset of the data during each parameter update iteration of the algorithm, so that many iterations can be performed cheaply. This introduces excess stochasticity into the algorithm, which can be controlled by annealing the update step sizes towards zero as the number of iterations increases. The resulting algorithm is a cross between an MCMC and a stochastic optimization algorithm (a schematic sketch of this update is given after this entry). An initial exploration of this procedure, which we call stochastic gradient Langevin dynamics (SGLD), was initiated by us recently (Welling and Teh, ICML 2011). Our proposal is to lay the mathematical foundations for understanding the theoretical properties of such stochastic MCMC algorithms, and to build on these foundations to develop more sophisticated algorithms. We aim to understand the conditions under which the algorithm is guaranteed to converge, and the type and speed of convergence. Using this understanding, we aim to develop algorithmic extensions and generalizations with better convergence properties, including preconditioning, adaptive and Riemannian methods, Hamiltonian Monte Carlo methods, online Bayesian learning methods, and approximate methods with large step sizes.
    These algorithms will be empirically validated on real world problems, including large scale data analysis problems for text processing and collaborative filtering, which are standard problems in machine learning, and large scale data from ID Analytics, a partner company interested in detecting identity theft and fraud.

    Views: 10 · Downloads: 75
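
The abstract above describes the SGLD update only in words. The following is a minimal, illustrative sketch of that update on a toy Gaussian-mean problem; the model, minibatch size and step-size schedule are assumptions chosen for illustration and are not taken from the project itself.

```python
# Sketch of stochastic gradient Langevin dynamics (SGLD) as described above
# (Welling and Teh, ICML 2011): a stochastic-gradient step on the log posterior
# plus injected Gaussian noise, with step sizes annealed towards zero.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: N observations from a Gaussian with unknown mean theta.
N = 100_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def grad_log_prior(theta):
    # Standard normal prior: d/dtheta log N(theta | 0, 1) = -theta
    return -theta

def grad_log_lik(theta, batch):
    # Unit-variance Gaussian likelihood: sum over the minibatch of (x - theta)
    return np.sum(batch - theta)

theta = 0.0
n = 100                      # minibatch size (illustrative assumption)
samples = []
for t in range(1, 5001):
    eps = 0.5 / (N * t ** 0.55)                  # step size annealed towards zero
    batch = data[rng.choice(N, size=n, replace=False)]
    # Minibatch estimate of the gradient of the log posterior, rescaled by N/n
    # so that it is unbiased for the full-data gradient.
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    # SGLD update: half a gradient step plus noise with variance equal to eps.
    theta = theta + 0.5 * eps * grad + rng.normal(scale=np.sqrt(eps))
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))  # close to 2.0
```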
  • Funder: UKRI Project Code: EP/I004130/2
    Funder Contribution: 322,634 GBP

    In homotopy theory, topological spaces (i.e. shapes) are regarded as being the same if we can deform continuously from one to the other. Algebraic varieties are spaces defined by polynomial equations, often over the complex numbers; studying their homotopy theory means trying to tell which topological spaces can be deformed continuously to get algebraic varieties, or when a continuous map between algebraic varieties can be continuously deformed to a map defined by polynomials. If the polynomials defining a variety have rational coefficients (i.e. fractions), this automatically gives the complex variety a group of symmetries, called the Galois group. Although these symmetries are not continuous (i.e. nearby points can be sent far apart), they preserve something called the etale topology. This is an abstract concept which looks somewhat unnatural, but behaves well enough to preserve many of the topological features of the variety. Part of my project will involve investigating how the Galois group interacts with the etale topology. I also study algebraic varieties in finite and mixed characteristics. Finite characteristics are universes in which the rules of arithmetic are modified by choosing a prime number p and setting it to zero. For instance, in characteristic 3 the equation 1+1+1=0 holds. In mixed characteristic, p need not be 0, but the sequence 1, p, p^2, p^3, ... converges to 0 (a short worked illustration of this arithmetic is given after this entry). Although classical geometry of varieties does not make sense in finite and mixed characteristics, the etale topology provides a suitable alternative, allowing us to gain much valuable insight into the behaviour of the Galois group. This is an area which I find fascinating, as much topological intuition still works in contexts far removed from real and complex geometry. Indeed, many results in complex geometry have been motivated by phenomena observed in finite characteristic. Moduli spaces parametrise classes of geometric objects, and can themselves often be given geometric structures, similar to those of algebraic varieties. This structure tends to misbehave at points parametrising objects with a lot of symmetry. To obviate this difficulty, algebraic geometers work with moduli stacks, which parametrise the symmetries as well as the objects. Sometimes the symmetries can themselves have symmetries, and so on, giving rise to infinity stacks. Usually, the dimension of a moduli stack can be calculated by naively counting the degrees of freedom in defining the geometric object it parametrises. However, the space usually contains singularities (points where the space is not smooth), and regions of different dimensions. Partially inspired by ideas from theoretical physics, it has been conjectured that every moduli stack can be extended to a derived moduli stack, which would have the expected dimension, but with some of the dimensions only virtual. Extending to these virtual dimensions also removes the singularities, a phenomenon known as hidden smoothness. Different classification problems can give rise to the same moduli stack, but different derived moduli stacks. Much of my work will be to try to construct derived moduli stacks for a large class of problems. This has important applications in algebraic geometry, as there are many problems for which the moduli stacks are unmanageable, but which should become accessible using derived moduli stacks.
    I will also seek to investigate the geometry and behaviour of derived stacks themselves. A common thread through the various aspects of my project will be to find ways of applying powerful ideas and techniques from a branch of topology, namely homotopy theory, in contexts where they would not, at first sight, appear to be relevant.

    Views: 2 · Downloads: 1
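
As flagged in the abstract above, here is a short worked illustration of the arithmetic it alludes to. It assumes the standard interpretations that the abstract does not name explicitly: characteristic 3 arithmetic takes place in the finite field F_3, and the convergence statement in mixed characteristic refers to the p-adic absolute value on the p-adic integers.

```latex
% Characteristic 3: in the finite field $\mathbb{F}_3 = \{0, 1, 2\}$,
\[
  1 + 1 + 1 = 3 \equiv 0 \pmod{3}.
\]
% Mixed characteristic: in the $p$-adic integers $\mathbb{Z}_p$, the prime $p$
% is not zero, but its powers become small in the $p$-adic absolute value,
\[
  |p^n|_p = p^{-n} \longrightarrow 0 \quad\text{as } n \to \infty,
  \qquad\text{so the sequence } 1,\, p,\, p^2,\, p^3,\,\dots \text{ converges to } 0.
\]
```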
  • Funder: UKRI Project Code: EP/K028286/1
    Funder Contribution: 278,462 GBP

    Society faces a number of major challenges due to the impact of global warming on world climate. One consequence is the spread of otherwise rare and poorly characterised viral infections into economically advanced areas of the world. Examples include Bluetongue virus, which arrived in the UK after years of being restricted to much warmer climates. This poses a threat to public and animal health from both existing viruses and newly emerging ones. A major problem in the design of anti-viral therapies is the emergence of viral strains that are resistant to anti-viral drugs soon after initial treatment. Research into the mechanisms that could prevent such viral escape is therefore urgently required in order to develop therapeutics with long-term action. Moreover, viruses can evolve strains that cross the species barrier, for example from an animal to a human host as in the case of bird flu, and it is important to be able to develop strategies to prevent this. Insights into virus evolution could shed light on both issues. In particular, we need to better understand the constraints that viruses face when their genomes evolve, and find ways of predicting such evolutionary behaviour. In our previous research we have gained fundamentally new insights into the constraints underlying virus structure and function. In an interdisciplinary research programme, combining the modelling expertise of the Twarock group at the York Centre for Complex Systems Analysis at the University of York with the experimental know-how of the Stockley and Rowlands labs at the Astbury Centre for Structural Molecular Biology in Leeds, we investigate here their impact on the evolution of viruses, working with a number of viruses including the Picornaviridae, a family that contains important human and animal viruses such as foot-and-mouth virus. Our research programme aims at improving our understanding of the factors that determine the evolutionary behaviour of viruses, and we will use these results to explore strategies to misdirect viral evolution. In particular, we will assess in which ways the structural constraints we have discovered earlier lead to evolutionary bottlenecks, i.e. correspond to constraints that viral escape mutants cannot avoid, and that a new generation of anti-viral therapeutics could target. Moreover, we plan to develop methods to predict how viruses may react to a drug, and use this to test the impact of different anti-viral strategies. This research has the potential to lead to a new generation of "evolutionarily stable" therapeutics that are less susceptible to the problem of escape mutants.

    Views: 28 · Downloads: 31
  • Funder: UKRI Project Code: EP/L504671/1
    Funder Contribution: 637,523 GBP

    The development of a solid-state, radiation-hard, high-temperature sensor for neutron and gamma detection has many potential uses. With long-term reliability suitable for use in nuclear power generation plant, high-energy physics, synchrotron facilities, medical devices and national resilience, solid-state diamond devices are an obvious choice. Diamond eliminates the need to use helium-3 and is very radiation hard. However, diamond is an expensive synthetic material that is challenging to process reliably, so work is needed on the use of less expensive poly-crystalline diamond. Areas of innovation include precise laser cutting and plasma processing of diamond to improve the production of multi-layer devices for neutron detection. Diamond polishing needs to be improved and understood so that optimal and economic devices can be manufactured. Advanced electron microscopy techniques, nano-mechanical and tensile testing, and radiation testing, as well as high-temperature neutron performance and mechanical stability, will be demonstrated to show how this technology can be applied successfully to future power plant designs and radiation monitoring.

Advanced search in
Projects
arrow_drop_down
Searching FieldsTerms
Any field
arrow_drop_down
includes
arrow_drop_down
377 Projects
  • Funder: UKRI Project Code: EP/K023349/1
    Funder Contribution: 1,780,200 GBP

    This proposal brings together a critical mass of scientists from the Universities of Cardiff, Lancaster, Liverpool and Manchester and clinicians from the Christie, Lancaster and Liverpool NHS Hospital Trusts with the complementary experience and expertise to advance the understanding, diagnosis and treatment of cervical, oesophageal and prostate cancers. Cervical and prostate cancer are very common and the incidence of oesophageal is rising rapidly. There are cytology, biopsy and endoscopy techniques for extracting tissue from individuals who are at risk of developing these diseases. However the analysis of tissue by the standard techniques is problematic and subjective. There is clearly a national and international need to develop more accurate diagnostics for these diseases and that is a primary aim of this proposal. Experiments will be conducted on specimens from all three diseases using four different infrared based techniques which have complementary strengths and weaknesses: hyperspectral imaging, Raman spectroscopy, a new instrument to be developed by combining atomic force microscopy with infrared spectroscopy and a scanning near field microscope recently installed on the free electron laser on the ALICE accelerator at Daresbury. The latter instrument has recently been shown to have considerable potential for the study of oesophageal cancer yielding images which show the chemical composition with unprecedented spatial resolution (0.1 microns) while hyperspectral imaging and Raman spectroscopy have been shown by members of the team to provide high resolution spectra that provide insight into the nature of cervical and prostate cancers. The new instrument will be installed on the free electron laser at Daresbury and will yield images on the nanoscale. This combination of techniques will allow the team to probe the physical and chemical structure of these three cancers with unprecedented accuracy and this should reveal important information about their character and the chemical processes that underlie their malignant behavior. The results of the research will be of interest to the study of cancer generally particularly if it reveals feature common to all three cancers. The infrared techniques have considerable medical potential and to differing extents are on the verge of finding practical applications. Newer terahertz techniques also have significant potential in this field and may be cheaper to implement. Unfortunately the development of cheap portable terahertz diagnositic instruments is being impeded by the weakness of existing sources of terahertz radiation. By exploiting the terahertz radiation from the ALICE accelerator, which is seven orders of magnitude more intense that conventional sources, the team will advance the design of two different terahertz instruments and assess their performance against the more developed infrared techniques in cancer diagnosis. However before any of these techniques can be used by medical professionals it is essential that their strengths and limitations of are fully understood. This is one of the objectives of the proposal and it will be realised by comparing the results of each technique in studies of specimens from the three cancers that are the primary focus of the research. This will be accompanied by developing data basis and algorithms for the automated analysis of spectral and imaging data thus removing subjectivity from the diagnostic procedure. 
Finally the team will explore a new approach to monitoring the interactions between pathogens, pharmaceuticals and relevant cells or tissues at the cellular and subcellular level using the instruments deployed on the free electron laser at Daresbury together with Raman microscopy. If this is successful, it will be important in the longer term in developing new treatments for cancer and other diseases.

    visibility118
    visibilityviews118
    downloaddownloads525
    Powered by Usage counts
    more_vert
  • Funder: UKRI Project Code: EP/J010790/1
    Funder Contribution: 613,852 GBP

    String theory is believed to be a theory capable of describing all the known forces of nature, and provides a solution to the venerable problem of finding a theory of gravity consistent with quantum mechanics. To a first approximation, the world we observe corresponds to a vacuum of this theory. String theory admits many of these vacuum states and the class that is most likely to describe the observed world are the so-called `heterotic vacua'. Analysing these vacua requires the application of sophisticated tools drawn from mathematics, particularly from algebraic geometry. If history is any guide, the synthesis of these mathematical tools with observations drawn from physics will lead not only to significant progress in physics, but also important advances in mathematics. An example of such a major insight in mathematics, that arose from string theory, is mirror symmetry. This is the observation that within in a restricted class of string vacua, these arise in `mirror pairs'. This has the consequence that certain mathematical quantities, which are both important and otherwise mysterious, can be calculated in a straightforward manner. The class of heterotic vacua, of interest here, are a wider class of vacua, and an important question is to what extent mirror symmetry generalises and how it acts on this wider class. In a more precise description, the space of heterotic vacua is the parameter space of pairs (X,V) where X is a Calabi-Yau manifold and V is a stable holomorphic vector bundle on X. This space is a major object of study in algebra and geometry. String theory tells us that it is subject to quantum corrections. To understand the nature of these corrections is the key research problem in this proposal and any advance in our understanding will have a important impact in both mathematics and physics. By now it is widely understood that string theory and geometry are intimately related with much to be learned from each other, yet this relationship is relatively unexplored in the heterotic string. This fact, together with recent developments that indicate that longstanding problems have recently become tractable, means that the time is right to revisit the geometry of heterotic vacua.

    more_vert
  • Funder: UKRI Project Code: EP/J019844/1
    Funder Contribution: 263,385 GBP

    Organic molecular monolayers at surfaces often constitute the central working component in nanotechnologies such as sensors, molecular electronics, smart coatings, organic solar cells, catalysts, medical devices, etc. A central challenge in the field is to achieve controlled creation of desired 2D molecular architectures at surfaces. Within this context, the past decade has witnessed a real and significant step-change in the 'bottom-up' self-organisation of 2D molecular assemblies at surfaces. The enormous variety and abundance of molecular structures formed via self-oeganisation has now critically tipped the argument strongly in favour of a 'bottom-up' construction strategy, which harnesses two powerful attributes of nanometer-precision (inaccessible to top-down methods) and highly parallel fabrication (impossible with atomic/molecular manipulation). Thus, bottom-up molecular assembly at surfaces holds the real possibility of becoming a dominating synthesis protocol in 21st century nanotechnologies Uniquely, the scope and versatility of these molecular architectures at 2D surfaces have been directly captured at the nanoscale via imaging with scanning probe microscopies and advanced surface spectroscopies. At present, however, the field is largely restricted to a 'make and see' approach and there is scarce understanding of any of the parameters that ultimately control molecular surface assembly. For example: (1) molecular assemblies at surfaces show highly polymorphic behaviour, and a priori control of assembly is practically non-existent; (2) little is understood of the influence and balance of the many interactions that drive molecular recognition and assembly (molecule-molecule interactions including dispersion, directional H-bonding and strong electrostatic and covalent interactions); (3) the role of surface-molecule interactions is largely uncharted even though they play a significant role in the diffusion of molecules and their subsequent assembly; (4), there is ample evidence that the kinetics of self-assembly is the major factor in determining the final structure, often driving polymorphic behaviour and leading to widely varied outcomes, depending on the conditions of formation; (5) a gamut of additional surface phenomena also also influence assembly e.g. chemical reactions between molecules, thermally activated internal degrees of freedom of molecules, surface reconstructions and co-assembly via coordinating surface atoms. The main objective of this project is to advance from experimental phenomena-reporting to knowledge-based design, and its central goal is to identify the role played by thermodynamic, entropic, kinetic and chemical factors in dictating molecular organisation at surfaces under given experimental conditions. To address this challenge requires a two-pronged approach in which ambitious and comprehensive theory development is undertaken alongside powerful imaging and spectroscopic tools applied to the same systems. This synergy of experiment and theory is absolutely essential to develop a fundamental understanding, which would enable a roadmap for controlled and engineered self-assembly at surfaces to be proposed that would, ultimately, allow one to 'dial up' a required structure at will. Four important and qualitatively different classes of assembly at surfaces will be studied: Molecular Self-Assembly; Hierarchical Self-Assembly; Metal-Organic Self Assembly; and, on-surface Covalent Assembly.

    visibility33
    visibilityviews33
    downloaddownloads35
    Powered by Usage counts
    more_vert
  • Funder: UKRI Project Code: EP/K011693/1
    Funder Contribution: 300,568 GBP

    It is reported that the total energy consumed by the ICT infrastructure of wireless and wired networks takes up over 3 percent of the worldwide electric energy consumption that generated 2 percent of the worldwide CO2 emissions nowadays. It is predicted that in the future a major portion of expanding traffic volumes will be in wireless side. Furthermore, future wireless network systems (e.g., 4G/B4G) are increasingly demanded as broadband and high-speed tailored to support reliable Quality of Service (QoS) for numerous multimedia applications. With explosive growth of high-rate multimedia applications (e.g. HDTV and 3DTV), more and more energy will be consumed in wireless networks to meet the QoS requirements. Specifically, it is predicted that footprint of mobile wireless communications could almost triple from 2007 to 2020 corresponding to more than one-third of the present annual emissions of the whole UK. Therefore, energy-efficient green wireless communications are paid increasing attention given the limited energy resources and environment-friendly transmission requirements globally. The aim of this project is to improve the joint spectrum and energy efficiency of future wireless network systems using cognitive radio technology along with innovative game-theoretic resource scheduling methods, efficient cross-layer designs and contemporary clinical findings. We plan to consider the health and environmental concerns to introduce power-efficient resource scheduling designs that intelligently exploit the available wireless resources in next-generation systems. Our efforts will leverage applications of cognitive radio techniques to situational awareness of the communications system with adaptive power control and dynamic spectrum allocation. This project will underpin the UK green communication technology by designing environment-friendly joint power and spectrum efficient wireless communication systems.

    visibility48
    visibilityviews48
    downloaddownloads482
    Powered by Usage counts
    more_vert
  • Funder: UKRI Project Code: EP/K011588/1
    Funder Contribution: 255,173 GBP

    Differential geometry is the study of "smooth shapes", e.g. curved surfaces that have no rough edges or sharp bends. A surface is a 2-dimensional object, and one can similarly imagine smooth shapes that are 1-dimensional, such as a line, or curve, or circle. What is much harder to imagine, but can nonetheless be described in precise mathematical terms, is a smooth shape in an arbitrary number of dimensions: these objects are called "manifolds". A specific example of a 2-dimensional manifold is a disk, i.e. the region inside a circle, and its "boundary" is a 1-dimensional manifold, namely the circle. Similarly, for any positive integer n, an n-dimensional manifold may have a boundary which is an (n-1)-dimensional manifold. All the 3-dimensional manifolds that we can easily picture are of this type: e.g. if we imagine any surface in 3-dimensional space, such as a sphere or a "torus" (the shape of the surface of a doughnut), then the region inside that surface is a 3-dimensional manifold whose boundary is the surface. We can now ask one of the most basic questions concerning manifolds: given an n-dimensional manifold, is it the boundary of something? This is actually not just a geometric question, but really a question of "topology", which is a certain way of studying the "overall shape" of geometric objects. As in the example given above, most 2-dimensional manifolds that we can easily imagine are boundaries of the 3-dimensional regions they enclose. But for a more interesting example, we can try to imagine a "Klein bottle": this is a surface formed by taking an ordinary bottle and bending its opening around and through the glass into the inside, then connecting the opening to the floor of the bottle by curving the floor upward. The result is a surface that is not a boundary of anything, as its inside is not distinct from its outside; like a Moebius strip, but closed in on itself. The subject of this proposal concerns a more elaborate version of the above question about boundaries: we deal with a particular type of manifold in an even number of dimensions, called "symplectic" manifolds, and their odd-dimensional boundaries are called "contact" manifolds. The idea of a symplectic manifold comes originally from physics: a century ago, symplectic manifolds were understood to be the natural geometric setting in which to study Hamilton's 19th century reformulation of Newton's classical mechanics. Today symplectic manifolds are considered interesting in their own right, and they retain a connection to physics, but of a very different and non-classical sort: by studying certain special surfaces in symplectic manifolds with contact boundary, one can define a so-called "Symplectic Field Theory" (or "SFT" for short), which bears a strong but mysterious resemblance to some of the theories that modern physics uses to describe elementary particles and their interactions. Unlike those theories, SFT does not help us to predict what will happen in a particle accelerator, but it can help us answer a basic question in the area of "Symplectic and Contact Topology": given a contact manifold, is it the boundary of any symplectic manifold? More generally, one way to study contact manifolds themselves is to consider the following relation: we say that two such manifolds are "symplectically cobordant" if they form two separate pieces of the boundary of a symplectic manifold. 
The question of whether two given contact manifolds are cobordant helps us understand what kinds of contact manifolds can exist in the first place, and Symplectic Field Theory is one of the most powerful methods we have for studying this. The goal of this project is thus to use this and related tools to learn as much as we can about the symplectic cobordism relation on contact manifolds. Since most previous results on this subject have focused on 4-dimensional manifolds with 3-dimensional boundaries, we aim especially to gain new insights in higher dimensions.

    more_vert
  • Funder: UKRI Project Code: EP/K018868/1
    Funder Contribution: 556,849 GBP

    We are all familiar with the language of classical logic, which is normally used for both mathematical and informal arguments, but there are other important and useful logics. Some nonclassical logics, for example, can be associated with programming languages to help control the behaviour of their programs, for instance via type systems. In order to define the proofs of a logic we need a proof system consisting of a formal language and some inference rules. We normally design proof systems following the prescriptions of some known formalism that ensures that we obtain desirable mathematical properties. In any case, we must make sure that proofs can be checked for validity with a computational effort that does not exceed certain limits. In other words, we want checking correctness to be relatively easy, also because this property facilitates the design of algorithms for the automatic discovery of proofs. However, there is a tension between the ease by which proofs can be checked and their size. If a proof is too small, checking it is difficult. Conversely, formalisms that make it very easy to check and to search for proofs create big bureaucratic unnatural proofs. All traditional proof systems suffer to various extents from this problem, because of the rigidity of all traditional formalisms, which impose an excess of structure on proof systems. We intend to design a formalism, provisionally called Formalism B, in which arbitrary logics can be defined and their proofs described in a way that is at the same time efficient and natural. Formalism B will ideally lie at the boundary between the class of proof systems and that of systems containing proto-proofs that are small and natural, but are too difficult to check. In other words, we want to maximise naturality by retaining as much efficiency as possible in proof representation. A driving force in this effort will be the use of existing algebraic theories that seem to capture some of the structure needed by the new formalism. There are two main reasons for doing this. One is theoretical: the problem is compelling, and tackling it fits well into a research effort in the theory of computation that tries to define proofs as more abstract mathematical objects than just intricate pieces of syntax. Suffice to say that we are at present unable to decide by an algorithm when two seemingly different proofs of the same statement use the same ideas and so are equivalent, or not. This is a problem that dates back to Hilbert and that requires more abstract ways to describe proofs than traditional syntax provides. The second reason is practical: we need formal proofs to verify the correctness of complex computer systems. The more powerful computer systems become, the more we need to ensure that they do what they are supposed to do. Formal verification is increasingly adopted as a viable instrument for this, but still much needs to be done in order to make it an everyday tool. We need to invent proof systems that simplify the interaction with proof assistants, and that could represent in some canonical way proofs that are essentially the same, so that no duplication occurs in the search for and storing of proofs in proof libraries. This project intends to contribute by exploiting proof-theoretic advances of the past ten years. We have developed a new design method for proof systems that reduces the size of inference steps to their minimal terms, in a theory called `deep inference'. 
The finer scale of rules allows us to associate proofs with certain purely geometric objects that faithfully abstract away proof structure, and that are natural guides for the design of proof systems whose proofs would not suffer from several forms of bureaucracy. In short, after a decade in which we broke proofs into their smallest pieces, by retaining their properties, we are now reshaping them in such a way that they still retain all the features we need but do not keep the undesirable ones.

    more_vert
  • Funder: UKRI Project Code: EP/K009850/1
    Funder Contribution: 158,970 GBP

    We are in the midst of an information revolution, where advances in science and technology, as well as the day-to-day operation of successful organisations and businesses, are increasingly reliant on the analyses of data. Driving these advances is a deluge of data, which is far outstripping the increase in computational power available. The importance of managing, analysing, and deriving useful understanding from such large scale data is highlighted by high-profile reports by McKinsey and The Economist as well as other outlets, and by the EPSRC's recent ICT priority of "Towards an Intelligent Information Infrastructure". Bayesian analysis is one of the most successful family of methods for analysing data, and one now widely adopted in the statistical sciences as well as in AI technologies like machine learning. The Bayesian approach offers a number of attractive advantages over other methods: flexibility in constructing complex models from simple parts; fully coherent inferences from data; natural incorporation of prior knowledge; explicit modelling assumptions; precise reasoning of uncertainties over model order and parameters; and protection against overfitting. On the other hand, there is a general perception that they can be too slow to be practically useful on big data sets. This is because exact Bayesian computations are typically intractable, so a range of more practical approximate algorithms are needed, including variational approximations, sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). MCMC methods arguably form the most popular class of Bayesian computational techniques, due to their flexibility, general applicability and asymptotic exactness. Unfortunately, MCMC methods do not scale well to big data sets, since they require many iterations to reduce Monte Carlo noise, and each iteration already involves an expensive sweep through the whole data set. In this project we propose to develop the theoretical foundations for a new class of MCMC inference procedures that can scale to billions of data items, thus unlocking the strengths of Bayesian methods for big data. The basic idea is to use a small subset of the data during each parameter update iteration of the algorithm, so that many iterations can be performed cheaply. This introduces excess stochasticity in the algorithm, which can be controlled by annealing the update step sizes towards zero as the number of iterations increases. The resulting algorithm is a cross between an MCMC and a stochastic optimization algorithm. An initial exploration of this procedure, which we call stochastic gradient Langevin dynamics (SGLD), was initiated by us recently (Welling and Teh, ICML 2011). Our proposal is to lay the mathematical foundations for understanding the theoretical properties of such stochastic MCMC algorithms, and to build on these foundations to develop more sophisticated algorithms. We aim to understand the conditions under which the algorithm is guaranteed to converge, and the type and speed of convergence. Using this understanding, we aim to develop algorithmic extensions and generalizations with better convergence properties, including preconditioning, adaptive and Riemannian methods, Hamiltonian Monte Carlo methods, Online Bayesian learning methods, and approximate methods with large step sizes. 
These algorithms will be empirically validated on real world problems, including large scale data analysis problems for text processing and collaborative filtering which are standard problems in machine learning, and large scale data from ID Analytics, a partner company interested in detecting identity theft and fraud.

    visibility10
    visibilityviews10
    downloaddownloads75
    Powered by Usage counts
    more_vert
  • Funder: UKRI Project Code: EP/I004130/2
    Funder Contribution: 322,634 GBP

    In homotopy theory, topological spaces (i.e. shapes) are regarded as being the same if we can deform continuously from one to the other. Algebraic varieties are spaces defined by polynomial equations, often over the complex numbers; studying their homotopy theory means trying to tell which topological spaces can be deformed continuously to give algebraic varieties, or when a continuous map between algebraic varieties can be continuously deformed to a map defined by polynomials. If the polynomials defining a variety have rational numbers (i.e. fractions) as coefficients, this automatically gives the complex variety a group of symmetries, called the Galois group. Although these symmetries are not continuous (i.e. nearby points can be sent far apart), they preserve something called the etale topology. This is an abstract concept which looks somewhat unnatural, but behaves well enough to preserve many of the topological features of the variety. Part of my project will involve investigating how the Galois group interacts with the etale topology.
    I also study algebraic varieties in finite and mixed characteristics. Finite characteristics are universes in which the rules of arithmetic are modified by choosing a prime number p and setting it to zero: for instance, in characteristic 3 the equation 1+1+1=0 holds. In mixed characteristic, p need not be 0, but the sequence 1, p, p^2, p^3, ... converges to 0 (a short worked illustration of this arithmetic follows this summary). Although the classical geometry of varieties does not make sense in finite and mixed characteristics, the etale topology provides a suitable alternative, allowing us to gain much valuable insight into the behaviour of the Galois group. This is an area which I find fascinating, as much topological intuition still works in contexts far removed from real and complex geometry. Indeed, many results in complex geometry have been motivated by phenomena observed in finite characteristic.
    Moduli spaces parametrise classes of geometric objects, and can themselves often be given geometric structures similar to those of algebraic varieties. This structure tends to misbehave at points parametrising objects with a lot of symmetry. To obviate this difficulty, algebraic geometers work with moduli stacks, which parametrise the symmetries as well as the objects. Sometimes the symmetries can themselves have symmetries, and so on, giving rise to infinity stacks. Usually, the dimension of a moduli stack can be calculated by naively counting the degrees of freedom in defining the geometric object it parametrises. However, the space usually contains singularities (points where the space is not smooth) and regions of different dimensions. Partially inspired by ideas from theoretical physics, it has been conjectured that every moduli stack can be extended to a derived moduli stack, which would have the expected dimension, but with some of the dimensions only virtual. Extending to these virtual dimensions also removes the singularities, a phenomenon known as hidden smoothness. Different classification problems can give rise to the same moduli stack but different derived moduli stacks. Much of my work will be to construct derived moduli stacks for a large class of problems. This has important applications in algebraic geometry, as there are many problems for which the moduli stacks are unmanageable, but which should become accessible using derived moduli stacks. I will also seek to investigate the geometry and behaviour of derived stacks themselves.
    A common thread through the various aspects of my project will be to find ways of applying powerful ideas and techniques from a branch of topology, namely homotopy theory, in contexts where they would not, at first sight, appear to be relevant.
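    The characteristic-p arithmetic mentioned above can be made concrete with a brief worked note (added here for illustration; it is not part of the original project summary). The notation |x|_p below is the standard p-adic absolute value, which makes precise the sense in which powers of p converge to 0.

% In characteristic 3 (arithmetic modulo 3), multiples of 3 vanish:
\[ 1 + 1 + 1 = 3 \equiv 0 \pmod{3}, \qquad 2 + 2 = 4 \equiv 1 \pmod{3}. \]
% In mixed characteristic, p itself is not 0, but its powers become small:
% measuring size with the p-adic absolute value |x|_p, one has
\[ |p^n|_p = p^{-n} \longrightarrow 0 \qquad (n \to \infty), \]
% so the sequence 1, p, p^2, p^3, ... converges to 0 p-adically.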

    Usage counts: 2 views, 1 download
  • Funder: UKRI Project Code: EP/K028286/1
    Funder Contribution: 278,462 GBP

    Society faces a number of major challenges due to the impact of global warming on the world's climate. One consequence is the spread of otherwise rare and poorly characterised viral infections into economically advanced parts of the world; examples include Bluetongue virus, which arrived in the UK after years of being restricted to much warmer climates. This poses a threat to public and animal health from both existing and newly emerging viruses. A major problem in the design of anti-viral therapies is the emergence of viral strains that are resistant to anti-viral drugs soon after initial treatment, so research into the mechanisms that could prevent such viral escape is urgently required in order to develop therapeutics with long-term action. Moreover, viruses can evolve strains that cross the species barrier, for example from an animal to a human host as in the case of bird flu, and it is important to develop strategies to prevent this. Insights into virus evolution could shed light on both issues. In particular, we need to better understand the constraints that viruses face when their genomes evolve, and find ways of predicting such evolutionary behaviour.
    In our previous research we have gained fundamentally new insights into the constraints underlying virus structure and function. In an interdisciplinary research programme combining the modelling expertise of the Twarock group at the York Centre for Complex Systems Analysis at the University of York with the experimental know-how of the Stockley and Rowlands Labs at the Astbury Centre for Structural Molecular Biology in Leeds, we will investigate the impact of these constraints on the evolution of viruses, working with a number of viruses including the picornaviridae, a family that contains important human and animal pathogens such as foot-and-mouth disease virus. Our research programme aims to improve our understanding of the factors that determine the evolutionary behaviour of viruses, and we will use these results to explore strategies to misdirect viral evolution. In particular, we will assess the ways in which the structural constraints we discovered earlier lead to evolutionary bottlenecks, i.e. constraints that viral escape mutants cannot avoid and that a new generation of anti-viral therapeutics could target (a toy numerical illustration of this idea follows this summary). Moreover, we plan to develop methods to predict how viruses may react to a drug, and to use these to test the impact of different anti-viral strategies. This research has the potential to lead to a new generation of "evolutionarily stable" therapeutics that are less susceptible to the problem of escape mutants.
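    The notion of an evolutionary bottleneck can be illustrated with a deliberately simple toy simulation (an illustrative sketch only, not the project's actual model; the two-scenario setup, population size, mutation rate and fitness penalty are all hypothetical). A drug aimed at a freely mutable target site is escaped within a few generations, whereas a drug aimed at a structurally constrained site, where any mutation is assumed lethal, cannot be escaped.

# Toy illustration (hypothetical, not the project's model): drug pressure on an
# unconstrained target site versus a structurally constrained site where every
# mutation is assumed lethal (an "evolutionary bottleneck").
import random

def simulate(constrained, generations=200, pop_size=1000,
             mutation_rate=1e-3, drug_cost=0.9, seed=0):
    """Return the final fraction of drug-resistant virus in a toy population.

    constrained -- if True, mutations at the drug-target site are lethal,
                   so resistance can never arise (the bottleneck case).
    drug_cost   -- fractional fitness penalty the drug imposes on the
                   drug-sensitive (unmutated) genotype.
    """
    rng = random.Random(seed)
    resistant = 0                          # number of resistant genomes
    for _ in range(generations):
        sensitive = pop_size - resistant
        # Mutation step: each sensitive genome may mutate its target site.
        mutants = sum(1 for _ in range(sensitive) if rng.random() < mutation_rate)
        if constrained:
            mutants = 0                    # mutation at this site is lethal
        resistant += mutants
        sensitive = pop_size - resistant
        # Selection step: the drug penalises the sensitive genotype.
        w_sensitive = sensitive * (1.0 - drug_cost)
        w_resistant = float(resistant)
        total = w_sensitive + w_resistant
        if total == 0:
            return 0.0
        # Resample the fixed-size population in proportion to fitness.
        resistant = sum(1 for _ in range(pop_size)
                        if rng.random() < w_resistant / total)
    return resistant / pop_size

if __name__ == "__main__":
    print("unconstrained target:", simulate(constrained=False))  # escape spreads
    print("constrained target:  ", simulate(constrained=True))   # no escape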

    Usage counts: 28 views, 31 downloads
  • Funder: UKRI Project Code: EP/L504671/1
    Funder Contribution: 637,523 GBP

    A solid-state, radiation-hard, high-temperature sensor for neutron and gamma detection has many potential uses. With the long-term reliability needed for nuclear power generation plant, high-energy physics, synchrotron facilities, medical devices and national resilience, solid-state diamond devices are an obvious choice: diamond eliminates the need to use helium-3 and is very radiation hard. However, diamond is an expensive synthetic material that is challenging to process reliably, so work is needed on the use of less expensive polycrystalline diamond. Areas of innovation include precise laser cutting and plasma processing of diamond to improve the production of multi-layer devices for neutron detection, and diamond polishing needs to be better understood and improved so that optimal, economical devices can be manufactured. Advanced electron microscopy, nano-mechanical and tensile testing, radiation testing, and measurements of high-temperature neutron performance and mechanical stability will be used to demonstrate how this technology can be applied successfully to future power plant designs and radiation monitoring.
