1,532 Projects, page 1 of 154

  • Funder: UKRI Project Code: EP/F008384/1
    Funder Contribution: 273,172 GBP
    Partners: University of St Andrews

    Hydrogen is considered a promising alternative automotive fuel, as the only combustion product is water. In the petrochemical industry, hydrogen is a byproduct which can be found in many process streams and which is sometimes burnt as waste. This project aims at designing porous materials that can recover and purify hydrogen from industrial gas streams. The different molecules present in a process stream interact differently with the internal surface of the porous solids (this process is called adsorption) and can therefore be selectively removed. For this project, we will be using metal-organic frameworks (MOFs), materials synthesised in a building-block approach from corner units and linkers. The properties of MOFs can be changed by using different building blocks, offering the possibility to fine-tune the interactions between the gas molecules and the surface. In this project we will be designing MOFs tailored to hydrogen purification. For this, we will use an integrated approach that combines skills from chemistry and chemical engineering, including the computer simulation of the synthesis of MOFs and of their adsorption performance, the actual synthesis of the materials, and the evaluation of their structure and their performance under industrially relevant conditions. In addition to the technical objectives of the project, we will be training researchers who are capable of carrying out research at this important interface between chemistry and chemical engineering. The researchers will learn how chemistry and chemical engineering research can be integrated effectively, and will therefore be able to work in mixed teams of scientists and engineers.
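
    The selective-removal idea can be sketched with a toy competitive Langmuir isotherm. This is a minimal model, and the affinity constants and partial pressures below are invented for illustration, not values from the project:

```python
# Toy competitive-Langmuir model of selective adsorption in a porous solid.
# All parameters are illustrative placeholders, not values from this project.

def loadings(b1, p1, b2, p2):
    """Fractional surface loadings of two competing gases (Langmuir model).

    b1, b2: affinity constants (1/bar); p1, p2: partial pressures (bar).
    """
    denom = 1.0 + b1 * p1 + b2 * p2
    return b1 * p1 / denom, b2 * p2 / denom

def selectivity(b1, p1, b2, p2):
    """Adsorption selectivity of gas 1 over gas 2: (q1/q2) / (p1/p2)."""
    q1, q2 = loadings(b1, p1, b2, p2)
    return (q1 / q2) / (p1 / p2)

# A 30% H2 / 70% CO2 stream at 10 bar total: if CO2 (b = 0.5 1/bar) binds
# much more strongly than H2 (b = 0.05 1/bar), CO2 is preferentially
# adsorbed and an H2-rich product is recovered in the gas phase.
s = selectivity(0.5, 7.0, 0.05, 3.0)
print(f"CO2/H2 adsorption selectivity: {s:.1f}")
```

    For the competitive Langmuir form, the selectivity reduces to the ratio of affinity constants, which is why tuning the MOF surface chemistry (and hence the affinities) directly tunes the separation.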

  • Funder: UKRI Project Code: EP/E06356X/1
    Funder Contribution: 183,238 GBP
    Partners: UEA

    Bulky cobalt metallocenes have recently found application in a number of asymmetric transformations characterised by high enantioselectivity. The further development of this area is hampered by the absence of a general methodology for the synthesis of planar chiral derivatives. A general solution to this problem is proposed involving asymmetric desymmetrisation by either metal catalysed cross-coupling and/or enantioselective halogen-metal exchange. The resulting non-racemic products will be used for the synthesis of representative P, PN and PP ligands, and also for the generation of palladium based and 'organocatalytic' rearrangement catalysts for application in asymmetric synthesis.

  • Funder: UKRI Project Code: EP/F027028/1
    Funder Contribution: 228,108 GBP
    Partners: University of York

    Diophantine approximation is a branch of number theory that can loosely be described as a quantitative analysis of the property that every real number can be approximated by a rational number arbitrarily closely. The theory dates back to the ancient Greeks and Chinese, who used good rational approximations to the number pi (3.14159...) in order to accurately predict the position of planets and stars. The metric theory of Diophantine approximation is the study of the approximation properties of real numbers by rationals from a measure theoretic (probabilistic) point of view. The central theme is to determine whether a given approximation property holds everywhere except on an exceptional set of measure zero. In his pioneering work of 1924, Khintchine established an elegant probabilistic criterion (a 'zero-one' law) in terms of Lebesgue measure for a real number to be approximable by rationals with an arbitrary decreasing (monotonic) error. The error is a function of the size of the denominators of the rational approximates and decreases as the size of the denominators increases. The monotonicity assumption is crucial since the criterion is false otherwise. Under the natural assumption that the rational approximates are reduced (i.e. in their lowest form, so that the error of approximation at a rational point is determined uniquely), the Duffin-Schaeffer conjecture (1941) provides the appropriate expected statement without the monotonicity assumption. It represents one of the most famous unsolved problems in number theory. A major aim is to make significant contributions to this key conjecture by exploiting the recent 'martingale' approach developed by Haynes (the named Research Assistant) and Vaaler. Furthermore, a more general form of the conjecture in which Lebesgue measure is replaced by Hausdorff measure (a fractal quantity) will be investigated. A major outcome will be the Duffin-Schaeffer conjecture for measures close to Lebesgue measure.
The importance of the Duffin-Schaeffer conjecture is unquestionable. However, it does change the underlying nature of the problem considered by Khintchine, in that the rational approximates are reduced. In 1971, Catlin stated a conjecture for the unconstrained problem in which the rationals are not assumed to be reduced. Catlin claimed that his conjecture was equivalent to the Duffin-Schaeffer conjecture. However, his proof contained a serious flaw, and the claim remains an interesting problem in its own right. In higher dimensions, the approximation of arbitrary points in n-dimensional space by rational points (simultaneous approximation) or rational hyperplanes (dual approximation) is the natural generalisation of the one-dimensional theory. Considering a system of linear forms unifies both and naturally gives rise to the linear forms theory. The metric theory of Diophantine approximation is complete for simultaneous approximation in dimension greater than one. The analogues of Khintchine's criterion without any monotonicity assumption (i.e. the simultaneous Catlin conjecture) and the Duffin-Schaeffer conjecture have both been established, as well as the more precise and delicate Hausdorff measure theoretic statements. However, the dual and, more generally, the linear forms theory are far from complete. In this proposal the linear forms analogues of the Duffin-Schaeffer and Catlin conjectures are precisely formulated. A principal goal is to establish these conjectures in dimension greater than one. A novel idea is to develop a 'slicing' technique that reduces a linear forms problem to a well-understood simultaneous problem. The major outcome will be a unified linear forms theory in Euclidean space.
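
    For orientation, the two statements at the heart of this proposal can be written down precisely. These are the standard formulations, stated here for reference rather than quoted from the proposal; psi denotes the error function and phi Euler's totient function:

```latex
% Khintchine's theorem (1924): for monotonic decreasing \psi, the set of
% x in [0,1] with |x - p/q| < \psi(q)/q for infinitely many rationals p/q
% has Lebesgue measure
\lambda = \begin{cases}
  0 & \text{if } \sum_{q=1}^{\infty} \psi(q) < \infty, \\[4pt]
  1 & \text{if } \sum_{q=1}^{\infty} \psi(q) = \infty.
\end{cases}

% Duffin--Schaeffer conjecture (1941): with the approximates reduced,
% \gcd(p,q) = 1, and no monotonicity assumption on \psi, the analogous
% set is conjectured to have full Lebesgue measure if and only if
\sum_{q=1}^{\infty} \frac{\varphi(q)\,\psi(q)}{q} = \infty .
```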

  • Project, 2008 - 2012
    Funder: UKRI Project Code: EP/F040857/1
    Funder Contribution: 1,251,550 GBP
    Partners: University of Glasgow

    Technologies associated with looking at the micro-world are extremely mature, and include a wide variety of microscopies. By contrast, little work has been done to extend our sense of hearing into the micro-world. The purpose of this grant is to develop a basic technology for listening to the micro-world: in a sense, a micro-ear. Just like our own ears, most sound detectors respond to changes in pressure, creating small acoustic forces and a corresponding displacement of a sensor. One extremely sensitive way of measuring force is to compare it against the momentum of a light beam. Tightly focused laser beams are now routinely used to form optical tweezers, which can trap micron-sized beads, overcoming both the thermal and gravitational forces. These tweezers systems are typically built around a microscope and manipulate samples suspended in a fluid medium, such that the technology is highly compatible with biological systems. Using a microscope to observe the bead position allows the measurement of piconewton forces and the corresponding displacement of a few nanometres. The subtle movements of these optically trapped beads will form the basis of our micro-ear. We plan to develop, demonstrate and test a number of different micro-ear approaches. All imaging systems based upon focusing are restricted to scales of a wavelength or so. Even in water, acoustic wavelengths are hundreds of millimetres, making the concept of focusing irrelevant to microscopic systems. However, as evidenced by most wind instruments or antique hearing aids, sub-wavelength horns still work. In this proposal we plan to use microfabrication techniques to produce structures that channel the fluid flow from the emitting object to the sensor bead, providing a method of guiding the pressure wave and, if necessary, amplifying it (e.g. in a flared channel). We will use the optically trapped beads as sensors to measure these forces (as described above).
However, it is important to consider that, at the microscale, the movements of the beads due to an acoustic response may be masked by Brownian motion, and hence distinguishing the real signal from this thermal background will be a major challenge. The key to overcoming the Brownian background will be the use of high-speed cameras to measure the position of many beads simultaneously. Rather than the signal being derived from one bead, it is the correlated motion of the beads that distinguishes the sensor response from the uncorrelated background. We envisage two basic configurations. In the first, simplest case, the beads will be positioned at the ends of defined flared microfluidic structures to measure molecular interactions resulting from mechanical biological systems (molecular motors). Alternatively, we will create a circular array around the test object and measure the radial breathing of the ring. In this latter configuration there is the possibility of being able to make new and exciting biological measurements in a non-contact mode, where we will determine both short- and long-range interactions between cells and surfaces.
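
    The correlated-motion idea can be illustrated numerically. The following is a toy model with invented numbers, not the project's actual analysis: a weak signal shared by all beads survives averaging across beads, while independent Brownian-style noise on each bead is suppressed by roughly the square root of the number of beads.

```python
import math
import random
import statistics

random.seed(0)
n_beads, n_samples = 64, 2048

# A weak common "acoustic" drive shared by every bead...
signal = [0.1 * math.sin(2 * math.pi * k / 128) for k in range(n_samples)]
# ...buried under much larger independent (Brownian-like) noise per bead.
traces = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(n_beads)]

# Average across beads: the correlated signal is preserved, while the
# uncorrelated noise shrinks roughly as 1/sqrt(n_beads).
averaged = [sum(col) / n_beads for col in zip(*traces)]
residual = [a - s for a, s in zip(averaged, signal)]

single_noise = statistics.pstdev(
    [traces[0][k] - signal[k] for k in range(n_samples)])
avg_noise = statistics.pstdev(residual)
print(f"noise suppressed by ~{single_noise / avg_noise:.1f}x "
      f"(expect ~{math.sqrt(n_beads):.0f}x for 64 beads)")
```

    In practice a cross-correlation rather than a plain average would be used, but the scaling argument is the same: it is the shared component of the bead motions that identifies the acoustic response.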

  • Funder: UKRI Project Code: EP/F00656X/1
    Funder Contribution: 266,120 GBP
    Partners: University of Liverpool

    Pierced deep beams and shear walls are widely used in the construction industry. For utility and ease of construction, most openings are rectangular in shape. This leads to stress concentrations which cause cracks to extend from the corners of the openings. The width of the cracks often exceeds the serviceability limits indicated by the code, and in many cases the resulting distortion of the opening causes serviceability problems, resulting in unnecessary maintenance costs. This project aims to develop methods of analysis and design rules for pierced deep beams and shear walls which will permit determination and limitation of serviceability cracking together with ultimate strength, allowing rational design of such components. The aims of the project will be accomplished by first developing a novel numerical model capable of simulating the behaviour of openings in these components under a variety of loading conditions. This numerical model will employ an exciting new computational technique, the scaled boundary finite element method, to permit efficient modelling of the stress concentration, crack initiation and crack propagation from the corners of the openings. The model will be verified by application to full-scale deep beam tests and scale model shear wall tests. This detailed numerical model will be used to evaluate and refine existing consistent strut-tie models for ultimate strength design of pierced deep beams. At the same time simplified approaches for the prediction of crack widths will be formulated. For deep beams with penetrations simple design tables for satisfaction of serviceability criteria will be developed. For shear walls, two approaches will be developed and investigated. The first will be based on pseudo-empirical correlation of moment and shear in the equivalent link beams over the opening (based on the linear elastic frame-type analysis typically used in practice). 
In the second approach a simplified element suitable for inclusion in frame analysis packages will be constructed. This will permit prediction of crack openings directly and allow more accurate analysis of the global effects of reduced stiffness on the structural response. Typical configurations will be investigated and suitable code provisions will be proposed to ensure that existing serviceability criteria are satisfied.

  • Funder: UKRI Project Code: EP/F02844X/1
    Funder Contribution: 462,878 GBP
    Partners: Cardiff University

    In this research programme, it is proposed to investigate the effect of texturing the surface of insulating materials on their performance under applied high voltage and outdoor weather conditions. This follows a Cardiff patent for insulating structures having anti-dry-band properties and initial experimental work which proved the concept. It is proposed to investigate surfaces having protuberances in the shape of hemispheres, arranged in such a way that both the total surface area and the creepage path of the resulting insulation surface are increased significantly. Such texturing can also have the added benefit of improving the hydrophobicity of the surface. Modelling work using analytical and numerical computation techniques will be used to assess the performance of various shapes of texturing with dimensions in the mm and cm ranges. This modelling work will then be used to shortlist candidate textures for experimental work. A vacuum casting facility will be used to prepare flat and insulator samples that will be tested in inclined-plane and pollution-chamber test environments. Various characterisation laboratory tasks, including long-term pollution exposure and ageing investigations, will be undertaken in order to classify the various textures. This will involve synchronising visual, electrical and thermal measurements. It is then proposed to use the order of merit from the flat-sample investigations to select two or three geometries for fabrication of complete insulators. The latter will be used to conduct a comparative test and modelling programme which will demonstrate and assess the superior properties of the textured surface insulator, particularly under polluted conditions. Furthermore, this programme includes investigation tests at an outdoor pollution test station and collaboration with both a UK manufacturer and an overseas academic group.
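
    The claimed increases in surface area and creepage path from hemispherical texturing follow from elementary geometry. A back-of-envelope sketch (not taken from the proposal):

```python
import math

# Hemispherical protuberances: each hemisphere of radius r presents a
# curved area of 2*pi*r^2 over a flat disc of area pi*r^2, i.e. it locally
# doubles the surface area. The coverage fraction below is illustrative.

def area_factor(coverage):
    """Total surface area relative to flat, when a fraction `coverage` of
    the plane is replaced by hemispherical bumps (local area doubling)."""
    return (1 - coverage) + 2 * coverage

def creepage_factor_per_bump():
    """Leakage path over a hemisphere along its diameter (pi*r) relative
    to the straight flat path (2*r)."""
    return math.pi / 2

print(f"area factor at 50% coverage: {area_factor(0.5):.2f}")
print(f"creepage multiplier per bump: {creepage_factor_per_bump():.3f}")
```

    So even a half-covered surface gains 50% in area, and every bump the leakage current must cross stretches its path by a factor of about 1.57, which is the geometric basis for the anti-dry-band behaviour.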

  • Funder: UKRI Project Code: EP/F036019/1
    Funder Contribution: 44,182 GBP
    Partners: OS, University of Leeds

    A passenger in an aircraft requires information about a flight at a very different level of detail from the pilot. A map of an entire country on a computer screen shows far less detail than a map of a single town on the same screen. For certain kinds of data, reducing the level of detail is a relatively well-understood process, but for other kinds this reduction is a challenging problem. This project is concerned with reduction in level of detail for data associated with networks in geographic information systems. Examples of such networks are roads, rivers, railways, electricity distribution networks, etc. Manipulation of level of detail, or granularity, is vitally important for any kind of system for managing processes and detecting events in geographical networks: for example, congestion and accidents on roads, floods in rivers, or terrorist attacks on railways. Such systems require some level of human intervention, and to do this effectively requires the ability to zoom in and out of the data in various ways. Changing the spatial level of detail, or 'scale' in traditional paper-based maps, is only one of the requirements -- it is also necessary to deal with classification of the things represented (ontologies), and with time at different granularities. Features in geographical information are usually classified by what kind of thing they are: here is a house, there is a school and that is a railway station, and they are all buildings. In a large-scale (i.e. detailed) map we generally work with a classification that is itself detailed. Besides showing individual buildings, such maps can make fine distinctions between many different kinds of building. At smaller scales, as the separate buildings merge into undifferentiated built-up areas on the map, the classification becomes coarser too.
The level of detail in classification is termed ontological granularity. If dealing with a map showing, say, traffic flow along streets in a city, we might need to see how levels of traffic vary over a single day, or at a given time over a number of different days. In both of these examples, temporal granularity is involved -- grouping together and selecting periods of time. The challenge that this project addresses is the combination of these three kinds of granularity: the spatial, the ontological, and the temporal. In varying one kind of level of detail, what changes are necessarily imposed on the other kinds of level of detail? Some simple examples are easily understood: if a church and an adjacent house become represented at a smaller scale by a single entity, it might get classified simply as a building. However, general theoretical principles are lacking; the project will develop these and will evaluate them in collaboration with the Ordnance Survey. The principles will be used to specify operations for changing level of detail in network-based geographic data. The evaluation will be based on a major resource for UK network data: the Integrated Transport Network. This is a layer within Ordnance Survey's MasterMap providing two themes: the Roads Network (containing all navigable roads in Great Britain) and Road Routing Information (containing additional information such as one-way streets and other restrictions). The project will also make essential use of the expertise of Professor Michael Worboys, Chair of the Department of Spatial Information Science and Engineering, University of Maine, who will be based in Leeds as a visiting researcher.
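
    The idea of ontological granularity, reclassifying features under a coarser scheme as the map scale shrinks, can be sketched as a simple class-coarsening map. All class names below are invented for illustration and are not from the Ordnance Survey schema:

```python
# Sketch of ontological coarsening: at smaller scales the classification
# no longer distinguishes houses from schools, so each fine-grained class
# is mapped to its parent category. Class names are hypothetical.

COARSEN = {
    "house": "building",
    "school": "building",
    "railway_station": "building",
    "motorway": "road",
    "one_way_street": "road",
}

def coarsen(features):
    """Reclassify (feature_id, fine_class) pairs at coarser granularity;
    classes with no parent entry are left unchanged."""
    return [(fid, COARSEN.get(cls, cls)) for fid, cls in features]

fine = [("f1", "house"), ("f2", "school"), ("f3", "motorway")]
print(coarsen(fine))  # all three collapse into two coarse classes
```

    The project's challenge is that such a map cannot be applied in isolation: changing ontological granularity interacts with spatial merging (adjacent features collapsing into one) and with temporal aggregation.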

  • Funder: UKRI Project Code: DT/F007744/1
    Funder Contribution: 719,422 GBP
    Partners: University of Edinburgh

    Carbon dioxide (CO2) is considered to be a greenhouse gas. The concentration of CO2 in the Earth's atmosphere is an important control on Earth surface temperature, and hence climate. CO2 dissolution in the oceans is also being recognised as an important factor in making surface seawater unusually acidic; this severely affects ecosystems and species from algae to fish and whales. Increased CO2 in the atmosphere is recognised as being partly caused by the burning of fossil fuels, such as coal and gas, in power stations. Carbon Capture and Storage (CCS) is a suite of technologies which enables CO2 to be captured at power stations, liquefied by increasing the pressure, transported by pipe, and injected deep underground into the pore space of deeply buried sedimentary rocks such as sandstones. This can effectively remove CO2 from the power cycle of fossil fuel use and store the CO2 for tens of thousands of years, which enables the Earth's atmosphere to return to normal. Because of the very large CO2 volumes involved, it is not possible to build surface stores. Because of the acid effects of CO2, it is not possible to inject CO2 into seawater. By contrast, the Intergovernmental Panel on Climate Change (IPCC) have calculated that more than 25% of world CO2 emissions could be stored by geological CCS. This could be a vital technology for the world's future. There is a great deal of interest worldwide in CCS and, because of the offshore oil industry, the North Sea is one of the world's prime areas for CCS to be rapidly developed. However, there are only three full-scale projects at present in the world. For UK power generating companies to become commercially interested, the chain of technologies must be both demonstrated to work reliably and capable of cost-effective development.
This project is trying to identify aquifer sites deep underground which are close to power plants in the U.K., where CO2 can be safely stored, and which are quicker and cheaper to develop than offshore sites in the North Sea. This can enable power generating companies to develop CCS over a period of years, on a medium scale, and learn to conduct the industrial operation. If this project is successful, it could lead to the take-up of CCS in the U.K. 10 or 15 years earlier than waiting for an infrastructure of large North Sea pipelines to be developed for CO2. When those pipes become available, UK power companies will be completely ready to connect power plants and store CO2 in large redundant hydrocarbon fields offshore. This could save many tens of millions of tonnes of CO2 per year being emitted into the atmosphere from the U.K., and place the U.K. in the forefront of carbon reduction nations. The universities and companies involved in this 2.3M consortium are all experienced in investigating the deep subsurface for oil and gas production. Edinburgh, Heriot-Watt and BGS already have 1.6M from the Scottish Executive to establish the UK's largest research grouping to investigate CO2 storage. This expertise will be transferred to exploring for CO2 disposal sites. Using the information held by the British Geological Survey, maps will be made of the subsurface deep beneath England, and deep beneath the Forth estuary. Heriot-Watt University will assess the potential chemical reactions of CO2 with rock, and how much CO2 can be injected. Electricity generators, led by Scottish Power, will make engineering designs for modified power stations to supply CO2. Schlumberger and Marathon Oil will assess the subsurface technology required for safe and reliable injection and monitoring. The University of Edinburgh will make computer simulations to determine if CO2 will leak deep below ground, and will assess how specific storage sites will perform in safely retaining CO2.
Amec will evaluate transport of CO2 by pipe. Tyndall will investigate the public attitudes at the candidate storage sites.
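
    For orientation, aquifer storage capacity is usually estimated with the standard volumetric formula M = A x h x phi x E x rho(CO2). The sketch below uses invented round numbers, not figures from this consortium:

```python
# Order-of-magnitude CO2 storage estimate for a saline aquifer, using the
# standard volumetric formula. All inputs are illustrative placeholders.

def storage_capacity_mt(area_km2, thickness_m, porosity, efficiency,
                        rho_co2=700.0):
    """CO2 storage capacity in megatonnes.

    area_km2:   aquifer footprint (km^2)
    thickness_m: net reservoir thickness (m)
    porosity:   pore-volume fraction (0-1)
    efficiency: fraction of pore space usable for CO2 (0-1)
    rho_co2:    density of supercritical CO2 at depth (kg/m^3)
    """
    volume_m3 = area_km2 * 1e6 * thickness_m
    mass_kg = volume_m3 * porosity * efficiency * rho_co2
    return mass_kg / 1e9  # kg -> Mt

# e.g. a 100 km^2 sandstone aquifer, 50 m thick, 20% porosity,
# 2% storage efficiency
print(f"{storage_capacity_mt(100, 50, 0.20, 0.02):.0f} Mt CO2")
```

    Even with a conservative 2% efficiency, a single modest aquifer holds on the order of a decade of emissions from one large power station, which is why onshore aquifers near power plants are attractive stepping stones to full offshore storage.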

  • Funder: UKRI Project Code: EP/G500045/1
    Funder Contribution: 1,213,030 GBP
    Partners: University of Warwick

    We propose to establish a Doctoral Training Centre for Systems Biology at Warwick. The Centre will be an independent unit within Warwick University providing training in the multi-disciplinary skills required for research into biological systems, with students undertaking a dedicated taught MSc course in the first year followed by a PhD in systems biology during years 2-4. It will be set up in completely refurbished premises and managed by senior academic staff from a mix of life sciences and physical sciences Departments.

  • Funder: UKRI Project Code: EP/G009104/1
    Funder Contribution: 81,522 GBP
    Partners: University of Leeds, Intel (Ireland)

    In this project we will prove the principles of fabricating graphene in a form useful for manufacturing nanoscale electronics and fabricate some simple devices. Graphene is a form of carbon discovered in the 21st century: a single two-dimensional sheet of atoms in a hexagonal chicken-wire array. It completes the set of carbon materials, which already had zero-dimensional (buckyballs), one-dimensional (nanotubes), and three-dimensional (graphite) members that are all formed by rolling or stacking up graphene sheets. In its simplest form it can be made by anyone: a pencil trace consists of millions of carbon flakes, and amongst the millions a few will be just one atomic sheet thick. Experiments on these flakes have shown that they have really remarkable properties, particularly for electronic components. The two-dimensional nature of the material, along with the symmetry of the lattice, means that the electrons in the graphene sheet have the same dynamics as relativistic particles such as neutrinos: they are now commonly referred to as massless Dirac fermions, with a new quantum number, chirality, not possessed by free electrons. This has been shown to lead to bizarre new physics, such as finite electrical conductivity without charge carriers and new versions of the quantum Hall effect. Although new nanoelectronic devices based on this novel physics offer exciting possibilities, using graphene can also make marked improvements to present-day technologies. This is because it possesses a higher value of a key material property, the carrier mobility, than any other semiconductor. A simple field-effect CMOS-like transistor, using a graphene flake to form the channel, outperformed Si by more than a factor of ten. A major obstacle to achieving this is that building complex circuits from randomly placed, shaped, and sized flakes is not possible using today's planar fabrication technologies, where reproducibility is key.
What is needed is a uniform layer of graphene coating an entire wafer that can be patterned and processed in the usual way. The most promising way to do this currently seems to be to use the SiC wafers already used commercially in high-power electronics. A proper surface treatment in ultrahigh vacuum preferentially removes silicon atoms, and the carbon atoms that remain reconstruct themselves to form graphene. The promise of wafer-scale device-grade material offers the possibility of not just forming transistor channels out of graphene, but carving entire circuits from a single graphene sheet. At Leeds we have been working on epitaxial graphene production now for roughly a year. We have set up and tested the various surface science instruments that will be needed to show that graphene has indeed formed on the surface of our SiC wafer. Our recent efforts have concentrated on achieving the very high temperatures for the wafer in UHV that are the key step in producing the surface graphene, and after a series of improvements we are now close to reaching those needed. Once we have graphene, we shall optimise its production and start to make electronic devices from it. In this proof-of-principle project we have two main aims: to develop a reliable protocol for forming graphene on SiC wafers in a form useful for scaling up to manufacturing; and to build some simple demonstrator devices to show that this material can be processed into nanoscale devices, including gates that can control a switching action. We will also begin some pilot experiments on connecting magnetic electrodes to graphene devices, with a view to preparing the ground for future projects involving spintronics in graphene (using the electron spin as well as the charge to store and process information), which is potentially a very fertile area, as quantum spin states are very long-lived in graphene, even at room temperature.
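
    The role of mobility can be put into rough numbers with a textbook Drude estimate. The carrier density and mobility below are typical literature-scale values for exfoliated graphene, not results from this project:

```python
# Drude estimate of graphene sheet conductivity, sigma = n * e * mu.
# The carrier density and mobility are typical textbook-scale figures,
# used here purely for illustration.

E_CHARGE = 1.602e-19  # elementary charge, C

def sheet_conductivity(n_cm2, mu_cm2_vs):
    """Sheet conductivity in siemens per square, from carrier density
    n (cm^-2) and mobility mu (cm^2/Vs); the cm^2 factors cancel."""
    return n_cm2 * E_CHARGE * mu_cm2_vs

# n = 1e12 cm^-2 (gate-induced), mu = 10,000 cm^2/Vs:
sigma = sheet_conductivity(1e12, 1e4)
print(f"sheet conductivity ~ {sigma * 1e3:.2f} mS/sq, "
      f"sheet resistance ~ {1 / sigma:.0f} ohm/sq")
```

    A tenfold mobility advantage over silicon feeds directly into a tenfold higher conductivity (or lower operating voltage) at the same carrier density, which is the basis of the transistor comparison above.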

Advanced search in
Projects
arrow_drop_down
Searching FieldsTerms
Any field
arrow_drop_down
includes
arrow_drop_down
1,532 Projects, page 1 of 154
  • Funder: UKRI Project Code: EP/F008384/1
    Funder Contribution: 273,172 GBP
    Partners: University of St Andrews

    Hydrogen is considered a promising alternative automotive fuel, as the only combustion products are carbon dioxide and water. In the petrochemical industry, hydrogen is a byproduct which can be found in many process streams and which is sometimes burnt as waste. This project aims at designing porous materials that can recover and purify hydrogen for industrial gas streams. The different molecules present in a process stream interact differently with the internal surface of the porous solids (this process is called adsorption) and can therefore be selectively removed. For this project, we will be using metal-organic frameworks (MOFs), materials synthesised in a building-block approach from corner units and linkers. The properties of MOFs can be changed by using different building blocks, offerering the possibility to fine tune the interactions between the gas molecules and the surface.In this project we will be designing MOFs tailored to hydrogen purification. For this, we will use an integrated approach that combines skills from chemistry and chemical engineering, including the computer simulation of the synthesis of MOFs and of their adsorption performance, the actual synthesis of the materials, and the evaluation of their structure and their performance under industrially relevant conditions. In addition to the technical objectives of the project, we will be training researchers who are capable of carrying out research at this important interface between chemistry and chemical engineering. The researchers will learn how chemistry and chemical engineering research can be integrated effectively and therefore will be able to work effectively in mixed teams of scientists and engineers.

  • Funder: UKRI Project Code: EP/E06356X/1
    Funder Contribution: 183,238 GBP
    Partners: UEA

    Bulky cobalt metallocenes have recently found application in a number of asymmetric transformations characterised by high enantioselectivity. The further development of this area is hampered by the absence of a general methodology for the synthesis of planar chiral derivatives. A general solution to this problem is proposed involving asymmetric desymmetrisation by either metal catalysed cross-coupling and/or enantioselective halogen-metal exchange. The resulting non-racemic products will be used for the synthesis of representative P, PN and PP ligands, and also for the generation of palladium based and 'organocatalytic' rearrangement catalysts for application in asymmetric synthesis.

  • Funder: UKRI Project Code: EP/F027028/1
    Funder Contribution: 228,108 GBP
    Partners: University of York

    Diophantine approximation is a branch of number theory that can loosely be described as a quantitative analysis of the property that every real number can be approximated by a rational number arbitrarily closely. The theory dates back to the ancient Greeks and Chinese who used good rational approximations to the number pi (3.14159...) in order to accurately predict the position of planets and stars.The metric theory of Diophantine approximation is the study of the approximation properties of real numbers by rationals from a measure theoretic (probabilistic) point of view. The central theme is to determine whether a given approximation property holds everywhere except on an exceptional set of measure zero. In his pioneering work of 1924, Khintchine established an elegant probabilistic criterion (a `zero-one' law) in terms of Lebesgue measure for a real number to be approximable by rationals with an arbitrary decreasing (monotonic) error. The error is a function of the size of the denominators of the rational approximates and decreases as the size of the denominators increases. The monotonicity assumption is crucial since the criterion is false otherwise. Under the natural assumption that the rational approximates are reduced (i.e. in their lowest form so that the error of approximation at a rational point is determined uniquely), the Duffin-Schaeffer conjecture (1941) provides the appropriate expected statement without the monotonicity assumption. It represents one of the most famous unsolved problems in number theory. A major aim is to make significant contributions to this key conjecture by exploiting the recent `martingale' approach developed by Haynes (the named Research Assistant) and Vaaler. Furthermore, a more general form of the conjecture in which Lebesgue measure is replaced by Hausdorff measure (a fractal quantity) will be investigated. A major outcome will be the Duffin-Schaeffer conjecture for measures close to Lebesgue measure. 
The importance of the Duffin-Schaeffer conjecture is unquestionable. However, it does change the underlying nature of the problem considered by Khintchine in that the rational approximates are reduced. In 1971, Catlin stated a conjecture for the unconstrained problem in which the rationals are not assumed to be reduced. Catlin claimed that his conjecture was equivalent to the Duffin-Schaeffer conjecture. However, his proof contained a serious flaw and the claim remains an interesting problem in its own right. In higher dimensions, the approximation of arbitrary points in n-dimensional space by rational points (simultaneous approximation) or rational hyperplanes (dual approximation) is the natural generalisation of the one-dimensional theory. Considering a system of linear forms unifies both forms and naturally gives rise to the linear forms theory. The metric theory of Diophantine approximation is complete for simultaneous approximation in dimension greater than one. The analogues of Khintchine's criterion without any monotonicity assumption (i.e. the simultaneous Catlin conjecture) and the Duffin-Schaeffer conjecture have both been established as well as the more precise and delicate Hausdorff measure theoretic statements. However, the dual and more generally the linear forms theory are far from complete. In this proposal the linear forms analogues of the Duffin-Schaeffer and Catlin conjectures are precisely formulated. A principle goal is to establish these conjectures in dimension greater than one. A novel idea is to develop a `slicing' technique that reduces a linear forms problem to a well understood simultaneous problem. The major outcome will be a unified linear forms theory in Euclidean space.

  • Project . 2008 - 2012
    Funder: UKRI Project Code: EP/F040857/1
    Funder Contribution: 1,251,550 GBP
    Partners: University of Glasgow

    Technologies associated with looking at the microworld are extremely mature, and include a wide variety of microscopies. By contrast, little work has been done to extend our sense of hearing into the micro-world. The purpose of this grant is to develop a basic technology for listening to the micro-world: in a sense, a micro-ear. Just like our own ears, most sound detectors respond to changes in pressure, creating small acoustic forces and corresponding displacement of a sensor. One extremely sensitive way of measuring force is to compare it against the momentum of a light beam. Tightly focused laser beams are now routinely used to form optical tweezers, which can trap micron-sized beads, overcoming both the thermal and gravitational forces. These tweezers systems are typically built around a microscope and manipulate samples suspended in a fluid medium, such that the technology is highly compatible with biological systems. Using a microscope to observe the bead position allows the measurement of piconewton forces and the corresponding displacement of a few nanometres. The subtle movements of these optically trapped beads will form the basis of our micro-ear. We plan to develop, demonstrate and test a number of different micro-ear approaches. All imaging systems based upon focusing are restricted to scales of a wavelength or so. Even in water, acoustic wavelengths are hundreds of millimetres, making the concept of focusing irrelevant to microscopic systems. However, as evidenced by most wind instruments or antique hearing aids, sub-wavelength horns still work. In this proposal we plan to use microfabrication techniques to produce structures that channel the fluid flow from the emitting object to the sensor bead, providing a method of guiding the pressure wave, and if necessary amplifying it (e.g. in a flared channel). We will use the optically trapped beads as sensors to measure these forces (as described above).
However, it is important to consider that, at the microscale, the movements of the beads due to an acoustic response may be masked by Brownian motion, and hence distinguishing the real signal from this thermal background will be a major challenge. The key to overcoming the Brownian background will be the use of high-speed cameras to measure the position of many beads simultaneously. Rather than the signal being derived from one bead, it is the correlated motion of the beads that distinguishes the sensor response from the uncorrelated background. We envisage two basic configurations. In the first, simplest case, the beads will be positioned at the ends of defined flared microfluidic structures to measure molecular interactions resulting from mechanical biological systems (molecular motors). Alternatively, we will create a circular array around the test object and measure the radial breathing of the ring. In this latter configuration there is the possibility of being able to make new and exciting biological measurements in a non-contact mode, where we will determine both short and long range interactions between cells and surfaces.
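The correlation idea above can be illustrated with a toy numerical sketch (the bead count, noise level, drive frequency and all other numbers here are invented for illustration, and this is not the project's actual analysis pipeline): averaging the position tracks of many beads preserves a weak common acoustic signal while suppressing the uncorrelated Brownian background.

```python
import numpy as np

rng = np.random.default_rng(0)
n_beads, n_samples = 16, 20000
t = np.arange(n_samples)

# Weak common "acoustic" drive, identical for every bead
signal = 0.05 * np.sin(2 * np.pi * t / 200)
# Independent Brownian background, one track per bead
noise = rng.normal(0.0, 1.0, size=(n_beads, n_samples))
x = signal + noise  # each bead sees common signal + its own noise

f_bin = n_samples // 200  # FFT bin of the drive frequency

# Single bead: drive peak measured against the noise floor
p_single = np.abs(np.fft.rfft(x[0])) ** 2
snr_single = p_single[f_bin] / np.median(p_single)

# Averaging N tracks leaves the correlated signal untouched but
# reduces the uncorrelated noise power by roughly a factor of N
p_avg = np.abs(np.fft.rfft(x.mean(axis=0))) ** 2
snr_avg = p_avg[f_bin] / np.median(p_avg)

print(f"SNR, single bead: {snr_single:.0f}; averaged: {snr_avg:.0f}")
```

Running this shows the drive peak standing far higher above the noise floor in the averaged track than in any single bead's track, which is the essence of using correlated multi-bead motion as the detection signal.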

  • Funder: UKRI Project Code: EP/F00656X/1
    Funder Contribution: 266,120 GBP
    Partners: University of Liverpool

    Pierced deep beams and shear walls are widely used in the construction industry. For utility and ease of construction, most openings are rectangular in shape. This leads to stress concentrations which cause cracks to extend from the corners of the openings. The width of the cracks often exceeds the serviceability limits indicated by the code, and in many cases the resulting distortion of the opening causes serviceability problems, resulting in unnecessary maintenance costs. This project aims to develop methods of analysis and design rules for pierced deep beams and shear walls which will permit determination and limitation of serviceability cracking together with ultimate strength, allowing rational design of such components. The aims of the project will be accomplished by first developing a novel numerical model capable of simulating the behaviour of openings in these components under a variety of loading conditions. This numerical model will employ an exciting new computational technique, the scaled boundary finite element method, to permit efficient modelling of the stress concentration, crack initiation and crack propagation from the corners of the openings. The model will be verified by application to full-scale deep beam tests and scale model shear wall tests. This detailed numerical model will be used to evaluate and refine existing consistent strut-tie models for ultimate strength design of pierced deep beams. At the same time simplified approaches for the prediction of crack widths will be formulated. For deep beams with penetrations simple design tables for satisfaction of serviceability criteria will be developed. For shear walls, two approaches will be developed and investigated. The first will be based on pseudo-empirical correlation of moment and shear in the equivalent link beams over the opening (based on the linear elastic frame-type analysis typically used in practice). 
In the second approach a simplified element suitable for inclusion in frame analysis packages will be constructed. This will permit prediction of crack openings directly and allow more accurate analysis of the global effects of reduced stiffness on the structural response. Typical configurations will be investigated and suitable code provisions will be proposed to ensure that existing serviceability criteria are satisfied.

  • Funder: UKRI Project Code: EP/F02844X/1
    Funder Contribution: 462,878 GBP
    Partners: Cardiff University

    In this research programme, it is proposed to investigate the effect of texturing the surface of insulating materials on their performance under applied high voltage and outdoor weather conditions. This follows a Cardiff patent for insulating structures having anti-dry band properties and initial experimental work which proved the concept. It is proposed to investigate surfaces having protuberances in the shape of hemispheres arranged in such a way that both the total surface area and the creepage path of the resulting insulation surface are increased significantly. Such texturing can also have the added benefit of improving the hydrophobicity of the surface. Modelling work using analytical and numerical computation techniques will be used to assess the performance of various shapes of texturing with dimensions in the mm and cm ranges. This modelling work will then be used to shortlist candidate textures for experimental work. A vacuum casting facility will be used to prepare flat samples and insulator samples that will be tested in inclined-plane and pollution-chamber test environments. Various characterisation laboratory tasks, including long term pollution exposure and ageing investigations, will be undertaken in order to classify the various textures. This will involve synchronising visual, electrical and thermal measurements. It is then proposed to use the order of merit from the flat sample investigations to select two or three geometries for fabrication of complete insulators. The latter will be used to conduct a comparative test and modelling programme which will demonstrate/assess the superior properties of the textured surface insulator, particularly under polluted conditions. Furthermore, this programme includes investigation tests at an outdoor pollution test station and collaboration with both a UK manufacturer and an overseas academic group.
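The geometric gain from hemispherical texturing mentioned above can be checked with a short calculation (a simplified geometry sketch, not the project's modelling work): each hemispherical bump replaces a flat disc with a curved surface of twice the area, and the creepage path across the bump grows by a factor of pi/2.

```python
import math

r = 1.0  # hemisphere radius (arbitrary units)

# A hemispherical bump replaces a flat disc of area pi*r^2
flat_area = math.pi * r ** 2
bump_area = 2 * math.pi * r ** 2        # curved surface of a hemisphere
area_gain = bump_area / flat_area       # = 2, independent of r

# Creepage path over the bump: half a great circle (pi*r)
# versus the straight diameter (2*r) it replaces
path_gain = (math.pi * r) / (2 * r)     # = pi/2, about 1.57

print(f"area gain: {area_gain:.2f}, creepage path gain: {path_gain:.2f}")
```

These per-bump factors indicate why close-packed hemispherical protuberances can increase the total surface area and creepage path significantly, as the abstract states; the actual gains depend on bump spacing and the shortlisted geometries.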

  • Funder: UKRI Project Code: EP/F036019/1
    Funder Contribution: 44,182 GBP
    Partners: OS, University of Leeds

    A passenger in an aircraft requires information about a flight at a very different level of detail from the pilot. A map of an entire country on a computer screen shows far less detail than a map of a single town on the same screen. For certain kinds of data, reducing the level of detail is a relatively well-understood process, but for other kinds this reduction is a challenging problem. This project is concerned with reduction in level of detail for data associated with networks in geographic information systems. Examples of such networks are roads, rivers, railways, electricity distribution networks, etc. Manipulation of level of detail, or granularity, is vitally important for any kind of system for managing processes and detecting events in geographical networks. For example: congestion and accidents on roads, floods in rivers, or terrorist attacks on railways. Such systems require some level of human intervention, and to do this effectively requires the ability to zoom in and out of the data in various ways. Changing the spatial level of detail, or 'scale' in traditional paper-based maps, is only one of the requirements -- it is also necessary to deal with classification of the things represented (ontologies), and with time at different granularities. Features in geographical information are usually classified by what kind of thing they are: here is a house, there is a school and that is a railway station, and they are all buildings. In a large scale (i.e. detailed) map we generally work with a classification that is itself detailed. Besides showing individual buildings, such maps can make fine distinctions between many different kinds of building. At smaller scales, as the separate buildings merge into undifferentiated built-up areas on the map, the classification becomes coarser too.
The level of detail in classification is termed ontological granularity. If dealing with a map showing, say, traffic flow along streets in a city, we might need to see how levels of traffic vary over a single day or at a given time over a number of different days. In both of these examples, temporal granularity is involved -- grouping together and selecting periods of time. The challenge that this project addresses is the combination of these three kinds of granularity: the spatial, the ontological, and the temporal. In varying one kind of level of detail, what changes are necessarily imposed in the other kinds of level of detail? Some simple examples are easily understood: if a church and an adjacent house become represented at a smaller scale by a single entity, it might get classified simply as a building. However, general theoretical principles are lacking; the project will develop these and will evaluate them in collaboration with the Ordnance Survey. The principles will be used to specify operations for changing level of detail in network-based geographic data. The evaluation will be based on a major resource for UK network data: the Integrated Transport Network. This is a layer within Ordnance Survey's MasterMap providing two themes: the Roads Network (containing all navigable roads in Great Britain) and Road Routing Information (containing additional information such as one-way streets and other restrictions). The project will also make essential use of the expertise of Professor Michael Worboys, Chair of the Department of Spatial Information Science and Engineering, University of Maine, who will be based in Leeds as a visiting researcher.
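The church-and-house example above can be sketched as a tiny coarsening map over an ontology (all class names and the mapping here are invented toy data, not Ordnance Survey classifications): changing ontological granularity amounts to reclassifying features through a fine-to-coarse mapping.

```python
# Toy fine-to-coarse ontology mapping (invented example classes)
fine_to_coarse = {
    "church": "building",
    "house": "building",
    "school": "building",
    "motorway": "road",
    "street": "road",
}

def coarsen(features):
    """Reclassify (name, fine_class) features at a coarser
    ontological granularity; unknown classes pass through."""
    return [(name, fine_to_coarse.get(cls, cls)) for name, cls in features]

fine = [("St Mary's", "church"), ("12 High St", "house")]
print(coarsen(fine))  # both features reclassify as 'building'
```

The open question the project addresses is how such a reclassification must interact with simultaneous changes in spatial scale (merging the two geometries into one) and temporal granularity, for which no such simple mapping exists.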

  • Funder: UKRI Project Code: DT/F007744/1
    Funder Contribution: 719,422 GBP
    Partners: University of Edinburgh

    Carbon dioxide (CO2) is considered to be a greenhouse gas. The concentration of CO2 in the Earth's atmosphere is an important control on earth surface temperature, and hence climate. CO2 dissolution in the oceans is also being recognised as an important factor in making surface seawater unusually acidic; this severely affects ecosystems and species from algae to fish and whales. Increased CO2 in the atmosphere is recognised as being partly caused by burning of fossil fuels, such as coal and gas, in power stations. Carbon Capture and Storage is a suite of technologies which enables CO2 to be captured at power stations, liquefied by increasing the pressure, transported by a pipe, and injected deep underground into the pore space of deeply buried sedimentary rocks such as sandstones. This can effectively remove CO2 from the power cycle of fossil fuel use, and store the CO2 for tens of thousands of years, which enables the Earth's atmosphere to return to normal. Because of the very large CO2 volumes involved, it is not possible to build surface stores. Because of the acid effects of CO2, it is not possible to inject CO2 into seawater. By contrast, the Intergovernmental Panel on Climate Change (IPCC) have calculated that more than 25% of world CO2 emissions could be stored by geological CCS. This could be a vital technology for the world's future. There is a great deal of interest worldwide in CCS and, because of the offshore oil industry, the North Sea is one of the world's prime areas for CCS to be rapidly developed. However, there are only three full-scale projects at present in the world. For UK power generating companies to become commercially interested, the chain of technologies must both be demonstrated to work reliably and be capable of cost-effective development.
This project is trying to identify aquifer sites deep underground which are close to power plants in the U.K., where CO2 can be safely stored, but which are quicker and cheaper to develop than sites offshore in the North Sea. This can enable power generating companies to develop CCS over a period of years, on a medium scale, and learn to conduct the industrial operation. If this project is successful, it could lead to take-up of CCS in the U.K. 10 or 15 years earlier than waiting for an infrastructure of large North Sea pipelines to be developed for CO2. When those pipes become available, UK power companies will be completely ready to connect power plants and store CO2 in large redundant hydrocarbon fields offshore. This could save many tens of millions of tons of CO2 per year being emitted into the atmosphere from the U.K., and place the U.K. in the forefront of carbon reduction nations. The universities and companies involved in this 2.3M consortium are all experienced in investigating the deep subsurface for oil and gas production. Edinburgh, Heriot-Watt and BGS already have 1.6M from the Scottish Executive to establish the UK's largest research grouping to investigate CO2 storage. This expertise will be transferred to exploring for CO2 disposal sites. Using the information held by the British Geological Survey, maps will be made of the subsurface deep beneath England, and deep beneath the Forth estuary. Heriot-Watt university will assess the potential chemical reactions of CO2 with rock, and how much CO2 can be injected. Electricity generators, led by Scottish Power, will make engineering designs for modified power stations to supply CO2. Schlumberger and Marathon Oil will assess the subsurface technology required for safe and reliable injection and monitoring. The University of Edinburgh will make computer simulations to determine if CO2 will leak deep below ground, and will assess how specific storage sites will perform in safely retaining CO2.
Amec will evaluate transport of CO2 by pipe. Tyndall will investigate the public attitudes at the candidate storage sites.

  • Funder: UKRI Project Code: EP/G500045/1
    Funder Contribution: 1,213,030 GBP
    Partners: University of Warwick

    We propose to establish a Doctoral Training Centre for Systems Biology at Warwick. The Centre will be an independent unit within Warwick University providing training in the multi-disciplinary skills required for research into biological systems, with students undertaking a dedicated taught MSc course in the first year followed by a PhD in systems biology during years 2-4. It will be set up in completely refurbished premises and managed by senior academic staff from a mix of life sciences and physical sciences Departments.

  • Funder: UKRI Project Code: EP/G009104/1
    Funder Contribution: 81,522 GBP
    Partners: University of Leeds, Intel (Ireland)

    In this project we will prove the principles of fabricating graphene in a form useful for manufacturing nanoscale electronics and fabricate some simple devices. Graphene is a form of carbon discovered in the 21st century: a single two-dimensional sheet of atoms in a hexagonal chickenwire array. It completes the set of carbon materials, which already had zero-dimensional (buckyballs), one-dimensional (nanotubes), and three-dimensional (graphite) members that are all formed by rolling or stacking up graphene sheets. In its simplest form it can be made by anyone: a pencil trace consists of millions of carbon flakes, and amongst the millions a few will be just one atomic sheet thick. Experiments on these flakes have shown that they have really remarkable properties, particularly for electronic components. The two-dimensional nature of the material, along with the symmetry of the lattice, means that the electrons in the graphene sheet have the same dynamics as relativistic particles such as neutrinos: they are now commonly referred to as massless Dirac fermions, with a new quantum number, chirality, not possessed by free electrons. This has been shown to lead to bizarre new physics such as finite electrical conductivity without charge carriers and new versions of the quantum Hall effect. Although new nanoelectronic devices based on this novel physics offer exciting possibilities, using graphene can also make marked improvements to present day technologies. This is because it possesses a higher value of a key materials property than any other semiconductor: the mobility. A simple field effect CMOS-like transistor, using a graphene flake to form the channel, outperformed Si by more than a factor of ten. A major obstacle to achieving this is that building complex circuits from randomly placed, shaped, and sized flakes is not possible using today's planar fabrication technologies, where reproducibility is key.
What is needed is a uniform layer of graphene coating an entire wafer that can be patterned and processed in the usual way. The most promising way to do this currently seems to be to use SiC wafers used commercially in high power electronics. A proper surface treatment in ultrahigh vacuum preferentially removes silicon atoms, and the carbon atoms that remain reconstruct themselves to form graphene. The promise of wafer-scale device-grade material offers the possibility of not just forming transistor channels out of graphene, but carving entire circuits from a single graphene sheet. At Leeds we have been working on epitaxial graphene production now for roughly a year. We have set up and tested the various surface science instruments that will be needed to show that graphene has indeed formed on the surface of our SiC wafer. Our recent efforts have concentrated on achieving the very high temperatures for the wafer in UHV that are the key step of producing the surface graphene, and after a series of improvements we are now close to reaching the temperatures needed. Once we have graphene, we shall optimize its production and start to make electronic devices from it. In this proof-of-principle project we have two main aims: to develop a reliable protocol for forming graphene on SiC wafers in a form useful for scaling up to manufacturing; and to build some simple demonstrator devices to show that this material can be processed in nanoscale devices including gates that can control a switching action. We will also begin some pilot experiments on connecting magnetic electrodes to graphene devices, with a view to preparing the ground for future projects involving spintronics in graphene (using the electron spin as well as charge to store and process information), which is potentially a very fertile area, as quantum spin states are very long lived in graphene, even at room temperature.
