When solid materials are loaded above a critical level, they may change their shape permanently: they undergo plastic deformation. Consider, for example, a cylinder which we compress by pushing from top to bottom. If the load is small, the cylinder first deforms elastically (it reverts to its original shape after the load is removed). Above a certain load, some permanent deformation remains. For a macroscopic cylinder, say several centimetres in size, the stress (the force per unit area) needed to obtain a given relative deformation does not depend on the size of the cylinder. It increases gradually with increasing deformation, and this 'hardening behaviour' is identical for cylinders made of the same material and deformed under the same conditions. If the stress is the same everywhere in the cylinder, the deformation will also be homogeneous: the cylinder will get shorter and thicker but will retain its cylindrical shape.

When the deforming body becomes very small, of the order of a few micrometres in diameter, we observe quite different behaviour: (1) The stress required to deform samples of a material increases as the samples become smaller. (2) Even if the stress is increased slowly and steadily, the deformation does not increase gradually but in large jumps. These jumps occur at random and lead to large deformations in small parts of the sample; as a consequence, in our cylinder example the samples assume irregular, accordion-like shapes. If we bend very thin wires, they may deform not into smooth curves but into random shapes resembling misshapen paperclips. (3) Even if the material properties are the same (for instance, if all our cylinders have been machined out of the same block), the stresses required to deform the samples may scatter hugely: in two apparently identical micrometre-sized samples, the stresses required to initiate or sustain plastic deformation may easily differ by a factor of two. Obviously this poses serious problems if we want to avoid or control irreversible deformation in very small components.

The first of these aspects has been studied in some detail, and some work has also been done on the second. However, there is no systematic study which quantifies the scatter in deformation behaviour between different small samples and provides tools for assessing the risk of unwanted deformation behaviour. We have teamed up with German researchers who conduct micro-deformation experiments and with others who simulate such deformation processes by tracing the motion of the material defects that produce the irreversible deformation. Together we will conduct and analyse large series of experiments and simulations to characterise the scatter in deformation behaviour and to understand how it depends on sample size, material preparation, and method of deformation. We will then use this database to develop simulation tools that allow engineers to assess the risk of undesirable outcomes.

Why is it important? Imagine you want to bend sheets of metal with a size of centimetres to metres, say to roll them into cylinders for producing cans, or to press them into car doors. It is comparatively easy to get the desired shapes. If you try to do the same on a very small scale, however, the result might look quite different. Micro-scale scatter of deformation properties may affect our ability to form materials into very small shapes and to produce very small parts for microtechnologies. A striking example is the very thin wires that provide the electrical connections for microchips. If the shape of these wires scatters too much, two of them may come into contact and produce a short circuit that renders the device useless. As the miniaturisation of components and devices proceeds, we need to gain the knowledge and expertise to handle forming processes on the microscale. Our research aims to contribute to this goal.
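As a hint of the kind of statistics involved, the toy model below is an illustrative sketch only, not the project's simulation code: the power-law burst-size distribution, the random stress thresholds and all parameter values are assumptions made for this example. It generates stress-strain histories in which plastic strain accumulates in random jumps, and measures how much the stress needed to reach a fixed strain scatters between nominally identical samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sample(n_events=200, stress_step=1.0, burst_exponent=1.5,
                    min_burst=1e-4):
    """Toy stochastic-plasticity model: as the stress ramps up, plastic
    strain accumulates in power-law-distributed bursts at random stress
    thresholds (all parameter values are illustrative assumptions)."""
    thresholds = np.sort(rng.uniform(0, n_events * stress_step, n_events))
    # Pareto-distributed burst sizes mimic scale-free strain avalanches.
    bursts = min_burst * (1 + rng.pareto(burst_exponent, n_events))
    strain = np.cumsum(bursts)
    return thresholds, strain

def stress_to_reach(strain_target, thresholds, strain):
    """Stress at which the accumulated plastic strain first exceeds
    the target value."""
    idx = np.searchsorted(strain, strain_target)
    return thresholds[min(idx, len(thresholds) - 1)]

# Nominally identical samples: same model, different random draws.
stresses = np.array([stress_to_reach(0.02, *simulate_sample())
                     for _ in range(100)])

print(f"mean flow stress: {stresses.mean():.1f}")
print(f"relative scatter (std/mean): {stresses.std() / stresses.mean():.2f}")
print(f"max/min ratio across samples: {stresses.max() / stresses.min():.2f}")
```

Even in this crude model, heavy-tailed strain bursts routinely produce factor-of-two differences between samples, which is the kind of sample-to-sample scatter the project sets out to quantify and predict.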
Beryllium has applications in various environments which will (or already do) expose it to radiation, resulting in the accumulation of helium gas and the displacement of atoms from their crystal lattice sites. Beryllium will be the plasma-facing material in the ITER magnetic-confinement nuclear fusion reactor currently under construction in France, and titanium beryllide is being developed for use in the "pebble bed" design concept for the tritium-breeding blanket of the DEMO power plant which will follow ITER. In high-energy accelerator research, beryllium is being considered for components in proton-driven particle sources, where it will experience even higher rates of helium accumulation than in nuclear fusion environments. This project will explore the effects of gas accumulation and displacement damage in beryllium and titanium beryllide using transmission electron microscopy with in-situ ion irradiation, observing the microstructure while simultaneously bombarding it with an ion beam. By varying the temperature, the ratio of gas implantation to atomic displacement rates, and the irradiation dose, the project will build up a three-dimensional matrix of experimental data. Changes in the microstructure of a material determine how the performance of components changes under the extreme environments described above. These results will therefore allow the ways in which the microstructure evolves under these conditions to be better understood, and thus support the development of these important technologies.
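To picture the "three-dimensional matrix of experimental data", the sketch below enumerates a hypothetical irradiation campaign. The three axes (temperature, helium-to-displacement ratio, dose) come from the summary; the specific grid values are invented for illustration and are not the project's actual experimental design.

```python
from itertools import product

# Hypothetical grid values: the summary does not specify the design;
# these just illustrate the three-axis matrix of conditions.
temperatures_K = [300, 573, 773, 973]
he_appm_per_dpa = [10, 100, 1000]   # helium implantation / displacement ratio
doses_dpa = [0.1, 1.0, 10.0]        # displacement damage levels

conditions = [
    {"T_K": t, "He_appm_per_dpa": r, "dose_dpa": d}
    for t, r, d in product(temperatures_K, he_appm_per_dpa, doses_dpa)
]

print(f"{len(conditions)} irradiation conditions, e.g. {conditions[0]}")
```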
Non-alcoholic fatty liver disease (NAFLD), with a worldwide prevalence of 25%, is a leading cause of liver-associated morbidity and mortality. No pharmacological treatment is yet available for chronic inflammation/non-alcoholic steatohepatitis (NASH) and the associated fibrosis, which are the progressive forms of NAFLD. Although the mechanisms underlying NAFLD progression are not fully understood, chronic inflammation is a key player mediating dysfunction of the liver, adipose tissue and the gut. Chronic inflammation modifies the physiological tolerance of the liver and promotes liver injury and fibrogenesis. The extracellular matrix (ECM) and its receptors play a key role in orchestrating the chronic inflammation (liver, adipose tissue) and fibrosis leading to advanced NASH. Signaling by CD44 (a cell-surface glycoprotein) is initiated by ECM molecules (osteopontin (OPN), hyaluronan (HA) and, indirectly, tenascin-C (TNC)) that may enhance signal intensity and duration. OPN, HA, TNC and CD44 can form complexes with cell-surface receptors (e.g. c-Met, VEGFR, TLR4, EGFR, CXCR4, ...) that are known to promote inflammation, angiogenesis and fibrogenesis. We propose that CD44 acts as a central integrator and potentiator, transmitting signals from the ECM into the cytoplasm and thereby triggering the proinflammatory signaling that drives NAFLD. Indeed, the consortium has already discovered that targeting CD44 strongly alleviates liver injury, steatohepatitis and fibrosis by downregulating the recruitment of macrophages and neutrophils into the liver and by decreasing OPN levels and TNC signaling. In patients, hepatic CD44 and TNC expression strongly correlate with each other and with hepatic macrophage infiltration/abundance, suggesting a combined action of both molecules in NAFLD.

The aims of MATRIXNASH are:
1. To decipher the ECM-dependent pathways promoting the onset and progression of NAFLD by proteomic and genomic approaches in murine NAFLD models with engineered levels of key candidates (e.g. CD44, OPN, TNC).
2. To establish novel therapeutic strategies in our fibrotic-NASH models by targeting candidates (CD44, TNC) alone or in combination, through general or hepatic cell-type-specific approaches using novel state-of-the-art AAV delivery systems.
3. To establish relevance for the human patient in an integrative approach: we will gain comprehensive knowledge about ECM and CD44 complex composition in human NAFLD by proteomic and genomic approaches and by tissue analysis for candidates derived from the murine models (a cohort of 1006 obese patients). Candidates relevant in the human disease will be confirmed in cell culture and in our murine NAFLD models.

The consortium will employ a common multidisciplinary approach combining unbiased proteomics, transgenic mouse models, novel therapeutic targeting of CD44 and TNC, and a large cohort of human tissue samples to acquire comprehensive knowledge about NAFLD onset and progression. This information may provide novel tools for the diagnosis and therapy of NAFLD.
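Purely as an illustration of the kind of cohort-level correlation analysis mentioned above, the sketch below tests a CD44/TNC association on synthetic data. Only the gene names and the cohort size of 1006 come from the summary; all numbers are generated for the example, and the real analysis would of course use measured expression data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic illustration only: draw correlated pseudo-expression values
# for CD44 and TNC across n "patients" via a shared latent signal.
n = 1006
latent = rng.normal(size=n)              # shared inflammation-like signal
cd44 = latent + 0.5 * rng.normal(size=n)
tnc = latent + 0.5 * rng.normal(size=n)

rho, p = spearmanr(cd44, tnc)            # rank correlation across the cohort
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```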
At any one time, cloud coverage over the earth averages around 70%, and clouds may to some extent warm or cool the planet. Everybody in the UK is familiar with clouds blocking the sun's light and making it cooler; thick liquid clouds generally do this by reflecting the sun's radiation back to space. However, ice clouds high up in the atmosphere may actually cause a warming effect at the surface by trapping and emitting thermal radiation. The relative amount of cooling versus heating depends on the number and size of the ice particles within these high clouds. Current measurements in the real atmosphere have failed to quantify the radiative properties of these clouds because of instrumental difficulties in measuring small ice particles from aircraft.

Precipitation is also an important factor in climate change, and one in which ice particles play a huge role. As early as 1789, Benjamin Franklin suggested that 'much of what is rain, when it arrives at the surface of the earth might have been snow when it began its descent...'. This is very true: in the part of the earth's atmosphere in which we live, temperature decreases significantly with height, and current estimates hold the ice phase responsible for the majority (60%) of precipitation in the tropics. One must also consider the annual damage to crops caused by hail storms.

It is a widespread misconception that ice particles form whenever the temperature is colder than 0 °C. Current theory shows that this only happens when the liquid water contains enough impurities: for example, when water touches a dirty surface like the ground, or even a car window (even clean windows carry enough impurities to form ice crystals), the water can freeze. In the atmosphere, however, water droplets are in a very pure state, and most of them do not freeze until the temperature is as cold as -35 °C. There are some impurities in the atmosphere, albeit few, and if these particles are contained within a cloud, ice particles can form at temperatures perhaps as warm as -5 °C. The problem is that the number of these impurities alone cannot explain the number of ice particles observed within clouds. Several theories have been put forward to explain this, and some have good experimental evidence behind them; however, in order to accurately assess climate change we need to quantitatively determine their importance.

This work will address the three problems outlined above by studying the efficiency of snow formation under simulated laboratory conditions and by performing experiments on the physics of ice particle formation. Scientists at the University of Manchester's School of Earth, Atmospheric and Environmental Sciences will produce realistic clouds in a so-called ice-fall chamber. They will simulate the physics of natural cloud formation and use state-of-the-science instrumentation to probe the particles within the cloud. By understanding the fundamental physics, they will be able to work with the Met Office and other universities to better understand the problem of climate change. The Manchester scientists will also collaborate with leading scientists from the University of Hertfordshire, Germany and the US in order to make progress in this area.
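A minimal sketch of the temperature thresholds described above is given below. The -35 °C and -5 °C figures are taken from the summary; the function and its sharp cut-offs are an illustrative simplification of real nucleation physics, which also depends on droplet size, time, and the nature of the ice-nucleating particles.

```python
def freezing_regime(temp_c: float, has_ice_nucleating_particles: bool) -> str:
    """Illustrative simplification of the thresholds in the summary:
    pure droplets supercool to about -35 °C before freezing
    homogeneously, while droplets containing ice-nucleating particles
    may freeze heterogeneously at temperatures as warm as about -5 °C."""
    if temp_c >= 0:
        return "liquid (no freezing above 0 °C)"
    if has_ice_nucleating_particles and temp_c <= -5:
        return "heterogeneous freezing possible"
    if temp_c <= -35:
        return "homogeneous freezing"
    return "supercooled liquid"

for t in (5, -10, -20, -40):
    print(t, freezing_regime(t, has_ice_nucleating_particles=False),
          "|", freezing_regime(t, has_ice_nucleating_particles=True))
```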
A large proportion of the phenomena that appear in geometry and theoretical physics can be phrased in terms of an energy (or action) function. The critical points correspond to states of equilibrium and are described by systems of non-linear partial differential equations (PDE), often solved on a curved background space. For example, soap films/bubbles, fundamental particles in quantum field theory, nematic liquid crystals, the shape of red blood cells, and event horizons of black holes all admit theoretical descriptions of this type. Remarkably, in their simplest form, the above examples (and many more) correspond to a handful of archetypal mathematical problems, and the setting of this proposal is the study of these archetypal problems. It involves a rich interplay between analysis and geometry, chiefly the combination of the rigorous study of non-linear PDE with differential geometry: an area that has had tremendous impact in recent years with (for instance) Perelman's resolution of the Poincaré and Geometrisation Conjectures, Schoen-Yau's proof of the Positive Mass Theorem from mathematical relativity, and Marques-Neves' proof of the Willmore conjecture in differential geometry.

A naturally occurring feature of the above problems (and of non-linear PDE at large) is the formation of singularities, which correspond to regions where solutions blow up along a subset of the domain. Due to their geometric nature, there is also scope for the domain itself to degenerate or change topology. For example, a thin neck may form between two parts of a surface, which disappears over time and disconnects the two parts; one might think of this as a "wormhole" type singularity.

The main aim of this proposal is to introduce tools in PDE theory and differential geometry in order to model and analyse such singularities (where a change of topology takes place). In this setting there have been tremendous advances in analysing and classifying potential singularity formation, but often relatively little is understood about whether certain singularity types exist or not. We will initiate a systematic and novel study of the "simplest" types of singularity formation and find conditions which determine whether they exist and can be constructed, or whether there is a barrier to their existence.
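To make the "critical points of an energy" picture concrete, the classical archetype (standard textbook material, included here purely as illustration) is the soap film: a surface written as a graph $u:\Omega\subset\mathbb{R}^2\to\mathbb{R}$ over a planar domain is in equilibrium exactly when it is a critical point of the area functional, and the criticality condition is a non-linear PDE.

```latex
\[
  A(u) = \int_{\Omega} \sqrt{1 + |\nabla u|^{2}} \, dx,
  \qquad
  \operatorname{div}\!\left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^{2}}} \right) = 0.
\]
```

The second equation is the minimal surface equation, the Euler-Lagrange equation of $A$; its solutions have zero mean curvature, which is the mathematical description of a soap film spanning a fixed boundary.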