
BACKGROUND. Probabilistic methods are ubiquitous in computer science, and computational geometry, the area that specializes in the design of algorithms for problems of a geometric nature, is no exception: from randomized algorithms and average-case complexity analysis (or its recent refinement, smoothed analysis) to Erdős' probabilistic lens in discrete geometry, probabilistic methods have had a major impact on the field. Most often, probabilistic methods are applied in computational geometry by computational geometers themselves. This requires a good understanding of the properties of random geometric objects, an understanding that is the focus of the field of probabilistic geometry in mathematics. Surprisingly, the two fields have had very little interaction in the past.

OVERVIEW. This project brings together computational and probabilistic geometers to tackle new probabilistic geometry problems arising from the design and analysis of geometric algorithms and data structures. This will lead to new research directions in probabilistic geometry as well as a better understanding of geometric algorithms and data structures. We will focus on properties of discrete structures induced by or underlying random continuous geometric objects and consider three types of questions.

* What does a random geometric structure (convex hulls, tessellations, visibility) look like? Such questions have been investigated in the past in both computational geometry and probabilistic geometry, but from different perspectives: computational geometers obtained simple asymptotic bounds on very specific substructures, whereas probabilistic geometers obtained probability law estimates for global properties. Our goal is to combine these two viewpoints and obtain strong results on specific aspects of random geometric structures that are relevant to algorithmic applications (a small simulation sketch illustrating this question follows this abstract).

* How can we analyze and optimize the behaviour of classical algorithms on "usual" inputs? Worst-case analysis usually gives only a crude estimate of how geometric algorithms perform on real-life data. Our goal is to consider certain classical algorithms (typically, Delaunay triangulations) that leave room for fine-tuning (parameters, choice of methods), analyze how they perform on randomly distributed input, and optimize them for such input. The input distribution may range from classical random points in space to random perturbations of specific configurations (smoothed analysis), including intermediate models such as points randomly distributed on lower-dimensional manifolds.

* How can we randomly generate "interesting" geometric structures? Geometric questions are usually phrased over continuous geometric objects but often depend only on some underlying discrete structure; examples include finite point sets in the plane (and their order type) or polynomial systems (and their Newton polytopes or their Hilbert series). When using randomness to explore configuration spaces in search of a "reasonably good" solution to a problem or a counter-example to a conjecture, the challenge is to generate continuous geometric objects with diverse underlying discrete structures. A natural starting point is to understand the distributions of discrete structures induced by "natural" distributions of continuous geometric objects.

ORGANIZATION. There are three partners: two teams of computational geometers (Sophia-Antipolis and Nancy) and one team of probabilistic geometers (Rouen-Poitiers-Orléans).
An important aspect of this project is its interdisciplinarity, and we will ensure that a real collaboration takes place by:

* organizing one-week workshops in the first two years. Each workshop will gather 10 to 20 people, students and established researchers, for a week of active discussion of research problems.

* funding a PhD thesis co-advised by a computer scientist (O. Devillers) and a mathematician (P. Calka).
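As a concrete illustration of the first question above, here is a minimal Monte Carlo sketch in Python (illustrative only, not part of the proposal's work programme; the sample sizes and trial counts are arbitrary). It estimates the expected number of vertices of the convex hull of n points drawn uniformly in the unit disk, for which classical results of Rényi and Sulanke predict growth of order n^(1/3).

    # Monte Carlo estimate of the expected convex-hull size of n
    # i.i.d. uniform points in the unit disk (illustrative sketch).
    import numpy as np
    from scipy.spatial import ConvexHull

    def random_disk_points(n, rng):
        """Sample n points uniformly in the unit disk."""
        r = np.sqrt(rng.uniform(size=n))          # sqrt gives uniform area density
        theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
        return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

    def mean_hull_size(n, trials=200, seed=0):
        """Average number of hull vertices over independent trials."""
        rng = np.random.default_rng(seed)
        sizes = [len(ConvexHull(random_disk_points(n, rng)).vertices)
                 for _ in range(trials)]
        return float(np.mean(sizes))

    for n in (100, 1000, 10000):
        # Mean hull size should track a constant multiple of n^(1/3).
        print(n, mean_hull_size(n), n ** (1.0 / 3.0))

Running it shows the mean hull size tracking a constant multiple of n^(1/3); the finer structure of such quantities (variance, limit laws, behaviour of specific substructures) is the kind of question the project targets.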
During the past decades, significant advances have been made in the development of neuroimaging techniques, allowing increasingly accurate recording of neuronal systems in terms of temporal and spatial resolution. For example, surface electrophysiology (EEG, MEG) and functional MRI have led to a vast literature on human brain mapping. Meanwhile, progress has also been made in the understanding of the basic mechanisms involved in excitation-, inhibition- and synchronization-related processes in brain neuronal systems, at the sub-cellular (membrane ion channels, neurotransmitter receptors), cellular (neurons) and network (assemblies of neurons) levels. However, the characterization of such mechanisms from non-invasive methods is still considered a difficult and unsolved problem. Difficulties arise from: i) the large diversity of neuroimaging techniques now available to record from neuronal systems; ii) the fact that each technique can only provide, by itself, a partial, specific and indirect measurement of the activity of the systems under consideration, which must be integrated in order to provide one global picture of brain activity; and iii) the incomplete knowledge of the neurophysiological and biophysical processes involved in the generation of the observations (local or global electric field potentials, magnetic fields, blood-oxygen-level-dependent (BOLD) responses, oxygen (O2) rates, …).

In this context, computational models provide a unified framework in which knowledge of the physiology of neuronal activity and of neurovascular coupling can be incorporated in order to simulate the output signals for given parameters. The parameter ranges can be informed by physiological studies in animal models. Once the parameter ranges are defined, one can test the influence of varying a reduced set of parameters (for example, the ratio between excitation and inhibition) on the output signals. Under some conditions, the models can be inverted, and hidden variables (i.e., variables not directly measured but contained in the models, such as the excitation/inhibition ratio or the synchronization across populations of neurons) can be recovered from raw data using parameter identification procedures.

In the current project, we propose to develop computational models in order to simulate neuroimaging data, and to compare the results of these models to multimodal datasets obtained in animals and humans. This will improve the interpretation of such data and increase the amount of information that can be extracted from them. We will operate at the level of populations of cells, i.e., at a scale compatible with the resolution of neuroimaging tools (on the order of a millimeter). We propose a novel model structure, which will include astrocytes at this “mesoscopic” level and will operate on networks of connected regions. Moreover, we will compare models in physiological and pathological conditions, a step towards a better understanding of the mechanisms underlying epileptic conditions.

The MULTIMODEL project stems from a joint Inserm-Inria scientific initiative launched in December 2008, entitled "Models and interpretation of multimodal data (EEG, MEG, fMRI). Application to epilepsy", which involves 5 partners (Inserm U751 in Marseille, U678 in Paris, U836 in Grenoble, U642 in Rennes and the INRIA Odyssée project-team in Sophia Antipolis). The current proposal will allow the promising work started by this initiative to continue.
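As a toy illustration of the forward-simulation and model-inversion loop described above, the following Python sketch simulates a two-population excitatory/inhibitory rate model (in the spirit of Wilson-Cowan equations; all weights, parameter names and noise levels are hypothetical simplifications, far coarser than the neural mass models the project targets) and then recovers the inhibition weight from noisy synthetic data by least squares.

    # Forward model: a two-population excitatory/inhibitory rate model,
    # integrated with a simple Euler scheme; the "observed" signal is
    # the excitatory activity.  All parameter values are hypothetical.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def simulate(w_ei, w_ee=12.0, w_ie=10.0, drive=2.0, dt=1e-3, steps=2000):
        sigm = lambda x: 1.0 / (1.0 + np.exp(-x))   # firing-rate nonlinearity
        E = I = 0.1
        out = np.empty(steps)
        for t in range(steps):
            E += dt * (-E + sigm(w_ee * E - w_ei * I + drive))
            I += dt * (-I + sigm(w_ie * E))
            out[t] = E
        return out

    # Synthetic "data": forward simulation plus measurement noise.
    rng = np.random.default_rng(1)
    true_w_ei = 9.0
    data = simulate(true_w_ei) + 0.01 * rng.standard_normal(2000)

    # Model inversion: find the hidden inhibition weight that best
    # explains the data (least-squares parameter identification).
    fit = minimize_scalar(lambda w: np.sum((simulate(w) - data) ** 2),
                          bounds=(1.0, 20.0), method="bounded")
    print("recovered w_EI:", fit.x)   # should be close to 9.0

The point of the sketch is the loop itself: a physiologically parameterized forward model generates signals, and an identification procedure recovers hidden quantities (here the inhibition weight, a stand-in for the excitation/inhibition ratio) from the measured output.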
GeMCo is a multidisciplinary research project directed at studying and understanding the basic principles and interaction motifs underlying information processing at the cellular level, with the aim of controlling genetic regulation. To study how cells regulate themselves and adapt to a changing environment, the field of synthetic biology has opened up a new generation of fundamental research that tries to redesign natural systems or create novel systems from scratch. Along these lines, we propose to focus on the gene expression machinery of the bacterium Escherichia coli, with the aim of controlling the growth rate of the cells. E. coli is a model organism that is easy to manipulate, and much knowledge is available about its regulatory networks. While many experimental and theoretical studies have addressed the biological regulation of the growth rate, no attempts have been made to modify these control mechanisms in a directed way. In addition to the experimental difficulties, one of the main obstacles is theoretical: no control strategies with adequate biological constraints exist for such systems.

Mathematical modeling and analysis are essential components of systems and synthetic biology, as they help in understanding the consequences of (changes in) the network of interactions on the dynamical behavior of the system. Thus, the goals of the present proposal are to: (i) construct a detailed kinetic mathematical model of the network controlling the gene expression machinery in E. coli; (ii) identify the parameters and validate the model by comparing predictions with experimental data; (iii) develop appropriate mathematical control strategies to re-design the network structure and elicit a new global behavior; and (iv) experimentally validate the behavior of the re-designed system.

At the theoretical level, we propose to develop model reduction methods and explore control-theoretic strategies specifically designed to deal with the various constraints imposed by biological systems. Model reduction is aimed at finding smaller subsystems, possibly motifs, that can be more easily compared with data and that facilitate parameter identification. These subsystems will also be analyzed from the control point of view, to generate control laws which may then be incorporated into the more complex global system.

The mathematical methodologies to be used in GeMCo echo the currently available experimental techniques. Quantitative and smooth measurements of gene expression can be obtained through the use of green fluorescent reporter genes; this justifies the use of continuous ordinary differential equations for model construction and validation. A standard procedure to control the transcription rate of a gene involves the construction of a plasmid containing an inducer for that gene, so that its transcription rate can be increased by a certain factor simply by adding an amount of the corresponding inducer molecules to the system. This yields only qualitative knowledge of the system's inputs, and therefore justifies the choice of piecewise affine differential models to explore new control strategies (a toy sketch of such a model follows this abstract).

The GeMCo consortium consists of three groups with extensive experience in multidisciplinary research, which together cover the entire spectrum of competences needed to tackle the project: mathematical analysis and control of dynamical systems, modeling and identification of biological regulatory networks, and microbiology and molecular biology.
The methods to be developed extend existing approaches in applied mathematics and control theory in novel directions, adapted to the particularities of biological systems. They are intended to be generic tools, but their applicability will be tested on a specific biological modelling and control problem. The outcome of applying these methods is interesting in its own right, as it offers a novel approach to a fundamental biological and biotechnological problem: the control of the growth rate of the cell.
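As a toy illustration of the piecewise affine differential models and qualitative control inputs discussed above (the genes, thresholds and rates are hypothetical; this is a sketch of the model class, not the project's model of E. coli), the following Python snippet simulates a two-gene network with step-function regulation and a bang-bang inducer input that holds the product of one gene near a setpoint.

    # Piecewise affine gene network sketch: constant synthesis rates
    # switched on/off by concentration thresholds, minus linear
    # degradation.  Gene names, thresholds and rates are hypothetical.
    import numpy as np

    def step(x, theta):
        """Step-function regulation: 1 if x exceeds its threshold, else 0."""
        return 1.0 if x > theta else 0.0

    def simulate(kappa_a=2.0, kappa_b=3.0, gamma_a=1.0, gamma_b=1.5,
                 theta_a=0.5, theta_b=1.0, setpoint=1.2,
                 dt=1e-3, steps=20000):
        a = b = 0.0
        traj = np.empty((steps, 2))
        for t in range(steps):
            # Qualitative (bang-bang) control law: add inducer only while
            # the readout of gene a is below the setpoint.
            u = 1.0 if a < setpoint else 0.0
            da = kappa_a * u * step(b, theta_b) - gamma_a * a      # b activates a
            db = kappa_b * (1.0 - step(a, theta_a)) - gamma_b * b  # a represses b
            a += dt * da
            b += dt * db
            traj[t] = (a, b)
        return traj

    traj = simulate()
    print("final concentrations (a, b):", traj[-1])

The on/off inducer input mirrors the qualitative knowledge of the system's inputs mentioned above: the controller only decides whether to add inducer at each instant, not how much, which is exactly the setting in which piecewise affine models and switching control laws are natural.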