
Italian Institute of Technology

13 Projects, page 1 of 3
  • Funder: UK Research and Innovation Project Code: EP/I028773/1
    Funder Contribution: 97,667 GBP

    Robots have fulfilled their original promise to replace human counterparts in laborious, hazardous, and repetitive tasks mainly in the area of position control, which includes tasks such as pick-and-place of components, arc welding, grinding known objects, and even bipedal walking on fairly smooth, known ground. However, robots still find it hard to carry out stable force control tasks on uncertain objects or to walk on natural soft terrains (grass, sand, mud). Just as the difference between the way we use the left hand and the right hand cannot be explained by their biomechanics alone, the answer to robotic survival in uncertain environments does not come solely from building robots that resemble human bodies. Since the early 1980s, scientists have believed that the secret of stable interaction with natural compliant environments lies in the ability of the robot itself to be compliant. Neville Hogan's original work on impedance control was based on this concept. Since then, a considerable body of literature has accumulated on applying impedance control in force control applications such as rehabilitation, massaging, bipedal walking, exoskeletal robotics, and other direct interactions with humans. However, there is still no answer to how impedance control should be adaptively managed to sustain stability when the coupled dynamics of the robot and the environment become metastable. The theory of metastability states that an uncertain dynamic system can exhibit intermittent instability even though it stays stable most of the time. A human using a screwdriver is one example: the dynamic contact with the screw may stay stable most of the time, yet slip intermittently due to uncertainty in the friction between the screw and the surrounding medium. Even a human walker can fall in rare situations due to the same phenomenon.
However, an uncertain dynamic system can enhance its stability if it can predict where it is likely to fail. A number of recent advances in metastable systems use the mean first passage time (MFPT) as an indicator to assess the current control policy in an uncertain environment: the MFPT is the expected time to the next failure, given the current knowledge of the coupled dynamics of the robot and the environment. This project therefore aims to develop a unifying theory of impedance control for robots in dynamic contact with uncertain environments. It will produce a generic method that starts by performing stable hybrid position/force control on an uncertain environment with partially known dynamics and recursively builds a robust internal model, so that stable position/force control is maintained even when the environment changes its stiffness, viscosity, and inertia. An algorithm will then be developed that uses a locally linearised model of the coupled dynamic system to estimate the MFPT of the robot and the environment. This MFPT will be used in a novel real-time algorithm that adapts a bank of candidate impedance parameter sets and chooses the set best suited to the environment, so as to maximise the MFPT. Rigorous theoretical proofs of stability and experimental validation of the methods will be given. The project will use a custom-built experimental platform to evaluate and refine the fundamental theories and algorithms developed in this project. The PI will collaborate closely with the Shadow Robot Company, a UK-based SME that develops biomimetic robotic hands, and with the robotics group led by Professor Darwin Caldwell at the Italian Institute of Technology, where researchers strive to enable the humanoid robot iCub to interact with natural uncertain environments.
This project will therefore benefit from the wealth of experience the collaborators have already gathered on real robots interacting with natural environments.

  • Funder: UK Research and Innovation Project Code: EP/M02993X/1
    Funder Contribution: 98,357 GBP

    Superresolution encompasses a range of techniques for transcending the resolution limit of a sensor; it earned the 2014 Nobel Prize in Chemistry (for superresolved fluorescence microscopy). Superresolution is analogous to biological hyperacuity in vision and touch, where discrimination is finer than the spacing between sensory receptors. Superresolution research in visual imaging has impacted science from cell biology to medical scanning "in ways unthinkable in the mid-90s" (Editorial, Nature 2009). The success of this proposal will enable the widespread uptake of superresolution techniques in the domain of artificial tactile sensing, potentially impacting multiple application areas across robotics, from autonomous quality control in manufacturing, to sensorized grippers for autonomous manipulation, to sensorized prosthetic hands and medical probes in healthcare.
Proposed research: More specifically, the development of robust and accurate artificial touch is required for autonomous robotic systems to interact physically with complex environments, underlying the future robotization of broad areas of manufacturing, food production, healthcare and assisted living that presently rely on human labour. Currently, there are many designs for tactile sensors and various methodologies for perception, from which general principles are emerging, such as taking inspiration from human touch (Dahiya et al 2012), using statistical approaches to capture sensor and environment uncertainty, and combining tactile sensor control and perception (Prescott et al 2012). All application areas of robot touch are currently limited by the capabilities of tactile sensors. This First Grant proposal aims to demonstrate that tactile superresolution can radically improve tactile sensor performance and thus potentially impact all areas of robotics involving physical interaction with complex environments.
Visual superresolution has revolutionised the life sciences by enabling the imaging of nanoscale features within cells. Tactile superresolution has the potential to drive a step change in tactile robotics, with applications from quality control and autonomous manipulators in manufacturing (Yousef et al 2011) to sensorized prosthetics and probes in healthcare.
Proposed initial application domain: Currently, across the entire automobile industry, gap and flush quality controls are made manually by human operators using their hands to check the alignment between vehicle parts. Experts in the industry have informed me that human hands are used because modern vision-based measuring technologies (such as laser scanners) do not robustly detect sub-millimetre misalignments between parts of differing reflectivity and refractivity. An automated system using robot touch would be more reliable, enable traceability of defects, and move production towards a fully automated paradigm. The proposed research will culminate in a pilot study demonstrating that tactile superresolution can enable readily available tactile sensors to make gap and flush measurements to the requisite sub-millimetre tolerance, and showing how the sensors should be controlled during the tactile perception task. This will constitute a first step towards building a consortium of academic and industrial partners to develop a fully working prototype for test installation on a production line.
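A minimal sketch of the superresolution principle, assuming a hypothetical 1-D taxel array with Gaussian receptive fields (all sensor parameters here are invented for illustration): the contact location is recovered on a grid much finer than the taxel pitch by matching the readings against predicted response templates.

```python
import numpy as np

# Hypothetical 1-D tactile array: 8 taxels at 5 mm pitch, each responding
# to a point contact with a Gaussian fall-off of its distance (assumed model).
PITCH_MM = 5.0
TAXELS = np.arange(8) * PITCH_MM
SPREAD_MM = 4.0  # assumed receptive-field width

def taxel_response(contact_mm, noise_std=0.02, rng=None):
    """Noisy readings of all taxels for a point contact at contact_mm."""
    rng = np.random.default_rng() if rng is None else rng
    clean = np.exp(-0.5 * ((TAXELS - contact_mm) / SPREAD_MM) ** 2)
    return clean + noise_std * rng.standard_normal(TAXELS.size)

def superresolve(readings):
    """Contact location on a grid 50x finer than the taxel pitch: pick the
    candidate whose predicted response pattern best matches the readings."""
    grid = np.arange(TAXELS[0], TAXELS[-1], PITCH_MM / 50.0)
    templates = np.exp(-0.5 * ((TAXELS[None, :] - grid[:, None]) / SPREAD_MM) ** 2)
    errors = ((templates - readings[None, :]) ** 2).sum(axis=1)
    return float(grid[np.argmin(errors)])

true_contact = 12.3  # mm: deliberately between two taxels
estimate = superresolve(taxel_response(true_contact, rng=np.random.default_rng(0)))
print(f"true {true_contact} mm, estimated {estimate:.1f} mm")
```

Because every taxel's graded response contributes evidence, the estimate is far finer than the 5 mm pitch, the same hyperacuity effect the abstract describes.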

  • Funder: CHIST-ERA Project Code: CHIST-ERA-17-ORMR-001

    Grasping rigid objects has been studied reasonably well under a wide variety of settings. The common measure of success is checking whether the robot can hold an object for a few seconds. This is not enough. To obtain a deeper understanding of object manipulation, we propose (1) a task-oriented, part-based modelling of grasping and (2) BURG, our "castle" of setups, tools and metrics for community building around an objective benchmark protocol. The idea is to boost grasping research by focusing on complete tasks. This calls for attention to object parts, since they are essential for knowing how and where the gripper can grasp given the manipulation constraints imposed by the task. Moreover, parts facilitate knowledge transfer to novel objects, across different sources (virtual/real data) and grippers, providing for a versatile and scalable system. The part-based approach naturally extends to deformable objects, for which recognising the relevant semantic parts, regardless of the object's actual deformation, is essential to obtain a tractable manipulation problem. Finally, by focusing on parts we can deal more easily with environmental constraints, which are detected and used to facilitate grasping. Regarding the benchmarking of manipulation, robotics has so far suffered from incomparable grasping and manipulation work. Datasets cover only the object detection aspect. Object sets are difficult to obtain and not extensible, and neither scenes nor manipulation tasks are replicable. There are no common tools to meet the basic needs of setting up replicable scenes or reliably estimating object pose. Hence, with the BURG benchmark we propose to focus on community building by enabling and sharing tools for reproducible performance evaluation, including collecting data and feedback from different laboratories to study manipulation across different robot embodiments.
We will develop a set of repeatable scenarios spanning different levels of quantifiable complexity, involving the choice of objects, tasks and environments. Examples include fully quantified settings with layers of objects, then adding deformable objects and environmental constraints. The benchmark will include metrics defined to assess the performance both of low-level primitives (object pose, grasp point and type, collision-free motion) and of manipulation tasks (stacking, aligning, assembling, packing, handover, folding) requiring ordering as well as common-sense knowledge for semantic reasoning.
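One way such metrics might look in practice (a hedged sketch, not the BURG protocol itself): a pose-error measure for the low-level primitives and a tolerance-based success rate over a set of trials. The tolerances and toy trials below are invented for illustration:

```python
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Translation error (same units as t) and geodesic rotation error in
    radians between an estimated and a ground-truth object pose."""
    t_err = float(np.linalg.norm(t_est - t_gt))
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    r_err = float(np.arccos(np.clip(cos, -1.0, 1.0)))
    return t_err, r_err

def benchmark_score(trials, t_tol=0.005, r_tol=np.radians(5)):
    """Fraction of trials where the placed object is within both tolerances,
    one possible task-level success metric for a replicable setup."""
    errs = [pose_error(*tr) for tr in trials]
    return sum(1 for te, re in errs if te <= t_tol and re <= r_tol) / len(trials)

# Toy example: identity ground truth, one accurate and one sloppy placement
I = np.eye(3)
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1]])
trials = [
    (Rz(np.radians(2)), np.array([0.002, 0, 0]), I, np.zeros(3)),   # within tolerance
    (Rz(np.radians(20)), np.array([0.03, 0, 0]), I, np.zeros(3)),  # outside tolerance
]
print("success rate:", benchmark_score(trials))
```

A full benchmark would add per-task metrics (ordering correctness, deformation handling), but the pattern of shared, quantified tolerances is what makes results comparable across laboratories.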

  • Funder: UK Research and Innovation Project Code: BB/H023569/1
    Funder Contribution: 99,539 GBP

    The functional intricacy of the central nervous system (CNS) arises from the complex anatomical and dynamic interactions between the different types of neurones involved in specific networks. Hence, the encoding of information in neural circuits occurs through interactions between individual neurones as well as through the interplay within both microcircuits (made of a few neurones) and large-scale networks involving thousands to millions of cells. One of the great challenges of neuroscience today is to understand how these neural networks are formed and how they operate. Such a challenge can be resolved only through simultaneous recording from the thousands of neurones that become active during specific neuronal tasks. One experimental approach to fulfil this goal is to use multielectrode arrays (MEAs), which consist of several channels (electrodes) that can each record from (and/or stimulate) a few adjacent neurones within a particular area of the CNS. MEAs can be used in vitro to record from dissociated neuronal cultures, from brain slices, or from isolated retinas. These MEAs consist of assemblies of electrodes embedded in planar substrates. Typical commercial MEAs consist of 60-128 electrodes with a spacing of 100-200 µm. Considering that a generic neurone in the mammalian CNS has a diameter of about 10 µm, such MEAs obviously cannot convey information on the activity of all neurones involved in a specific network, but only on a sample of these cells. To overcome this under-sampling of activity, in this project we will use the Active Pixel Sensor (APS) MEA, a novel type of MEA platform developed in a NEST-EU project by our collaborator Luca Berdondini (Italian Institute of Technology, Genova). This MEA consists of 4,096 electrodes at near-cellular resolution (21 x 21 µm electrodes with 42 µm centre-to-centre separation, covering an active area of 2.5 mm x 2.5 mm), from all of which recording is possible at the same time.
We will use the APS MEA to record the spontaneous waves of activity present in the neonatal vertebrate retina. These waves occur during a short developmental period around birth and are known to play an important role in guiding the precise wiring of neural connections in the visual system, both at the retinal and extra-retinal levels. The APS MEA, thanks to its unmatched size and resolution, will give us new insights into the precise dynamics of these waves, beyond what has been achieved before. Recordings from such large-scale networks at near-cellular resolution generate extremely rich datasets, with the drawback that they are very large and difficult to handle; this necessitates new, powerful analytical tools that can decode, in a fast, efficient and user-friendly way, how cellular elements interact in the network. The development of such computational tools is the central goal of this project, while the experimental work on the retina provides a challenging and unique scientific context. The tools we plan to develop will yield parameters that will help us better understand network function, from the temporal firing patterns of individual neurones to how activity propagates within the network. We will also develop novel tools for easier visualisation of the dynamical behaviour of activity within the network. These tools will be written in a language that can easily be used by other investigators working with the same recording system or with other platforms of their choice. Finally, to ensure that these tools are accessible to the wider neurophysiology community, they will be deployed on CARMEN (Code Analysis, Repository and Modelling for e-Neuroscience), a new internet-based neurophysiology sharing resource designed to facilitate worldwide communication between collaborating neurophysiologists.
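As an illustration of the kind of analysis tool envisaged (a sketch on synthetic data, not the project's software), one simple way to track a propagating retinal wave on an electrode grid is to follow the per-frame centre of activity. The grid size, frame count, and synthetic wave below are illustrative assumptions:

```python
import numpy as np

def center_of_activity(frames):
    """Per-frame centroid of spiking activity on an electrode grid.
    frames: (T, rows, cols) array of spike counts per electrode."""
    T, n_rows, n_cols = frames.shape
    rows, cols = np.arange(n_rows), np.arange(n_cols)
    centers = np.full((T, 2), np.nan)
    for t in range(T):
        total = frames[t].sum()
        if total > 0:
            centers[t, 0] = (frames[t].sum(axis=1) * rows).sum() / total  # row
            centers[t, 1] = (frames[t].sum(axis=0) * cols).sum() / total  # col
    return centers

# Synthetic wave: a disc of activity sweeping across a 64x64 electrode array
r, c = np.ogrid[:64, :64]
frames = np.stack([(np.hypot(r - 32, c - (5 + 6 * t)) < 5).astype(float)
                   for t in range(10)])

centers = center_of_activity(frames)
speed_per_frame = np.hypot(*np.diff(centers, axis=0).T).mean()
print(f"mean wavefront drift: {speed_per_frame:.1f} electrodes/frame")
```

On real APS MEA data the same centroid trajectory, scaled by the 42 µm electrode pitch and the frame rate, would yield the wave's propagation speed and direction.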

  • Funder: UK Research and Innovation Project Code: EP/S001921/1
    Funder Contribution: 633,926 GBP

    Synthetic Biology (SynBio) is an emerging engineering discipline with an ambitious goal: empowering scientists to programme new functions into cells, just as we do with computers. Despite a thriving community and notable successes, however, writing "functioning algorithms" for cells remains extremely time-consuming. This is a roadblock to the engineering of mammalian cells, an area uniquely positioned to deliver potentially groundbreaking therapeutic applications, and it translates into high development costs that, in turn, limit the pace at which Synthetic Biology progresses towards applications. Model-Based Systems Engineering (MBSE) is the answer the engineering community found to similar problems and is widely used to streamline manufacturing. In this framework, mathematical models are used to screen candidate designs via simulations and to bring only the most promising solutions to testing. Despite being an engineering discipline, SynBio lacks an MBSE framework. This is largely due to three connected issues: (a) the scarcity of accurate mathematical models of parts (e.g. promoters) in the first place; this shortage (b) makes it difficult to "reverse engineer" the connection between a DNA sequence and the kinetics of the transcribed mRNA (e.g. between promoter sequence and leakiness of expression); which means that (c) the inverse "re-design" problem, i.e. finding the optimal DNA sequence of a part, cannot be solved, let alone automatically. With this fellowship, I aim to fill this gap and develop a "Model-Based Biosystem Engineering" (MBBE) framework to automate the Design-Build-Test-Learn (DBTL) cycle in Synthetic Biology. Given their role in cell and gene therapy, my team and I will focus on synthetic promoters for mammalian cells.
Prompted by the recent successes and challenges of CAR T cells (immune cells engineered to kill cancer cells), we will use the framework to engineer a hypoxia-inducible promoter that optimises a set of criteria we will determine and prioritise with our collaborator Prof. Chen at UCLA. We will first focus on the development of the MBBE framework; to this aim we will tackle the three issues mentioned above by: (a) developing a high-throughput microfluidic device that allows us to infer, with minimal experimental effort (via Optimal Experimental Design), reliable mathematical models of hundreds of variants of a promoter; (b) using these results to automatically learn/predict gene expression dynamics from the promoter sequence via machine learning; and (c) combining this prediction scheme with computational optimisation to identify and refine promoter sequences so that they satisfy given specifications and maximise pre-determined objectives. To develop a hypoxia-inducible promoter, we will start from an initial pool of 600 sequences (designed to cover as large a fraction of the design space as possible) and iterate twice over our automatic DBTL loop to finally obtain promoter(s) that can be used to overcome the current limitations of CAR T cells. Besides automating the DBTL cycle, the approach I propose has three main benefits: it allows us to obtain, and publicly share, reliable models (1) faster, as we will use Optimal Experimental Design methods to minimise experimental effort; (2) cost-effectively, as microfluidics drastically reduces the use of reagents and automation renders human intervention unnecessary; and (3) in a reproducible way, as all the data and the steps in the inference are tracked and immediately made publicly available.
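To make the modelling concrete, here is a minimal sketch (not the project's actual models) of a Hill-type inducible promoter: the basal rate alpha0 captures leakiness, and the induced/uninduced fold change at steady state is the kind of objective a candidate promoter sequence could be optimised for. All parameter values are invented:

```python
import numpy as np

def mrna_trajectory(hypoxia, alpha0=0.05, alpha=2.0, K=1.0, n=2.0,
                    delta=0.1, dt=0.1, t_max=100.0):
    """Simulate mRNA level under a Hill-type inducible promoter:
        dm/dt = alpha0 + alpha * h^n / (K^n + h^n) - delta * m
    where h is the inducer level and alpha0 is the promoter's leakiness
    (basal transcription). Simple forward-Euler integration."""
    steps = int(t_max / dt)
    m = 0.0
    out = np.empty(steps)
    for i in range(steps):
        production = alpha0 + alpha * hypoxia**n / (K**n + hypoxia**n)
        m += (production - delta * m) * dt
        out[i] = m
    return out

# Steady state approaches production/degradation; the induced/uninduced
# ratio ("fold change") is one design objective for a synthetic promoter.
off = mrna_trajectory(hypoxia=0.0)[-1]
on = mrna_trajectory(hypoxia=5.0)[-1]
print("fold change:", round(on / off, 1))
```

Fitting alpha0, alpha, K, n and delta to time-course data from hundreds of promoter variants is exactly the kind of inference the microfluidic platform and Optimal Experimental Design would make cheap, and the fitted parameters become the targets the sequence-to-function predictor learns.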

