
Shadow Robot Company Ltd

28 Projects, page 1 of 6
  • Funder: UK Research and Innovation | Project Code: EP/N03211X/2
    Funder Contribution: 263,165 GBP

    Living beings share the same embodiment for sensing and action. For instance, the spindle sensors that provide the feeling of a joint's angle and speed are embedded in the muscles that actuate that joint, and the tendon sensors that provide the feeling of force are likewise directly involved in actuating it. Does the function of these sensors change when the muscles are activated to take action? Does the co-activation of antagonistic muscles play a role not only in actuation, but also in perception? This project will investigate these questions through targeted experiments with human participants and with controllable-stiffness soft robots that provide greater access to internal variables.

    Recent experiments we have conducted on localising hard nodules in soft tissues using soft robotic probes have shown that tuning the stiffness of the probe can maximise the information gain in perceiving the hard nodule. We have also noticed that human participants use distinct force-velocity modulation strategies in the same task of localising a hard nodule in a soft tissue using the index finger. This raises the question of whether we can find quantitative criteria for controlling the internal impedance of a soft robotic probe so as to maximise the efficacy of manipulating a soft object to perceive its hidden properties, as in the physical examination of a patient's abdomen.

    In this project, we will therefore use carefully designed probing tasks, performed by both human participants and a soft robotic probe with controllable stiffness, to access measurable information such as muscle co-contraction and changes of speed and force, and to test several hypotheses about the role of internal impedance in perception and action. Finally, we will use a human-robot collaborative physical examination task to test the effectiveness of a new soft robotic probe with controllable stiffness, together with its stiffness and behaviour control algorithms. We will design and fabricate the novel soft robotic probe so that we can control the stiffness of its soft tissue, in which sensors will be embedded to obtain embodied haptic perception. We will also design and fabricate a novel soft abdomen phantom with internal organs of controllable stiffness to conduct palpation experiments. The innovation process for these two designs - the novel probe and the abdomen phantom - will be carried out in collaboration with three leading industrial partners in the respective areas. The new insights will drive a paradigm shift in the way we design soft robots that share a controllable-stiffness embodiment for both perception and action, in applications such as remote medical interventions, robotic proxies in shopping, disaster response, games, museums, security screening, and manufacturing.
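
    To make the information-gain criterion described above concrete, here is a minimal sketch of choosing a probe stiffness by maximising the expected entropy reduction of a belief over possible nodule positions. It is an illustration only: the Gaussian sensor model, the stiffness-to-noise mapping and all names are assumptions for exposition, not the project's actual probe model or algorithms.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def expected_information_gain(prior, positions, sigma):
            # Expected entropy reduction of the nodule-location belief after one
            # probe reading with Gaussian measurement noise sigma (a stand-in for
            # an empirically identified, stiffness-dependent sensor model).
            obs_grid = np.linspace(positions.min(), positions.max(), 101)
            h_prior = entropy(prior)
            gains, weights = [], []
            for z in obs_grid:
                lik = np.exp(-0.5 * ((z - positions) / sigma) ** 2)
                evidence = np.sum(lik * prior)
                if evidence <= 0:
                    continue
                posterior = lik * prior / evidence
                gains.append(h_prior - entropy(posterior))
                weights.append(evidence)
            weights = np.array(weights) / np.sum(weights)
            return float(np.dot(weights, gains))

        def best_stiffness(prior, positions, candidate_stiffness, noise_model):
            # Pick the stiffness setting whose (assumed) noise level yields the
            # largest expected information gain about the hidden nodule.
            scores = {k: expected_information_gain(prior, positions, noise_model(k))
                      for k in candidate_stiffness}
            return max(scores, key=scores.get)

        # Illustrative use: uniform belief over 20 candidate nodule positions and a
        # made-up noise model in which a mid-range stiffness senses best.
        positions = np.linspace(0.0, 1.0, 20)
        prior = np.full(20, 1.0 / 20)
        noise = lambda k: 0.05 + 0.4 * (k - 0.5) ** 2
        print(best_stiffness(prior, positions, [0.2, 0.5, 0.8], noise))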

  • Funder: UK Research and Innovation | Project Code: EP/S00453X/1
    Funder Contribution: 310,597 GBP

    Over the past 50 years, the use of robots in industry has increased steadily, and it has boomed in the last 10 years. In 2016, the average robot density (i.e. the number of robot units per 10,000 employees) in the manufacturing industries worldwide was 74; by region, this was 99 units in Europe, 84 in the Americas and 63 in Asia, with average annual growth rates (between 2010 and 2016) of 9% in Asia, 7% in the Americas and 5% in Europe. From 2018 to 2020, global robot installations are estimated to increase by at least 15% on average per year. The main market so far has been the automotive industry (an example of heavy manufacturing), where simple and repetitive robotic manipulation tasks are performed in very controlled settings by large and expensive robots, in dedicated areas of the factories that human workers are not allowed to enter for safety reasons. New growing markets for robots are consumer electronics and food/beverages (examples of light manufacturing) as well as other small and medium-sized enterprises (SMEs): in particular, the food and beverage industry increased robot orders by 12% each year between 2011 and 2015, and by 20% in 2016. However, in many cases the production processes of these industries require delicate handling and fine manipulation of several different items, posing serious challenges to the current capabilities of commercial robotic systems. With 71 robot units per 10,000 employees (in 2016), the UK is the only G7 country with a robot density below the world average of 74, ranking 22nd.

    Industry and the SME sector are in urgent need of a modernisation that would increase productivity and improve the working conditions (e.g. safety, engagement) of human workers: this requires the development and deployment of novel robotic technologies that can meet the needs of those businesses in which current robots are not yet effective. One of the main reasons why robots are not effective in these applications is the lack of robot intelligence: the ability to learn and adapt that is typical of humans. Indeed, robotic manipulation can be enhanced by relying on humans, both through interaction (humans as direct teachers) and through inspiration (humans as models). The aim of this project is therefore to develop a system for natural human demonstration of robotic manipulation tasks, combining immersive Virtual Reality technologies and smart wearable devices (to interface the human with the robot) with robot sensorimotor learning techniques and multimodal artificial perception (inspired by the human sensorimotor system). The robotic system will include a set of sensors that allow the real world to be reconstructed, in particular by integrating 3D vision with tactile information about contacts; the human user will access this artificial reconstruction through an immersive Virtual Reality environment that combines visual and haptic feedback. In other words, the user will see through the eyes of the robot and feel through the hands of the robot. Users will also be able to move the robot simply by moving their own limbs. This will allow human users to easily teach complex manipulation tasks to robots, and robots to learn efficient control strategies from the human demonstrations, so that they can then repeat the tasks autonomously in the future.

    Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick-and-place of rigid objects), but it cannot yet be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would give it much wider applicability (e.g. food handling). The expected results of this project will therefore boost productivity in a large number of industrial processes (economic impact) and improve the working conditions and quality of life of human workers in terms of safety and engagement (social impact).
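
    As a rough illustration of the demonstration-and-learning loop described above, the sketch below logs (observation, action) pairs while a human teleoperates the robot through a VR interface and then fits the simplest possible behavioural-cloning policy. The observation layout, the Cartesian velocity command and the linear policy are placeholder assumptions, not the project's actual interface or learning methods.

        import numpy as np

        class DemonstrationRecorder:
            # Logs (observation, action) pairs while the human teleoperates the
            # robot; the tracked hand motion becomes the end-effector command.
            def __init__(self):
                self.observations, self.actions = [], []

            def record(self, obs, hand_pose_delta):
                self.observations.append(np.asarray(obs, dtype=float))
                self.actions.append(np.asarray(hand_pose_delta, dtype=float))

            def dataset(self):
                return np.stack(self.observations), np.stack(self.actions)

        def fit_linear_policy(obs, act):
            # Behavioural cloning in its simplest form: a least-squares fit of a
            # linear map from observations to actions (a stand-in for the
            # sensorimotor-learning methods the project would actually use).
            X = np.hstack([obs, np.ones((len(obs), 1))])   # add a bias term
            W, *_ = np.linalg.lstsq(X, act, rcond=None)
            return lambda o: np.append(o, 1.0) @ W

        # Illustrative use with synthetic data standing in for recorded demos:
        # 6-D observations (e.g. object pose plus contact features) mapped to
        # 3-D end-effector velocity commands.
        rec = DemonstrationRecorder()
        rng = np.random.default_rng(0)
        for _ in range(200):
            obs = rng.normal(size=6)
            rec.record(obs, 0.1 * obs[:3] + 0.01 * rng.normal(size=3))
        policy = fit_linear_policy(*rec.dataset())
        print(policy(rng.normal(size=6)))                  # predicted command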

  • Funder: UK Research and Innovation | Project Code: EP/R02572X/1
    Funder Contribution: 12,256,900 GBP

    Nuclear facilities require a wide variety of robotics capabilities, giving rise to a range of extreme robotics and AI (RAI) challenges. NCNR brings together a diverse consortium of experts in robotics, AI, sensors, radiation and resilient embedded systems to address these complex problems. In high-gamma environments, human entries are not possible at all. In alpha-contaminated environments, air-fed suited human entries are possible, but they generate significant secondary waste (contaminated suits) and reduce worker capability. We have a duty to eliminate the need for humans to enter such hazardous environments wherever it is technologically possible. Nuclear robots will therefore typically be remote from their human controllers, creating significant opportunities for advanced telepresence. However, limited bandwidth and situational awareness demand increased intelligence and autonomous control capabilities on the robot, especially for performing complex manipulations. Shared control, in which human and AI collaboratively control the robot, will be critical because i) safety-critical environments demand a human in the loop, yet ii) complex remote actions are too difficult for a human to perform reliably and efficiently.

    Before decommissioning can begin, and while it is progressing, characterisation is needed. This can include 3D modelling of scenes, detection and recognition of objects and materials, detection of contaminants, measurement of the types and levels of radiation, and other sensing modalities such as thermal imaging. It will necessitate novel sensor design, advanced algorithms for robotic perception, and new kinds of robots to deploy sensors into hard-to-reach locations. To carry out remote interventions, both situational awareness for the remote human operator and guidance of autonomous or semi-autonomous robotic actions will need to be informed by real-time multi-modal vision and sensing, including real-time 3D modelling and semantic understanding of objects and scenes, active vision in dynamic scenes, and vision-guided navigation and manipulation.

    The nuclear industry is high-consequence, safety-critical and conservative. It is therefore critically important to evaluate rigorously how well human operators can control remote technology to perform the tasks that industry requires safely and efficiently. All NCNR research will be driven by a set of industry-defined use cases (WP1). Each use case is linked to industry-defined testing environments and acceptance criteria for performance evaluation in WP11. WPs 2-9 deliver a variety of fundamental RAI research, including radiation-resilient hardware, novel design of both robots and radiation sensors, advanced vision and perception algorithms, mobility and navigation, grasping and manipulation, multi-modal telepresence and shared control. The project is based on modular design principles: WP10 develops standards for modularisation and module interfaces, which will be met by a diverse range of robotics, sensing and AI modules delivered by WPs 2-9. WP10 will then integrate multiple modules onto a set of pre-commercial robot platforms, which will be evaluated against the end-user acceptance criteria in WP11. WP12 is devoted to technology transfer, in collaboration with numerous industry partners and the Shield Investment Fund, which specialises in venture capital investment in RAI technologies and in taking novel ideas through to fully fledged commercial deployments. Shield has ring-fenced £10 million of capital to run alongside all NCNR Hub research, to fund spin-out companies and the industrialisation of Hub IP. We also have rich international involvement, including NASA Jet Propulsion Laboratory and the Carnegie Mellon National Robotics Engineering Center as collaborators in the USA, and collaboration with the Japan Atomic Energy Agency to help us carry out test deployments of NCNR robots in the unique Fukushima mock-up testing facilities at the Naraha Remote Technology Development Center.
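
    A minimal sketch of the shared-control idea above: the operator's teleoperation command and an autonomous assistance command are blended according to the autonomy's confidence, with a hard speed limit keeping the safety-critical, human-in-the-loop system inside a conservative envelope. The blending rule, the confidence source and the limit are illustrative assumptions rather than the Hub's actual controllers.

        import numpy as np

        def shared_control_command(human_cmd, auto_cmd, confidence, v_max=0.1):
            # Blend the human command with the autonomous one; `confidence` in
            # [0, 1] is how certain the autonomy is about its goal (e.g. from a
            # grasp-pose estimate). Both the blend and the velocity limit are
            # illustrative choices.
            human_cmd = np.asarray(human_cmd, dtype=float)
            auto_cmd = np.asarray(auto_cmd, dtype=float)
            alpha = float(np.clip(confidence, 0.0, 1.0))
            cmd = (1.0 - alpha) * human_cmd + alpha * auto_cmd
            speed = np.linalg.norm(cmd)
            if speed > v_max:                  # never exceed the speed limit
                cmd *= v_max / speed
            return cmd

        # Example: the operator pushes roughly towards the target and the autonomy
        # refines the direction; the blended command stays within the limit.
        print(shared_control_command([0.08, 0.00, 0.02], [0.05, 0.03, 0.00], confidence=0.6))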

  • Funder: UK Research and Innovation | Project Code: EP/R031193/1
    Funder Contribution: 303,126 GBP

    How do you grasp a bottle of milk, nestling behind some yoghurt pots, in a cluttered fridge? Whilst humans are able to use visual information to plan and select such skilled actions with external objects with great ease and rapidity - a facility acquired over the history of the species and as a child develops - *robots struggle*. Indeed, whilst artificial intelligence has made great leaps in beating the best of humanity at tasks such as chess and Go, the planning and execution abilities of today's robotic technology are trumped by the average toddler. Given the complex and unpredictable world within which we find ourselves, these apparently trivial tasks are the product of highly sophisticated neural computations that generalise and adapt to changing situations, continually selecting between multiple goals and action options. Our aim is to investigate how such computations could be transferred to robots, to enable them to manipulate objects more efficiently and in a more human-like way than is presently the case, and to perform manipulations presently beyond the state of the art.

    Let us return to the fridge example. You first need to decide which yoghurt pot is best to remove to allow access to the milk bottle, and then generate the appropriate movements to grasp the pot safely - the *pre-contact* phase of prehension. You then need to decide what type of forces to apply to the pot (push it to the left or the right, nudge it, or perhaps lift it up and place it on another shelf) - the *contact* phase. Whilst these steps happen with speed and automaticity in real time, we will probe these processes in laboratory-controlled situations, systematically examining the pre-contact and contact phases of prehension to determine what factors (spatial position, size of the pot, texture of the pot, etc.) bias humans to choose one action, or series of actions, over other possibilities. We hypothesise that we can extract a set of high-level rules, expressed using qualitative spatio-temporal formalisms, that capture the essence of such expertise, in combination with more quantitative lower-level representations and reasoning. We will develop a computational model to provide a formal foundation for testing hypotheses about the factors biasing behaviour, and ultimately use this model to predict the behaviour most likely to occur in response to a given perceptual (visual) input in this context.

    We reason that a computational understanding of how humans perform these actions can bridge the robot-human skill gap. State-of-the-art robot motion and manipulation planners use probabilistic methods (random sampling, e.g. RRTs and PRMs, is the dominant motion-planning approach in the field today), and hence cannot explain their decisions, much like the "black box" machine learning methods mentioned in the call, which produce inscrutable models. However, if robots can generate human-like interactions with the world, and if they can use knowledge of human action selection for planning, then they will be able to explain why they perform manipulations in a particular way; this will also facilitate "legible manipulation", i.e. action that is predictable by humans because it closely corresponds to how humans would behave - a goal of some recent research in the robotics community.

    The work will shed light on the use of perceptual information in the control of action - a topic of great academic interest - and will simultaneously have direct relevance to a number of practical problems facing roboticists seeking to control robots in cluttered environments, from a robot picking items in a warehouse to novel surgical technologies requiring discrimination between healthy and cancerous tissue.
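
    The sketch below illustrates, in a deliberately simplified form, how qualitative spatial predicates and quantitative costs might combine to decide which object to move and how, as in the fridge example above. The predicates, action set and cost terms are invented for exposition; in the project such rules would be extracted from and tested against human data.

        from dataclasses import dataclass

        @dataclass
        class Obstacle:
            name: str
            x: float        # lateral offset from the reach line to the target (m)
            depth: float    # distance from the shelf front (m)
            height: float   # object height (m)

        def blocks_reach(ob, corridor_halfwidth=0.05):
            # Qualitative predicate: the object sits inside the straight-line
            # reach corridor to the target.
            return abs(ob.x) < corridor_halfwidth

        def candidate_actions(ob):
            # Contact-phase options for one obstacle, each with a heuristic cost.
            push_dir = "push_left" if ob.x < 0 else "push_right"      # push to the nearer free side
            yield (push_dir, ob.name, 1.0 + ob.depth)                 # deep objects are harder to push
            yield ("lift_and_place", ob.name, 2.0 + 5.0 * ob.height)  # tall objects are riskier to lift

        def select_action(obstacles):
            # Pick the cheapest action that clears a blocking obstacle.
            options = [a for ob in obstacles if blocks_reach(ob) for a in candidate_actions(ob)]
            if not options:
                return ("reach_directly", None, 0.0)
            return min(options, key=lambda a: a[2])

        shelf = [Obstacle("yoghurt_A", 0.02, 0.10, 0.08),
                 Obstacle("yoghurt_B", -0.20, 0.05, 0.08)]
        print(select_action(shelf))   # -> push yoghurt_A aside before reaching for the milk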

  • Funder: UK Research and Innovation | Project Code: EP/V024868/1
    Funder Contribution: 1,518,510 GBP

    Although we are still far from 'artificial general intelligence' - the broad and deep capability of a machine to comprehend its surroundings - progress has been made in the last few years towards a more specialised AI: the ability to address well-defined, specific goals in a given environment effectively, which is the kind of task-oriented intelligence that is part of many human jobs. Much of this progress has been enabled by deep reinforcement learning (DRL), one of the most promising and fastest-growing areas within machine learning. In DRL, an autonomous decision-maker - the "agent" - learns how to make optimal decisions that will eventually lead to a final goal. DRL holds the promise of enabling autonomous systems to learn large repertoires of collaborative and adaptive behavioural skills without human intervention, with applications in a range of settings from simple games to industrial process automation to modelling human learning and cognition.

    Many real-world applications are characterised by the interplay of multiple decision-makers that operate in the same shared-resource environment and need to accomplish goals cooperatively. For instance, some of the most advanced industrial multi-agent systems in the world today are assembly lines and warehouse management systems. Whether the agents are robots, autonomous vehicles or clinical decision-makers, there is a strong desire for, and increasing commercial interest in, these systems: they are attractive because they can operate on their own in the world, alongside humans, under realistic constraints (e.g. guided by only partial information and with limited communication bandwidth). This research programme will extend the DRL methodology to systems comprising many interacting agents that must cooperatively achieve a common goal: multi-agent DRL, or MADRL.
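
    As a toy illustration of the cooperative multi-agent setting described above, the sketch below trains two independent learners with a shared reward on a trivial coordination game. In MADRL the tabular values would be replaced by deep networks and the game by a rich, partially observed environment; everything here is a didactic stand-in, not the programme's methods.

        import numpy as np

        rng = np.random.default_rng(0)

        # Two cooperating agents, each choosing one of two actions; they receive a
        # shared reward of +1 only when their choices match. Each agent learns
        # independently from the common reward signal.
        n_agents, n_actions = 2, 2
        q = np.zeros((n_agents, n_actions))     # per-agent action values
        alpha, epsilon = 0.1, 0.2               # learning rate, exploration rate

        def joint_reward(actions):
            return 1.0 if len(set(actions)) == 1 else 0.0

        for step in range(2000):
            actions = [int(rng.integers(n_actions)) if rng.random() < epsilon
                       else int(np.argmax(q[i])) for i in range(n_agents)]
            r = joint_reward(actions)           # fully cooperative: one common reward
            for i, a in enumerate(actions):
                q[i, a] += alpha * (r - q[i, a])   # stateless Q-learning update

        print(q)   # both agents end up preferring the same (coordinated) action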

