
handle: 2117/14016, 10261/96813
Recognizing manipulations performed by a human, and transferring and executing them on a robot, is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scene in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this, we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments changes in a discontinuous way, and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple sub-string search algorithms, SECs can be compared, and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.
Keywords: Semantic scene graphs, Unsupervised learning, Pattern recognition systems, Object categorization, Action recognition, Automation, Affordances, Object-action complexes (OACs)
UPC subject areas (Àrees temàtiques de la UPC): Computer science::Automatic control; Telecommunication engineering::Signal processing::Pattern recognition
INSPEC classification (Classificació INSPEC): Automation; Pattern recognition
citations: 140 (an alternative to the "Influence" indicator; also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
popularity: Top 10% (reflects the "current" impact/attention, the "hype", of an article in the research community at large, based on the underlying citation network)
influence: Top 1% (reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
impulse: Top 1% (reflects the initial momentum of an article directly after its publication, based on the underlying citation network)
views: 145
downloads: 206