handle: 2117/423623
This work discusses segmentation and gesture recognition in human-driven robotic trajectories, a technique with applications in several sectors, such as training for robot-assisted minimally invasive surgery (RMIS). By decomposing entire movements (gestures) into smaller actions (sub-gestures), we can address gesture recognition accurately. This paper extends a bottom-up approach used in surgical gesture segmentation and incorporates natural language processing (NLP) techniques, matching sub-gestures with letters and treating gestures as words. We evaluated our algorithm using two different datasets with trajectories in 2D and 3D. This NLP-inspired model obtains an average F1-score of 94.25% in the segmentation task, an accuracy of 87.05% in the learning stage, and an overall accuracy of 88.79% in fully automated execution. These results indicate that the method effectively identifies and interprets new surgical gestures autonomously, without the need for human intervention.
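The record contains no code, but the letters-to-words analogy in the abstract can be illustrated with a minimal sketch. Everything below (the function name, the label alphabet, and the gesture dictionary) is a hypothetical illustration, not taken from the paper; the authors' actual matching procedure may differ. The idea: sub-gesture labels act as characters, and a recognized gesture is the dictionary "word" most similar to the observed character string, here scored with a standard string-similarity measure, which also tolerates some segmentation noise.

```python
# Minimal sketch of the NLP-inspired matching idea described above.
# All names and the gesture dictionary are hypothetical illustrations.
from difflib import SequenceMatcher

# Hypothetical dictionary: each known gesture ("word") is a sequence of
# sub-gesture labels ("letters") produced by the bottom-up segmentation.
GESTURE_DICTIONARY = {
    "reach": "abb",
    "grasp": "acd",
    "pull":  "bdd",
}

def recognize_gesture(sub_gestures: str) -> tuple[str, float]:
    """Return the dictionary gesture whose letter string best matches
    the observed sub-gesture string, with a similarity score in [0, 1]."""
    best_name, best_score = "", 0.0
    for name, word in GESTURE_DICTIONARY.items():
        score = SequenceMatcher(None, sub_gestures, word).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

if __name__ == "__main__":
    # A segmented trajectory yielded the sub-gesture string "acd":
    print(recognize_gesture("acd"))  # -> ('grasp', 1.0)
```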
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This work was partially funded by CSIC Project 202350E080 ClothIRI: Robotic Cloth Manipulation at IRI. G. Tapia was also funded by JAE Intro 2023 Grant with reference EX1208.
Peer Reviewed
Robot programming, Classification algorithms, Àrees temàtiques de la UPC::Informàtica::Robòtica, Natural language processing, Trajectory, Classificació INSPEC::Automation::Robots, Robot kinematics, Gesture recognition, Minimally invasive surgery, Training, Three-dimensional displays, Manuals, Robots, Accuracy
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources. This is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Usage | Count |
|---|---|
| Views | 43 |
| Downloads | 9 |

Views and downloads provided by UsageCounts