Clinically useful and efficient assessment of balance during standing and walking is especially challenging in patients with neurological disorders. However, rehabilitation robots could facilitate assessment procedures and improve their clinical value. We present a short overview of balance assessment in clinical practice and in posturography. Based on this overview, we evaluate the potential use of robotic tools for such assessment. The novelty and assumed main benefits of using robots for assessment are their ability to assess ‘severely affected’ patients by providing assistance-as-needed, as well as to provide consistent perturbations during standing and walking while measuring the patient’s reactions. We provide a classification of robotic devices along three aspects relevant to their potential application for balance assessment: 1) how the device interacts with the body, 2) in what sense the device is mobile, and 3) on what surface the person stands or walks when using the device. As examples, nine types of robotic devices are described, classified and evaluated for their suitability for balance assessment. Two example cases of robotic assessments based on perturbations during walking are presented. We conclude that robotic devices are promising and can become useful and relevant tools for the assessment of balance in patients with neurological disorders, both in research and in clinical use. Robotic assessment holds the promise of providing increasingly detailed assessment that allows rehabilitation training to be individually tailored, which may eventually improve training effectiveness.
Journal of NeuroEngineering and Rehabilitation, 14(1). ISSN: 1743-0003
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-017-0273-7&type=result"></script>');
-->
</script>
Open access: Green, Gold | Citations: 37 | Popularity: Top 10% | Influence: Top 10% | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-017-0273-7&type=result"></script>');
-->
</script>
Collaborative robots, or cobots, are designed to work alongside humans and to alleviate their physical burdens, such as lifting heavy objects or performing tedious tasks. Ensuring the safety of human–robot interaction (HRI) is paramount for effective collaboration. To achieve this, it is essential to have a reliable dynamic model of the cobot that enables the implementation of torque control strategies. These strategies aim to achieve accurate motion while minimizing the amount of torque exerted by the robot. However, modeling the complex non-linear dynamics of cobots with elastic actuators poses a challenge for traditional analytical modeling techniques. Instead, the cobot's dynamics need to be learned through data-driven approaches rather than derived from analytical equations. In this study, we propose and evaluate three machine learning (ML) approaches based on bidirectional recurrent neural networks (BRNNs) for learning the inverse dynamic model of a cobot equipped with elastic actuators. We train our ML approaches on a representative dataset of the cobot's joint positions, velocities, and corresponding torque values. The first ML approach uses a non-parametric configuration, while the other two implement semi-parametric configurations. All three ML approaches outperform the rigid-body dynamic model provided by the cobot's manufacturer in terms of torque precision while maintaining their generalization capabilities and real-time operation, thanks to the optimized sample dataset size and network dimensions. Despite the similarity in torque estimation across these three configurations, the non-parametric configuration was specifically designed for worst-case scenarios in which the robot dynamics are completely unknown. Finally, we validate the applicability of our ML approaches by integrating the worst-case non-parametric configuration as a controller within a feedforward loop. We verify the accuracy of the learned inverse dynamic model by comparing it to the actual cobot performance. Our non-parametric architecture outperforms the robot's default factory position controller in terms of accuracy.
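As a rough illustration of the non-parametric idea described in this abstract, the sketch below uses a bidirectional LSTM to map sequences of joint positions and velocities to joint torques. It assumes PyTorch, a 7-joint arm, and illustrative layer sizes; none of these details are taken from the paper.

```python
# Minimal sketch of a non-parametric bidirectional RNN for inverse dynamics:
# input sequences of joint positions and velocities, output joint torques.
# The 7-joint assumption and layer sizes are illustrative only.
import torch
import torch.nn as nn

class BRNNInverseDynamics(nn.Module):
    def __init__(self, n_joints=7, hidden=64):
        super().__init__()
        # Each timestep feeds joint positions and velocities (2 * n_joints features).
        self.rnn = nn.LSTM(input_size=2 * n_joints, hidden_size=hidden,
                           num_layers=2, batch_first=True, bidirectional=True)
        # Map the concatenated forward/backward hidden states to one torque per joint.
        self.head = nn.Linear(2 * hidden, n_joints)

    def forward(self, q_qdot):            # q_qdot: (batch, time, 2 * n_joints)
        features, _ = self.rnn(q_qdot)
        return self.head(features)        # (batch, time, n_joints) estimated torques

# Training outline: minimize the error between predicted and measured torques.
model = BRNNInverseDynamics()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```

A semi-parametric variant, in the spirit of the other two configurations mentioned above, would additionally feed the rigid-body model's torque estimate to the network or train it to predict only the residual torque.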
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.3389/fnbot.2023.1166911&type=result"></script>');
-->
</script>
Open access: Green, Gold | Citations: 2 | Popularity: Average | Influence: Average | Impulse: Average
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.3389/fnbot.2023.1166911&type=result"></script>');
-->
</script>
The analysis of multimodal data collected by innovative imaging sensors, Internet of Things devices, and user interactions can provide smart and automatic remote monitoring of Parkinson's and Alzheimer's patients and reveal valuable insights for early detection and/or prevention of events related to their health. This article describes a novel system that involves data capturing and multimodal fusion to extract relevant features, analyze data, and provide useful recommendations. The system gathers signals from diverse sources in health monitoring environments, understands the user's behavior and context, and triggers appropriate actions to improve the patient's quality of life. The system offers a multimodal, multi-patient, versatile approach not present in current developments. It also offers comparable or improved results for the detection of abnormal behavior in daily motion. The system was implemented and tested for 10 weeks in real environments involving 18 patients.
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/mmul.2018.011921232&type=result"></script>');
-->
</script>
Open access: Green, Hybrid | Citations: 40 | Popularity: Top 10% | Influence: Top 10% | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/mmul.2018.011921232&type=result"></script>');
-->
</script>
End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for the upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM) together with the clinical assessment scales offers a comprehensive assessment to the therapist. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed on the upper arm. The proposed algorithm is based on the inverse of the augmented Jacobian, as in the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815-28, 2015). However, the estimation of the elbow joint location is performed through the computation of the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb necessary to start the integration method, a protocol to manually measure the upper arm and forearm lengths, and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm while healthy subjects performed upper limb movements holding the end effector of the seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the 'PUPArm' planar robot with three post-stroke patients. The proposed algorithm reports a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean of the seven joints), with high correlation in all the joints, with respect to the real upper limb configuration acquired through the optoelectronic system. The estimation of the upper limb joints through both algorithms reveals an instability in the previous algorithm when shoulder movements appear, due to the inevitable trunk compensation in post-stroke patients. The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be accurately determined and could become an objective assessment parameter for a comprehensive assessment.
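To make the reconstruction idea more concrete, the sketch below shows the damped least-squares correction step that an augmented-Jacobian inverse typically relies on. It is not the paper's exact algorithm; forward_kinematics and jacobian are hypothetical helpers standing in for a 7-DoF upper-limb model, and the damping value is an assumption.

```python
# One correction step of an augmented-Jacobian inverse: joint angles are adjusted
# so that the model's elbow and wrist positions track the measured ones.
import numpy as np

def update_joint_angles(q, x_measured, forward_kinematics, jacobian, damping=1e-2):
    """q: (7,) joint angles; x_measured: (6,) stacked elbow and wrist positions."""
    x_model = forward_kinematics(q)            # (6,) model elbow + wrist positions
    J = jacobian(q)                            # (6, 7) augmented Jacobian
    error = x_measured - x_model
    # Damped pseudo-inverse keeps the update stable near singular configurations.
    JtJ = J.T @ J + damping * np.eye(J.shape[1])
    dq = np.linalg.solve(JtJ, J.T @ error)
    return q + dq                              # corrected joint angles for this sample
```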
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-018-0348-0&type=result"></script>');
-->
</script>
Open access: Green, Gold | Citations: 29 | Popularity: Top 10% | Influence: Top 10% | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-018-0348-0&type=result"></script>');
-->
</script>
pmid: 31689205
The integration of wearable devices into humans' daily lives has grown significantly in recent years and continues to affect different aspects of quality of life. Thus, ensuring the reliability of decisions becomes essential in biomedical applications, while remaining a major challenge for battery-powered wearable technologies. Transferring the complex and energy-consuming computations to fog or cloud nodes can significantly reduce the energy consumption of wearable devices and result in a longer lifetime of these systems on a single battery charge. In this work, we aim to distribute the complex and energy-consuming machine-learning computations between the edge, fog, and cloud, based on a notion of self-awareness that takes into account the complexity and reliability of the algorithm. We also model and analyze the trade-offs in terms of energy consumption, latency, and performance of different Internet of Things (IoT) solutions. We consider the epileptic seizure detection problem as our real-world case study to demonstrate the importance of our proposed self-aware methodology.
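The following sketch illustrates one plausible form of such a self-aware edge/fog/cloud policy: a lightweight on-device detector keeps decisions local when it is confident and escalates otherwise. The thresholds and helper functions are assumptions for illustration, not the paper's implementation.

```python
# Illustrative self-aware offloading policy for a wearable seizure detector:
# escalate from edge to fog to cloud only when the local confidence is low.
def classify_with_self_awareness(window, edge_model, send_to_fog, send_to_cloud,
                                 edge_threshold=0.9, fog_threshold=0.95):
    label, confidence = edge_model(window)        # low-power on-device inference
    if confidence >= edge_threshold:
        return label                              # trust the edge decision, save energy
    label, confidence = send_to_fog(window)       # heavier model on a nearby gateway
    if confidence >= fog_threshold:
        return label
    return send_to_cloud(window)                  # full-complexity model in the cloud
```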
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/tbcas.2019.2951222&type=result"></script>');
-->
</script>
Open access: Green, Hybrid | Citations: 28 | Popularity: Top 10% | Influence: Top 10% | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/tbcas.2019.2951222&type=result"></script>');
-->
</script>
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-016-0148-3&type=result"></script>');
-->
</script>
Open access: Green, Gold | Citations: 132 | Popularity: Top 1% | Influence: Top 10% | Impulse: Top 1%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-016-0148-3&type=result"></script>');
-->
</script>
Robotic hands that embed human motor control principles in their mechanical design are attracting increasing interest thanks to their simplicity and robustness, combined with good performance. Another key aspect of these hands is that humans can use them very effectively thanks to the similarity of their behavior to that of real hands. Nevertheless, controlling more than one degree of actuation remains a challenging task. In this paper, we take advantage of these characteristics in a multi-synergistic prosthesis. We propose an integrated setup composed of the Pisa/IIT SoftHand 2 and a control strategy that simultaneously and proportionally maps human hand movements to the robotic hand. The control technique is based on a combination of non-negative matrix factorization and linear regression algorithms. It also features real-time continuous posture compensation of the electromyographic signals based on an IMU. The algorithm is tested on five healthy subjects through an experiment in a virtual environment. In a separate experiment, the efficacy of the posture compensation strategy is evaluated on five healthy subjects and, finally, the whole setup is successfully tested in performing realistic daily-life activities.
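As a toy illustration of the mapping described above (with the IMU-based posture compensation omitted), the sketch below factorizes rectified EMG envelopes with non-negative matrix factorization and regresses the synergy activations onto two hand-actuation references using scikit-learn. The array shapes, the number of synergies, and the random placeholder data are assumptions, not details from the paper.

```python
# Synergy-based EMG-to-hand mapping: NMF extracts activations, linear regression
# maps them to reference commands for two degrees of actuation.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression

emg = np.abs(np.random.rand(1000, 8))      # placeholder for (samples, channels) EMG envelopes
hand_ref = np.random.rand(1000, 2)         # placeholder for (samples, 2) reference activations

nmf = NMF(n_components=4, max_iter=500)
activations = nmf.fit_transform(emg)       # (samples, 4) synergy activations
regressor = LinearRegression().fit(activations, hand_ref)

# Online use: project new EMG onto the learned synergies, then predict hand commands.
new_activations = nmf.transform(np.abs(np.random.rand(1, 8)))
hand_command = regressor.predict(new_activations)
```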
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/icorr.2017.8009437&type=result"></script>');
-->
</script>
Open access: Green | Citations: 11 | Popularity: Average | Influence: Average | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1109/icorr.2017.8009437&type=result"></script>');
-->
</script>
The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand the neural control of movement, and for applications in neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches from robotics and neuroscience to study the properties and applications of the concept of synergies has generated a number of multidisciplinary cooperative projects, among which is the recently concluded 4-year European project “The Hand Embodied” (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of the neuroscientific bases of hand synergies and introduce how robotics has leveraged insights from neuroscience for innovative hardware and controller design in biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies.
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1016/j.plrev.2016.02.001&type=result"></script>');
-->
</script>
Open access: Green, Hybrid | Citations: 198 | Popularity: Top 1% | Influence: Top 10% | Impulse: Top 1%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1016/j.plrev.2016.02.001&type=result"></script>');
-->
</script>
For stroke survivors, balance deficits that persist after the completion of the rehabilitation process lead to a significant risk of falls. We have recently developed a balance-assessment robot (BAR-TM) that enables assessment of balancing abilities during walking. The purpose of this study was to test the feasibility of using the BAR-TM in an experimental perturbed-balance training program with a selected high-functioning stroke survivor. A control participant and an individual with chronic right-side hemiparesis post-stroke were studied. The individual post-stroke underwent thirty sessions of perturbed-balance training that involved walking on an instrumented treadmill while the BAR-TM delivered random pushes to the participant's pelvis; these pushes were in various directions, at various speeds, and had various perturbation amplitudes. We assessed kinematic, kinetic, electromyographic, and spatio-temporal responses to outward-directed perturbations of amplitude 60 N (before training) and 60 N and 90 N (after training), commencing on contact of either the non-paretic left foot (LL-NP/L perturbation) or the paretic right foot (RR-P/R perturbation), while the treadmill was running at a speed of 0.4 m/s. Before training, the individual post-stroke primarily responded to LL-NP/L perturbations with an in-stance response on the non-paretic leg, in a similar way to the control participant. After training, the individual post-stroke added adequate stepping by making a cross-step with the paretic leg, which enabled successful rejection of the perturbation at both lower and higher amplitudes. Before training, the individual post-stroke primarily responded to RR-P/R perturbations with fast cross-stepping using the left, non-paretic leg, while the in-stance response was entirely missing. After training, the stepping with the non-paretic leg was supplemented by a partially recovered ability to exercise in-stance responses on the paretic leg, and this enabled successful rejection of the perturbation at both lower and higher amplitudes. The assessed kinematic, kinetic, electromyographic, and spatio-temporal responses provided insight into the relative share of each balancing strategy that the selected individual post-stroke used to counteract LL-NP/L and RR-P/R perturbations before and after the training. The main finding of this case-control study is that robot-based perturbed-balance training may be a feasible approach. It resulted in an improvement in the selected post-stroke participant's ability to counteract outward-directed perturbations. ClinicalTrials.gov Identifier: NCT03285919 (retrospectively registered).
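Purely as an illustration of the perturbation protocol described above, the sketch below triggers a pelvis push of preset amplitude and random direction at a detected foot contact. The helper names, trigger probability, and timing are assumptions for illustration, not details of the BAR-TM controller.

```python
# Outline of a perturbation trigger: on a detected heel strike of the targeted leg,
# occasionally command a pelvis push with a random direction and preset amplitude.
import random

DIRECTIONS = ["forward", "backward", "left", "right"]

def maybe_perturb(foot_contact_detected, apply_pelvis_force,
                  amplitude_n=60.0, probability=0.5):
    """Call once per detected heel strike of the targeted leg."""
    if foot_contact_detected and random.random() < probability:
        direction = random.choice(DIRECTIONS)
        apply_pelvis_force(direction=direction, amplitude=amplitude_n)
        return direction
    return None
```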
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-018-0373-z&type=result"></script>');
-->
</script>
Open access: Green, Gold | Citations: 22 | Popularity: Top 10% | Influence: Top 10% | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1186/s12984-018-0373-z&type=result"></script>');
-->
</script>
doi: 10.1117/12.2255090, 10.15488/2540
A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIPs) of 4D-OCT images. The estimated VOI location is used to conveniently show the corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft-tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance. © 2017 SPIE.
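To clarify the projections used here, the sketch below computes a full maximum intensity projection and a thin-slab MIP centered on a tracked VOI position for a single 3D OCT volume. The slab half-width is an illustrative parameter, not a value from the paper.

```python
# Maximum intensity projection (MIP) and thin-slab MIP of a 3D volume, as used to
# reduce each 4D-OCT frame to 2D images for tracking and display.
import numpy as np

def full_mip(volume, axis=2):
    """Project the whole volume along one axis by taking the maximum intensity."""
    return volume.max(axis=axis)

def thin_slab_mip(volume, center, half_width=5, axis=2):
    """Project only a thin slab around the tracked VOI position along the chosen axis."""
    lo = max(center - half_width, 0)
    hi = min(center + half_width + 1, volume.shape[axis])
    slab = np.take(volume, indices=range(lo, hi), axis=axis)
    return slab.max(axis=axis)
```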
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1117/12.2255090&type=result"></script>');
-->
</script>
Open access: Green | Citations: 6 | Popularity: Average | Influence: Average | Impulse: Top 10%
<script type="text/javascript">
<!--
document.write('<div id="oa_widget"></div>');
document.write('<script type="text/javascript" src="https://www.openaire.eu/index.php?option=com_openaire&view=widget&format=raw&projectId=10.1117/12.2255090&type=result"></script>');
-->
</script>