
Systematic review of AI/ML applications in multi-domain robotic rehabilitation: trends, gaps, and future directions

Abstract

Robotic technology is expected to transform rehabilitation settings by providing precise, repetitive, and task-specific interventions, thereby potentially improving patients’ clinical outcomes. Artificial intelligence (AI) and machine learning (ML) have been widely applied in different areas to support robotic rehabilitation, from controlling robot movements to real-time patient assessment. To provide an overview of the current landscape and the impact of AI/ML use in robotic rehabilitation, we performed a systematic review focusing on the use of AI and robotics in rehabilitation from a broad perspective, encompassing different pathologies and body districts, and considering both motor and neurocognitive rehabilitation. We searched the Scopus and IEEE Xplore databases, focusing on the studies involving human participants. After article retrieval, a tagging phase was carried out to devise a comprehensive and easily interpretable taxonomy: its categories include the aim of the AI/ML within the rehabilitation system, the type of algorithms used, and the location of robots and sensors. The 201 selected articles span multiple domains and diverse aims, such as movement classification, trajectory prediction, and patient evaluation, demonstrating the potential of ML to revolutionize personalized therapy and improve patient engagement. ML is reported as highly effective in predicting movement intentions, assessing clinical outcomes, and detecting compensatory movements, providing insights into the future of personalized rehabilitation interventions. Our analysis also reveals pitfalls in the current use of AI/ML in this area, such as potential explainability issues and poor generalization ability when these systems are applied in real-world settings.

Background

Rehabilitation refers to a multidisciplinary approach aimed at restoring, improving, or maintaining an individual's physical, cognitive, emotional, and social functioning following illness, injury, or disability [1]. The origins of rehabilitation sciences trace back to the early twentieth century. Over the decades, the scope of rehabilitation has broadened significantly, encompassing various disciplines to cater to the diverse needs of individuals. This evolution reflects a paradigm shift from a predominantly medical model of rehabilitation to a more holistic, patient-centered approach that considers the physical, psychological, and social dimensions of recovery. Traditionally, rehabilitation has been classified into two main categories: motor and cognitive rehabilitation. Motor rehabilitation primarily focuses on restoring physical functioning and mobility, while cognitive rehabilitation targets cognitive processes such as memory, attention, and executive functions.

Despite the beneficial effects of rehabilitation, traditional rehabilitation approaches suffer from several limitations [2], such as high clinical demand [3], a clinical-centered model of rehabilitation [4], and limited adaptability to patients’ needs and characteristics [5].

In recent years, technological advances have overcome some barriers to the implementation of rehabilitation. For example, telerehabilitation can improve accessibility [6] and digital technologies can improve compliance and monitoring of home exercise [7].

Among the currently available technologies, robotics has arguably had the most transformative impact on how rehabilitation is provided. Indeed, robotic neurorehabilitation addresses the major challenges of traditional rehabilitation by offering precise, repetitive, and task-specific interventions, enhancing the potential for neurorecovery [8]. These devices are often equipped with sensors to monitor and adapt to patients’ performance, facilitating personalized and adaptive rehabilitation regimens [9]. Furthermore, sensors allow the monitoring of different physiological signals, thus providing an objective, operator-independent, and measurable assessment of the patient, both to design a proper rehabilitation plan spanning multiple sessions and to monitor the rehabilitation treatment, adapting it while the single session is being performed [10]. Interestingly, the application of robotic devices has been quite pervasive and broad in scope, as their use in several different conditions, such as stroke and autism [11], demonstrates.

More recently, the integration of artificial intelligence (AI) and machine learning (ML) into robotic rehabilitation is bringing forth a wide range of opportunities to address the shortcomings of traditional approaches. The rationale for integrating AI into robotic rehabilitation lies mainly in the need for more personalized, dynamic, and responsive interventions [12]. AI algorithms, with their ability to analyze real-time data, adapt to individual progress, and optimize therapeutic protocols, address the limitations of traditional rehabilitation approaches. This combination of AI and robotics offers a synergistic platform for enhancing clinical outcomes [13].

In this work, our objective is to overview the existing landscape of AI/ML usage in robotic rehabilitation, encompassing various rehabilitative settings ranging from motor to cognitive rehabilitation, thus highlighting trends and gaps in this field. In particular, we are interested in how AI/ML is embedded in robotic assistive devices that were developed and/or tested on human subjects, and in the state-of-the-art performance across various tasks.

Related work

Some previous reviews have focused on the use of AI and robotics in rehabilitation. Table 1 summarizes the characteristics of these reviews, together with those of our study.

Table 1 Key characteristics of previous relevant reviews and our study

Three previous reviews [14,15,16] focused specifically on robotics and AI. However, these reviews did not perform a systematic analysis of the literature and/or are related to a single specific domain, e.g. upper limb [15] or cognitive [17], or to a specific rehabilitation setting, e.g. occupational rehabilitation [14]. The review by Huo et al. [3] is not focused specifically on robotics and machine learning but on technologies in general. Moreover, it addresses only motor rehabilitation. Three of the related previous reviews are indeed systematic [18,19,20] and included 48, 28, and 35 studies, respectively. The review by Rahman et al. [18] is exclusively focused on stroke, while the review by Sumner et al. [19] does not specifically address robotics but, more broadly, technology in general, and addresses only physical rehabilitation. Mennella et al. [20] conducted a systematic review of the usage of AI to specifically support remote rehabilitation. However, no previous studies have systematically examined the broad usage of AI in robotic-assisted rehabilitation. Furthermore, none have specifically focused on the reported performance of AI across various rehabilitation-related tasks, which is crucial to support the development of new methods.

Aim and contributions

To the best of our knowledge, this is the first systematic review that analyzes how AI and ML are currently exploited in robotic rehabilitation, spanning multiple diseases. Our review is not focused on a specific medical domain or body district but spans broadly across domains. Moreover, we do not consider only motor rehabilitation, but also address neurocognitive rehabilitation, in light of the novel concept of an integrated neuromotor rehabilitation paradigm. The aim of our work is to provide a broad and comprehensive overview of the current state of integration of ML into robotic assistive devices targeted at rehabilitation.

In particular, the main contributions of our review are:

  • We classify and discuss the different AI algorithms employed by robotic devices, according to the specific and well-established taxonomy of the ML field;

  • We analyze the state-of-the-art of AI/ML in rehabilitation robotics, highlighting current reported performance;

  • We dedicate specific attention to the explainability of AI algorithms for rehabilitation robotics.

Additionally, we discuss robotics coupled with integrated and/or wearable sensors for patient assessment and evaluation. To achieve the aforementioned contributions, we focus on AI-enabled robotics for rehabilitation across several medical domains and districts, providing a comprehensive overview of the field.

This review may support researchers by summarizing AI/ML use and performance, facilitating the development and implementation of robotics-assisted rehabilitation.

Methods

Search strategy

A literature search was conducted in the IEEE Xplore (https://ieeexplore.ieee.org/Xplore) and Scopus (https://www.scopus.com/) databases on October 26th, 2023. An advanced search was implemented in each electronic database concerning AI/ML methods applied in the rehabilitation robotics context. We used the same search string for IEEE Xplore and Scopus, with the only difference due to the specific syntax required by the two databases. The queries performed are reported in Table S1. Each query has 3 components, combined with a logical AND operator. The first component captures the AI/ML context, where we outlined the different synonyms usually employed in this field, as well as explicit mentions of specific AI/ML algorithms, such as “random forest” or “neural network”. The second component captures the nuances of the rehabilitation concept, and the third component captures the robotics aspect.
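For illustration, the three-component structure of the queries can be expressed programmatically. Note that the actual search strings are those reported in Table S1; the term lists below are abridged, hypothetical examples.

```python
# Illustrative sketch of the three-component query structure (the exact
# terms are reported in Table S1; the lists below are abridged examples).
ai_terms = ["machine learning", "artificial intelligence",
            "random forest", "neural network"]
rehab_terms = ["rehabilitation", "physiotherapy"]
robot_terms = ["robot", "robotic", "exoskeleton"]

def or_block(terms):
    """Join the synonyms of one concept with OR, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The three concept blocks are combined with a logical AND.
query = " AND ".join(or_block(t) for t in (ai_terms, rehab_terms, robot_terms))
print(query)
```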

Article selection and screening process

The article selection process was based on PRISMA guidelines [22] and is represented in Fig. 1.

Fig. 1 PRISMA diagram for systematic review

We removed duplicated articles and those not written in English. Titles and abstracts were screened by 5 reviewers with Abstrackr (http://abstrackr.cebm.brown.edu), a semi-automated tool that allows reviewers to independently screen retrieved abstracts [23, 24]. Each record was screened by one reviewer independently, with records assigned randomly. This first screening was performed to filter out papers that met simple exclusion criteria verifiable from the abstract itself, such as reviews, conference proceedings and articles presenting prototypes. Subsequently, full-text screening was conducted by the 5 reviewers according to the inclusion and exclusion criteria for eligibility outlined below.

The inclusion criteria were the following: (i) articles describing the use of AI and ML for robotic-assisted rehabilitation; (ii) articles with specific applications in health; (iii) articles where a physical device is presented/discussed, (iv) articles involving human subjects (healthy individuals or patients) for system development and/or validation.

The exclusion criteria were the following: (i) articles that describe a generic robotics AI system without an explicit application in rehabilitation; (ii) conference proceedings, as well as tutorials and conference panels; (iii) articles describing systems that are developed/validated only on simulated data; (iv) the system development involved less than 5 human subjects; (v) articles describing systems based on sensors only (without an actual robot); (vi) related to surgery; (vii) related to sports; (viii) articles describing wheelchair devices not to be used within rehabilitation exercises.

Tagging strategy

To devise a taxonomy of ML for rehabilitation robotics, we assigned different tags in various categories to the selected papers. These tags encompass different relevant aspects, outlined in Table 2. Each tag was assigned using the Zotero reference manager (https://www.zotero.org).

Table 2 List of tags applied to each included article and examples

Results

A total of 201 papers met the inclusion criteria and were included in this review (Fig. 1). In the following, we analyze the papers in depth, leveraging the assigned tags to categorize articles and provide further insights. Note that, even within the same tag type (listed in Table 2), a paper may have been labeled with more than one value per tag. The current section is organized according to the most prevalent aims (see the aim tag in Table 2 and Fig. 2) identified in our review, in order to give better structure to the presentation of the results and to organize them into a taxonomy of uses of ML in robotic-assisted rehabilitation. The complete list of retrieved papers, along with their tags, is reported in the Supplementary File.
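As an illustration of how such multi-valued tags translate into per-tag counts like those in Fig. 3, the sketch below tallies hypothetical tagged records (the real tag values are those listed in Table 2); because a paper may carry several values for one tag type, counts can sum to more than the number of papers.

```python
from collections import Counter

# Hypothetical tagged records: each paper may have several "aim" values,
# so each value is counted once and totals can exceed the paper count.
papers = {
    "paper_A": {"aim": ["robot control", "movement classification"]},
    "paper_B": {"aim": ["movement classification"]},
    "paper_C": {"aim": ["patient assessment", "robot control"]},
}

aim_counts = Counter(a for p in papers.values() for a in p["aim"])
print(aim_counts.most_common())
```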

Fig. 2 Most prevalent aims for which AI/ML is used in rehabilitation robotics

We identified 20 different aims of the AI/ML systems embedded in robotic rehabilitation (Fig. 3a). Here, we comment on the most prevalent ones. A good proportion of the screened papers (19%) used AI/ML to classify upper limb (UL) and/or lower limb (LL) movements. The largest share of papers (47%) employed AI/ML to control the robot itself in various ways: by predicting user intention and movement trajectory [25, 26], by learning the arm support needed during training in a personalized and adaptive setting [27, 28], or by implementing supervised [29,30,31,32,33,34], regression-based [35,36,37,38,39,40,41,42,43], and reinforcement learning-based controllers [44]. ML can also control the robot by modulating stiffness [45], regulating synergies in robotic hands [46], predicting force from EMG signals [47], as well as joint angles [48,49,50,51,52] and torque [53,54,55,56], or by compensating for dynamic interactions [57]. Exoskeleton control can be achieved by generating personalized gait trajectories through Neural Networks (NN) [58] or Gaussian processes [59]. Control of a hip exoskeleton by predicting ground reaction forces and moments through NN, Support Vector Machine (SVM) and Random Forest (RF) algorithms was proposed by [60], while control of an upper limb exoskeleton based on voice commands and a recurrent neural network (RNN) was proposed in [61]. Robot control can be driven by user intention from EEG [62,63,64,65], by predicting movement-based EMG signals [66,67,68], or based on kinematics features derived from robots and Inertial Measurement Units (IMU) [69]. NNs have also been implemented to predict end-effector orientation from joint angles [70].
Robot control can greatly support mirror therapy, in which one side of the patient is more affected by disability than the other [71]. Robotic mirror therapy (RMT) transfers the motion of the healthy limb (HL) to the impaired limb (IL): a robot interacts with and assists the IL to mimic the action of the HL, in order to stimulate the active participation of the injured muscles [72]. For example, [73] uses a NN to control the impaired lower limb in hemiplegic patients.

Fig. 3 a For each “aim” category, the number of papers using AI/ML for the specific aim is reported. b For each AI/ML algorithm, the number of papers using the specific algorithm is reported. c For each input data type, the number of papers using that input data type for their AI/ML system is reported

Movement classification

Twenty-nine studies applied supervised ML to identify hand gestures [74,75,76,77,78,79], manual tasks [80,81,82], grasping [83,84,85], and finger movement [86]. In Table S2, we report the complete list of papers using AI/ML to predict hand movement, along with the number of subjects involved and the performance reported by the authors, often in terms of accuracy. For instance, the authors in [87] implemented an SVM to recognize a set of grasp gestures based on input data from the SCRIPT exoskeleton to predict the trajectory of the robot. The system was trained and tested on 10 healthy and 8 stroke subjects. Notably, the recall of the SVM in healthy individuals was 91% on average, while the same metric decreased to 75% in stroke patients. A decrease in performance between healthy subjects and amputees was also reported by [88] (90% vs 68% accuracy), where the authors implemented a k-Nearest Neighbors (k-NN) classifier, trained on EMG signals, to classify 7 different gestures. The authors of [89] also implemented a system for grasp prediction, with the aim of controlling a robotic arm based on EMG signals. In this case, data from 5 healthy subjects were collected to train and test an RF, which showed 92% accuracy, in line with the value reported in [87] for healthy subjects. These studies focused on different hand movement classes for prediction: for instance, in [90] six different hand motion patterns were predicted (hand closing; hand opening; thumb, index and middle fingers closing and opening; middle, ring and little fingers opening and closing), while in [91,92,93] the authors performed a binary prediction of whether the subject wearing a hand exoskeleton is opening or closing the hand. In other works, grasping with object interaction is shown [94]. As the predicted classes vary across studies, it is difficult to compare performance results in an unbiased way. Four studies evaluated the performance both online (i.e., when the subjects' signals are collected in real-time and the deployed ML model is exploited for prediction in real-time) and offline [78, 95,96,97], all reporting a decrease in performance in the online setting compared with the offline setting, even up to 7% in accuracy (Table S2). Six studies trained and/or tested their classification system specifically on patients, and not only on healthy individuals, such as stroke patients [87, 98], amputees [88, 94] and children with autism [99]. Twenty-one studies used EMG signals as input for the ML, while 3 used EEG signals. [100] compared the accuracy of an EMG-trained NN with an EEG-trained NN, finding that the EMG-based classifier has higher performance (Table S2). Twelve studies compared multiple ML classifiers. SVM is selected as the classifier in 16 studies, and k-NN in 6. NNs, Multilayer Perceptrons (MLP) and convolutional neural networks (CNN) are employed in 11 cases each. A Temporal Convolutional Network (TCN) was used by [101].
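To make the typical pipeline of these studies concrete, the following minimal sketch, not drawn from any cited paper, extracts common time-domain EMG features (mean absolute value, RMS, waveform length, zero crossings) from windowed signals and trains an SVM gesture classifier. The data are synthetic and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def emg_features(window):
    """Common time-domain EMG features per channel: mean absolute value,
    root-mean-square, waveform length, zero crossings."""
    mav = np.mean(np.abs(window), axis=1)
    rms = np.sqrt(np.mean(window**2, axis=1))
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)
    return np.concatenate([mav, rms, wl, zc])

# Synthetic stand-in for 4-channel EMG windows of 3 gestures:
# each gesture modulates the channel amplitudes differently.
n_per_class, n_ch, n_samp = 60, 4, 200
X, y = [], []
for label in range(3):
    gains = 0.5 + rng.random(n_ch) * (label + 1)  # per-gesture channel gains
    for _ in range(n_per_class):
        win = rng.normal(0.0, gains[:, None], (n_ch, n_samp))
        X.append(emg_features(win))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy: {acc:.2f}")
```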

15 studies investigated supervised ML approaches to identify specific arm gestures [102, 103], extensions [104, 105], and wrist [106, 107] and elbow movements (Table S3). For instance, [108] developed a NN able to predict shoulder and elbow position, thus discriminating flexion, pronation, grasping, etc. The input of the model was EMG signals, and the performance was recorded both on healthy subjects and on patients with central cord syndrome (CCS). Also in this case, as for the hand gesture recognition studies [87, 88], the authors reported a strong decrease in the performance of their method, initially trained on healthy individuals, when applied to CCS patients: the accuracy on healthy subjects was 90%, while for CCS patients it degraded to 68%. Two papers compared the performance of offline vs online settings, confirming a lower accuracy in the latter case [109, 110]. As for hand recognition, the most popular algorithms were SVM and NNs [111] (Fig. 3b).

Lower limb movement recognition refers either to specifically identifying gait, gait phases and patterns, or to recognizing different action modes, such as sitting or lying [112], turning in specific directions, and starting and stopping walking [113] (Table S4). Many of the related articles focused on gait recognition: gait recognition has been treated as a multi-class classification [114,115,116,117,118,119,120,121,122,123,124] or a binary classification problem [125,126,127], or even as an anomaly detection problem using a One-Class SVM to detect abnormal gait patterns [128]. In the multi-class case, the supervised model predicts the gait phases, or whether the subject is walking on level ground or ascending/descending stairs and ramps; in the binary case, the predicted classes are either stance or swing. As in upper limb recognition studies, most of the works (92%) trained and tested the ML on healthy subjects. [126] tested a Logistic Regression (LR) model for movement recognition on 10 healthy participants and 3 stroke patients, finding a decrease in accuracy of around 5% on patients. A strong decrease in performance between online and offline settings is also reported [127, 129]. While for upper limb movement prediction the most prevalent ML input type is EMG signals [130], for the lower limb, kinematics data, pressure and joint angles are also exploited. [111] demonstrated that the combination of EMG signals and joint angles as input of the model leads to an increase in performance in comparison with models trained on EMG signals alone. SVM and NN are the most used algorithms for lower limb movement recognition.
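The anomaly-detection framing of abnormal-gait detection can be sketched as follows. This is an illustrative toy example with synthetic gait-cycle features, not the actual method of [128]: a One-Class SVM is fit on healthy cycles only and then flags deviating cycles as abnormal.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Synthetic gait-cycle features (hypothetical: stride time [s],
# stance ratio, cadence [steps/min]). Fit on healthy cycles only.
healthy = rng.normal([1.1, 0.60, 110.0], [0.05, 0.02, 4.0], (200, 3))
abnormal = rng.normal([1.5, 0.75, 90.0], [0.10, 0.05, 8.0], (20, 3))

# Normalize with the healthy statistics (the only data seen at training).
mu, sd = healthy.mean(axis=0), healthy.std(axis=0)
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
oc.fit((healthy - mu) / sd)

pred = oc.predict((abnormal - mu) / sd)  # -1 = flagged as abnormal
flagged = np.mean(pred == -1)
print(f"abnormal cycles flagged: {flagged:.0%}")
```

The design choice that matters here is that no abnormal data are needed at training time, which suits clinical settings where pathological examples are scarce.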

Movement trajectory prediction

Table S5 reports the studies where AI/ML is used to predict a movement trajectory. In 7 cases, the region of interest of the robot was the lower limb [131,132,133,134,135,136,137,138], while in 9 cases the aim was to predict the trajectory of the moving upper limb [139,140,141,142,143,144]. Since trajectory prediction is a regression problem, most studies evaluated the performance in terms of the Mean Squared Error (MSE) computed between the true trajectory and the predicted trajectory. Deep learning models were the most used for this specific aim, and many works employed RNNs such as LSTMs. Input data vary from anthropometric features combined with joint angles [145] or gait features [134, 135] to images [132, 139] and EMG [140]. Notably, with the exception of [146], the selected papers trained and tested their algorithms only on healthy individuals, not on patients. Similarly to the case of UL and LL movement classification, [147] reported decreased performance in the online setting compared to the offline one.
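As a toy illustration of trajectory prediction framed as regression and evaluated by MSE, the sketch below predicts a joint angle one step ahead from a lag window; the data are a synthetic noisy sinusoid standing in for a cyclic limb movement, and a simple linear autoregressive model is used here in place of the deep models the studies employ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

# Synthetic joint-angle trajectory (degrees): noisy sinusoid as a
# stand-in for a cyclic limb movement.
t = np.linspace(0, 20 * np.pi, 4000)
angle = 30 * np.sin(t) + rng.normal(0, 0.5, t.size)

k = 10  # lag window length: predict angle[i+k] from angle[i:i+k]
X = np.stack([angle[i:i + k] for i in range(angle.size - k)])
y = angle[k:]

split = int(0.8 * len(X))  # chronological split: no future leakage
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
mse = mean_squared_error(y[split:], model.predict(X[split:]))
print(f"one-step-ahead MSE: {mse:.3f}")
```

Note the chronological train/test split: shuffling time-series windows before splitting would leak future information and inflate the apparent performance, one reason offline results can overstate online ones.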

Patient assessment during or after rehabilitation

Nineteen studies used ML to assess patients during or after the rehabilitation session (Table S6). [120] designed a robotic walker able to discriminate gait asymmetries. [148] proposed a fuzzy NN to predict upper limb levels of motor ability, to evaluate rehabilitation outcomes without the need for a therapist. ML is also applied to directly predict relevant clinical scales. In [149], an eXtreme Gradient Boosting (XGBoost) model is trained to predict a set of popular clinical evaluation measures of stroke patients, in particular the 6-min walk distance (6MWD) and the Fugl-Meyer assessment lower-limb subscale (FMA-LE). The 6MWD test is commonly conducted to assess functional exercise capacity, measuring the distance (in meters) that a patient can walk over a period of six minutes. The Fugl-Meyer Assessment is a stroke-specific scale that measures impairment over five different domains, including motor and sensory functioning, balance, joint range of motion and joint pain. The AI system takes gait parameters and joint torque as input and was tested in a clinical trial with 66 stroke patients. [150] developed an ensemble of NN models to predict various clinical scales, including the Fugl-Meyer, from kinematics and kinetics measurements taken from the robot. The system was trained on 208 stroke patients and tested on data from the same cohort. Yet, we cannot compare the results with [149], since the performance metrics reported are different (MSE vs R2). Also in [151], EMG signals are the input of a network that predicts the FMA and the Modified Ashworth Scale (MAS). The system was trained and tested on 29 stroke patients, and evaluated in terms of the correlation between the ML-generated prediction and the clinical scores computed by a therapist.
Prediction of the Barthel index from clinical characteristics and rehabilitative session assessments of post-stroke patients was proposed by [152], while [153] trained ensemble NNs to predict the Chedoke-McMaster scale in stroke patients. [154] used different ML algorithms to predict clinical evaluations of a rehabilitative exercise in stroke patients, finding that the best-performing algorithm in terms of accuracy was k-NN. [155] and [156] applied SVM and k-means on torques and angular positions of paralyzed wrists, collected during the rehabilitative exercises performed by patients, to predict the Brunnstrom stage, a clinical score describing the development of the brain’s ability to move and to reorganize after stroke. AI/ML can also be used to evaluate patients in terms of energy expenditure, as in [157], where the authors trained LSTM and CNN models to infer energy expenditure during a rehabilitation session.
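The clinical-scale regression setting of studies such as [149] can be sketched as follows. This is a synthetic illustration: scikit-learn's GradientBoostingRegressor stands in for XGBoost, and the made-up features and score replace real gait parameters and clinical assessments.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic session features standing in for gait parameters and joint
# torques; the target mimics a clinical score driven by a few of them.
n = 300
X = rng.normal(size=(n, 6))  # hypothetical: speed, cadence, symmetry, ...
score = 20 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, score, test_size=0.25,
                                          random_state=0)
# Gradient boosting as a stand-in for the XGBoost model used in [149].
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, gbr.predict(X_te))
print(f"R2 on held-out samples: {r2:.2f}")
```

Reporting a common metric (here R2, alongside MSE) is what would make results like those of [149] and [150] directly comparable.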

In [158] the authors employed logistic regression to analyze the association between several clinically relevant covariates, such as sex, age, BMI, history of diabetes, hypertension, and poor motor function. Notably, in this work 205 patients with cerebral hemorrhage were recruited and randomized into case and control groups: the case group performed robotic rehabilitation of the hand, while the control group was treated with standard care rehabilitation. Also in [159], a randomized controlled trial was performed, with 50 subacute stroke patients undergoing 4 weeks of treatment with the GaitTrainer robot, and 50 patients treated with standard care. The objective of the study was to identify the clinical characteristics of patients who could benefit from robotic walking training with respect to conventional walking therapy. In [160], the authors used post-stroke patients’ clinical data and rehabilitative session data (such as speed and force) from the Lokomat, a wearable robot for lower limb rehabilitation, to train different ML algorithms, such as Decision Tree (DT), RF, and SVM, and predict the rehabilitation outcome at the 12th rehabilitative session. The authors found that the most important characteristic in determining the outcome was body weight. An observational study on 55 stroke patients who performed robotics-assisted rehabilitation trained a logistic regression model to determine the most important factors towards a positive rehabilitation outcome, finding that gender and Box and Block Test (BBT) score were the most important covariates [161]. Also in [162], the authors investigated the importance of different clinical characteristics and robot-related measures on rehabilitation outcomes for stroke patients. Prediction of motor recovery after stroke using NN and k-NN was proposed by [163], which found that time since injury and baseline functional and motor ability may support the identification of patients most likely to benefit from the rehabilitation intervention.
[164] used linear regression to detect periods of inactivity during patients’ rehabilitation sessions, which can serve as a proxy for patient evaluation. Muscle recruitment was predicted through an MLP from kinematics data [165] in 7 patients with cerebral palsy. Real-time audio-visual biofeedback of the patient’s plantar flexor recruitment was provided during rehabilitation, thanks to the MLP prediction.

Prediction of patient intention

Twenty-four different works employed AI/ML to predict user movement intention (Table S7). In this case, all the retrieved studies tested the approach on healthy subjects, and none featured actual patients undergoing rehabilitation. Most of them used EEG (7 cases) [166,167,168,169,170,171] or EMG (9 cases) [172,173,174,175,176,177,178] as input signals. [179] predicted upper limb intention to move towards the right or left by using an SVM fed with optical brain function imaging, while [180] exploited 3D skeletal angles from the Kinect. [181, 182] used IMU-derived signals and forces and [183] exploited kinematics features to infer the intention to sit or stand, while [184] used trunk motion data as input. [168] predicted both intention vs non-intention to move and the desired speed (fast vs slow). All the studies reported high accuracy, but only [166] tested the ML models both offline and online, confirming a decrease in performance in the online setting, as also reported in studies predicting movement (see Sect. "Movement classification").
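A recurring observation in these sections is the accuracy drop from offline to online evaluation. The toy sketch below (entirely synthetic data and a hypothetical two-class intention feature model) illustrates one common cause: distribution drift between the calibration recordings and the real-time signals, e.g. from electrode shift or fatigue.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def make_data(n, shift=0.0):
    """Two-class 'intention vs rest' features; `shift` models signal
    drift between calibration and real-time use (illustrative)."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, 4)) + np.outer(y, [2.0, 1.5, 0.0, 0.0]) + shift
    return X, y

# Fit on offline (calibration) data.
X_off, y_off = make_data(500)
clf = LogisticRegression().fit(X_off, y_off)

X_test_off, y_test_off = make_data(500)            # same distribution
X_test_on, y_test_on = make_data(500, shift=0.8)   # drifted "online" data

acc_off = clf.score(X_test_off, y_test_off)
acc_on = clf.score(X_test_on, y_test_on)
print(f"offline accuracy: {acc_off:.2f}")
print(f"online accuracy:  {acc_on:.2f}")
```

The same classifier loses accuracy on the drifted data without any change to the model, which is why online validation, as done in [166], is a stricter test than offline cross-validation.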

Personalized rehabilitation

Eleven studies show that ML can also support personalized therapy (Table S8), by estimating motion and model parameters [185] and the appropriate control gains based on the subject’s characteristics [186,187,188], or by predicting a specific exercise [189]. [72, 190] implemented a Support Vector Regression to estimate model parameters of pelvic motion based on robotics-extracted features. In [191], the authors implemented an ensemble of LSTM and CNN models to estimate personalized gait speed and stride length from joint angles. In [192], a reinforcement learning algorithm is proposed to adapt movement trajectory parameters to varying patient performance, thus optimizing the robot’s trajectory and stiffness. In [193], a controller based on a Gaussian network is developed to model the functional capability of subjects and to provide a coherent task to challenge them. Personalized rehabilitation also includes approaches aimed at personalized assessment (see Sect. "Patient assessment during or after rehabilitation"), as in [194], where the authors integrate NNs with a rule-based model to assess the performance of exercises for personalized post-stroke therapy. [195] applied unsupervised clustering techniques to define task motion based on the patient’s trajectories.
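The adaptive-difficulty idea behind reinforcement-learning controllers such as the one in [192] can be caricatured with a simple bandit sketch. The patient model, the reward, and the ~70% success "challenge point" target below are all illustrative assumptions, not the cited controller.

```python
import random

random.seed(0)

ABILITY = 3            # hidden, simulated patient ability (hypothetical)
LEVELS = list(range(1, 7))  # available task difficulty levels

def success_prob(level):
    """Simulated patient: success is likely below ability, unlikely above."""
    return max(0.05, min(0.95, 0.9 - 0.25 * (level - ABILITY)))

q = {lv: 0.0 for lv in LEVELS}          # value estimate per level
visits = {lv: 0 for lv in LEVELS}
succ_rate = {lv: 0.0 for lv in LEVELS}  # observed success rate per level

for step in range(20000):
    # Epsilon-greedy: mostly exploit the best level, sometimes explore.
    lv = random.choice(LEVELS) if random.random() < 0.1 else max(q, key=q.get)
    success = random.random() < success_prob(lv)
    visits[lv] += 1
    # Running estimate of this level's success rate from observed outcomes.
    succ_rate[lv] += (success - succ_rate[lv]) / visits[lv]
    # Reward peaks when the observed success rate is near the 0.7 target.
    reward = 1.0 - abs(succ_rate[lv] - 0.7)
    q[lv] += (reward - q[lv]) / visits[lv]

best = max(q, key=q.get)
print(f"selected difficulty level: {best}")
```

The agent converges on the difficulty whose success rate sits closest to the target, i.e. a task that is challenging but achievable; a real controller like [192] would additionally adapt continuous trajectory and stiffness parameters rather than discrete levels.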

Compensation detection

Five articles used ML to detect compensatory postures or motions that can lead to suboptimal recovery outcomes. In particular, [196] applied a multi-label k-Nearest Neighbor classifier and a multi-label Decision Tree classifier to detect compensatory postures in ten patients with stroke. To this aim, kinematics data collected by an RGB camera and the OpenPose system were used. The performance of the two classifiers was similar, and they could detect some compensatory postures quite accurately (accuracy: 85%). Forward trunk displacement and trunk rotation were the easiest compensatory movements to detect, followed by shoulder elevation. In [197], motion compensation was detected by using pressure signals and applying an SVM algorithm. Experiments were performed on subjects with stroke both online and offline. Good classification performance was obtained in both offline (F1-score: 98.60%) and online (F1-score: 98.64%) compensation analysis; in the online test, a rehabilitation robot also provided an assistive force to patients to reduce compensation, thus decreasing trunk movements during exercises. The same group applied an analogous strategy to detect posture compensation in eight subjects with stroke during an online task [198]. Also in this case, good performance (F1-score around 95%) was obtained. In addition, the authors demonstrated the effectiveness of reducing compensation by applying force feedback with a robot or audio feedback using virtual reality. In [199], compensation in patients with dyskinesia was detected by using a trunk restraint belt, acquiring sEMG, angular displacement, and force, and applying Linear Discriminant Analysis (LDA), k-NN and SVM classifiers. SVM was the top-performing algorithm in detecting different types of compensatory motions (F1-score: 97.58%). Finally, in [200], compensation detection was performed using SVM and RNN on input data from the Kinect.
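The multi-label formulation used in [196] can be sketched as follows; since several compensations can co-occur in one frame, one binary output per compensation type is predicted. The pose features are synthetic stand-ins for OpenPose keypoint statistics, and the model settings are illustrative, not those of the cited study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Synthetic pose features standing in for OpenPose keypoint statistics;
# three binary labels that can co-occur in the same frame.
n = 400
X = rng.normal(size=(n, 5))
Y = np.column_stack([
    (X[:, 0] + 0.3 * rng.normal(size=n)) > 0.5,  # trunk displacement
    (X[:, 1] + 0.3 * rng.normal(size=n)) > 0.5,  # trunk rotation
    (X[:, 2] + 0.3 * rng.normal(size=n)) > 0.5,  # shoulder elevation
]).astype(int)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25,
                                          random_state=0)
# One k-NN classifier per label, wrapped as a multi-label model.
clf = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, Y_tr)
acc_per_label = (clf.predict(X_te) == Y_te).mean(axis=0)
print("per-label accuracy:", np.round(acc_per_label, 2))
```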

Support patient motivation

AI and ML have also been used to support patient motivation during robotic therapy. Four papers addressed this aim. In [201], clinical data, as well as data acquired by the robot, were collected while subjects with stroke wore the SUBAR, a gait training robot, and performed robot-assisted gait training. A neuro-fuzzy algorithm was trained to provide the right verbal cue on the basis of these collected data, and provided good performance in the testing phase (accuracy: 93.7%). [202] implemented a modified version of the 'Simon Says' game, which has the function of motivating patients, making therapies more engaging. In particular, elderly subjects had to imitate some exercises performed by the robot. The Kinect was used to record the subjects' positions, and DT, k-NN and SVM were applied for posture classification. DT achieved higher performance than the other algorithms in the classification task (accuracy: 99.61%). In [203], the authors attempted to predict the desired level of difficulty in order to increase the motivation of the subject while performing a robot-assisted reaching task. The patient's desired difficulty was predicted from motor performance and physiological metrics, applying a fuzzy NN approach. By practicing the task at their desired difficulty, subjects reported lower required effort to complete the task. An interesting application is reported in [204], where ML was applied to predict the behavior of an infant towards a robot. Data obtained from the Kinect were used to train a DT, and then a Markovian model for robot control was developed where the predictors were used to promote action-based goals for the infants.

Assess patient participation

Patients' participation in a robotic task is important to increase the effect of the treatment. In [205], joint torque sensors and six-dimensional force sensors on the foot soles of a lower limb rehabilitation robot were used to acquire force information. These signals were used to train a hybrid quantum particle swarm optimization and SVM algorithm: data from 10 healthy volunteers performing training tasks of different difficulty were used to predict both the level of participation and the task difficulty for two other volunteers, obtaining an accuracy of 80%. In [206], EEG signals were collected from healthy volunteers and used to assess cognitive engagement during the execution of an adaptive Go/No-Go paradigm while interacting with the Bionik InMotion Arm rehabilitation robot; a CNN predicted the level of cognitive engagement for two classes (cognitively engaged vs disengaged) with an accuracy of 88%. Finally, [207] compared SVM, Naïve Bayes, RF, and MLP for predicting rest, clench, or attention from EEG signals using data from 5 healthy individuals, achieving accuracies from 73% (RF) to 77% (SVM).

Emotion recognition

The emotional status of the patient can greatly affect rehabilitation outcomes. ML and AI can support therapists by predicting patients' emotions during rehabilitation exercises. [208] developed an SVM to predict three anxiety levels in patients with stroke using multimodal physiological signals, including EMG, ECG, skin conductance, and respiration; the model reached an accuracy of around 80% in 12 stroke patients. Emotion recognition in stroke was also performed in [209], where camera data were obtained while the subjects performed rehabilitation tasks with a hand exoskeleton; an SVM model was applied for emotion classification, reaching an accuracy of 86%. [210] applied a supervised artificial NN to classify facial emotions acquired from infrared thermal images of healthy individuals performing robotic rehabilitation therapy integrated with games, obtaining an accuracy of 92.6%. [211] developed a CNN for emotion recognition while subjects with ADHD interacted with the humanoid robot Pepper; the model was trained on a public dataset and tested on 5 children with ADHD, although the test performance was not reported in the paper.

Other notable aims for ML in robotics-assisted rehabilitation

ML-based anomaly detection, whose aim is to identify rare events, has been employed in [212] to capture robotic prosthesis malfunctioning from sensor data, with the future goal of designing a fault detection system. In particular, the authors applied a one-class SVM and a Mahalanobis distance-based classifier.
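Both techniques named above fit a model of "normal" operation and flag deviations from it. A minimal sketch of that setup follows, assuming synthetic four-dimensional sensor vectors and an illustrative decision threshold; it is not the detector of [212].

```python
# Sketch: anomaly detection for prosthesis sensor data with (a) a one-class
# SVM and (b) a Mahalanobis-distance rule. All data are synthetic.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.covariance import EmpiricalCovariance

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 4))   # nominal sensor readings
faulty = rng.normal(6.0, 1.0, size=(20, 4))    # simulated malfunction

# (a) One-class SVM trained only on normal data; predict() returns -1 for
# samples it considers anomalous.
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
svm_flags = ocsvm.predict(faulty)

# (b) Mahalanobis distance from the fitted mean/covariance of normal data,
# thresholded at the 99th percentile of the training distances.
cov = EmpiricalCovariance().fit(normal)
d2 = cov.mahalanobis(faulty)                   # squared Mahalanobis distance
threshold = np.quantile(cov.mahalanobis(normal), 0.99)
maha_flags = d2 > threshold

print(svm_flags.mean(), maha_flags.mean())
```

The `nu` parameter bounds the fraction of training samples treated as outliers, playing a role analogous to the percentile threshold in the distance-based rule.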

Within the autism domain, [213] implemented an NN to identify the patient playing with modular robotics tiles based on how they interact with the tiles. The cohort consisted of 7 children with different types of autistic disorders.

In [214], the authors explored how ML can support not only the control of a robotic prosthetic arm but also the generation of vibrotactile feedback about the arm's contact with its workspace. Task performance of the ML-based system in healthy subjects was significantly higher than with the purely reactive feedback from the device. A similar attempt to leverage biofeedback was proposed in [165].

To demonstrate the effectiveness of the robot during gait rehabilitation of children with cerebral palsy, a Gaussian process regressor applied to functional near-infrared spectroscopy (fNIRS) data was used to test whether the assessed changes in the brain activity of patients were associated with modifications in the motor abilities [215].

ML can also be applied for fall detection during robotic rehabilitation or for predicting balance loss. In [216], a deep NN was applied to detect falls during rehabilitation with a walking-aid robot; force signals were used as input to the model, which obtained an accuracy of 98.8%. To avoid injury to patients, [217] trained an LSTM to predict early emergency stops during robotic gait rehabilitation.
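A common pattern in such systems is to classify fixed-length windows of the force signal as normal gait versus fall onset. The sketch below illustrates this with a small neural network on synthetic force windows; the window length, the shape of the simulated signals, and the network size are all illustrative assumptions, not those of [216].

```python
# Sketch: windowed force-signal fall detection with a small neural network.
# Signals are synthetic: gait is a noisy sinusoid; a fall is simulated as
# an abrupt unloading ramp in the second half of the window.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
win = 50  # samples per window (illustrative)

def make_window(fall):
    t = np.linspace(0, 1, win)
    base = np.sin(2 * np.pi * 2 * t) + rng.normal(0, 0.2, win)  # gait-like force
    if fall:
        base[win // 2:] -= np.linspace(0, 3, win - win // 2)    # abrupt unloading
    return base

X = np.array([make_window(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2  # 0 = normal gait, 1 = fall onset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print(net.score(X_te, y_te))
```

In a real deployment the classifier would run on a sliding window of the live force stream, with the window length chosen short enough to trigger an emergency stop in time.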

In [218], the authors implemented an LSTM that mimics the therapist-patient interaction and the therapist's behavior to provide robotic assistance during trajectory tracking. [219, 220] used NNs to estimate slope incline on different terrains for a lower limb exoskeleton. [221] proposed the use of SVM to predict the type of rehabilitative session (active, passive, or resistive) from EMG data. [222] exploited several supervised models, such as Decision Trees and k-NN, to recognize speech for guiding therapy.

Discussion

With this systematic review, we described the current usage of AI/ML in robotics-assisted rehabilitation.

We found that most of the retrieved works (146 studies, 72%) involved the participation of healthy individuals for data collection, training, and testing. Only 55 studies involved actual patients with a medical condition, mainly stroke patients (see Supplementary File). Among these, the median number of patients involved is 9 (75th percentile: 18), highlighting that validation studies for AI in robotics are still carried out on rather small patient cohorts. Studies using ML to assess patients' clinical status during/after rehabilitation (Sect. "Patient assessment during or after rehabilitation") reported the highest numbers of patients, with a median of 66 individuals. Few studies recruited more than 100 patients: [163] and [150] recruited 293 and 208 stroke patients respectively, and [158] 205 patients with cerebral hemorrhage in the basal ganglia. Only one study [99] focused on 7 children with autism: here, the rehabilitative setup consisted of a humanoid robot performing different hand gestures that the children were supposed to replicate.

36% of the studies did not explicitly explain the training and evaluation strategy adopted by the authors. A cross-subject setting, in which the data collected from a given individual are used exclusively in either the training or the test set, was adopted in 16% of the studies. On the contrary, a non-cross-subject setting, in which multiple measures collected from a single participant may be assigned randomly to the training and the test set, was adopted in 48% of the studies. In this latter case, there is the possibility that the ML model performs inference by learning user-specific characteristics instead of rules that generalize well to data from new individuals. This is especially true when using ML for movement classification and trajectory prediction [223,224,225]. Among the papers that carefully describe their training and testing strategy, [69, 183, 215] adopted a Leave-One-Subject-Out (LOSO) Cross-Validation, in which one subject is iteratively held out for testing while the remaining subjects are used for training. [132, 186] instead selected a subset of individuals for training and a distinct subset for testing. Notably, none of the retrieved papers explicitly stated that the TRIPOD-AI checklist [226] for reporting clinical models based on ML was followed. Only 9 studies (4%) openly shared their data, and 4 studies (2%) made their code publicly available. As code and data were rarely shared, there was little opportunity for the research community to reproduce the results or implement new systems based on data previously collected by other studies. Only four of the analyzed papers performed case–control studies [203, 227].
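The cross-subject setting described above maps directly onto grouped cross-validation, where the subject ID is the grouping variable. A minimal LOSO sketch with scikit-learn's `LeaveOneGroupOut` follows; the subject IDs, features, and labels are synthetic.

```python
# Sketch: Leave-One-Subject-Out (LOSO) cross-validation. Each fold holds
# out all trials of one subject, so the test data always come from an
# individual the model has never seen. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(8), 30)        # 8 subjects, 30 trials each
X = rng.normal(size=(subjects.size, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=subjects.size) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
loso_scores = cross_val_score(clf, X, y, groups=subjects,
                              cv=LeaveOneGroupOut())
print(loso_scores.mean())  # one fold per held-out subject
```

A non-cross-subject evaluation would simply use `KFold(shuffle=True)` without the `groups` argument; when trials of the same subject land in both splits, the estimate can be inflated by user-specific characteristics rather than generalizable rules.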

Another relevant aspect regarding the performance of ML models applied in rehabilitative settings emerged from our review: all 8 studies that compared “offline” vs “online” performance reported a marked decrease in performance in the latter case (see Supplementary Tables). Decreases in performance were also reported when the AI/ML was applied to patients, in comparison with its performance on healthy individuals [87, 88]. These findings are of significant interest, as they suggest that the ML performance estimated during development may substantially overestimate the performance of the system during deployment and usage in clinical practice. Notably, [63] recognized the potential negative impact of dataset shifts and addressed it by designing a specific ML classifier that can adapt its classification procedure when dataset shifts occur. We therefore advocate for the implementation of strategies to monitor performance over time and to detect out-of-distribution samples [228,229,230]. Supplementary Tables S2–S8 show, for each study, the reported performance of AI across different tasks (hand gesture recognition, upper limb movement recognition, gait prediction and lower limb movement recognition, trajectory prediction, patient intention prediction, and personalized rehabilitation). Relevant information, such as the number of subjects, the region of interest, and the type of disease, is also reported.
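One simple form of the monitoring advocated above is a two-sample test comparing the distribution of an incoming feature stream against the training distribution. The Kolmogorov-Smirnov test used here is our illustrative choice, not a method from the reviewed studies, and the data and significance threshold are synthetic assumptions.

```python
# Sketch: post-deployment covariate-shift monitoring with a two-sample
# Kolmogorov-Smirnov test on a single feature. Data are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
train_feature = rng.normal(0.0, 1.0, size=2000)   # development data
deploy_same = rng.normal(0.0, 1.0, size=500)      # no shift
deploy_shifted = rng.normal(0.8, 1.3, size=500)   # covariate shift

def shifted(reference, incoming, alpha=0.001):
    """Flag a shift when the KS test rejects equality of distributions."""
    return ks_2samp(reference, incoming).pvalue < alpha

print(shifted(train_feature, deploy_same),
      shifted(train_feature, deploy_shifted))
```

In practice such a check would run per feature (or on model confidence scores) over a sliding window of deployment data, triggering recalibration or human review when a shift is flagged.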

Most of the AI/ML systems analyzed process input data from sensors (Fig. 3c). Neural networks and deep learning approaches are the most frequently applied algorithms (Fig. 3b), representing the most employed models to solve robot control tasks, in particular to control upper limb exoskeletons [67]. We further examined whether simpler models were favored over more complex algorithms, such as deep networks, in portable systems where hardware limitations might restrict the feasibility of running complex algorithms. When analyzing by rehabilitative system type (stationary vs. portable), we found that deep networks were predominantly used across both categories, irrespective of hardware constraints. However, simpler algorithms like decision trees appeared more frequently in portable devices (12%) than in stationary ones (7%). Additionally, we stratified the analysis based on whether the AI/ML system operated online (i.e., during a rehabilitative session) or offline. Here, too, we observed no significant differences in algorithmic preferences between the two operational modes, potentially indicating that even complex algorithms achieve adequate runtime performance in both settings. While deep networks often prove highly performant, their intrinsic “black box” nature may hamper the transparency and explainability of the predictions, which is a crucial aspect of promoting trust in AI/ML and its adoption in the medical domain, including rehabilitation. Trustworthiness and transparency have recently been outlined among the requirements for AI/ML medical applications by the AI Act, the first binding regulation on AI promoted by the European Union.

This is also relevant in robotics applications where the correct interpretation of AI algorithms may lead to an improvement in human–robot interactions limiting potential consequences of errors and providing human-interpretable feedback to encourage human oversight of rehabilitation technology. In our review, some studies have implemented explainable AI models to improve user feedback in robot fault recovery [231, 232], while very few studies have addressed the problem of explaining the output of the model in the field of robotic neurorehabilitation. For example, in [233] an interpretable deep learning model was applied to decode neural activity preceding balance loss during standing with a lower-limb exoskeleton, while in [234] an interpretable approach based on Grad-CAM was used to predict balance loss while wearing an exoskeleton using electroencephalographic signals. Interpretable-by-design models may also be useful to highlight relevant prognostic factors, as in [159], where the authors found that a patient’s reduced autonomy was a negative prognostic factor for conventional therapy, but not for robotic rehabilitation, by fitting a binary logistic regression. Thus, for future research in AI applied to robotic neurorehabilitation, there is the need to focus on developing algorithms that are not only well performing, but also interpretable. Interpretation of ML models can improve clinicians' confidence in AI technologies, facilitating their adoption in clinical settings. Explainability enables clinicians to understand the rationale behind AI-driven decisions, facilitating a more collaborative approach to patient care and enabling more nuanced interventions. Current AI-based rehabilitative systems often lack inclusivity, with underrepresented populations, such as pediatric, geriatric, or minority groups, being insufficiently addressed. 
To bridge this gap, tailored AI models should be developed and validated for specific subgroups to ensure their effectiveness and safety. For example, algorithms trained on adult data should be systematically adapted and tested for pediatric populations to prevent performance degradation. Data diversity could also be improved through global collaborations and data-sharing initiatives. Additionally, the lack of standardized evaluation metrics and openly available benchmarks limits the comparability and reproducibility of AI-driven systems. Developing open-access benchmarks specific to rehabilitation robotics would enable researchers to evaluate their algorithms against well-defined standards.
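The interpretable-by-design approach mentioned for [159] can be sketched concisely: a binary logistic regression whose fitted coefficients (or their exponentials, the odds ratios) can be read directly as prognostic factors. The feature names, effect sizes, and data below are synthetic and purely illustrative.

```python
# Sketch: an interpretable-by-design prognostic model. Coefficients of a
# logistic regression are directly readable as odds ratios per predictor.
# All features and data are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 400
# Illustrative standardized predictors: age, baseline autonomy, therapy dose.
X = rng.normal(size=(n, 3))
true_w = np.array([-0.8, 1.2, 0.6])        # assumed ground-truth effects
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))    # probability of a good outcome
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])       # OR > 1: positive prognostic factor
for name, odds in zip(["age", "autonomy", "dose"], odds_ratios):
    print(f"{name}: OR = {odds:.2f}")
```

Unlike post hoc explanation of a deep network, the model itself is the explanation: a clinician can check whether the sign and magnitude of each coefficient agree with clinical knowledge before trusting the predictions.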

Integrating AI technologies into clinical practice demands careful consideration of various ethical aspects. Among these, safeguarding data privacy is essential to uphold patient autonomy and ensure the ethical use of sensitive information [235]. However, this imperative often conflicts with the principles of open science, which advocate for data sharing in open repositories to promote research transparency and reproducibility. Balancing privacy concerns with open science standards thus represents a complex challenge that the field must address [236]. Another ethical consideration lies in mitigating automation bias, i.e., the tendency to over-rely on AI outputs without critical evaluation. While AI offers substantial potential to support clinicians across diverse tasks, it is essential to foster a culture of critical engagement with AI recommendations to prevent undue reliance and potential errors. By training clinicians to use AI as a support tool, rather than a definitive decision-maker, the risk of automation bias can be minimized, thereby enhancing both patient safety and clinical outcomes [235]. By proactively tackling these issues, the adoption of AI in rehabilitation can proceed responsibly, with a focus on building trustworthy and equitable healthcare solutions. Future research should explore several key areas to advance AI-based rehabilitative systems. One critical area is the generalizability of these systems across diverse patient populations, ensuring they are adaptable and effective for varying demographics and clinical needs. Additionally, integrating AI-driven rehabilitation tools with Electronic Health Records (EHRs) and other clinically relevant repositories could enable a more comprehensive, multimodal analysis of patient data. Such integration would facilitate a holistic view of patient health, improve the continuity of care, and enhance personalized treatment strategies.
These future directions hold the potential to broaden the scope and impact of AI-enhanced rehabilitation across diverse clinical contexts.

Conclusion

We have performed a systematic review to outline the current landscape of AI/ML usage within robotics-assisted rehabilitation, by analyzing different dimensions, such as the aim of the AI/ML system, the algorithm types, and input data types. For specific groups of papers, such as those using AI/ML to classify hand gestures, and arm movements, or to predict trajectories, we also provide reference performance metrics as Supplementary Tables, in order to enable researchers in the field to easily retrieve current state-of-the-art performance, and benchmark their own work.

Despite the prevalence of AI/ML in this field, we found several issues that still need to be addressed. Only a minority of studies involve actual patients, with the majority of evaluations focusing instead on healthy volunteers. Children are significantly underrepresented, appearing in only 3% of the studies. This lack of representation makes it difficult to rule out, or even quantify, significant deterioration of AI/ML performance when technologies tested on adults are applied to a pediatric population. Furthermore, the lack of standard procedures for training and testing the AI/ML systems hampers the comparison of predictive results across studies in the rehabilitation field. Limited sharing of data and code hinders open science and reproducibility, as well as the easy design and execution of follow-up studies by independent investigators. Even though Deep Learning is one of the most applied techniques in this field, we posit that better integration of XAI methods should be promoted. Additionally, poor generalization ability often emerged: systems to monitor performance over time are therefore needed to promote safe application within clinical practice.

Availability of data and materials

No datasets were generated or analysed during the current study.

References

  1. Wade DT. What is rehabilitation? An empirical investigation leading to an evidence-based description. Clin Rehabil. 2020;34(5):571–83.

  2. Jack K, McLean SM, Moffett JK, Gardiner E. Barriers to treatment adherence in physiotherapy outpatient clinics: a systematic review. Man Ther. 2010;15(3):220–8.

  3. Huo CC, Zheng Y, Lu WW, Zhang TY, Wang DF, Xu DS, et al. Prospects for intelligent rehabilitation techniques to treat motor dysfunction. Neural Regen Res. 2021;16:264–9.

  4. Nicholson S, Sniehotta FF, van Wijck F, Greig CA, Johnston M, McMurdo MET, et al. A systematic review of perceived barriers and motivators to physical activity after stroke. Int J Stroke. 2013;8(5):357–64.

  5. Meisingset I, Bjerke J, Taraldsen K, Gunnes M, Sand S, Hansen AE, et al. Patient characteristics and outcome in three different working models of home-based rehabilitation: a longitudinal observational study in primary health care in Norway. BMC Health Serv Res. 2021;21(1):887.

  6. Buckingham S, Anil K, Demain S, Gunn H, Jones RB, Kent B, et al. Telerehabilitation for people with physical disabilities and movement impairment: development and evaluation of an online toolkit for practitioners and patients. Disabil Rehabil. 2023;45(11):1885–92.

  7. Argent R, Daly A, Caulfield B. Patient involvement with home-based exercise programs: can connected health interventions influence adherence? JMIR Mhealth Uhealth. 2018;6(3):e47.

  8. Reinkensmeyer DJ. JNER at 15 years: analysis of the state of neuroengineering and rehabilitation. J NeuroEng Rehabil. 2019;16(1):144.

  9. Mehrholz J, Hädrich A, Platz T, Kugler J, Pohl M. Electromechanical and robot-assisted arm training for improving generic activities of daily living, arm function, and arm muscle strength after stroke. Cochrane Database Syst Rev. 2012. https://doi.org/10.1002/14651858.CD006876.pub3.

  10. Wei S, Wu Z. The application of wearable sensors and machine learning algorithms in rehabilitation training: a systematic review. Sensors. 2023;23(18):7667.

  11. Yoo SD, Lee HH. The effect of robot-assisted training on arm function, walking, balance, and activities of daily living after stroke: a systematic review and meta-analysis. Brain Neurorehabil. 2023;16(3):e24.

  12. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms mean and how will they impact health care? J Arthroplasty. 2018;33(8):2358–61.

  13. Jones M, Collier G, Reinkensmeyer DJ, DeRuyter F, Dzivak J, Zondervan D, et al. Big data analytics and sensor-enhanced activity management to improve effectiveness and efficiency of outpatient medical rehabilitation. Int J Environ Res Public Health. 2020;17(3):748.

  14. Fong J, Ocampo R, Gross DP, Tavakoli M. Intelligent robotics incorporating machine learning algorithms for improving functional capacity evaluation and occupational rehabilitation. J Occup Rehabil. 2020;30:362–70.

  15. Ai Q, Liu Z, Meng W, Liu Q, Xie SQ. Machine learning in robot assisted upper limb rehabilitation: a focused review. IEEE Trans Cogn Dev Syst. 2021;1:1.

  16. Denecke K, Baudoin CR. A review of artificial intelligence and robotics in transformed health ecosystems. Front Med. 2022;9:795957.

  17. Yuan F, Klavon E, Liu Z, Lopez RP, Zhao X. A systematic review of robotic rehabilitation for cognitive training. Front Robot AI. 2021. https://doi.org/10.3389/frobt.2021.605715.

  18. Rahman S, Sarker S, Haque AKMN, Uttsha MM, Islam MF, Deb S. AI-driven stroke rehabilitation systems and assessment: a systematic review. IEEE Trans Neural Syst Rehabil Eng. 2023;31:192–207.

  19. Sumner J, Lim HW, Chong LS, Bundele A, Mukhopadhyay A, Kayambu G. Artificial intelligence in physical rehabilitation: a systematic review. Artif Intell Med. 2023;146:102693.

  20. Mennella C, Maniscalco U, De Pietro G, Esposito M. The role of artificial intelligence in future rehabilitation services: a systematic literature review. IEEE Access. 2023;11:11024–43.

  21. Zhang Y, Liu X, Qiao X, Fan Y. Characteristics and emerging trends in research on rehabilitation robots from 2001 to 2020: bibliometric study. J Med Internet Res. 2023;25:e429021.

  22. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

  23. Rathbone J, Hoffmann T, Glasziou P. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Syst Rev. 2015;4:80. https://doi.org/10.1186/s13643-015-0067-6.

  24. Rathbone J, Hoffmann T, Glasziou P. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Syst Rev. 2015;4(1):80.

  25. Dwivedi A, Lara J, Cheng LK, Paskaranandavadivel N, Liarokapis M. High-density electromyography based control of robotic devices: on the execution of dexterous manipulation tasks. In: Proceedings - IEEE International Conference on Robotics and Automation. 2020. p. 3825–31.

  26. Menner M, Neuner L, Lünenburger L, Zeilinger MN. Using human ratings for feedback control: a supervised learning approach with application to rehabilitation robotics. IEEE Trans Rob. 2020;36(3):789–801.

  27. Guidali M, Schlink P, Duschau-Wicke A, Riener R. Online learning and adaptation of patient support during ADL training. In: 2011 IEEE International Conference on Rehabilitation Robotics. 2011. p. 1–6.

  28. Pan L, Song A, Xu G, Li H, Xu B. Intelligent prescription-diagnosis function for rehabilitation training robot system. In: Lecture Notes in Computer Science, vol. 7507 LNAI. 2012. p. 11–20.

  29. Markovic M, Varel M, Schweisfurth MA, Schilling AF, Dosen S. Closed-loop multi-amplitude control for robust and dexterous performance of myoelectric prosthesis. IEEE Trans Neural Syst Rehabil Eng. 2020;28(2):498–507.

  30. Siu HC, Arenas AM, Sun T, Stirling LA. Implementation of a surface electromyography-based upper extremity exoskeleton controller using learning from demonstration. Sensors (Switzerland). 2018;18(2):67.

  31. Resquín F, Gonzalez-Vargas J, Ibáñez J, Brunetti F, Dimbwadyo I, Carrasco L, et al. Adaptive hybrid robotic system for rehabilitation of reaching movement after a brain injury: a usability study. J NeuroEng Rehabil. 2017;14:1–15.

  32. Zhang Y, Li S, Nolan KJ, Zanotto D. Reinforcement learning assist-as-needed control for robot assisted gait training. In: 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob). 2020. p. 785–90.

  33. Wang W, Zhang J, Kong D, Su S, Yuan X, Zhao C. Research on control method of upper limb exoskeleton based on mixed perception model. Robotica. 2022;40:3669–85.

  34. Zhao C. Control design of upper limb rehabilitation exoskeleton robot based on long and short-term memory network. J Phys Conf Ser. 2021;1986(1):12134.

  35. De Miguel-Fernández J, Salazar-Del Rio M, Rey-Prieto M, Bayón C, Guirao-Cano L, Font-Llagunes JM, et al. Inertial sensors for gait monitoring and design of adaptive controllers for exoskeletons after stroke: a feasibility study. Front Bioeng Biotechnol. 2023;11:1208561.

  36. Molazadeh V, Zhang Q, Bao X, Sharma N. An iterative learning controller for a switched cooperative allocation strategy during sit-to-stand tasks with a hybrid exoskeleton. IEEE Trans Control Syst Technol. 2022;30(3):1021–36.

  37. Cao Y, Huang J, Xiong C. Single-layer learning-based predictive control with echo state network for pneumatic-muscle-actuators-driven exoskeleton. IEEE Trans Cogn Dev Syst. 2021;13(1):80–90.

  38. Chen S, Yi J, Liu T. Muscle synergy-based control of human-manipulator interactions. In: 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). 2020. p. 667–72.

  39. Zhang X, Yin G, Li H, Dong R, Hu H. An adaptive seamless assist-as-needed control scheme for lower extremity rehabilitation robots. Proc Inst Mech Eng. 2021;235:723–34.

  40. Gijsberts A, Bohra R, Sierra González D, Werner A, Nowak M, Caputo B, et al. Stable myoelectric control of a hand prosthesis using non-linear incremental learning. Front Neurorobot. 2014;8:8.

  41. Yu S, Guo J, Guo S, Fu Q. Design of control system for lower limb rehabilitation robot on the healthy side sEMG signal. In: 2023 IEEE International Conference on Mechatronics and Automation (ICMA). 2023. p. 1038–43.

  42. Zhang M, Huang J, Cao Y, Xiong CH, Mohammed S. Echo state network-enhanced super-twisting control of passive gait training exoskeleton driven by pneumatic muscles. IEEE/ASME Trans Mechatron. 2022;27(6):5107–18.

  43. Lin CJ, Sie TY. Design and experimental characterization of artificial neural network controller for a lower limb robotic exoskeleton. Actuators. 2023;12(2):55.

  44. Alili A, Nalam V, Li M, Liu M, Feng J, Si J, et al. A novel framework to facilitate user preferred tuning for a robotic knee prosthesis. IEEE Trans Neural Syst Rehabil Eng. 2023;31:895–903.

  45. Zhu Y, Bai S. Human compatible stiffness modulation of a novel VSA for physical human-robot interaction. IEEE Robot Autom Lett. 2023;8(5):3023–30.

  46. Burns MK, Pei D, Vinjamuri R. Myoelectric control of a soft hand exoskeleton using kinematic synergies. IEEE Trans Biomed Circuits Syst. 2019;13(6):1351–61.

  47. Zou Y, Cheng L, Li Z. A multimodal fusion model for estimating human hand force: comparing surface electromyography and ultrasound signals. IEEE Robot Autom Mag. 2022;29(4):10–24.

  48. Wang C, He B, Wei W, Yi Z, Li P, Duan S, et al. Prediction of contralateral lower-limb joint angles using vibroarthrography and surface electromyography signals in time-series network. IEEE Trans Autom Sci Eng. 2023;20:901–8.

  49. Casas R, Martin K, Sandison M, Lum PS. A tracking device for a wearable high-DOF passive hand exoskeleton. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS). 2021. p. 6643–6.

  50. Luu TP, Lim HB, Qu X, Hoon KH, Low KH. Subject-specific lower limb waveforms planning via artificial neural network. In: IEEE International Conference on Rehabilitation Robotics. 2011.

  51. Hamza A, Moutacalli MT, Chebak A. Exoskeleton for hemiplegic patients: mechatronic approach to move one disabled lower limb with posture recognition neural network for more safety. In: 2020 8th International Conference on Control, Mechatronics and Automation (ICCMA). 2020. p. 185–90.

  52. Ma X, Wang C, Zhang R, Wu X. A real-time gait switching method for lower-limb exoskeleton robot based on sEMG signals. Commun Comput Inf Sci. 2019;1005:511–23.

  53. Li X, Liu S, Chang Y, Li S, Fan Y, Yu H. A human joint torque estimation method for elbow exoskeleton control. Int J Human Robot. 2020;17(3):195039.

  54. Peña GG, Consoni LJ, dos Santos WM, Siqueira AAG. Feasibility of an optimal EMG-driven adaptive impedance control applied to an active knee orthosis. Robot Auton Syst. 2019;112:98–108.

  55. Wu Q, Chen B, Wu H. Neural-network-enhanced torque estimation control of a soft wearable exoskeleton for elbow assistance. Mechatronics. 2019;63:102279.

  56. Yang N, Li J, Xu P, Zeng Z, Cai S, Xie L. Design of elbow rehabilitation exoskeleton robot with sEMG-based torque estimation control strategy. In: 2022 6th International Conference on Robotics and Automation Sciences (ICRAS). 2022. p. 105–13.

  57. Seo KH, Lee JJ. The development of two mobile gait rehabilitation systems. IEEE Trans Neural Syst Rehabil Eng. 2009;17(2):156–66.

    Article  PubMed  Google Scholar 

  58. Akkawutvanich C, Manoonpong P. Personalized symmetrical and asymmetrical gait generation of a lower limb exoskeleton. IEEE Trans Industr Inf. 2023;19(9):9798–808.

    Article  Google Scholar 

  59. Hong J, Chun C, Kim SJ. Gaussian process gait trajectory learning and generation of collision-free motion for assist-as-needed rehabilitation. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids). 2015. p. 181–6.

  60. Mahdavian M, Arzanpour S, Park EJ. Motion Generation of a Wearable Hip Exoskeleton Robot Using Machine Learning-Based Estimation of Ground Reaction Forces and Moments. In: 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). 2019. p. 796–801.

  61. Herath HMCMB, Nishshanka NMPM, Madhumali PVNU, Gunawardena S. Voice Control System for Upper Limb Rehabilitation Robots using Machine Learning. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT). 2021. p. 729–34.

  62. Hernández-Rojas LG, Montoya OM, Antelis JM. Anticipatory detection of self-paced rehabilitative movements in the same upper limb from EEG signals. IEEE Access. 2020;8:119728–43.

    Article  Google Scholar 

  63. Chowdhury A, Raza H, Meena YK, Dutta A, Prasad G. Online covariate shift detection-based adaptive brain-computer interface to trigger hand exoskeleton feedback for neuro-rehabilitation. IEEE Trans Cogn Dev Syst. 2018;10(4):1070–80.

  64. Lin CJ, Lin CH. Classification of EEG Signals Using a Common Spatial Pattern Based Motor-Imagery for a Lower-limb Rehabilitation Exoskeleton. In: IEEE EUROCON 2023 - 20th International Conference on Smart Technologies. 2023. p. 764–9.

  65. Huang HP, Huang TH, Liu YH, Kang ZH, Teng JT. A brain-controlled rehabilitation system with multiple kernel learning. In: IEEE International Conference on Systems, Man and Cybernetics. 2011. p. 591–6.

  66. Cai S, Chen Y, Huang S, Wu Y, Zheng H, Li X, et al. SVM-based classification of sEMG signals for upper-limb self-rehabilitation training. Front Neurorobot. 2019;13:31.

  67. Triwiyanto T, Caesarendra W, Abdullayev V, Ahmed AA, Herianto H. Single lead EMG signal to control an upper limb exoskeleton using embedded machine learning on raspberry Pi. J Robot Control (JRC). 2023;4:35–45.

  68. Zhang Q, Lambeth K, Sun Z, Dodson A, Bao X, Sharma N. Evaluation of a fused sonomyography and electromyography-based control on a cable-driven ankle exoskeleton. IEEE Trans Rob. 2023;39(3):2183–202.

  69. Hassani RH, Bolliger M, Rauter G. Recognizing Motion Onset During Robot-assisted Body-weight Unloading is Challenging but Seems Feasible. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 2022. p. 666–71.

  70. Prado A, Zhang H, Agrawal SK. Artificial Neural Networks to Solve Forward Kinematics of a Wearable Parallel Robot with Semi-rigid Links. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). 2021. p. 14524–30.

  71. Xu J, Xu L, Li Y, Cheng G, Shi J, Liu J, et al. A multi-channel reinforcement learning framework for robotic mirror therapy. IEEE Robot Auto Lett. 2020;5:5385–92.

  72. Xu J, Xu L, Ji A, Cao K. Learning robotic motion with mirror therapy framework for hemiparesis rehabilitation. Inf Proc Manag. 2023;60(2):103244.

  73. Xu J, Xu L, Cheng G, Shi J, Liu J, Liang X, et al. A robotic system with reinforcement learning for lower extremity hemiparesis rehabilitation. Ind Robot. 2021;48:388–400.

  74. Guo K, Orban M, Lu J, Al-Quraishi MS, Yang H, Elsamanty M. Empowering hand rehabilitation with AI-powered gesture recognition: a study of an sEMG-based system. Bioengineering. 2023;10(5):557.

  75. Chen X, Gong L, Wei L, Yeh SC, Da Xu L, Zheng L, et al. A wearable hand rehabilitation system with soft gloves. IEEE Trans Industr Inf. 2021;17(2):943–52.

  76. Castiblanco JC, Mondragon IF, Alvarado-Rojas C, Colorado JD. Assist-as-needed exoskeleton for hand joint rehabilitation based on muscle effort detection. Sensors. 2021;21(13):4372.

  77. Arteaga MV, Castiblanco JC, Mondragon IF, Colorado JD, Alvarado-Rojas C. EMG-based adaptive trajectory generation for an exoskeleton model during hand rehabilitation exercises. In: Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). 2020. p. 416–21.

  78. Naseer N, Ali F, Ahmed S, Iftikhar S, Khan RA, Nazeer H. EMG Based Control of Individual Fingers of Robotic Hand. In: 3rd International Conference on Sustainable Information Engineering and Technology, SIET 2018 - Proceedings. 2018. p. 6–9.

  79. Schabron B, Desai J, Yihun Y. Wheelchair-mounted upper limb robotic exoskeleton with adaptive controller for activities of daily living. Sensors. 2021;21(17):5738.

  80. Jiang YC, Ma R, Qi S, Ge S, Sun Z, Li Y, et al. Characterization of bimanual cyclical tasks from single-trial EEG-fNIRS measurements. IEEE Trans Neural Syst Rehabil Eng. 2022;30:146–56.

  81. Cesqui B, Tropea P, Micera S, Krebs HI. EMG-based pattern recognition approach in post stroke robot-aided rehabilitation: a feasibility study. J NeuroEng Rehabil. 2013;10:1–15.

  82. Boehm JR, Fey NP, Majewicz A. Inherent kinematic features of dynamic bimanual path following tasks. IEEE Trans Human-Machine Syst. 2020;50(6):613–22.

  83. Irastorza-Landa N, Sarasola-Sanz A, Shiman F, López-Larraz E, Klein J, Valencia D, et al. EMG discrete classification towards a myoelectric control of a robotic exoskeleton in motor rehabilitation. Biosysts Biorobot. 2017;15:159–63.

  84. Xiong P, Gao S, Liu Z, Hu L, Ding X. A novel scheme of finger recovery based on symmetric rehabilitation: Specially for hemiplegia. In: 2016 10th International Conference on Sensing Technology (ICST). 2016. p. 1–5.

  85. Ma Z, Ben-Tzvi P, Danoff J. Hand rehabilitation learning system with an exoskeleton robotic glove. IEEE Trans Neural Syst Rehabil Eng. 2016;24:1323–32.

  86. Celadon N, Došen S, Binder I, Ariano P, Farina D. Proportional estimation of finger movements from high-density surface electromyography. J NeuroEng Rehabil. 2016;13:1–9.

  87. Leon B, Basteris A, Infarinato F, Sale P, Nijenhuis S, Prange G, et al. Grasps recognition and evaluation of stroke patients for supporting rehabilitation therapy. BioMed Res Int. 2014;2014(1):318016.

  88. Cipriani C, Antfolk C, Controzzi M, Lundborg G, Rosen B, Carrozza MC, et al. Online myoelectric control of a dexterous hand prosthesis by transradial amputees. IEEE Trans Neural Syst Rehabil Eng. 2011;19(3):260–70.

  89. Liarokapis MV, Artemiadis PK, Kyriakopoulos KJ. Task discrimination from myoelectric activity: A learning scheme for EMG-based interfaces. In: 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). 2013. p. 1–6.

  90. Lu Z, Tong KY, Zhang X, Li S, Zhou P. Myoelectric pattern recognition for controlling a robotic hand: a feasibility study in stroke. IEEE Trans Biomed Eng. 2019;66:365–72.

  91. Lee J, Mukae N, Arata J, Iwata H, Iramina K, Iihara K, et al. A multichannel-near-infrared-spectroscopy-triggered robotic hand rehabilitation system for stroke patients. In: IEEE International Conference on Rehabilitation Robotics. 2017. p. 158–63.

  92. Furukawa Y, Bandara DSV, Nogami H, Arata J. Realtime EMG signal processing with OneClassSVM to extract motion intentions for a hand rehabilitation robot. In: 2023 IEEE/SICE International Symposium on System Integration (SII). 2023. p. 1–5.

  93. Jumphoo T, Uthansakul M, Duangmanee P, Khan N, Uthansakul P. Soft robotic glove controlling using brainwave detection for continuous rehabilitation at home. Comput Mater Continua. 2021;66:961–76.

  94. Fang B, Wang C, Sun F, Chen Z, Shan J, Liu H, et al. Simultaneous sEMG recognition of gestures and force levels for interaction with prosthetic hand. IEEE Trans Neural Syst Rehabil Eng. 2022;30:2426–36.

  95. Li K, Li Z, Zeng H, Wei N. Control of newly-designed wearable robotic hand exoskeleton based on surface electromyographic signals. Front Neurorobot. 2021;15:711047.

  96. Guo B, Ma Y, Yang J, Wang Z, Zhang X. Lw-CNN-based myoelectric signal recognition and real-time control of robotic arm for upper-limb rehabilitation. Comput Intell Neurosci. 2020;2020(1):8846021.

  97. Lu Z, Stampas A, Francisco GE, Zhou P. Offline and online myoelectric pattern recognition analysis and real-time control of a robotic hand after spinal cord injury. J Neural Eng. 2019;16(3):036014.

  98. Ma L, Zhao X, Li Z, Zhao M, Xu Z. A sEMG-based Hand Function Rehabilitation System for Stroke Patients. In: 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). 2018. p. 497–502.

  99. Petric F, Miklić D, Cepanec M, Cvitanović P, Kovačić Z. Functional imitation task in the context of robot-assisted Autism Spectrum Disorder diagnostics: Preliminary investigations. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 2017. p. 1471–8.

  100. Jo JH, Deji DM, Park HJ, Lee B. Development of FPGA-based deep learning orthosis actuating system using bio signal data. In: 2022 22nd International Conference on Control, Automation and Systems (ICCAS). 2022. p. 1457–60.

  101. Liu X, Wang J, Han T, Lou C, Liang T, Wang H, et al. Real-time control of intelligent prosthetic hand based on the improved TCN. Appl Bionics Biomechan. 2022;2022(1):6488599.

  102. Hernández LG, Antelis JM. Self-paced movement intention recognition from EEG signals during upper limb robot-assisted rehabilitation. In: 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). 2019. p. 69–72.

  103. Trigili E, Grazi L, Crea S, Accogli A, Carpaneto J, Micera S, et al. Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks. J NeuroEng Rehabil. 2019;16:1–6.

  104. Treussart B, Geffard F, Vignais N, Marin F. Controlling an Exoskeleton with EMG Signal to Assist Load Carrying: A Personalized Calibration. In: 2019 International Conference on Mechatronics, Robotics and Systems Engineering (MoRSE). 2019. p. 246–52.

  105. Wang F, Zhang D, Hu S, Zhu B, Han F, Zhao X. Brunnstrom Stage Automatic Evaluation for Stroke Patients by Using Multi-Channel sEMG. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 2020. p. 3763–6.

  106. Yang D, Liu H. An EMG-based deep learning approach for multi-DOF wrist movement decoding. IEEE Trans Industr Electron. 2022;69(7):7099–108.

  107. Lee SB, Kim H, Jeong JH, Wang IN, Lee SW, Kim DJ. Recurrent convolutional neural network model based on temporal and spatial feature for motor imagery classification. In: 7th International Winter Conference on Brain-Computer Interface (BCI). 2019.

  108. Smith A, Brown EE. Myoelectric control techniques for a rehabilitation robot. Appl Bionics Biomechan. 2011;8:21–37.

  109. Song J, Zhu A, Tu Y, Wang Y, Arif MA, Shen H, et al. Human body mixed motion pattern recognition method based on multi-source feature parameter fusion. Sensors (Switzerland). 2020;20(2):537.

  110. Zhou Y, Chen C, Alshahrani Y, Cheng M, Xu G, Li M, et al. Real-time Multiple-Channel Shoulder EMG Processing for a Rehabilitative Upper-limb Exoskeleton Motion Control Using ANN Machine Learning. In: 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP). 2021. p. 498–503.

  111. Zhang L, Guo Z, Wang C, Yuan Y, Wu X. Arm Motion Classification Based on sEMG and Angle Signal for A Lower Limb Exoskeleton Control System. In: 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI). 2019. p. 105–10.

  112. Wang B, Ou C, Xie N, Wang L, Yu T, Fan G, et al. Lower limb motion recognition based on surface electromyography signals and its experimental verification on a novel multi-posture lower limb rehabilitation robots. Comput Elect Eng. 2022;101:108067.

  113. Gonçalves C, Lopes JM, Moccia S, Berardini D, Migliorelli L, Santos CP. Deep learning-based approaches for human motion decoding in smart walkers for rehabilitation. Expert Syst Applic. 2023;228:120288.

  114. Ma L, Leng Y, Zhang K, Qian Y, Fu C. Multi-Gait Recognition for a Soft Ankle Exoskeleton with Limited Sensors. In: 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM). 2021. p. 566–71.

  115. Yin Z, Zheng J, Huang L, Gao Y, Peng H, Yin L. Sa-SVM-based locomotion pattern recognition for exoskeleton robot. Appl Sci (Switzerland). 2021;11(12):5573.

  116. Zheng E, Wang Q. Noncontact capacitive sensing-based locomotion transition recognition for amputees with robotic transtibial prostheses. IEEE Trans Neural Syst Rehabil Eng. 2017;25(2):161–70.

  117. Beil J, Ehrenberger I, Scherer C, Mandery C, Asfour T. Human Motion Classification Based on Multi-Modal Sensor Data for Lower Limb Exoskeletons. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018. p. 5431–6.

  118. Liu X, Wang Q. Real-time locomotion mode recognition and assistive torque control for unilateral knee exoskeleton on different terrains. IEEE/ASME Trans Mechatron. 2020;25(6):2722–32.

  119. Zeng D, Qu C, Ma T, Qu S, Yin P, Zhao N, et al. Research on a gait detection system and recognition algorithm for lower limb exoskeleton robot. J Brazil Soc Mechan Sci Eng. 2021;43(6):298.

  120. Paulo J, Peixoto P, Nunes UJ. ISR-AIWALKER: robotic walker for intuitive and safe mobility assistance and gait analysis. IEEE Trans Human-Machine Syst. 2017;47(6):1110–22.

  121. Zheng E, Wang Q, Qiao H. Locomotion mode recognition with robotic transtibial prosthesis in inter-session and inter-day applications. IEEE Trans Neural Syst Rehabil Eng. 2019;27(9):1836–45.

  122. Zhang Z, Wang Z, Lei H, Gu W. Gait phase recognition of lower limb exoskeleton system based on the integrated network model. Biomed Signal Process Control. 2022;76:103693.

  123. Li J, Gao T, Zhang Z, Wu G, Zhang H, Zheng J, et al. A Novel Method of Pattern Recognition Based on TLSTM in lower limb exoskeleton in Many Terrains. In: 2022 4th International Conference on Intelligent Control, Measurement and Signal Processing (ICMSP). 2022. p. 733–7.

  124. Figueiredo J, Santos CP, Urendes E, Pons JL, Moreno JC. Implementation of feature extraction methods and support vector machine for classification of partial body weight supports in overground robot-aided walking. In: 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER). 2015. p. 763–6.

  125. Tortora S, Tonin L, Sieghartsleitner S, Ortner R, Guger C, Lennon O, et al. Effect of lower limb exoskeleton on the modulation of neural activity and gait classification. IEEE Trans Neural Syst Rehabil Eng. 2023;31:2988–3003.

  126. García-Cossio E, Severens M, Nienhuis B, Duysens J, Desain P, Keijsers N, et al. Decoding sensorimotor rhythms during robotic-assisted treadmill walking for brain computer interface (BCI) applications. PLoS ONE. 2015;10:13790.

  127. Jung JY, Heo W, Yang H, Park H. A neural network-based gait phase classification method using sensors equipped on lower limb exoskeleton robots. Sensors (Switzerland). 2015;15:27738–59.

  128. Paulo J, Asvadi A, Peixoto P, Amorim P. Human gait pattern changes detection system: a multimodal vision-based and novelty detection learning approach. Biocybernetics Biomed Eng. 2017;37:701–17.

  129. Choi J, Kim KT, Jeong JH, Kim L, Lee SJ, Kim H. Developing a motor imagery-based real-time asynchronous hybrid BCI controller for a lower-limb exoskeleton. Sensors (Switzerland). 2020;20:1–15.

  130. Bamdad M, Mokri C, Abolghasemi V. Joint mechanical properties estimation with a novel EMG-based knee rehabilitation robot: a machine learning approach. Med Eng Phys. 2022;110:103933.

  131. Ben-Tzvi P, Danoff J, Ma Z. The design evolution of a sensing and force-feedback exoskeleton robotic glove for hand rehabilitation application. J Mechan Robot. 2016;8(5):051019.

  132. Tsepa O, Burakov R, Laschowski B, Mihailidis A. Continuous Prediction of Leg Kinematics during Walking using Inertial Sensors, Smart Glasses, and Embedded Computing. In: 2023 IEEE International Conference on Robotics and Automation (ICRA). 2023. p. 10478–82.

  133. Khan A, Hebert M. Learning safe recovery trajectories with deep neural networks for unmanned aerial vehicles. In: 2018 IEEE Aerospace Conference. 2018. p. 1–9.

  134. Zou C, Huang R, Cheng H, Qiu J. Learning gait models with varying walking speeds. IEEE Robot Auto Lett. 2021;6(1):183–90.

  135. He Y, Wu X, Ma Y, Cao W, Li N, Li J, et al. GC-IGTG: A Rehabilitation Gait Trajectory Generation Algorithm for Lower Extremity Exoskeleton. In: 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO). 2019. p. 2031–6.

  136. Liu DX, Wu X, Wang C, Chen C. Gait trajectory prediction for lower-limb exoskeleton based on Deep Spatial-Temporal Model (DSTM). In: 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM). 2017. p. 564–9.

  137. Zou C, Huang R, Peng Z, Qiu J, Cheng H. Synergetic Gait Prediction for Stroke Rehabilitation with Varying Walking Speeds. In: IEEE International Conference on Intelligent Robots and Systems (IROS). 2021. p. 7231–7.

  138. Khan RA, Naseer N, Qureshi NK, Noori FM, Nazeer H, Khan MU. FNIRS-based neurorobotic interface for gait rehabilitation. J NeuroEng Rehabil. 2018;15:1–17.

  139. Li H, Guo S, Bu D, Wang H, Kawanishi M. Subject-independent estimation of continuous movements using CNN-LSTM for a home-based upper limb rehabilitation system. IEEE Robot Auto Lett. 2023;8(10):6403–10.

  140. Ebers MR, Rosenberg MC, Kutz JN, Steele KM. A machine learning approach to quantify individual gait responses to ankle exoskeletons. J Biomechan. 2023;157:111693.

  141. Luciani B, Roveda L, Braghin F, Pedrocchi A, Gandolla M. Trajectory learning by therapists’ demonstrations for an upper limb rehabilitation exoskeleton. IEEE Robot Auto Lett. 2023;8(8):4561–8.

  142. Zhang Y, Cheng L. Online Adaptive and Attention-based Reference Path Generation for Upper-limb Rehabilitation Robot. In: China Automation Congress (CAC). 2021. p. 5268–73.

  143. Tang Y, Hao D, Cao C, Shi P, Yu H, Luan X, et al. Glenohumeral joint trajectory tracking for improving the shoulder compliance of the upper limb rehabilitation robot. Med Eng Phys. 2023;113:103961.

  144. Tang Z, Zhang K, Sun S, Gao Z, Zhang L, Yang Z. An upper-limb power-assist exoskeleton using proportional myoelectric control. Sensors (Switzerland). 2014;14:6677–94.

  145. Anwar T, Jumaily AA. System identification and damping coefficient estimation from EMG based on ANFIS to optimize human exoskeleton interaction. In: 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). 2016. p. 844–9.

  146. Pilarski PM, Dick TB, Sutton RS. Real-time prediction learning for the simultaneous actuation of multiple prosthetic joints. In: 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). 2013. p. 1–8.

  147. Ren JL, Chien YH, Chia EY, Fu LC, Lai JS. Deep Learning based Motion Prediction for Exoskeleton Robot Control in Upper Limb Rehabilitation. In: 2019 International Conference on Robotics and Automation (ICRA). 2019. p. 5076–82.

  148. Cai H, Guo S, Yang Z, Guo J. A motor recovery training and evaluation method for the upper limb rehabilitation robotic system. IEEE Sens J. 2023;23(9):9871–9.

  149. Zhang S, Fan L, Ye J, Chen G, Fu C, Leng Y. An intelligent rehabilitation assessment method for stroke patients based on lower limb exoskeleton robot. IEEE Trans Neural Syst Rehabil Eng. 2023;31:3106–17.

  150. Agrafiotis DK, Yang E, Littman GS, Byttebier G, Dipietro L, DiBernardo A, et al. Accurate prediction of clinical stroke scales and improved biomarkers of motor impairment from robotic measurements. PLoS ONE. 2021;16(1):245874.

  151. Ye F, Yang B, Nam C, Xie Y, Chen F, Hu X. A data-driven investigation on surface electromyography based clinical assessment in chronic stroke. Front Neurorobot. 2021;15:648855.

  152. Campagnini S, Liuzzi P, Galeri S, Montesano A, Diverio M, Cecchi F, et al. Cross-Validation of Machine Learning Models for the Functional Outcome Prediction after Post-Stroke Robot-Assisted Rehabilitation. In: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 2022. p. 4950–3.

  153. Jung JY, Glasgow JI, Scott SH. Trial map : A visualization approach for verification of stroke impairment assessment database. In: 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). 2008. p. 4114–7.

  154. Zhang M, Chen J, Ling Z, Zhang B, Yan Y, Xiong D, et al. Quantitative evaluation system of upper limb motor function of stroke patients based on desktop rehabilitation robot. Sensors. 2022;22(3):1170.

  155. Kim J, Park W, Kim J. Quantitative evaluation of stroke patients’ wrist paralysis by estimation of kinematic coefficients and machine learning. Sens Mat. 2020;32:981–90.

  156. Kim J, Lee G, Jo H, Park W, Jin YS, Kim HD, et al. A wearable soft robot for stroke patients’ finger occupational therapy and quantitative measures on the joint paralysis. Int J Precision Eng Manuf. 2020;21:2419–26.

  157. Lopes JM, Figueiredo J, Fonseca P, Cerqueira JJ, Vilas-Boas JP, Santos CP. Deep learning-based energy expenditure estimation in assisted and non-assisted gait using inertial, EMG, and heart rate wearable sensors. Sensors. 2022;22(20):7913.

  158. Jin P, Jiang W, Bao Q, Wei W, Jiang W. Predictive nomogram for soft robotic hand rehabilitation of patients with intracerebral hemorrhage. BMC Neurol. 2022;22(1):334.

  159. Morone G, Masiero S, Coiro P, De Angelis D, Venturiero V, Paolucci S, et al. Clinical features of patients who might benefit more from walking robotic training. Restor Neurol Neurosci. 2018;36:293–9.

  160. Kuo CY, Liu CW, Lai CH, Kang JH, Tseng SH, Su ECY. Prediction of robotic neurorehabilitation functional ambulatory outcome in patients with neurological disorders. J NeuroEng Rehabil. 2021;18(1):174.

  161. Hsieh YW, Lin KC, Wu CY, Lien HY, Chen JL, Chen CC, et al. Predicting clinically significant changes in motor and functional outcomes after robot-assisted stroke rehabilitation. Arch Phys Med Rehabil. 2014;95:316–21.

  162. Camardella C, Cappiello G, Curto Z, Germanotta M, Aprile I, Mazzoleni S, et al. A Random Tree Forest decision support system to personalize upper extremity robot-assisted rehabilitation in stroke: a pilot study. In: IEEE International Conference on Rehabilitation Robotics (ICORR). 2022. https://doi.org/10.1109/ICORR55369.2022.9896509

  163. Thakkar HK, Liao WW, Wu CY, Hsieh YW, Lee TH. Predicting clinically significant motor function improvement after contemporary task-oriented interventions using machine learning approaches. J NeuroEng Rehabil. 2020;17(1):131.

  164. Delgado P, Yihun Y. Integration of task-based exoskeleton with an assist-as-needed algorithm for patient-centered elbow rehabilitation. Sensors. 2023;23(5):2460.

  165. Harshe K, Williams JR, Hocking TD, Lerner ZF. Predicting neuromuscular engagement to improve gait training with a robotic ankle exoskeleton. IEEE Robot Auto Lett. 2023;8(8):5055–60.

  166. Song Y, Cai S, Yang L, Li G, Wu W, Xie L. A practical eeg-based human-machine interface to online control an upper-limb assist robot. Front Neurorobot. 2020;14:32.

  167. Hernandez-Rojas LG, Cantillo-Negrete J, Mendoza-Montoya O, Carino-Escobar RI, Leyva-Martinez I, Aguirre-Guemez AV, et al. Brain-computer interface controlled functional electrical stimulation: evaluation with healthy subjects and spinal cord injury patients. IEEE Access. 2022;10:46834–52.

  168. Kumar N, Michmizos KP. Deep Learning of Movement Intent and Reaction Time for EEG-informed Adaptation of Rehabilitation Robots. In: Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). 2020. p. 527–32. https://doi.org/10.1109/BioRob49111.2020.9224272

  169. Sanguantrakul J, Soontreekulpong N, Trakoolwilaiwan T, Wongsawat Y. Development of BCI System for Walking Substitution via Humanoid Robot. In: 2020 8th International Electrical Engineering Congress (iEECON). 2020. p. 1–4.

  170. Liu D, Chen W, Lee K, Chavarriaga R, Bouri M, Pei Z, et al. Brain-actuated gait trainer with visual and proprioceptive feedback. J Neural Eng. 2017;14(5):056017.

  171. Adithya K, Kuruvila SJ, Pramode S, Krupa N. Brain Computer Interface for Neurorehabilitation with Kinesthetic Feedback. In: 2020 5th International Conference on Robotics and Automation Engineering (ICRAE). 2020. p. 153–7. https://doi.org/10.1109/ICRAE50850.2020.9310801

  172. Khairuddin IM, Sidek SN, Majeed APPA, Puzi AA. Classifying Motion Intention from EMG signal: A k-NN Approach. In: 2019 7th International Conference on Mechatronics Engineering (ICOM). 2019. https://doi.org/10.1109/ICOM47790.2019.8952042

  173. Lin CJ, Chuang HC, Hsu CW, Chen CS. Pneumatic artificial muscle actuated robot for lower limb rehabilitation triggered by electromyography signals using discrete wavelet transformation and support vector machines. Sens Mat. 2017;29:1625–36.

  174. Xiao F. Proportional myoelectric and compensating control of a cable-conduit mechanism-driven upper limb exoskeleton. ISA Trans. 2019;89:245–55.

  175. Meng W, Zhu Y, Zhou Z, Chen K, Ai Q. Active interaction control of a rehabilitation robot based on motion recognition and adaptive impedance control. In: IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). 2014. p. 1436–41. https://doi.org/10.1109/FUZZ-IEEE.2014.6891705

  176. Sierotowicz M, Lotti N, Nell L, Missiroli F, Alicea R, Zhang X, et al. EMG-driven machine learning control of a soft glove for grasping assistance and rehabilitation. IEEE Robot Auto Lett. 2022;7(2):1566–73.

  177. Guo Z, Wang C, Song C. A real-time stable-control gait switching strategy for lower-limb rehabilitation exoskeleton. PLoS ONE. 2020;15(8):e0238247.

  178. Wei M, Liu Q, Zhou Z, Ai Q. Active interaction control applied to a lower limb rehabilitation robot by using EMG recognition and impedance model. Ind Robot. 2014;14:465–79.

  179. Abibullaev B, An J, Lee SH, Jin SH, Moon JI. A study on the BCI-Robot assisted stroke rehabilitation framework using brain hemodynamic signals. In: 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). 2012. p. 500–4.

  180. Taati B, Wang R, Huq R, Snoek J, Mihailidis A. Vision-based posture assessment to detect and categorize compensation during robotic rehabilitation therapy. In: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). 2012. p. 1607–13.

  181. Xia K, Chen X, Chang X, Liu C, Guo L, Xu X, et al. Hand exoskeleton design and human-machine interaction strategies for rehabilitation. Bioengineering. 2022;9(11):682.

  182. Zhang P, Gao X, Miao M, Zhao P. Design and control of a lower limb rehabilitation robot based on human motion intention recognition with multi-source sensor information. Machines. 2022;10(12):1125.

  183. Huang J, Yan S, Yang D, Wu D, Wang L, Yang Z, et al. Proxy-based control of intelligent assistive walker for intentional sit-to-stand transfer. IEEE/ASME Trans Mechatron. 2022;27(2):904–15.

  184. Ai X, Santamaria V, Chen J, Hu B, Zhu C, Agrawal SK. A deep-learning based real-time prediction of seated postural limits and its application in trunk rehabilitation. IEEE Trans Neural Syst Rehabil Eng. 2023;31:260–70.

  185. Li N, Chen W, Yang Y, Wang Y, Yang T, Yu P, et al. Model-agnostic personalized knowledge adaptation for soft exoskeleton robot. IEEE Trans Med Rob Bio. 2023;5:353–62.

  186. Barkana DE, Sarkar N. Towards a smooth human-robot interaction for rehabilitation robotic systems. Adv Robot. 2009. https://doi.org/10.1163/016918609X12496339893931.

  187. Erol D, Mallapragada V, Sarkar N, Uswatte G, Taub E. Autonomously adapting robotic assistance for rehabilitation therapy. In: Proceedings of the First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). 2006. p. 567–72.

  188. Erol D, Sarkar N. Smooth Human-Robot Interaction in Robot-Assisted Rehabilitation. In: 2007 IEEE 10th International Conference on Rehabilitation Robotics. 2007. p. 5–15.

  189. Dowling AV, Barzilay O, Lombrozo Y, Wolf A. An adaptive home-use robotic rehabilitation system for the upper body. IEEE J Trans Eng Health Med. 2014;2:1–10.

  190. Guan D. Pelvic trajectory analysis for lower limbs rehabilitation robot. In: 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 2019. p. 2002–5.

  191. Seo K. Real-Time Estimation of Walking Speed and Stride Length Using an IMU Embedded in a Robotic Hip Exoskeleton. In: Proceedings - IEEE International Conference on Robotics and Automation (ICRA). 2023. p. 12665–71. https://doi.org/10.1109/ICRA48891.2023.10160770

  192. Xu J, Xu L, Ji A, Li Y, Cao K. A DMP-based motion generation scheme for robotic mirror therapy. IEEE/ASME Trans Mechatron. 2023. https://doi.org/10.1109/TMECH.2023.3255218.

  193. Luo L, Peng L, Wang C, Hou ZG. A greedy assist-as-needed controller for upper limb rehabilitation. IEEE Trans Neural Networks Learn Syst. 2019. https://doi.org/10.1109/TNNLS.2019.2892157.

  194. Hun Lee M, Siewiorek DP, Smailagic A, Bernardino A. Design, development, and evaluation of an interactive personalized social robot to monitor and coach post-stroke rehabilitation exercises. User Model User-Adapt Interact. 2023. https://doi.org/10.1007/s11257-022-09348-5.


  195. Zhao P, Zhang Y, Guan H, Deng X, Chen H. Design of a single-degree-of-freedom immersive rehabilitation device for clustered upper-limb motion. J Mech Robot. 2021;13(3):031006.


  196. Fu Y, Wang X, Zhu Z, Tan J, Zhao Y, Ding Y, et al. Vision-based Automatic Detection of Compensatory Postures of after-Stroke Patients During Upper-extremity Robot-assisted Rehabilitation: A Pilot Study in Reaching Movement. In: 2020 International Conference on Assistive and Rehabilitation Technologies (iCareTech). 2020. p. 62–6.

  197. Cai S, Li G, Su E, Wei X, Huang S, Ma K, et al. Real-time detection of compensatory patterns in patients with stroke to reduce compensation during robotic rehabilitation therapy. IEEE J Biomed Health Inform. 2020. https://doi.org/10.1109/JBHI.2019.2963365.


  198. Cai S, Wei X, Su E, Wu W, Zheng H, Xie L. Online compensation detecting for real-time reduction of compensatory motions during reaching: a pilot study with stroke survivors. J NeuroEng Rehabil. 2020;17:1.


  199. Xu P, Xia D, Zheng B, Huang L, Xie L. A novel compensatory motion detection method using multiple signals and machine learning. IEEE Sens J. 2022;22(17):17162–72.


  200. Zhi YX, Lukasik M, Li MH, Dolatabadi E, Wang RH, Taati B. Automatic detection of compensation during robotic stroke rehabilitation therapy. IEEE J Transl Eng Health Med. 2018;6:1–7.


  201. Chang M, Kim TW, Beom J, Won S, Jeon D. AI therapist realizing expert verbal cues for effective robot-assisted gait training. IEEE Trans Neural Syst Rehabil Eng. 2020. https://doi.org/10.1109/TNSRE.2020.3038175.


  202. Mogena E, Nunez P, Gonzalez JL. Automatic human body feature extraction in serious games applied to rehabilitation robotics. J Phys Agents. 2017. https://doi.org/10.14198/JoPha.2017.8.1.04.


  203. Shirzad N, Van Der Loos HFM. Evaluating the user experience of exercising reaching motions with a robot that predicts desired movement difficulty. J Mot Behav. 2016. https://doi.org/10.1080/00222895.2015.1035430.


  204. Kokkoni E, Arnold AJ, Baxevani K, Tanner HG. Infants respond to robot’s need for assistance in pursuing action-based goals. In: ACM/IEEE International Conference on Human-Robot Interaction. 2021. p. 47–51. https://doi.org/10.1145/3434074.3447126.

  205. Yan H, Wang H, Vladareanu L, Lin M, Vladareanu V, Li Y. Detection of participation and training task difficulty applied to the multi-sensor systems of rehabilitation robots. Sensors. 2019;19(21):4681.


  206. Kumar N, Michmizos KP. Machine Learning for Motor Learning: EEG-based Continuous Assessment of Cognitive Engagement for Adaptive Rehabilitation Robots. In: Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics. 2020. https://doi.org/10.1109/BioRob49111.2020.9224368.

  207. Latif MY, Naeem L, Hafeez T, Raheel A, Saeed SMU, Awais M, et al. Brain computer interface based robotic arm control. In: 2017 International Smart Cities Conference (ISC2). 2017. p. 1–5.

  208. Xu G, Gao X, Pan L, Chen S, Wang Q, Zhu B, et al. Anxiety detection and training task adaptation in robot-assisted active stroke rehabilitation. Int J Adv Robot Syst. 2018;15(6):172988.


  209. Sun W, Peng H, Liu Q, Guo Z, Ibrah OO, Wu F, et al. Research on Facial Emotion Recognition System Based on Exoskeleton Rehabilitation Robot. In: 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). 2020. p. 481–4.

  210. Appel VCR, Belini VL, Jong DH, Magalhães DV, Caurin GAP. Classifying emotions in rehabilitation robotics based on facial skin temperature. In: 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics. 2014. p. 276–80.

  211. Amato F, Di Gregorio M, Monaco C, Sebillo M, Tortora G, Vitiello G. Socially Assistive Robotics combined with Artificial Intelligence for ADHD. In: 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC). 2021. p. 1–6.

  212. Naseri A, Liu M, Lee IC, Liu W, Huang H. Characterizing prosthesis control fault during human-prosthesis interactive walking using intrinsic sensors. IEEE Robot Autom Lett. 2022;7(3):8307–14.


  213. Lund HH, Pedersen MD, Beck R. Modular robotic tiles - Experiments for children with autism [Internet]. Proceedings of the 13th International Symposium on Artificial Life and Robotics, AROB 13th’08. 2008. p. 5–10. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-78449265455&partnerID=40&md5=21441e84b8589fc73191aad312769e8f

  214. Parker ASR, Edwards AL, Pilarski PM. Exploring the Impact of Machine-Learned Predictions on Feedback from an Artificial Limb. In: 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR). 2019. p. 1239–46.

  215. Perpetuini D, Russo EF, Cardone D, Palmieri R, Filippini C, Tritto M, et al. Identification of functional cortical plasticity in children with cerebral palsy associated to robotic-assisted gait training: an fNIRS study. J Clin Med. 2022;11(22):6790.


  216. Cao B. Deep Learning Using for Fall Detection on the Rehabilitation Walking-Aid Robot. In: Proceedings - 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics, IHMSC 2019, vol. 2. 2019. p. 194–7. https://doi.org/10.1109/IHMSC.2019.10141.

  217. Cha B, Lee KH, Ryu J. Deep-learning-based emergency stop prediction for robotic lower-limb rehabilitation training systems. IEEE Trans Neural Syst Rehabil Eng. 2021. https://doi.org/10.1109/TNSRE.2021.3087725.


  218. Pareek S, Kesavadas T. iART: learning from demonstration for assisted robotic therapy using LSTM. IEEE Robot Autom Lett. 2020;5(2):477–84.


  219. Lee D, Kang I, Molinaro DD, Yu A, Young AJ. Real-time user-independent slope prediction using deep learning for modulation of robotic knee exoskeleton assistance. IEEE Robot Autom Lett. 2021;6(2):3995–4000.


  220. Kang I, Kunapuli P, Hsu H, Young AJ. Electromyography (EMG) signal contributions in speed and slope estimation using robotic exoskeletons. In: IEEE International Conference on Rehabilitation Robotics. 2019. p. 548–53. https://doi.org/10.1109/ICORR.2019.8779433.

  221. Li X, Lu Q, Chen P, Gong S, Yu X, He H, et al. Assistance level quantification-based human-robot interaction space reshaping for rehabilitation training. Front Neurorobot. 2023;17:1161007.


  222. Castillo JC, Álvarez-Fernández D, Alonso-Martín F, Marques-Villarroya S, Salichs MA. Social robotics in therapy of Apraxia of speech. J Healthc Eng. 2018;18(1):7075290.


  223. Xu L, Xu M, Ke Y, An X, Liu S, Ming D. Cross-dataset variability problem in EEG decoding with deep learning. Front Hum Neurosci. 2020. https://doi.org/10.3389/fnhum.2020.00103.


  224. Chaibub Neto E, Pratap A, Perumal TM, Tummalacherla M, Snyder P, Bot BM, et al. Detecting the impact of subject characteristics on machine learning-based diagnostic applications. NPJ Digit Med. 2019;2(1):1–6.


  225. Yang Z, Qu M, Pan Y, Huan R. Comparing cross-subject performance on human activities recognition using learning models. IEEE Access. 2022;10:95179–96.


  226. Collins GS, Moons KGM, Dhiman P, Riley RD, Beam AL, Calster BV, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385:e078378.


  227. Liu G, Cai H, Leelayuwat N. Intervention effect of rehabilitation robotic bed under machine learning combined with intensive motor training on stroke patients with hemiplegia. Front Neurorobot. 2022;9:16.


  228. Nicora G, Bellazzi R. A reliable machine learning approach applied to single-cell classification in acute myeloid leukemia. AMIA Annu Symp Proc. 2020;2020:925–32.


  229. Nicora G, Rios M, Abu-Hanna A, Bellazzi R. Evaluating pointwise reliability of machine learning prediction. J Biomed Inform. 2022. https://doi.org/10.1016/j.jbi.2022.103996.


  230. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17(1):195.


  231. Tabrez A, Hayes B. Improving Human-Robot Interaction Through Explainable Reinforcement Learning. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2019. p. 751–3.

  232. Das D, Banerjee S, Chernova S. Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery. In: 2021 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2021. p. 351–60.

  233. Sujatha Ravindran A, Malaya CA, John I, Francisco GE, Layne C, Contreras-Vidal JL. Decoding neural activity preceding balance loss during standing with a lower-limb exoskeleton using an interpretable deep learning model. J Neural Eng. 2022;19(3):036015.


  234. Ravindran AS, Cestari M, Malaya C, John I, Francisco GE, Layne C, et al. Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss. In: 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2020. p. 268–73.

  235. Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. 2024;10(4): e26297.


  236. Dennis S, Garrett P, Yim H, Hamm J, Osth AF, Sreekumar V, et al. Privacy versus open science. Behav Res Methods. 2019;51(4):1839.



Acknowledgements

This work was supported by the Italian Ministry of Research, under the complementary actions to the NRRP “Fit4MedRob—Fit for Medical Robotics” Grant (# PNC0000007). S. Pe and G. Santangelo are PhD students enrolled in the National PhD program in Artificial Intelligence, XXXIX cycle, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma.

Funding

This work was supported by the Italian Ministry of Research, under the complementary actions to the NRRP “Fit4MedRob—Fit for Medical Robotics” Grant (# PNC0000007).

Author information

Authors and Affiliations

Authors

Contributions

EP, SQ, RB and GN designed the work. GN, SP, GS and LB screened the papers. GN, SP and LB drafted the work. EP supervised the work. RB, SQ, IGA and MG revised the work.

Corresponding author

Correspondence to Giovanna Nicora.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Nicora, G., Pe, S., Santangelo, G. et al. Systematic review of AI/ML applications in multi-domain robotic rehabilitation: trends, gaps, and future directions. J NeuroEngineering Rehabil 22, 79 (2025). https://doi.org/10.1186/s12984-025-01605-z



  • DOI: https://doi.org/10.1186/s12984-025-01605-z

Keywords