Neurological
Machine learning model reliably assesses resting tremor and bradykinesia in Parkinson’s disease
According to a study published in Neurology, a support vector machine (SVM) model outperformed an untrained human rater in reliably and accurately rating resting tremor and bradykinesia in patients with Parkinson’s disease (PD).
The study’s researchers sought to determine whether the SVM model, a machine learning-based automatic rating system, was a suitable alternative to the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS), which is often inefficient to administer.
An MDS-UPDRS-certified movement disorder specialist viewed resting tremor video clips from 55 patients with PD and finger-tapping video clips from 55 patients with PD and rated them with the MDS-UPDRS. A noncertified neurologist evaluated the clips separately, blinded to the specialist’s ratings.
To analyze resting tremor, the study researchers measured the maximum and mean amplitude of hand movement; to analyze finger tapping, they measured the mean, maximum, and minimum tapping speed and amplitude per 3-second segment, as well as tapping fatigue. They tested the SVM model by training it on 80 of the 110 video clips and using it to score the remaining 30.
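The train-and-score step described above can be sketched as follows. This is a hypothetical illustration with made-up feature data, not the authors’ actual pipeline; the feature names and the scikit-learn setup are assumptions.

```python
# Illustrative sketch of the study's approach: train an SVM on kinematic
# features from 80 video clips, then score the 30 held-out clips.
# All data here is random stand-in data, not the study's measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy stand-ins: 110 clips, each summarized by a few kinematic features
# (e.g., mean/max movement amplitude, tapping speed, tapping fatigue).
X = rng.normal(size=(110, 5))
y = rng.integers(0, 4, size=110)      # MDS-UPDRS-style item scores 0-3

X_train, y_train = X[:80], y[:80]     # 80 training clips
X_test = X[80:]                       # 30 held-out clips to be scored

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
predicted_scores = model.predict(X_test)   # automatic ratings for 30 clips
print(predicted_scores.shape)
```

Scaling the features before fitting matters here because SVMs are sensitive to feature magnitudes; the pipeline applies the same scaling to the held-out clips automatically.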
They then measured the absolute agreement rate on the UPDRS ratings and the interrater reliability among the gold standard rating, the untrained human rater, and the SVM model.
In the analysis, the logarithms of the maximum and mean resting tremor amplitudes correlated positively with the gold standard rating (β = 1.20, P < .001; β = 1.20, P < .001). Finger-tapping speed correlated negatively with the gold standard rating (β = -0.87, P < .001), as did the mean, maximum, and minimum finger-tapping amplitudes (β = -0.31; β = -0.42; β = -0.15; P < .001 for all). Finger-tapping fatigue did not correlate with the gold standard rating.
The SVM model had higher absolute and relative agreement rates with the gold standard rating (63% and 100%, respectively) than did the untrained human rater (46% and 97%). Interrater reliability between the SVM model and the gold standard rating was also higher than that between the untrained human rater and the gold standard rating (weighted κ, 0.791 vs 0.662).
The same held for intraclass correlation coefficients (ICCs): study researchers found that the SVM model and the gold standard had an ICC of 0.927, while the untrained human rater and the gold standard had an ICC of 0.861.
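The two agreement statistics reported above can be computed as sketched below. The ratings are invented for illustration, and the ICC variant shown (ICC(2,1), two-way random effects, absolute agreement) is an assumption; the study may have used a different ICC form.

```python
# Illustrative computation of linearly weighted Cohen's kappa and an
# intraclass correlation coefficient (ICC) for two raters.
# Ratings below are made up, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

gold = np.array([0, 1, 1, 2, 3, 2, 0, 1, 2, 3])   # gold standard ratings
rater = np.array([0, 1, 2, 2, 3, 2, 0, 0, 2, 3])  # second rater (e.g., SVM)

# Weighted kappa penalizes a 0-vs-3 disagreement more than a 1-vs-2 one.
kappa = cohen_kappa_score(gold, rater, weights="linear")

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape                       # subjects x raters
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)                    # between-subjects mean square
    msc = ss_rater / (k - 1)                   # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

icc = icc_2_1(np.column_stack([gold, rater]))
print(round(kappa, 3), round(icc, 3))
```

With the toy ratings above, both statistics come out high because the two raters disagree on only two clips, and then only by one point.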
However, the SVM model was less reliable for finger tapping than for resting tremor. The study’s researchers attributed this to the larger number of parameters needed for assessment and to the closer proximity between test and training sets in the tremor analysis compared with finger tapping. Alternatively, the untrained human rater may have taken patients’ demographics and hypomimia into account when rating bradykinesia.
Limitations of the study included technical errors, its retrospective design, some nonadherence to MDS-UPDRS protocols, and the assessment of only 2 items of MDS-UPDRS Part III.
“Machine learning-based algorithms that automatically rate key PD symptoms are more accurate than untrained human assessments,” the study’s researchers concluded.
Reference
Park KW, Lee E, Lee JS, et al. Machine learning-based automatic rating for cardinal symptoms of Parkinson disease. Neurology. 2021;96(13):e1761-e1769. doi:10.1212/WNL.0000000000011654