O1257 Multidimensional ground reaction forces and moments from wearable sensor accelerations via deep learning
William Johnson1, Ajmal Mian2, David Lloyd3, Jacqueline Alderson1,4
1School of Human Sciences, The University of Western Australia, Perth, Australia. 2School of Computer Science and Software Engineering, The University of Western Australia, Perth, Australia. 3School of Allied Health Sciences and Gold Coast Orthopaedic Research and Education Alliance, Menzies Health Institute Queensland, Griffith University, Gold Coast, Australia. 4Auckland University of Technology, Sports Performance Research Institute New Zealand (SPRINZ), Auckland, New Zealand

Abstract

Introduction
It is currently not possible to record accurate 3D ground reaction forces and moments (GRF/Ms) in the field of play [1, 2]. By harvesting archived laboratory-based motion capture and force plate data, this proof-of-concept study investigated whether state-of-the-art deep learning could predict GRF/Ms from just three wearable sensors. To assess the potential of the approach, the study aimed to achieve average correlations rGRFmean and rGRMmean greater than 0.80.

Methods
The CaffeNet deep learning convolutional neural network (CNN), pre-trained on 1.2 million ImageNet images [3], was first fine-tuned to learn a multivariate regression model between marker-based laboratory motion capture of 2,355 left sidestepping trials, collected with eight retro-reflective passive markers (Vicon, Oxford, UK), and the associated force plate recorded GRF/Ms (AMTI, Watertown, MA, USA).
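
As an illustration of this fine-tuning step (a minimal sketch only: the study used CaffeNet within the Caffe framework, whereas the analogue below uses torchvision's closely related AlexNet, and the optimiser settings, output layout, and data pipeline are assumptions), the pretrained classification head is replaced with a linear regression output trained under a mean-squared-error loss:

    # Illustrative analogue only: the study fine-tuned CaffeNet in Caffe; this sketch
    # uses torchvision's AlexNet (a close relative) to show the same idea.
    import torch
    import torch.nn as nn
    from torchvision import models

    N_OUTPUTS = 6  # Fx, Fy, Fz, Mx, My, Mz (assumed output layout)

    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the 1000-class ImageNet classifier with a regression head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, N_OUTPUTS)

    criterion = nn.MSELoss()  # regression loss, not classification
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

    def train_step(images, grf_targets):
        # One fine-tuning step: images encode marker trajectories, targets are GRF/Ms.
        optimizer.zero_grad()
        loss = criterion(model(images), grf_targets)
        loss.backward()
        optimizer.step()
        return loss.item()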

From this CNN model, a subsequent transfer learning technique (double-cascade) was applied to associate acceleration magnitudes with ground truth GRF/Ms for the same sidestepping task, with acceleration data either synthesised or recorded at three locations: upper back, sacrum, and lateral shank. The model was trained using accelerations synthesised from 2,548 trials of marker data via double differentiation of displacements, and predictions were made with five trials of accelerations recorded by Xsens MTw inertial sensors (Xsens, Enschede, The Netherlands). The acceleration magnitude (Euclidean norm) was used to avoid the mismatch in coordinate systems (global laboratory frame versus local sensor frame) between the synthesised and recorded accelerometer data.
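
A minimal sketch of the acceleration synthesis and magnitude step, assuming marker displacements stored as an (n_frames, 3) array in the global frame and simple central-difference double differentiation (the study's exact sampling rate, filtering, and differentiation scheme are not specified here):

    import numpy as np

    def acceleration_magnitude(displacement, sample_rate):
        # Double-differentiate 3D displacement (n_frames, 3) to acceleration,
        # then reduce to the coordinate-free Euclidean norm per frame.
        dt = 1.0 / sample_rate
        velocity = np.gradient(displacement, dt, axis=0)      # first derivative
        acceleration = np.gradient(velocity, dt, axis=0)      # second derivative
        return np.linalg.norm(acceleration, axis=1)           # frame-wise magnitude

    # Example with a synthetic sacrum marker trajectory at an assumed 250 Hz.
    t = np.arange(0, 1, 1 / 250.0)
    displacement = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), 0.05 * t**2], axis=1)
    acc_mag = acceleration_magnitude(displacement, 250.0)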

Results
The correlations between GRF/Ms recorded by the force plate and those predicted by the double-cascade model were rGRFmean 0.86 and rGRMmean 0.81 (Figure 1). By comparison, Karatsidis et al. [2] reported rGRFmean 0.94 and rGRMmean 0.82 (11 trials, 17 sensors, full-body suit, walking gait) using a linear approach. Whereas the CNN achieved stronger correlations for the horizontal forces Fx and Fy (and the corresponding moments), Karatsidis and colleagues were more accurate for the vertical force Fz.
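
For context, mean waveform correlations such as rGRFmean can be computed by taking the Pearson correlation between each predicted and force plate recorded component waveform and averaging; the aggregation over components and trials in the sketch below is an assumption:

    import numpy as np

    def mean_waveform_correlation(predicted, recorded):
        # predicted, recorded: arrays of shape (n_trials, n_frames, 3 components).
        # Pearson r per component waveform, averaged over components and trials
        # (the exact aggregation used in the study is an assumption here).
        rs = [
            np.corrcoef(predicted[trial, :, comp], recorded[trial, :, comp])[0, 1]
            for trial in range(predicted.shape[0])
            for comp in range(predicted.shape[2])
        ]
        return float(np.mean(rs))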

Discussion
The study’s goal of attaining average correlations greater than 0.80 was achieved, demonstrating the potential for deep learning to mine existing high-fidelity laboratory data captures to increase the accuracy and validity of unidimensional wearable sensor outputs. Within the limitations of correlation analysis, the strong results from acceleration magnitudes encourage subsequent investigation using aligned directional components. Improving the correspondence between training markers and test accelerometers, and adopting more recent sensor technology, are expected to produce further improvements. These results herald the on-field practical application of wearable sensor technologies for laboratory-fidelity biomechanical analyses.

Acknowledgements
NVIDIA Corporation GPU Grant Program; ARC DP160101458; and the Australian Government Research Training Program.

References
1. Boudreaux, B. D., et al. (2017) MSSE.
2. Karatsidis, A., et al. (2016) Sensors.
3. Krizhevsky, A., et al. (2012) NIPS.