Silvia Zaccardi is currently a PhD student in biomedical engineering at Vrije Universiteit Brussel (VUB). The PhD is a collaboration between the Department of Electronics and Informatics (ETRO) and the Rehabilitation Research (RERE) group of VUB. Silvia received her MSc in e-Health Biomedical Engineering from the Polytechnic Institute of Turin in 2021 and, prior to that, earned her Bachelor’s in Clinical Engineering from Sapienza University of Rome in 2018. Between March and July 2021, she worked as a research fellow at the Laboratory for Engineering of the Neuromuscular System (LISiN) at the Polytechnic of Turin. She is deeply passionate about the intersection of computer vision and augmented reality technologies within the healthcare sector.

The quantification of gait parameters is a powerful tool to evaluate the effects of both long-term interventions (e.g. surgery and prostheses) and short-term interventions (e.g. gait rehabilitation). The current state of the art for gait analysis is marker-based motion capture, in which multiple infrared cameras track retro-reflective markers placed on the patient’s skin. Marker-based systems have several drawbacks: they are expensive, not portable and difficult to operate; moreover, the analysis is typically performed offline.

To overcome these limitations, markerless motion capture systems, such as the Kinect, are currently being explored in clinics. Such systems leverage deep learning models to perform human pose estimation (HPE) from RGB and/or depth images. Once the 3D positions of the body parts are estimated, meaningful motion parameters can be extracted during gait. Compared to marker-based systems, markerless motion capture is less expensive, easier to set up and can provide real-time feedback; however, its accuracy is not yet adequate for clinical applications. Thus far, HPE from monocular images and gait parameter extraction have been treated as two separate problems: the former is studied in computer vision using deep learning techniques, the latter in biomechanics. In this PhD project, we will instead formulate and train both tasks within a single deep learning framework to reach higher accuracy on both.
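
As a minimal illustration of the gait-parameter side of this pipeline, the sketch below derives one common parameter, the knee flexion angle, from 3D keypoints that an upstream HPE model is assumed to have already estimated. The function name and the example coordinates are illustrative, not part of the project’s actual codebase.

```python
# Minimal sketch: deriving one gait parameter (knee flexion angle) from
# 3D keypoints, assuming an upstream HPE model has already estimated the
# hip, knee and ankle positions for a single frame (coordinates in metres).
import numpy as np

def knee_flexion_angle(hip: np.ndarray, knee: np.ndarray, ankle: np.ndarray) -> float:
    """Knee flexion in degrees; 0 corresponds to a fully extended leg."""
    thigh = hip - knee    # vector from knee towards hip
    shank = ankle - knee  # vector from knee towards ankle
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    # 180 degrees between the two segments means a straight leg,
    # so flexion is reported as the deviation from full extension.
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical keypoints for one frame (e.g. output of a monocular HPE network)
hip = np.array([0.0, 1.00, 0.00])
knee = np.array([0.0, 0.55, 0.05])
ankle = np.array([0.0, 0.10, 0.00])
print(f"knee flexion: {knee_flexion_angle(hip, knee, ankle):.1f} deg")
```

Tracking such an angle frame by frame over a gait cycle is what makes real-time feedback possible once the HPE step itself runs in real time.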

Our ambition is to run HPE inference directly, and in real time, on augmented reality (AR) glasses. Gait parameters that describe human motion in 3D space are visualized more intuitively by AR applications: by overlaying holograms onto the subject’s body, clinicians can monitor the effect of rehabilitation in real time, more intuitively than by watching a video on a screen. The evaluation of Knee Contact Forces (KCFs) is a clinical application that would benefit from real-time gait analysis coupled with an AR visualization tool. KCFs are currently calculated offline through complex modelling and simulation. Instead, we propose a fully data-driven approach to achieve real-time performance.
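
To make the idea concrete, the sketch below shows one possible shape such a data-driven estimator could take: a small regression network that maps a short window of estimated joint kinematics to a single KCF value, so that one forward pass replaces an offline simulation. The architecture, feature count and window length are hypothetical assumptions for illustration, not the model developed in this PhD.

```python
# Hypothetical sketch of a data-driven KCF estimator: a small network that
# regresses knee contact force from a short window of estimated joint
# kinematics. All shapes and layer sizes are illustrative only.
import torch
import torch.nn as nn

class KCFRegressor(nn.Module):
    def __init__(self, n_features: int = 24, window: int = 30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                        # (batch, window, features) -> (batch, window*features)
            nn.Linear(window * n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                   # predicted KCF, e.g. in body-weight units
        )

    def forward(self, kinematics: torch.Tensor) -> torch.Tensor:
        return self.net(kinematics)

model = KCFRegressor()
window = torch.randn(1, 30, 24)  # one 30-frame window of 24 kinematic features
print(model(window).shape)       # torch.Size([1, 1]): one force estimate per window
```

Because inference is a single cheap forward pass, an estimator of this kind could in principle run alongside HPE on the AR device and feed the holographic overlay directly.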