Publication Details

IEEE Transactions on Instrumentation and Measurement

Contribution to Journal


Abstract— Accurately reconstructing the 3-D hand shapes of patients is important for immobilization device customization, artificial limb generation, and hand disease diagnosis. Traditional 3-D hand scanning requires multiple scans taken around the hand with a 3-D scanning device. These methods require patients to hold an open-palm posture during scanning, which is painful or even impossible for patients with impaired hand function. Once multi-view partial point clouds are collected, expensive post-processing is necessary to generate a high-fidelity hand shape. To address these limitations, we propose a novel deep-learning method, dubbed PatientHandNet, that reconstructs high-fidelity hand shapes in a canonical open-palm pose from multiple depth images acquired with a single depth camera. The hand poses in the depth images may vary and hand movements are allowed, facilitating the 3-D scanning process, in particular for patients with difficult conditions. Because it is insensitive to the input pose and tolerates pose variations across the input depth images, the proposed method is easy to operate. We also introduce two novel datasets: a large-scale synthetic dataset to train our model and a real-world dataset with ground-truth hand biometrics extracted by an experienced anthropometrist. Extensive experiments on unseen synthetic data and real-world data demonstrate that the proposed method provides robust and easy-to-use hand shape reconstruction and outperforms state-of-the-art methods in terms of biometric accuracy.
