About/Bio

On-the-fly adaptation of medical image visualization in augmented reality computer-assisted surgery

Promotors:
Prof. Dr. Bart Jansen
Prof. Dr. Ir. Jef Vandemeulebroucke
Prof. Dr. Johnny Duerinck

In a medical context, computer-assisted navigation (CAN) is the use of medical imaging data as a map for medical intervention. By and large, its use is limited to the preoperative and intraoperative phases of intervention, and to the confines of an equipped hospital operating room (OR). CAN allows surgeons to plan an intervention more effectively and to proceed with minimal invasiveness, and its use has led to quantifiably better patient outcomes.

Despite this, these devices often go unused owing to their physical size, complexity, cost, and unintuitive user interfaces (UIs). It is this last point on which we focus. Typically, a CAN system presents information to the surgeon via planar displays mounted overhead or otherwise attached to a mobile computer station. As a result, it is often difficult to see both the patient and the navigational data at the same time, owing to their physical separation. This disunion also compounds the already difficult task of mentally translating the display's two-dimensional information into three-dimensional actions at the patient.

Fortunately, current paradigms of CAN are being challenged by advances in virtual and augmented reality (VR/AR). Such systems allow information to complement the world around us, or, in the case of VR, to replace that world entirely, rather than requiring our world to accommodate the information. Our research focuses on the latter of the two, augmented reality, and specifically on solutions worn on the head: the augmented reality head-mounted device (AR-HMD).

In recent years, several commercial AR-HMDs have come to market that integrate all the requisite hardware for CAN: data processing, tracking, and visualization. By virtue of transparent displays placed directly in the wearer's line of sight, and an accurate understanding of the wearer's position and orientation within the environment, medical imaging data can be presented so that it fits that environment naturally. In this way it becomes possible to display full three-dimensional medical imaging data registered to the patient, giving the wearer an intuitive and natural visualization for planning and navigation.
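
To make the registration idea above concrete, the minimal Python sketch below shows how vertices of a pre-operative anatomical model could be mapped into the headset's world frame once a patient registration and a tracked patient pose are available. The transform names and values are purely illustrative assumptions, not part of any specific AR-HMD API or of the system described here.

```python
# Minimal sketch (illustrative only): placing a pre-operative model in the
# headset's world frame by composing two rigid transforms.
import numpy as np

def to_homogeneous(points):
    """Append a 1 to each 3D point so 4x4 rigid transforms can be applied."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def transform_points(T, points):
    """Apply a 4x4 rigid transform T to an (N, 3) array of points."""
    return (T @ to_homogeneous(points).T).T[:, :3]

# T_world_from_patient: patient pose as tracked by the HMD (e.g. via a marker).
# T_patient_from_model: registration of the pre-operative scan to the patient.
# Both are placeholder values here, not measured data.
T_world_from_patient = np.eye(4)
T_patient_from_model = np.eye(4)
T_patient_from_model[:3, 3] = [0.05, 0.0, -0.02]   # example offsets in metres

model_vertices = np.array([[0.00, 0.00, 0.00],
                           [0.01, 0.00, 0.00],
                           [0.00, 0.01, 0.00]])     # toy triangle, metres

# Compose the transforms so the model renders registered to the patient.
T_world_from_model = T_world_from_patient @ T_patient_from_model
vertices_world = transform_points(T_world_from_model, model_vertices)
print(vertices_world)
```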

So far, the body of research on the use of off-the-shelf AR-HMDs has focused predominantly on hardware integration, quantifying capabilities, and addressing technical limitations. However, a vein of untapped research exists within this body, in particular the limitations in user interaction and visualization. To prevent these limitations from hindering the professional adoption of such headsets, we investigate two topics.

First, there is no mechanism whereby the visualization can be updated to reflect reality. Visualized anatomical models therefore fail to adapt to the changes their real-life counterparts undergo during surgery. For example, a femur will be rendered in its entirety regardless of whether it is actually exposed to the surgeon at that time; moreover, it will continue to be rendered in its entirety even after it has been modified. Such discrepancies between reality and augmented reality not only break the very premise of AR by failing to preserve its illusion, but can easily lead to intraoperative confusion, error, and decreased task performance.

"There are four lights!"

Second is the reliance on hand and voice control as system input. While hand-gesture control is indeed one of the major advantages of AR-CAN, it is unrealistic to expect the surgeon's hands to be free at precisely the moment a change in program functionality demands them. The surgical workflow must instead be interrupted so that the appropriate hand gesture can be performed. Voice control frees the surgeon's hands, but it demands relatively low background noise to work reliably; amid the beeps, hums, and communication within the surgical team, conditions in the OR are less than ideal for voice-controlled functionality.

We propose an AR-HMD sub-system that incorporates low-level scene awareness in order to dynamically update anatomical models, control program functionality, and introduce enhanced visual cues specific to neurosurgery and orthopedic surgery. To achieve this, real-time semantic segmentation masks based on the headset's internal camera data provide the mechanism for both dynamic anatomical visualization and real-world depth occlusion. Additionally, the class information is used for surgical phase classification, allowing the system to automatically adapt its functionality to the current phase of the intervention.
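
As a rough illustration of how such scene awareness could be used, the hypothetical Python sketch below combines a per-frame segmentation mask with depth to hide virtual anatomy behind real occluders, and derives a crude surgical-phase guess from class frequencies. The class labels, thresholds, and phase rule are assumptions made for illustration only; they do not describe the actual segmentation network or phase-recognition pipeline.

```python
# Hypothetical sketch: (1) depth-aware occlusion of rendered anatomy using a
# semantic segmentation mask, and (2) a toy surgical-phase guess from class
# frequencies. Labels and thresholds are illustrative assumptions.
import numpy as np

SKIN, TOOL, BONE, BACKGROUND = 0, 1, 2, 3

def occlusion_mask(scene_depth, virtual_depth, seg, occluding_classes=(TOOL,)):
    """Hide virtual pixels where a real occluding object lies in front of them."""
    occluder = np.isin(seg, occluding_classes)
    return occluder & (scene_depth < virtual_depth)

def guess_phase(seg):
    """Toy rule: exposed bone suggests resection; a visible tool suggests approach."""
    counts = np.bincount(seg.ravel(), minlength=4) / seg.size
    if counts[BONE] > 0.05:
        return "resection"
    if counts[TOOL] > 0.02:
        return "approach"
    return "planning"

# Synthetic 4x4 frame standing in for the HMD's depth map and segmentation.
scene_depth = np.full((4, 4), 0.60)     # metres to the real surface
virtual_depth = np.full((4, 4), 0.65)   # metres to the rendered anatomy
seg = np.full((4, 4), SKIN)
seg[1:3, 1:3] = TOOL                    # a tool passing in front of the model

hide = occlusion_mask(scene_depth, virtual_depth, seg)
print(hide.sum(), "virtual pixels occluded; phase:", guess_phase(seg))
```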