Towards unsupervised medical image segmentation 

Segmentation, the task of assigning each voxel in an image to an anatomical structure or tissue type, is a critical step in medical research and clinical practice workflows for diagnosis, pre-operative treatment planning, post-operative assessment, etc. Manual image segmentation is a time-consuming and error-prone task. Combined with the constantly increasing volume of medical imaging data, this creates a demand for automated segmentation techniques. Recent advances in machine learning have produced automated medical image segmentation methods that demonstrate a good level of performance. These methods are mainly based on supervised deep learning, which has several disadvantages.

"Learning to learn with less"

First, training a supervised deep learning model requires large and diverse annotated datasets. Second, the resulting models only succeed on images that are similar to the training data. The lack of labeled data is a common problem in the medical imaging domain. Manual labeling of images is a time-costly process, prone to inter- and intra-observer error. Such labeled data is often not needed for clinical studies; therefore, the sources of such data are limited to research datasets. Larger unlabeled datasets may be more widely available. Training deep convolutional neural networks using only a small number of labeled datasets cannot always achieve satisfactory results and does not utilize the large number of available unlabeled datasets.

One of the challenges with semi-supervised learning for medical image segmentation is that most approaches focus on a single application, or less frequently a few, which makes it difficult to generalize the results and to predict a method's performance on a different problem.
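To make the semi-supervised setting concrete: a widely used ingredient of such methods is pseudo-labeling, in which a model's confident predictions on unlabeled images are reused as training targets while uncertain voxels are masked out. The sketch below is a generic illustration of this idea, not a method from this proposal; the function name, the confidence threshold, and the array shapes are all assumptions for the example.

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9):
    """Turn per-voxel class probabilities into pseudo-labels.

    probs: array of shape (H, W, C) with per-voxel class probabilities
           (e.g. softmax output of a segmentation network).
    threshold: hypothetical confidence cutoff; only voxels whose top
           class probability reaches it contribute to the loss.

    Returns (labels, mask): the argmax class per voxel, and a boolean
    mask selecting the confidently labeled voxels.
    """
    confidence = probs.max(axis=-1)          # top class probability per voxel
    labels = probs.argmax(axis=-1)           # predicted class per voxel
    mask = confidence >= threshold           # keep only confident voxels
    return labels, mask

# Toy usage: a 2x2 "image" with 3 classes, confident in class 0 everywhere.
probs = np.zeros((2, 2, 3))
probs[..., 0], probs[..., 1], probs[..., 2] = 0.95, 0.03, 0.02
labels, mask = generate_pseudo_labels(probs, threshold=0.9)
```

In a full training loop, `labels` and `mask` would feed a masked cross-entropy term on unlabeled images alongside the ordinary supervised loss on the small labeled set; the threshold trades pseudo-label coverage against label noise.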

The purpose of the PhD project is two-fold:

  • Increasing the generalization potential of semi-supervised segmentation methods.
  • Reducing the need for labeled data, thereby moving towards unsupervised learning.