The problems mentioned below seem to have been solved by Saleh, Saad & Isa in Procedia Computer Science 159 (2019) 524–533.
OverlapSegmentationNet is a UNet model implemented with Keras. The network was trained for 14 epochs (~8 hours on a GT740M GPU); this is not enough to make a good prediction, but it is enough to play with the model and understand what a prediction from a single image looks like.
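The exact architecture is defined in the notebook linked at the end of this post. As a rough, illustrative sketch only (not the actual OverlapSegmentationNet definition; the layer widths and the tensorflow.keras import are assumptions), a small UNet-style model with one softmax component per label could look like this:

    # Illustrative UNet-style sketch only; not the actual
    # OverlapSegmentationNet definition from the notebook.
    from tensorflow.keras import layers, models

    def build_unet_sketch(input_shape=(88, 88, 1), n_classes=4):
        inputs = layers.Input(shape=input_shape)

        # Contracting path
        c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
        c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(c1)
        p1 = layers.MaxPooling2D(2)(c1)

        c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(p1)
        c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(c2)

        # Expanding path with a skip connection
        u1 = layers.UpSampling2D(2)(c2)
        u1 = layers.concatenate([u1, c1])
        c3 = layers.Conv2D(16, 3, activation='relu', padding='same')(u1)

        # One softmax component per label:
        # background, chromosome 1, chromosome 2, overlap
        outputs = layers.Conv2D(n_classes, 1, activation='softmax')(c3)

        model = models.Model(inputs, outputs)
        model.compile(optimizer='adam', loss='categorical_crossentropy')
        return model

    model = build_unet_sketch()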
Dataset
The dataset used for training was the first one proposed on Kaggle. It consists of 13434 pairs of greyscale/ground-truth images of size 88x88. Each greyscale image contains a pair of overlapping chromosomes. The overlaps were synthetically generated from images of single chromosomes:
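As a hedged sketch, assuming the pairs have been packed into a single HDF5 file with the greyscale image and the ground-truth labels stacked as two channels (the file name, the dataset key and this exact layout are assumptions), loading the data could look like:

    # Hedged sketch of loading the Kaggle pairs; the file name, the dataset
    # key and the (N, 88, 88, 2) layout (channel 0 = greyscale image,
    # channel 1 = ground-truth labels) are assumptions.
    import h5py
    import numpy as np

    with h5py.File('overlapping_chromosomes.h5', 'r') as f:
        pairs = np.array(f['dataset'])       # assumed shape: (13434, 88, 88, 2)

    greys = pairs[..., 0].astype('float32')  # greyscale images
    labels = pairs[..., 1].astype('uint8')   # 0: background, 1/2: chromosomes, 3: overlap
    print(greys.shape, labels.shape)         # (13434, 88, 88) (13434, 88, 88)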
Making a prediction from a single greyscale image:
The aim is to predict the segmentation labels with a Keras model (OverlapSegmentationNet). The 88x88 greyscale image must be converted into a 4D numpy array of shape (1, 88, 88, 1).
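A minimal sketch of this step; the variables grey (the 88x88 image as a numpy array) and model (the trained OverlapSegmentationNet, loaded beforehand) are assumed to exist:

    # Sketch of a single-image prediction; `grey` is an 88x88 greyscale
    # numpy array and `model` the trained network loaded beforehand.
    import numpy as np

    x = grey.astype('float32').reshape(1, 88, 88, 1)  # 4D input: (batch, rows, cols, channels)
    pred = model.predict(x)                           # output shape: (1, 88, 88, 4)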
A prediction is then a 4D array of shape (1, 88, 88, 4), since there is one component for each label (background, chromosome 1, chromosome 2, overlap). A prediction looks like:
where each component is a floating-point image. With another pair of chromosomes, we have:
A prediction takes ~40 ms for one low-resolution image.
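That figure can be reproduced roughly with a snippet like the following (a sketch reusing x and model from the prediction sketch above; the very first call also pays some warm-up overhead, so it is worth timing a later one):

    # Rough way to time a single prediction (numbers depend on the GPU/CPU).
    import time

    t0 = time.perf_counter()
    pred = model.predict(x)
    print('prediction time: %.1f ms' % (1000 * (time.perf_counter() - t0)))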
Prediction made on an image that does not belong to the training dataset:
A prediction was made on an image belonging to a validation dataset:
To be used for prediction, the image was downsampled and cropped to fit 88x88 (and so was its corresponding ground-truth segmentation):
Clearly, with so few epochs, the prediction is poor:
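The down-sampling and cropping step just described could be sketched as follows (scikit-image is assumed to be available; image and ground_truth are hypothetical full-resolution arrays, and the crop/pad policy is only one possible choice):

    # Sketch of the down-sampling / cropping step (assumes scikit-image);
    # `image` and `ground_truth` are hypothetical full-resolution arrays.
    import numpy as np
    from skimage.transform import resize

    def fit_to_88(img, order=1):
        # Down-sample so the longest side is 88 pixels, then crop/pad to 88x88.
        scale = 88.0 / max(img.shape[:2])
        small = resize(img, (int(round(img.shape[0] * scale)),
                             int(round(img.shape[1] * scale))),
                       order=order, preserve_range=True,
                       anti_aliasing=(order > 0))
        out = np.zeros((88, 88), dtype=small.dtype)
        out[:small.shape[0], :small.shape[1]] = small[:88, :88]
        return out

    small_image = fit_to_88(image)                    # bilinear for the grey image
    small_labels = fit_to_88(ground_truth, order=0)   # nearest-neighbour for labels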
The one-hot encoded prediction can be thresholded, converted back into a multilabel segmentation, and compared to the ground-truth segmentation:
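One possible way to do this back-conversion (a sketch, not necessarily the notebook's exact method; it reuses pred from the prediction sketch and small_labels from the down-sampling sketch, and the 0.5 threshold is an arbitrary choice):

    # Sketch: convert the (1, 88, 88, 4) one-hot prediction back to a single
    # multilabel image and compare it to the (downsampled) ground truth.
    import numpy as np

    probs = pred[0]                                # (88, 88, 4) floating-point maps
    overlap_mask = probs[..., 3] > 0.5             # thresholded overlap component
    multilabel = np.argmax(probs, axis=-1)         # 0: background, 1/2: chromosomes, 3: overlap
    print('predicted overlap area (pixels):', int(overlap_mask.sum()))
    print('pixel agreement with ground truth: %.3f'
          % np.mean(multilabel == small_labels))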
More details are available in the notebook: