Thursday, September 1, 2022

First instance segmentation training and prediction with Lightning Flash

Making an annotated Dataset:

125 grey-scaled images of overlapping chromosome pairs were annotated with makesense.ai. The annotations were saved in a single JSON file in COCO format:

Each image has two instances of a single "chromosome" label.
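To make the expected structure concrete, here is a minimal sketch of a COCO annotation dict with one image and two instances of the single "chromosome" category (the file name matches the one used below; coordinates, sizes, and ids are made up for illustration):

```python
import json

# Minimal COCO skeleton: one image, two "chromosome" instances.
# All coordinates, sizes and ids are illustrative, not taken from the real file.
coco = {
    "images": [
        {"id": 1, "file_name": "grey0000001.png", "width": 94, "height": 93}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "segmentation": [[10, 5, 30, 5, 30, 40, 10, 40]],
         "bbox": [10, 5, 20, 35], "area": 700, "iscrowd": 0},
        {"id": 2, "image_id": 1, "category_id": 1,
         "segmentation": [[20, 15, 60, 15, 60, 50, 20, 50]],
         "bbox": [20, 15, 40, 35], "area": 1400, "iscrowd": 0},
    ],
    "categories": [{"id": 1, "name": "chromosome"}],
}

# The whole structure is plain JSON-serializable:
json.dumps(coco)
```

Note that the two instances share the same category_id but carry distinct annotation ids.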

Loading and training a maskrcnn model

Lightning Flash was used to load and train a Mask R-CNN model on the dataset:
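For reference, a minimal training-and-prediction sketch with Flash's InstanceSegmentation task; the paths, backbone, and hyperparameters below are placeholders, and the argument names follow the lightning-flash documentation rather than the exact settings used in this experiment:

```python
import flash
from flash.image import InstanceSegmentation, InstanceSegmentationData

# Hypothetical paths to the 125 images and the makesense.ai COCO export
datamodule = InstanceSegmentationData.from_coco(
    train_folder="data/images",
    train_ann_file="data/annotations.json",
    val_split=0.1,
    batch_size=2,
)

# Mask R-CNN head; backbone and epoch count are illustrative
model = InstanceSegmentation(
    head="mask_rcnn",
    backbone="resnet18_fpn",
    num_classes=datamodule.num_classes,
)

trainer = flash.Trainer(max_epochs=10)
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# Prediction on new images
predict_datamodule = InstanceSegmentationData.from_files(
    predict_files=["data/images/grey0000001.png"],
    batch_size=1,
)
predictions = trainer.predict(model, datamodule=predict_datamodule)
```

This cannot be asserted without the dataset and a trained checkpoint, so it is given as a usage sketch only.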


Possible issue with the dataset.

Each image in the dataset has two instances of the same category, labeled "chromosome", yet the prediction yields only one mask:
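One thing worth checking is whether the annotation ids are unique: the minimal dataset inspected in the June 2021 entry below showed two annotations both carrying id:0, and COCO expects annotation ids to be unique. A small sketch of such a check (the helper name is mine, not from any library):

```python
from collections import Counter

def duplicate_annotation_ids(coco: dict) -> list:
    """Return annotation ids occurring more than once; COCO requires them to be unique."""
    counts = Counter(a["id"] for a in coco["annotations"])
    return [i for i, n in counts.items() if n > 1]

# Toy annotation dict reproducing the duplicate-id flaw seen earlier
coco = {"annotations": [{"id": 0, "category_id": 1},
                        {"id": 0, "category_id": 1}]}
print(duplicate_annotation_ids(coco))  # [0]
```

If the real annotation file shows the same duplication, renumbering the ids before training would be a cheap experiment.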



Thursday, October 14, 2021

Installation of lightning-flash

With Anaconda installed on an Ubuntu 20.04 box:

Create a virtual environment, specifying its location on disk:

conda create --prefix /mnt/stockage/Developp/EnvPLFlash

and activate the env with:

conda activate /mnt/stockage/Developp/EnvPLFlash

Then install the libraries, starting with PyTorch 1.8 with CUDA support:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch-lts

then

pip install icedata

pip install lightning-flash

pip install notebook

pip install voila

Don't forget to install lightning-flash[image] to get the instance segmentation algorithms:

pip install 'icevision' 'lightning-flash[image]'

The installation can be checked by running the following notebook:

Wednesday, July 7, 2021

Back to COCO 2: a less minimalistic 125-image + JSON dataset

125 grey-scaled images were chosen from a previous dataset available on GitHub.

An annotation file was generated by hand online with makesense.ai and saved as a single JSON file in COCO format. This small dataset is freely available as an archive.

Check annotation file validity with pycococreator.

Pycococreator by waspinator was used to display the annotations (i.e. the segmentations) over a grey-scaled image in the following Jupyter notebook



thereby validating the annotation file produced with makesense.ai:


Data registration in detectron2:

This is the next step.

The idea is to follow the tutorial on custom dataset registration, possibly using the balloons example by davamix.
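As a starting point, registering a COCO-format dataset in detectron2 is a one-liner from its custom-dataset tutorial; the dataset name and paths below are hypothetical:

```python
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

# Hypothetical name and paths for the 125-image dataset and its makesense.ai export
register_coco_instances(
    "chromosomes_train",            # name under which the dataset is registered
    {},                             # extra metadata (none needed here)
    "annotations/coco_annotation.json",  # COCO json file
    "images/",                      # folder containing the images
)

# Once registered, the dataset can be retrieved for inspection or training:
dataset_dicts = DatasetCatalog.get("chromosomes_train")
metadata = MetadataCatalog.get("chromosomes_train")
```

This requires a detectron2 installation and the actual files on disk, so it is shown as a sketch only.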


Tuesday, June 15, 2021

Trying to get unstuck: back to COCO

The last post is from October 2020. The main goal was to progress on chromosome instance segmentation, but a robust U-Net-based semantic segmentation would have been satisfying too (possibly using fastai).

Detectron2, PixelLib and many others provide instance segmentation algorithms (Mask R-CNN, for example). To train a model, the COCO format seems to be mandatory for the so-called ground-truth labels. The issue is that in the different datasets generated to simulate overlapping chromosomes, the labels are grey-scaled images decomposable into binary masks for one-hot encoding:
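That decomposition step itself is simple: each grey level in the label image (other than 0, the background) becomes one binary mask. A small sketch on a toy label image (the grey-level convention here is assumed, not taken from the real dataset):

```python
import numpy as np

def to_binary_masks(label_img: np.ndarray) -> dict:
    """Split a grey-scaled label image into one binary mask per grey level (0 = background)."""
    return {int(v): (label_img == v).astype(np.uint8)
            for v in np.unique(label_img) if v != 0}

# Toy 4x4 label image: assume 1 = chromosome A, 2 = chromosome B, 3 = overlap region
label = np.array([[0, 1, 1, 0],
                  [0, 1, 3, 2],
                  [0, 0, 2, 2],
                  [0, 0, 0, 2]], dtype=np.uint8)

masks = to_binary_masks(label)  # {1: mask_A, 2: mask_B, 3: mask_overlap}
```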

In the last try, the idea was to start from the COCO specs and write some code to convert the binary masks into COCO files, but that failed: detectron2 rejected my minimalist dataset.
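Parts of that mask-to-COCO conversion are mechanical and easy to get right: the bbox and area fields can be derived directly from a binary mask (the polygon segmentation is the harder part, needing contour tracing, e.g. with skimage). A sketch of the easy part, with a helper name of my own:

```python
import numpy as np

def mask_to_coco_fields(mask: np.ndarray) -> dict:
    """Derive the COCO bbox [x, y, width, height] and area from a binary mask."""
    ys, xs = np.nonzero(mask)
    x0, y0 = int(xs.min()), int(ys.min())
    w = int(xs.max()) - x0 + 1
    h = int(ys.max()) - y0 + 1
    return {"bbox": [x0, y0, w, h], "area": int(mask.sum())}

# Toy binary mask: a 3x4 rectangular blob
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
print(mask_to_coco_fields(mask))  # {'bbox': [3, 2, 4, 3], 'area': 12}
```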

Making a minimal valid COCO dataset

From a grey-scaled image, a COCO file is generated using an interactive online tool such as https://www.makesense.ai/:
 

The COCO dataset corresponding to this single image is a JSON file:
 

With a JSON viewer in Colab, we can see how the file is structured:
The file corresponds to only one image: "grey0000001.png"
The two annotated chromosomes appear as id:0 and id:0 in the annotations field (note the duplicated id):
The contour of one of the two chromosomes is coded as 24 values, presumably 12 pairs of (x, y) coordinates:
The chromosome bounding box is given as four values; in COCO format these are [x, y, width, height] rather than two diagonal corner points:
Finally, there is only one category of instances: "chromosome"
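The 24-value contour reading can be checked in two lines: a COCO polygon is stored as a flat [x1, y1, x2, y2, ...] list, so 24 values give 12 points. A sketch with made-up coordinates:

```python
# A COCO polygon segmentation is a flat [x1, y1, x2, y2, ...] list.
# These 24 values are made up; only the structure matters here.
segmentation = [10.0, 5.0, 12.0, 6.0, 13.0, 9.0, 11.0, 12.0,
                8.0, 13.0, 6.0, 11.0, 5.0, 8.0, 7.0, 6.0,
                9.0, 5.0, 10.0, 4.0, 11.0, 4.5, 10.5, 4.8]

# Pair up even-indexed x values with odd-indexed y values
points = list(zip(segmentation[0::2], segmentation[1::2]))
print(len(points))  # 12
```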

Back to COCO: playing with a minimalist valid dataset with pycocotools:
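A sketch of what that exploration looks like with pycocotools; the annotation filename is hypothetical, and this only runs against the actual file:

```python
from pycocotools.coco import COCO

# Hypothetical filename for the makesense.ai export
coco = COCO("coco_annotation.json")

# Inspect categories and images
cat_ids = coco.getCatIds()          # should contain the single "chromosome" category
img_ids = coco.getImgIds()          # one image in the minimalist dataset

# Load all annotations attached to the first image
ann_ids = coco.getAnnIds(imgIds=img_ids[0])
anns = coco.loadAnns(ann_ids)       # each carries segmentation, bbox and area fields
```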