Friday, June 6, 2014

Graph again


Update 06/11/2014


The ipython notebook from the previous post was rewritten. The python function building a graph from a skeleton was modified to generate a multigraph (networkx). This function (available in an ipython notebook) is used as follows:

    import networkx as nx

    Graph = nx.MultiGraph()
    C8_Skeleton_To_Graph_01(Graph, skeleton)


With:
  • Graph: a networkx MultiGraph object.
  • skeleton: the binary skeleton image obtained by morphological thinning with mahotas.
In the following example (from the previous ipython notebook), the skeleton of the letter 'a' was converted into a networkx multigraph: 

image_test = makeLetterImage('a', 75)
skeleton_test = mh.thin(image_test)
_, _, Bp_test, Ep = SkeletonDecomposition(skeleton_test)
# remove the end-points from the skeleton and label the remaining pieces
Ep_Ep = skeleton_test * np.logical_not(Ep)
Ep_Ep, _ = mh.label(Ep_Ep)
# label the end-points themselves
l_Ep, _ = mh.label(Ep)


Graph_test = nx.MultiGraph()
C8_Skeleton_To_Graph_01(Graph_test, skeleton_test)
print Graph_test.edges(data=True)

# plotting: figsize/subplot/imshow come from the notebook's pylab mode
figsize(16, 8)
subplot(131)
imshow(skeleton_test + l_Ep + 3*Bp_test, interpolation='nearest')
subplot(132, xticks=[], yticks=[])
# export the multigraph and render it with graphviz (neato)
nx.write_dot(Graph_test, 'multi.dot')
!neato -T png multi.dot > multi.png
imshow(mh.imread('multi.png'), interpolation='nearest')
subplot(133)
nx.draw(Graph_test)



The skeleton has two edges linking the same branched-points (yellow and red pixels). The graph was exported to build an image of the multigraph (middle); networkx does not seem to be able to display such a multigraph correctly (right).




When applied to a lowercase alphabet:



The function produced the corresponding graphs (the letter 'o' excepted).
However, it is possible to find cases where the function fails to build a graph, for example:

   imTest = makeLetterImage('b', 70)
   skelTest = mh.thin(imTest)
   Ep_Bp, Bp_Bp, Bp, Ep = SkeletonDecomposition(skelTest)


where:
  • skelTest: the image skeleton obtained by thinning with mahotas.
  • Bp: image of branched-point(s)
  • Ep: image of end-point(s)
  • Bp_Bp: edge(s) between branched-points
  • Ep_Bp: edge(s) between end-point(s) and branched-point(s)
If the skeleton of the letter 'b' is decomposed as follows:
The function fails to produce a graph due to the presence of a closed edge (right), which should be processed as a self-loop for the corresponding branched-point (middle image).
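A possible fix would be to count how many distinct branched-points each bp-bp edge component touches and to add a self-loop when there is only one. A hedged sketch, where labelled_bp (labelled branched-points image) and edge_component (a boolean mask of one bp-bp edge) are assumed variable names, not taken from the notebook:

    import numpy as np
    import mahotas as mh

    # grow the edge component by one pixel and read the branched-point labels it touches
    grown = mh.dilate(edge_component, np.ones((3, 3), bool))
    touched = [lab for lab in np.unique(labelled_bp[grown]) if lab > 0]
    if len(touched) == 1:
        Graph.add_edge(touched[0], touched[0])    # closed edge -> self-loop
    elif len(touched) == 2:
        Graph.add_edge(touched[0], touched[1])    # ordinary edge between two branched-points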

Another graph library: Graph-tool


The graphic representation of a multigraph produced by graph-tool is so nice that it may be worth learning the graph-tool syntax to build a graph:
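For reference, a minimal sketch of what building and drawing such a multigraph could look like with graph-tool (not the code used to produce the figure above):

    from graph_tool.all import Graph, graph_draw

    g = Graph(directed=False)          # graph-tool graphs accept parallel edges
    v1, v2 = g.add_vertex(), g.add_vertex()
    g.add_edge(v1, v2)                 # first edge between the two branched-points
    g.add_edge(v1, v2)                 # second, parallel edge -> multigraph
    graph_draw(g, output="multigraph.png")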

Friday, May 30, 2014

Construct a graph from the skeleton image of a binary form

Previously, an attempt was made to write code capable of building a graph from the skeleton of an image; it was not a complete success, since some preprocessing (pruning) had to be done before a graph could be built. Here, another python code was written and, at first sight, it seems to produce a graph without complaining... sometimes (see at the end).
The code was run in an ipython notebook on sagemath cloud. It relies on several libraries: mahotas and networkx; numpy is provided by default. Scikit-image was also tested. Pillow was used to generate images of some letters as models.

The idea is the following:
  • make the skeleton of a binary image.
  • get its branched-points and add them in a graph.
  • get its end-points, add them to the graph.
  • get the edges of the skeleton and decompose them into:
    • edges between end-points and branched-points.
    • edges between branched points.
  • add edges to the graph between end-points and branched-points.
  • add edges to the graph between branched-points.

Images models

Since the code is run on sagemath cloud, Pillow has to be installed. From a terminal:
pip install Pillow
The verdana true type font was obtained from fontsupply, and the file verdana.ttf has to be in the same location as the ipython notebook (verdana.ttf was uploaded to the SMC project next to the notebook).

The code used to generate images and draw them is:
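The embedded code is not reproduced here; as an illustration, here is a minimal sketch of what a makeLetterImage function could look like, assuming Pillow and the verdana.ttf file mentioned above (the notebook's actual function may differ, e.g. it probably also crops the canvas around the letter):

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def makeLetterImage(letter, size):
        """Draw `letter` with the verdana font and return a binary numpy image."""
        font = ImageFont.truetype("verdana.ttf", size)
        canvas = Image.new('L', (2 * size, 2 * size), 0)      # black background
        ImageDraw.Draw(canvas).text((size // 2, size // 2), letter, fill=255, font=font)
        return np.asarray(canvas) > 0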

and we get:

What kind of skeleton?

Both mahotas and scikit-image implement morphological operators to generate the skeleton of a binary shape. Scikit-image seems to use a neighbourhood of 4 pixels (C4) and mahotas a neighbourhood of 8 pixels (C8). This is why mahotas was retained to produce skeletons:
  • The medial axis of the letter A computed with scikit-image (left); branched-points and end-points are extracted using the hit-or-miss operator implemented in mahotas (middle). The branched-points can be removed from the medial axis and the resulting image can be labelled (right); pixels connected only vertically or horizontally share the same label.
Skeleton from the letter A (left)
  • If a skeleton is generated by morphological thinning with mahotas, we get the left image. The fourth image is the skeleton deprived of its branched-points. Scikit-image provides a thinning operator with a C4 neighbourhood (right image):
 
C8 skeleton by thinning (left). C4 skeleton by thinning (right).

What happens if a C8 skeleton is labelled? Here, both mahotas and scikit-image give the same result:
edges (mahotas:left; scikit-image:right)
The labelled regions correspond here to what can be seen as edges.
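The notebook extracts branched-points and end-points with mahotas' hit-or-miss operator; as an alternative illustration, a sketch counting 8-connected neighbours produces the same kind of images (makeLetterImage as sketched above):

    import numpy as np
    import mahotas as mh
    from scipy import ndimage as ndi

    def branch_and_end_points(skel):
        """Branched-points have 3 or more skeleton neighbours, end-points exactly one."""
        skel = skel.astype(np.uint8)
        neighbours = ndi.convolve(skel, np.ones((3, 3), np.uint8), mode='constant') - skel
        return np.logical_and(skel, neighbours >= 3), np.logical_and(skel, neighbours == 1)

    skeleton = mh.thin(makeLetterImage('A', 75))                        # C8 skeleton
    Bp, Ep = branch_and_end_points(skeleton)
    edges = np.logical_and(skeleton, np.logical_not(Bp))                # skeleton minus branched-points
    labelled_edges, n_edges = mh.label(edges, Bc=np.ones((3, 3), bool)) # C8 labelling of the edges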

Before starting to build the graph, several additional images are needed:

Edges decomposition:

Let's decompose the image of edges by considering edges connecting end-points to branched-points and edges connecting only branched points:
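One way to perform this decomposition (a sketch, not necessarily the notebook's implementation; it reuses labelled_edges, n_edges and Ep from the sketch above): an edge component that contains an end-point links an end-point to a branched-point, the other components link branched-points only.

    Ep_Bp = np.zeros_like(labelled_edges)     # edges between end-points and branched-points
    Bp_Bp = np.zeros_like(labelled_edges)     # edges between branched-points only
    for lab in range(1, n_edges + 1):
        component = labelled_edges == lab
        if np.logical_and(component, Ep).any():   # the component contains an end-point
            Ep_Bp[component] = lab
        else:
            Bp_Bp[component] = lab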

Starting to build the graph by the branched-points:

The branched-points are added first. The branched-point with label 1 in the image is the first vertex in the graph, and so on:

The end-points are then added to the graph; there is a shift between the end-points' labels in the image and their indices in the graph. The first end-point in the image will be the fifth vertex in the graph:
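A hedged sketch of these two steps, assuming single-pixel branched-points and end-points (the label values printed below come from the notebook's own labelling and may differ):

    import networkx as nx

    G = nx.Graph()
    labelled_bp, n_bp = mh.label(Bp, Bc=np.ones((3, 3), bool))
    labelled_ep, n_ep = mh.label(Ep, Bc=np.ones((3, 3), bool))

    # branched-points first: the branched-point of label 1 becomes vertex 1, and so on
    for lab in range(1, n_bp + 1):
        row, col = np.argwhere(labelled_bp == lab)[0]
        G.add_node(lab, pos=(row, col), label=lab)

    # then the end-points, with their indices shifted by the number of branched-points
    for lab in range(1, n_ep + 1):
        row, col = np.argwhere(labelled_ep == lab)[0]
        G.add_node(n_bp + lab, pos=(row, col), label=lab)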
The vertices of the graph (G) have attributes derived from the images they come from: their position 'pos', defined by (row, col), and their label in the image of origin. Typing in an ipython cell:
          print G.nodes(data=True)
          print nx.get_node_attributes(G,'label')
yields:
[(1, {'pos': (3, 9), 'label': 1}), (2, {'pos': (3, 12), 'label': 2}), (3, {'pos': (17, 5), 'label': 3}), (4, {'pos': (17, 17), 'label': 4}), (5, {'pos': (1, 9), 'label': 1}), (6, {'pos': (1, 13), 'label': 2}), (7, {'pos': (25, 3), 'label': 6}), (8, {'pos': (25, 21), 'label': 8})]
{1: 1, 2: 2, 3: 3, 4: 4, 5: 1, 6: 2, 7: 6, 8: 8}

Linking end-points to branched-points:

The idea here is to get the neighbourhood of each branched-point in the "edges ep-bp" image and to add an edge in the graph between the corresponding branched-point and end-point(s). This yields:
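A rough sketch of one way to implement this step, continuing the variable names of the previous sketches (not the notebook's exact code):

    labelled_ep_bp, _ = mh.label(Ep_Bp > 0, Bc=np.ones((3, 3), bool))
    for bp_lab in range(1, n_bp + 1):
        # grow the branched-point by one pixel and read the ep-bp edge labels it touches
        grown_bp = mh.dilate(labelled_bp == bp_lab, np.ones((3, 3), bool))
        for edge_lab in np.unique(labelled_ep_bp[grown_bp]):
            if edge_lab == 0:
                continue
            # each ep-bp edge component contains exactly one end-point
            ep_lab = np.unique(labelled_ep[labelled_ep_bp == edge_lab])
            ep_lab = ep_lab[ep_lab > 0][0]
            G.add_edge(bp_lab, n_bp + ep_lab)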

Linking branched-points

  • A dictionary is built where the keys are the labels of the branched-points and the values are the labels of the "edges bp-bp" in the neighbourhood of each branched-point. Thus there is one key (bp label) for several values (edge labels).
  • But an edge can be seen as one key (the edge label) for two values (the labels of the two branched-points it links). So the dictionary has to be inverted in that very particular way; fortunately, that problem was solved on stackoverflow. A sketch of these two steps is given below.
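A hedged sketch of these two steps, reusing the variable names of the previous sketches (a defaultdict stands in for the stackoverflow recipe; not the notebook's exact code):

    from collections import defaultdict

    labelled_bp_bp, _ = mh.label(Bp_Bp > 0, Bc=np.ones((3, 3), bool))

    # branched-point label -> labels of the bp-bp edges seen in its neighbourhood
    bp_to_edges = {}
    for bp_lab in range(1, n_bp + 1):
        grown_bp = mh.dilate(labelled_bp == bp_lab, np.ones((3, 3), bool))
        bp_to_edges[bp_lab] = [lab for lab in np.unique(labelled_bp_bp[grown_bp]) if lab > 0]

    # invert: edge label -> the (two) branched-points it links
    edge_to_bps = defaultdict(list)
    for bp_lab, edge_labels in bp_to_edges.items():
        for edge_lab in edge_labels:
            edge_to_bps[edge_lab].append(bp_lab)

    for bps in edge_to_bps.values():
        G.add_edge(bps[0], bps[-1])   # a closed edge touching a single bp would give a self-loop here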
Finally, we have:
All the steps are gathered in a function:

Processing other letters gives, for example:

The ipython notebook is here:

View the notebook on ipython.org

Sometimes building the graph fails:

On a larger set of images:
  • For the image corresponding to the letter K, the graph construction fails due to the presence of branched regions made of several pixels instead of one.
  • There's also a problem with the letter D, its graph has no loop.
  • Since there's no branched-point in the skeleton of the letter C, its graph is not built.
 The ipython notebook has been reordered and is available on ipython.org

Thursday, April 10, 2014

In search of necklaces for a maximum area quadrilateral

The overlap of two chromosomes can be resolved by the four-point method, provided that a maximum area quadrilateral can be defined from these four points.
 
Maximal quadrilateral overlaid on the overlap of two chromosomes


Previously, that kind of quadrilateral was found from the 24 permutations of four points:
 
4!=24 quadrilaterals formed by 4 points chosen randomly.


Among these 24 permutations, some are derived from the others by circular permutation; such circular permutations define a necklace. The area of the quadrilaterals obtained by circular permutation within a necklace is invariant.
The maximal area quadrilateral can be found in the set of the 6 necklaces.

To find a necklace, let's start from an ordered list of four points :
A, B, C, D
By swapping two elements, for example (A, B), we get a necklace:
B, A, C, D
From four randomly chosen points, the following python function yields the six possible necklaces:
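The embedded function is not reproduced here; a sketch reproducing the behaviour shown below (swap each of the six possible pairs of the ordered 4-tuple; not necessarily the notebook's exact code) could be:

    from itertools import combinations

    def unique_necklaces(a, b, c, d):
        """Return the six necklaces obtained by swapping one pair of the ordered 4-tuple."""
        points = [a, b, c, d]
        necklaces = []
        # necklaces.append(list(points))   # uncomment to include the original 4-tuple as well
        for i, j in combinations(range(4), 2):
            swapped = list(points)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            necklaces.append(swapped)
        return necklaces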
For example:

When run in an ipython cell on sagemathcloud, the unique_necklaces function applied to four numbers took:

%timeit unique_necklaces(1,2,3,4)
100000 loops, best of 3: 15.2 µs per loop
 

Applied to the four characters 'a', 'b', 'c', 'd', the function yields the following when the original 4-tuple is included (remove the # in line 10):
print unique_necklaces('a','b','c','d')

[['a', 'b', 'c', 'd'], 
['b', 'a', 'c', 'd'], 
['c', 'b', 'a', 'd'], 
['d', 'b', 'c', 'a'], 
['a', 'c', 'b', 'd'], 
['a', 'd', 'c', 'b'], 
['a', 'b', 'd', 'c']]
 
Then the maximal area quadrilateral can be found among the six necklaces. As the number of quadrilaterals explored is smaller, the new implementation is faster than the previous one based on brute force:
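A sketch of how such a maxAreaQuad could be written on top of unique_necklaces, using the shoelace formula for the area (the notebook's implementation may differ):

    def quad_area(p1, p2, p3, p4):
        """Shoelace formula for the area of the quadrilateral p1-p2-p3-p4."""
        pts = [p1, p2, p3, p4]
        s = sum(x1 * y2 - x2 * y1
                for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
        return abs(s) / 2.0

    def maxAreaQuad(p1, p2, p3, p4):
        """Keep the necklace whose ordering gives the largest quadrilateral area."""
        return max(unique_necklaces(p1, p2, p3, p4), key=lambda n: quad_area(*n))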

     points=[(random.randint(0,20),random.randint(0,20)) for i in range(0,4)]
     %timeit maxAreaQuad(*points)
     %timeit maxAreaQuad_brutforce(*points)

     1000 loops, best of 3: 218 µs per loop
     1000 loops, best of 3: 748 µs per loop

It's funny to note that the area of the maximal area quadrilaterals seems to follow an asymmetric distribution:

Ipython Notebook : here



Thursday, March 20, 2014

Combining five images into one rgb MFISH image using scikit-image implementation of HSV color space.

MFISH is a molecular cytogenetic technique using combinations of fluorescent DNA probes to recognize chromosomes and to detect some types of rearrangements.
In a previous post, five images corresponding to different fluorochromes were combined using matrix multiplication. The matrix coefficients were chosen empirically and validated a posteriori by the aspect of the rgb image produced.
Scikit-image is a python library implementing color space conversions, for example between RGB and HSV. To display simultaneously the images of the five spectral components, that is the five fluorochromes:


Five greyscaled images of the spectral components shown in false colors (cy55, cy5, SpGreen, SpOrange, TexasRed)
The five greyscale images were first converted into 8-bit rgb images and their hues were modified with colors chosen prior to combining them:
Greyscaled images converted into rgb images
Then the five images were added to produce one rgb MFISH image of type float64:
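A hedged sketch of these two steps, where planes is assumed to be the list of the five greyscale images and colours the list of chosen rgb colors with components in [0, 1] (the notebook's code may differ):

    import numpy as np
    from skimage import color, img_as_float

    def colorize(grey, rgb_colour):
        """Give the greyscale plane the hue/saturation of rgb_colour, keeping grey as the HSV value."""
        grey = img_as_float(grey)
        hue, sat, _ = color.rgb2hsv(np.array(rgb_colour, dtype=float).reshape(1, 1, 3))[0, 0]
        hsv = np.dstack((np.full_like(grey, hue), np.full_like(grey, sat), grey))
        return color.hsv2rgb(hsv)

    # sum of the five colorized planes -> one rgb MFISH image of type float64
    rgb = sum(colorize(plane, col) for plane, col in zip(planes, colours))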

Let's produce the HSV histograms of the image:
Since the produced image lacks contrast, the image exposure can be corrected so that the image background becomes black and the colors become brighter:
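One possible correction, assuming a percentile-based linear stretch with skimage.exposure (the actual bounds used in the notebook may differ):

    import numpy as np
    from skimage import exposure

    # stretch the combined image between its 2nd and 98th intensity percentiles
    p_low, p_high = np.percentile(rgb, (2, 98))
    rgb_stretched = exposure.rescale_intensity(rgb, in_range=(p_low, p_high))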

Now the histograms of the HSV channels look like:
The linear stretch was also chosen based on the resulting rgb image.

Code:

The whole code is available in an ipython notebook.

Notebook


Code on SageMathCloud:

The ipython notebook can be run on SageMathCloud. For some reason, it's not possible to download images from github to sagemathcloud, so local images are used.

Public ipython notebook on sagemathcloud:

Images database:

https://github.com/jeanpat/MFISH 

Many thanks to Tony S Yu for his help



Wednesday, February 5, 2014

Doing image processing on SageMathCloud: extraction of chromosomes from multispectral images.

SageMathCloud is a platform to do mathematics; it also offers the possibility to use an ipython notebook online. Several python libraries are available: numpy, scipy, scikit-image (0.7, 0.93), mahotas 1.1, pandas ...
Once an account is opened (it's free) and a project created, some multispectral images were uploaded into a directory of the project. In that project, an ipython notebook was created; this notebook can be run directly in SageMathCloud.

Removing spurious pixels brought by ndimage.find_objects():

The following model image is a greyscale image:


Grey image with a pseudo color map

find_objects() succeeds in extracting the letters:

Extracted particles are not convex; their bounding boxes can contain pixels from other particles in their neighborhood.
For example, the image of the letter A contains pixels belonging to other segmented (labelled) letters, namely the letters r and B. Those spurious pixels have to be eliminated. find_objects() provides the bounding boxes of the segmented particles in a list. The bounding box index corresponds to the particle label, which can be used to erase the spurious pixels:

A python function can be written:
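A hedged sketch of such a function (not necessarily the notebook's exact code):

    import numpy as np
    from scipy import ndimage as ndi

    def extract_particles(grey, labels):
        """Return one cropped image per labelled particle, with foreign pixels blanked."""
        particles = []
        for lab, box in enumerate(ndi.find_objects(labels), start=1):
            if box is None:                     # missing label
                continue
            mask = labels[box] == lab           # keep only the pixels of this particle
            sub = grey[box]
            if sub.ndim == 3:                   # stacked images: broadcast the 2-D mask
                mask = mask[..., None]
            particles.append(np.where(mask, sub, 0))
        return particles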


It takes two images as arguments: a grey level image (or stacked images) and a label image, and returns a list of images containing the particles extracted from the grey level image. This implementation is much simpler and faster than a previous one.

 

Chromosomes extraction from a MFISH image:

After having uploaded images on cloud.sagemath, for example:

The five spectral component images (DAPI excepted) can be combined into one color image. There is an additional kar image resulting from the segmentation of the MFISH images:
5 spectral images combined into 3 to make a color (rgb) image (left). kar image (right)

Let's use the kar image to produce a label image so that each chromosome is defined by a single label (or grey level):
label image (right)

Now the extraction of individual chromosomes can be done, from the DAPI image:

or from stacked images:

The kar image identifies the chromosomes:

Let's make galleries of chromosomes

The extracted chromosome images have different sizes. Let's add borders to the images so that they all have the same size:
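A hedged sketch of the padding step, where chromosomes is assumed to be the list of extracted greyscale chromosome images (the notebook's code may differ):

    import numpy as np

    def pad_to_shape(img, shape):
        """Centre img on a black canvas of the target shape."""
        pad_r = shape[0] - img.shape[0]
        pad_c = shape[1] - img.shape[1]
        return np.pad(img, ((pad_r // 2, pad_r - pad_r // 2),
                            (pad_c // 2, pad_c - pad_c // 2)), mode='constant')

    max_shape = (max(im.shape[0] for im in chromosomes),
                 max(im.shape[1] for im in chromosomes))
    padded = [pad_to_shape(im, max_shape) for im in chromosomes]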



Then let's build two galleries, one with MFISH images and one with inverse DAPI chromosomes; the code is visible in the notebook and it yields:
Left: MFISH chromosomes. Right: inverse DAPI

It remains to auto-rotate the chromosomes. A draft code at the end of the ipython notebook in the SageMathCloud project tries to explore a different method from the one previously proposed. It also remains to arrange the chromosomes according to their classification.

Download the ipython notebook

Go to the notebook on sagemathcloud