# DIP4FISH

This blog is dedicated to Digital Image Processing for fluorescence in-situ hybridization and QFISH and other things about the telomeres.

## Friday, June 19, 2015

## Sunday, May 3, 2015

### Building opencv 3 beta under Ubuntu 15.04

#### Building opencv 3 beta from source:

Several blogs explain how to install OpenCV 3 on Ubuntu or on Mac OS X. A bunch of libraries has to be installed, as mentioned previously for OpenCV 2.4.9. I followed these guidelines:

- download the source from github
- make a separate directory to build the code

Using cmake-gui, choose the source directory and the build directory, click on *Configure*, then on *Generate*. Check that CUDA, OpenCL and Python are detected:

Then, from a terminal opened in the build directory, run:

```
make -j4
```

then:

```
sudo make install
```

#### Checking that OpenCV can be used from Python:

From an IPython notebook, run this small snippet:

```
import cv2
print cv2.__version__
```

which prints:

```
3.0.0-dev
```

## Friday, June 6, 2014

### Graph again

### Update 06/11/2014

The IPython notebook from the previous post was rewritten. The Python function building a graph from a skeleton was modified to generate a multigraph (networkx). This function (available in an IPython notebook) is used as follows:

```
Graph = nx.MultiGraph()
C8_Skeleton_To_Graph_01(Graph, skeleton)
```

with:

- Graph: a networkx multigraph object.
- skeleton: the binary image of a skeleton obtained by morphological thinning with mahotas.

The letter **'a'** was converted into a networkx multigraph:

```
image_test = makeLetterImage('a', 75)
skeleton_test = mh.thin(image_test)
_, _, Bp_test, Ep = SkeletonDecomposition(skeleton_test)
Ep_Ep = skeleton_test * np.logical_not(Ep)
Ep_Ep, _ = mh.label(Ep_Ep)
l_Ep, _ = mh.label(Ep)

Graph_test = nx.MultiGraph()
C8_Skeleton_To_Graph_01(Graph_test, skeleton_test)
print Graph_test.edges(data=True)

figsize(16, 8)
subplot(131)
imshow(skeleton_test + l_Ep + 3*Bp_test, interpolation='nearest')
subplot(132, xticks=[], yticks=[])
nx.write_dot(Graph_test, 'multi.dot')
!neato -T png multi.dot > multi.png
imshow(mh.imread('multi.png'), interpolation='nearest')
subplot(133)
nx.draw(Graph_test)
```

The skeleton has two edges linking the same branched points (yellow and red pixels). The graph was exported to build an image of the multigraph (middle); networkx doesn't seem capable of displaying such a multigraph correctly (right).

When applied to a lowercase alphabet, the function produced the corresponding graphs (the letter 'o' excepted).

However, it is possible to find cases where the function fails to build a graph, such as:

```
imTest = makeLetterImage('b', 70)
skelTest = mh.thin(imTest)
Ep_Bp, Bp_Bp, Bp, Ep = SkeletonDecomposition(skelTest)
```

where:

- skelTest: the image skeleton obtained by thinning with mahotas.
- Bp: image of branched-point(s)
- Ep: image of end-point(s)
- Bp_Bp: edge(s) between branched-points
- Ep_Bp: edge(s) between end-point(s) and branched-point(s)

**'b'** is decomposed as follows:

The function fails to produce a graph due to the presence of a closed edge (right), which should be processed as a self-loop on the corresponding branched-point (middle image).
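Handling that case would amount to storing the closed edge as a self-loop in the multigraph; a minimal sketch with networkx (the branched-point label is a made-up example, not taken from the notebook):

```python
import networkx as nx

# A closed edge attached to a single branched-point (as in the loop of
# the letter 'b') can be stored as a self-loop in the multigraph.
# The branched-point label (1) is a made-up example.
G = nx.MultiGraph()
G.add_node(1)
G.add_edge(1, 1)  # the closed edge becomes a self-loop

print(G.number_of_edges())            # self-loops count as ordinary edges
print(list(nx.nodes_with_selfloops(G)))
```

networkx accepts self-loops in a MultiGraph without complaint, so the fix is mostly in the decomposition step that has to recognize the closed edge.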

### Another graph library: graph-tool

The graphical representation of a multigraph produced by graph-tool is so nice that it may be worth learning the graph-tool syntax to make a graph:

## Friday, May 30, 2014

### Construct a graph from the skeleton image of a binary form

Previously an attempt was made to write code capable of building a graph from the skeleton of an image; it was not a complete success since some preprocessing (pruning) had to be done first. Here another Python code was written and, at very first sight, it seems to produce a graph ~~without complaining~~... sometimes (see at the end).

The code was run in an IPython notebook on sagemath cloud. It relies on several libraries: mahotas and networkx (numpy is provided by default). Scikit-image was also tested. Pillow was used to generate images of some letters as models.

The idea is the following:

- make the skeleton of a binary image.
- get its branched-points and add them to a graph.
- get its end-points and add them to the graph.
- get the edges of the skeleton and decompose them into:
  - edges between end-points and branched-points.
  - edges between branched-points.
- add edges to the graph between end-points and branched-points.
- add edges to the graph between branched-points.
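The first two steps (classifying skeleton pixels into end-points and branched-points) can be sketched with plain numpy by counting the 8-connected neighbours of each skeleton pixel. This is only a rough stand-in for the hit-or-miss operators the post actually uses, which are more robust on thick junctions:

```python
import numpy as np

def classify_skeleton(skel):
    """Count the 8-connected neighbours of each skeleton pixel and
    classify: 1 neighbour -> end-point, 3 or more -> branched-point.
    A rough approximation of the hit-or-miss approach used in the post."""
    s = skel.astype(int)
    p = np.pad(s, 1)
    counts = np.zeros_like(s)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                counts += p[1 + dr:p.shape[0] - 1 + dr,
                            1 + dc:p.shape[1] - 1 + dc]
    end_points = (counts == 1) & (s == 1)
    branch_points = (counts >= 3) & (s == 1)
    return end_points, branch_points

# a small Y-shaped skeleton: three end-points, one branched-point
Y = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])
ep, bp = classify_skeleton(Y)
```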

## Image models

Since the code is run on sagemath cloud, Pillow has to be installed. From a terminal:

```
pip install Pillow
```

The Verdana true type font was obtained from fontsupply; the file verdana.ttf has to be at the same location as the IPython notebook (it was uploaded to the SMC project accordingly).
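The letter-rendering helper itself is not reproduced here; a plausible reconstruction with Pillow follows. The name makeLetterImage matches the calls made elsewhere in the post, but the canvas size, text position and font fallback are assumptions:

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def makeLetterImage(letter, size):
    """Render a single letter and return it as a boolean numpy array.
    Canvas size and text position are guesses; verdana.ttf is expected
    next to the notebook, with Pillow's default font as a fallback."""
    try:
        font = ImageFont.truetype('verdana.ttf', size)
    except OSError:
        font = ImageFont.load_default()
    img = Image.new('L', (2 * size, 2 * size), 0)
    ImageDraw.Draw(img).text((size // 2, size // 2), letter,
                             fill=255, font=font)
    return np.asarray(img) > 0

binary_a = makeLetterImage('a', 75)
```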

The code used to generate images and draw them is:

and we get:

## What kind of skeleton?

Both mahotas and scikit-image implement morphological operators to generate the skeleton of a binary shape. Scikit-image seems to use a neighbourhood of 4 pixels (C4) and mahotas a neighbourhood of 8 pixels (C8). This is why mahotas was retained to produce skeletons:

- The medial axis of the letter A is computed with scikit-image (left); branched-points and end-points are extracted using the hit-or-miss operator implemented in mahotas (middle). The branched-points can be removed from the medial axis and the resulting image can be labelled (right); pixels connected only vertically or horizontally share the same label.

*Skeleton of the letter A (left)*

- If a skeleton is generated by morphological thinning with mahotas, we get the left image. The fourth image is the skeleton deprived of its branched-points. Scikit-image provides a thinning operator with a C4 neighbourhood (right image):

*C8 skeleton by thinning (left). C4 skeleton by thinning (right).*

What happens if a C8 skeleton is labelled? Here, both mahotas and scikit-image give the same result:

*Edges (mahotas: left; scikit-image: right)*

The labelled regions correspond to what can be seen as edges.

## Before starting to build the graph, several additional images are needed:

### Edges decomposition:

Let's decompose the image of edges by considering edges connecting end-points to branched-points and edges connecting only branched points:

### Starting to build the graph by the branched-points:

The branched-points are added first. The branched-point with label 1 in the image is the first vertex in the graph, and so on:

The end-points are then added to the graph; there is a shift between the end-points' labels in the image and their indices in the graph. The first end-point in the image becomes the fifth vertex in the graph:

The vertices of the graph (G) have attributes derived from the images they come from: their position 'pos', defined by (row, col), and their label in the image of origin. Typing in an IPython cell:

```
print G.nodes(data=True)
print nx.get_node_attributes(G,'label')
print nx.get_node_attributes(G,'pos')
```

yields:
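A toy sketch of this numbering scheme (all positions and labels are made-up values, not the post's actual data):

```python
import networkx as nx

# The four branched-points (image labels 1..4) become the first four
# vertices; the end-points follow, so the first end-point is the
# fifth vertex, as described above.
G = nx.Graph()
bp_positions = {1: (10, 12), 2: (20, 15), 3: (30, 9), 4: (35, 22)}
ep_positions = {1: (2, 12), 2: (44, 5)}

for label, pos in bp_positions.items():
    G.add_node(label, pos=pos, label=label)
shift = len(bp_positions)  # end-point indices are shifted by the bp count
for label, pos in ep_positions.items():
    G.add_node(label + shift, pos=pos, label=label)

print(G.nodes(data=True))
print(nx.get_node_attributes(G, 'label'))
```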

### Linking end-points to branched points:

The idea here is to get the neighbourhood of each branched-point in the "edges ep-bp" image and to add an edge in the graph between the corresponding branched-point and end-point(s). This yields:

### Linking branched-points

- A dictionary is built where the keys are the labels of the branched-points and the values are the labels of the "edges bp-bp" in the neighbourhood of each branched-point. Thus there's one key (bp label) for several values (edge labels).
- But an edge can be seen as one key (the edge label) with two values (the labels of the two branched-points it joins). So the dictionary has to be inverted in that very particular way. Fortunately, that problem was solved on stackoverflow.
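The inversion can be sketched as follows (the label values are made up; the point is that each edge label ends up keyed to the two branched-points that mention it):

```python
from collections import defaultdict

# Made-up example: each branched-point label maps to the labels of the
# "edges bp-bp" found in its neighbourhood.
bp_to_edges = {1: [10, 11], 2: [10, 12], 3: [11, 12]}

# Invert it: each edge label maps to the (two) branched-points touching
# it, which is exactly the pair of vertices to join in the graph.
edge_to_bps = defaultdict(list)
for bp, edges in bp_to_edges.items():
    for edge in edges:
        edge_to_bps[edge].append(bp)

print(dict(edge_to_bps))
```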

All the steps are gathered in a function:

Processing other letters gives for example:

## The ipython notebook is here:

View the notebook on ipython.org

## Sometimes building the graph fails:

On a larger set of images:

- For the image corresponding to the letter **K**, the graph construction fails due to the presence of branched regions made of several pixels instead of one.
- There's also a problem with the letter **D**: its graph has no loop.
- Since there's no branched point in the skeleton of the letter **C**, its graph is not built.


## Friday, May 16, 2014

## Thursday, April 10, 2014

### In search of necklaces for a maximum area quadrilateral

The overlap of two chromosomes can be resolved by the four-point method, provided that from these four points a maximum area quadrilateral can be defined.


*Maximal quadrilateral overlaid on the overlap of two chromosomes*

Previously that kind of quadrilateral was found from the 24 permutations of four points:

*4! = 24 quadrilaterals formed by 4 points chosen randomly.*

In these 24 permutations, some are derived from the others by circular permutation; these circular permutations define a necklace. The area of the quadrilaterals obtained by circular permutation of a necklace is invariant.

The maximal area quadrilateral can be found in the set of the 6 necklaces.


To find a necklace, let's start from an ordered list of four points **A, B, C, D** and swap two elements, e.g. (A, B): we get a necklace, **B, A, C, D**.

From four randomly chosen points, the following Python function yields the six possible necklaces:
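The notebook's own code is not reproduced here; a plausible reconstruction, consistent with the output listed further down, is that each of the six pairwise swaps of the ordered 4-tuple starts a different necklace (the name unique_necklaces comes from the post, the implementation is an assumption):

```python
import itertools

def unique_necklaces(a, b, c, d):
    # Reconstruction (an assumption, not the notebook's exact code):
    # the original 4-tuple plus the six orderings obtained by swapping
    # one pair of elements; each swap starts a different necklace.
    items = [a, b, c, d]
    necklaces = [items[:]]  # drop this line to exclude the original tuple
    for i, j in itertools.combinations(range(4), 2):
        swapped = items[:]
        swapped[i], swapped[j] = swapped[j], swapped[i]
        necklaces.append(swapped)
    return necklaces

result = unique_necklaces('a', 'b', 'c', 'd')
```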


Then a maximal area quadrilateral can be found among the six necklaces. As the number of quadrilaterals explored is smaller, the new implementation is faster than the previous one based on brute force:
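For comparison, here is a self-contained brute-force sketch using the shoelace formula over all 24 orderings; the post's maxAreaQuad applies the same area test to the six necklaces only (the function names and shoelace choice here are assumptions, not the notebook's code):

```python
import itertools

def shoelace_area(pts):
    # Shoelace formula; self-intersecting orderings yield smaller values,
    # so the maximum over orderings is the maximal-area quadrilateral.
    s = sum(x0 * y1 - x1 * y0
            for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0

def max_area_quad(points):
    # Brute force over all 4! = 24 orderings of the four points; the
    # necklace version searches only 6 orderings for the same result.
    return max(itertools.permutations(points), key=shoelace_area)

best = max_area_quad([(0, 0), (1, 0), (0, 1), (1, 1)])
```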


For example:

When run in an IPython cell on sagemathcloud, the unique_necklaces function applied to four numbers took:

```
%timeit unique_necklaces(1,2,3,4)
100000 loops, best of 3: 15.2 µs per loop
```

Applied to four characters *'a', 'b', 'c', 'd'*, the function yields, if the original 4-tuple is included (remove the # in line 10):

```
print unique_necklaces('a','b','c','d')
[['a', 'b', 'c', 'd'],
 ['b', 'a', 'c', 'd'],
 ['c', 'b', 'a', 'd'],
 ['d', 'b', 'c', 'a'],
 ['a', 'c', 'b', 'd'],
 ['a', 'd', 'c', 'b'],
 ['a', 'b', 'd', 'c']]
```

```
points = [(random.randint(0,20), random.randint(0,20)) for i in range(0,4)]
%timeit maxAreaQuad(*points)
%timeit maxAreaQuad_brutforce(*points)
```

```
1000 loops, best of 3: 218 µs per loop
1000 loops, best of 3: 748 µs per loop
```

It's funny to note that the area of the maximal area quadrilaterals seems to follow an asymmetric distribution:

### IPython Notebook: here

## Thursday, March 20, 2014

### Combining five images into one rgb MFISH image using the scikit-image implementation of HSV color space.

MFISH is a molecular cytogenetic technique using combinations of fluorescent DNA probes to recognize chromosomes and to detect some types of rearrangements.

In a previous post, five images corresponding to different fluorochromes were combined using matrix multiplication. The matrix coefficients were chosen empirically and validated a posteriori by the aspect of the rgb image produced.

Scikit-image is a python library implementing color space conversions for example between RGB and HSV. To display simultaneously the images of the five spectral components, or the five fluorochromes:

*Five greyscale images of the spectral components shown in false colors (cy55, cy5, SpGreen, SpOrange, TexasRed)*

The five greyscale images were first converted into 8-bit rgb images and their hues were modified with chosen colors prior to combining them:

*Greyscale images converted into rgb images*

Then the five images were added to produce one rgb MFISH image of type float64:
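The tint-and-sum idea can be sketched with numpy and the standard-library colorsys module (hue values and image sizes here are made up; the post itself goes through scikit-image's RGB/HSV conversions):

```python
import colorsys
import numpy as np

def colorize(gray, hue):
    # Tint a greyscale plane (floats in [0, 1]) with one hue: the hue
    # fixes an RGB direction and the grey level scales it.
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return np.dstack([gray * r, gray * g, gray * b])

# five made-up planes and hues standing in for the five fluorochromes
planes = [np.random.rand(64, 64) for _ in range(5)]
hues = [0.0, 0.15, 0.33, 0.55, 0.8]
combined = sum(colorize(p, h) for p, h in zip(planes, hues))
```

Summing float planes, as in the post, produces a float64 rgb image whose values can exceed 1, which is why a contrast correction comes next.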

Let's produce the HSV histograms of the image:

Since the produced image lacks some contrast, the image exposure can be corrected such that the image background becomes black and the colors become brighter:

Now the histograms of the HSV channels look like:

The linear stretch was also chosen based on the resulting rgb image.
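A linear stretch amounts to mapping a chosen intensity window onto [0, 1]; a minimal sketch (the window bounds here are made-up values, chosen by eye in the post):

```python
import numpy as np

def linear_stretch(img, low, high):
    # Map the intensity window [low, high] onto [0, 1] and clip, so the
    # background (below low) goes to black and bright pixels saturate.
    return np.clip((img - low) / (high - low), 0.0, 1.0)

stretched = linear_stretch(np.array([0.05, 0.5, 0.95]), 0.2, 0.8)
```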

## Code:

The whole code is available in an IPython notebook.

## Code on SageMathCloud:

The IPython notebook can be run on SageMathCloud. For some reason, it's not possible to download images from github to sagemathcloud, so local images are used. A public IPython notebook is available on sagemathcloud.

## Images database:

https://github.com/jeanpat/MFISH
