
image processing - Counting elements which are inside another element on a different colour channel


I have a 3-channel image showing red and green points (certain labelled proteins) inside blue regions (cell nuclei). After applying some segmentation filters, the resulting image looks like this:

[segmented image]


The regions circled in blue are separated into components following steps similar to those described in this guide: Count Cells.


The red and green points are also both separated into components. What would be the best way to count those red and green points which are bound by the blue components?
EDIT: What I mean is counting the points per blue-bounded area, not over the entire image as a whole; sorry for the confusion. A rather brute-force method would be to call SelectComponents inside a Do loop, using the Label attribute of each component to build a mask. Is there a better, more efficient way of doing this?


(* nucList holds the nucleus component labels; fociList must be initialised to {} beforehand *)
Do[
 mask = Image[SelectComponents[nuclei, "Label", # == nucList[[i]] &]];
 redFoci = ComponentMeasurements[
   SelectComponents[ImageMultiply[redChan, mask], "Count", 2 < # < 100 &],
   "Centroid"];
 greenFoci = ComponentMeasurements[
   SelectComponents[ImageMultiply[greenChan, mask], "Count", 2 < # < 100 &],
   "Centroid"];
 AppendTo[fociList, {Length[redFoci], Length[greenFoci]}],
 {i, Length[nucPos]}];
Print[fociList];

Answer




Here's another way to do what I think you want to do. I don't know how it compares to your method. A lot of the code is for various visualizations. It could be pruned to just get the information you need.


Explanation of the procedure


It's difficult to calculate the number of red and green components for each cell because each cell is not clearly demarcated. Sometimes their boundaries blur together or aren't self-enclosed. My strategy ended up being this:



  1. Find the boundaries by extracting all blue pixels.

  2. Fill small gaps by dilating the boundaries and then thinning them down again.

  3. Use the filling transform to extract the pixels inside each boundary.

  4. Neighboring cells blur together if they share a boundary, so the boundaries are added back on top after filling to separate the cells again. (A condensed sketch of these four steps follows this list.)
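
As a bridge to the full code below, here is a condensed, approximate sketch of these four steps in terms of the built-ins used there. It assumes extractColor, the helper defined under The Code, and img, the segmented input image; the brush size of 1 is just the illustrative value used later, and the full code additionally re-dilates the boundary slightly before negating it.

(* condensed sketch of steps 1-4 *)
blue = extractColor[img, "Blue"];                                  (* 1. blue boundary pixels *)
closed = Binarize[Thinning[Dilation[blue, DiskMatrix[1]]], 0.1];   (* 2. dilate, then thin again *)
filled = FillingTransform[closed];                                 (* 3. fill each enclosed region *)
cells = MorphologicalComponents[
   ImageMultiply[ColorNegate[closed], filled]];                    (* 4. re-separate via the boundaries *)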


The problem is that this way of filling small gaps cannot fill larger ones, so cells with large boundary gaps won't be recognized as cells at all. The following image demonstrates this:



[boundaries image]


The white boundaries correspond to blue pixels. The colored areas are cells; they are recognized by MorphologicalComponents because they have been sufficiently separated by the method described above. As one can see, this works for almost all cells.


Now, all we have to do is look at each component, extract its red and green pixels respectively, and then use MorphologicalComponents and ComponentMeasurements to figure out how many of each type there are.
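
For reference, the counting idiom itself is just MorphologicalComponents followed by a component count. A minimal toy example on a synthetic binary image (not the actual data):

(* toy example: count connected components in a sparse random binary image *)
toy = Image[RandomChoice[{0.95, 0.05} -> {0, 1}, {60, 60}]];
labels = MorphologicalComponents[toy];
Length[ComponentMeasurements[labels, "Count"]]  (* number of components found *)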


The Code


extractColor[img_Image, col : "Red" | "Green" | "Blue"] :=
 Module[{c = col, i = img, sep},
  sep = ColorSeparate[i];
  If[c == "Green", sep = RotateLeft[sep]];
  If[c == "Blue", sep = RotateRight[sep]];
  (* keep only the pixels where the requested channel dominates the other two *)
  ImageSubtract[ImageSubtract[#1, #2], #3] & @@ sep];

healingBrush[img_Image, brushSize_] :=
 Binarize[Thinning[Dilation[img, DiskMatrix[brushSize]]], 0.1]

countClusters[img_Image] :=
 (ComponentMeasurements[MorphologicalComponents[#], "LabelCount"] & /@
     {extractColor[img, "Red"], extractColor[img, "Green"]})[[All, 1, 2]]

img = Import["http://i.stack.imgur.com/n6Eit.png"];
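
As a quick, optional sanity check (not needed by anything that follows), the helpers can first be applied to the whole image: this shows the three extracted channels side by side and the total red/green cluster counts before the per-cell split.

(* optional sanity check on the whole image *)
GraphicsRow[extractColor[img, #] & /@ {"Red", "Green", "Blue"}]
countClusters[img]  (* {total red clusters, total green clusters} *)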


borders = healingBrush[extractColor[img, "Blue"], 1];

components =
 MorphologicalComponents[
   ImageMultiply[ColorNegate[Dilation[borders, 0.5]],
    FillingTransform@borders]] // Colorize

Show[components,
 ColorNegate@SetAlphaChannel[ColorNegate@borders, borders]]

isolatedComponents =
 ComponentMeasurements[components, {"Centroid", "Mask"}] /.
  (i_ -> {l_, m_}) :> {i, l, countClusters[ImageMultiply[img, Image[m]]]};

Show[img,
 Graphics[{White,
   Text["R: " <> ToString[#[[3, 1]]] <> "\nG: " <> ToString[#[[3, 2]]],
     #[[2]]] & /@ isolatedComponents}]]

Final result



[result image: each cell annotated with its R and G counts]


The data used for this visualization is stored in isolatedComponents.
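
If you only need the numbers rather than the annotated picture, the per-cell counts can be pulled straight out of isolatedComponents, whose entries have the form {componentIndex, centroid, {redCount, greenCount}}. For example:

perCellCounts = isolatedComponents[[All, 3]];  (* {red, green} for each cell *)
Total[perCellCounts]                           (* total red and green counts over all cells *)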


(Someone who'd want to use this would be wise to count manually for a few cells and compare the result. I did not do this.)


