programming - Fastest way to measure Hamming distance of integers


I am looking for a fast and robust way to calculate the Hamming distance of integers. The Hamming distance of two integers is the number of bit positions at which their binary representations differ. I expect that clever methods can easily outpace HammingDistance, as it works on vectors instead of integers, and on arbitrary vectors, not just binary ones.


My naive bitwise method is faster than HammingDistance but I'm pretty sure that it can be further optimized. While compilation would help, it won't work on big integers ($\ge 10^{19}$). Nevertheless, I am interested in compiled solutions!


max = 10^10;
n = Length@IntegerDigits[max, 2];
data = RandomInteger[{0, max}, {100000, 2}];

m1 = Map[HammingDistance[IntegerDigits[First@#, 2, n],
IntegerDigits[Last@#, 2, n]] &, data]; // AbsoluteTiming
m2 = Map[Total@IntegerDigits[BitXor @@ #, 2] &, data]; // AbsoluteTiming
m1 === m2


{0.967202, Null}   
{0.624001, Null}
True


It would be nice to work entirely on the binary representations, and I thought that using DigitCount on BitXor would help, but it gave a cruel 3x slowdown compared to the HammingDistance version.


Edit


As an answer to Kirma's comment: I have to calculate the pairwise distance matrix for a set of integers (highly related is Szabolcs's post: Fastest way to calculate matrix of pairwise distances), in the (simplest and most didactic) form:


Outer[hamming[#1, #2] &, Range[2^20], Range[2^20]]

Now in this case my main problem is of course memory not speed, but it would be nice to see solutions that scale well with this problem. I understand that it is another question, but I want to encourage everyone to post their solutions even if they require vectors or matrices of integers as input.



Answer



Here is another compiled implementation:


hammingDistanceCompiled = Compile[{{nums, _Integer, 1}},
   Block[{x = BitXor[nums[[1]], nums[[2]]], n = 0},
    While[x > 0, x = BitAnd[x, x - 1]; n++]; n
    ],
   RuntimeAttributes -> Listable, Parallelization -> True,
   CompilationTarget -> "C", RuntimeOptions -> "Speed"
   ];

This appears to outperform the naive approach (Total@IntegerDigits[BitXor @@ nums, 2], as presented in Leonid's answer) by about 2.5 times. If we are serious about compiled approaches, though, we can surely do much better, by taking advantage of the SSE4.2 POPCNT instruction.
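For readers unfamiliar with the BitAnd[x, x - 1] trick: it clears the lowest set bit of x, so the loop iterates once per set bit rather than once per bit position (Kernighan's method). A standalone C sketch of the same loop:

```c
#include <stdint.h>

/* Kernighan's bit-counting loop: x &= x - 1 clears the lowest set
 * bit, so the body runs once per 1-bit in the XOR of the inputs. */
static int hammingKernighan(uint64_t a, uint64_t b) {
    uint64_t x = a ^ b;
    int n = 0;
    while (x != 0) {
        x &= x - 1;  /* drop lowest set bit */
        n++;
    }
    return n;
}
```

This is why the compiled version beats summing all the digits: for sparse differences it does only as many iterations as there are differing bits.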




Edit: thanks to halirutan, who told me that the pointers returned by the LibraryLink functions are safe to use directly, this updated version is nearly twice as fast (on my computer) as the original attempt due to the removal of unnecessary function calls from the inner loop.


Since nobody else apparently wanted to write an answer using that suggestion, I decided to give it a try myself:



#include "WolframLibrary.h"

DLLEXPORT mint WolframLibrary_getVersion() {
    return WolframLibraryVersion;
}

DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {
    return 0;
}

DLLEXPORT void WolframLibrary_uninitialize() {
    return;
}

/* static inline: a plain C99 "inline" without an external definition
   can fail to link, so static is the safer choice here */
static inline mint hammingDistance(mint a, mint b) {
    return (mint)__builtin_popcountll((unsigned long long)a ^ (unsigned long long)b);
}

/* To load:
LibraryFunctionLoad["hammingDistance",
"hammingDistance_I_I", {Integer, Integer}, Integer
] */

DLLEXPORT int hammingDistance_I_I(WolframLibraryData libData,
                                  mint argc, MArgument *args,
                                  MArgument res) {
    mint a, b;

    if (argc != 2) return LIBRARY_DIMENSION_ERROR;

    a = MArgument_getInteger(args[0]);
    b = MArgument_getInteger(args[1]);

    MArgument_setInteger(res, hammingDistance(a, b));
    return LIBRARY_NO_ERROR;
}

/* To load:
   LibraryFunctionLoad["hammingDistance",
     "hammingDistance_T_T", {{Integer, 2, "Constant"}}, {Integer, 1, Automatic}
   ] */

DLLEXPORT int hammingDistance_T_T(WolframLibraryData libData,
                                  mint argc, MArgument *args,
                                  MArgument res) {
    MTensor in, out;
    const mint *dims;
    mint i, *indata, *outdata;
    int err = LIBRARY_NO_ERROR;

    in = MArgument_getMTensor(args[0]);
    if (libData->MTensor_getRank(in) != 2) return LIBRARY_DIMENSION_ERROR;
    if (libData->MTensor_getType(in) != MType_Integer) return LIBRARY_TYPE_ERROR;
    dims = libData->MTensor_getDimensions(in);
    if (dims[1] != 2) return LIBRARY_DIMENSION_ERROR;
    indata = libData->MTensor_getIntegerData(in);

    /* rank-1 output of length dims[0] */
    err = libData->MTensor_new(MType_Integer, 1, dims, &out);
    if (err != LIBRARY_NO_ERROR) return err;
    outdata = libData->MTensor_getIntegerData(out);

#pragma omp parallel for schedule(static)
    for (i = 0; i < dims[0]; i++) {
        outdata[i] = hammingDistance(indata[2*i], indata[2*i + 1]);
    }

    MArgument_setMTensor(res, out);
    return LIBRARY_NO_ERROR;
}

We compile it using gcc (N.B. __builtin_popcountll is a GCC extension):


gcc -Wall -fopenmp -O3 -march=native -shared -o hammingDistance.dll hammingDistance.c
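If your compiler lacks __builtin_popcountll, a portable SWAR ("SIMD within a register") popcount is a drop-in replacement for the builtin. A sketch using the standard 64-bit bit-slicing constants:

```c
#include <stdint.h>

/* Portable 64-bit popcount (SWAR): sum bits in 2-, 4-, then 8-bit
 * groups, then add the eight byte sums with a single multiply. */
static int popcount64(uint64_t x) {
    x -= (x >> 1) & 0x5555555555555555ULL;
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (int)((x * 0x0101010101010101ULL) >> 56);
}
```

It is a few times slower than a hardware POPCNT but still branch-free and far faster than a per-bit loop on dense inputs.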

Load it into Mathematica:



hammingDistance = LibraryFunctionLoad[
"hammingDistance.dll",
"hammingDistance_I_I", {Integer, Integer}, Integer
];
hammingDistanceListable = LibraryFunctionLoad[
"hammingDistance.dll",
"hammingDistance_T_T", {{Integer, 2, "Constant"}}, {Integer, 1, Automatic}
];

Make sure everything is working:



data = RandomInteger[{0, 2^63 - 1}, {10000, 2}];
hammingDistance @@@ data ===
hammingDistanceListable[data] ===
hammingDistanceCompiled[data] ===
Tr /@ IntegerDigits[BitXor @@@ data, 2]
(* -> True *)

Now for a performance comparison:


dataLarge = RandomInteger[{0, 2^63 - 1}, {10000000, 2}];
hammingDistanceCompiled[dataLarge]; // AbsoluteTiming (* 1.203125 seconds *)

hammingDistanceListable[dataLarge]; // AbsoluteTiming (* 0.063594 seconds *)

That's about 1000 times faster than the code given in the question, so not bad. I'm using an Intel Core 2 CPU, which doesn't actually support the POPCNT instruction, and has only four cores. On more recent CPUs, it will surely be faster still.

