
bugs - Why is this parallel evaluation with Dispatch[] so slow?


Can you efficiently parallelize this? The parallel versions are much slower than the sequential version, and I'm not sure why. Does SetSharedVariable allow simultaneous reads from different kernels? It appears that it doesn't, even though the documentation says you should use CriticalSection to make separate reads and writes atomic and thread-safe.


dispatch = Dispatch@Table[i -> RandomReal[1, 500], {i, 20000}];

Map[Select[# /. dispatch, # < .1 &] &,
  RandomInteger[{1, 20000}, 10000]] (* < 3 secs *)

ParallelMap[Select[# /. dispatch, # < .1 &] &,
  RandomInteger[{1, 20000}, 10000]] (* 50 secs *)

SetSharedVariable[dispatch];
ParallelMap[Select[# /. dispatch, # < .1 &] &,
  RandomInteger[{1, 20000}, 10000],
  DistributedContexts -> None] (* longer than 2 minutes *)

Answer



To answer your actual question, SetSharedVariable does not allow simultaneous reads. It forces the variable to be evaluated on the main kernel, effectively disabling the parallelization.
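To see this effect in isolation, here is a minimal sketch (the variable names shared and local are mine, purely illustrative): every read of a shared variable on a subkernel is a callback to the main kernel, so even trivial accesses become expensive and are serialized, while a copy sent with DistributeDefinitions is read locally.

LaunchKernels[];

shared = Range[10];
SetSharedVariable[shared];
(* each access of shared below is a round trip to the main kernel *)
ParallelEvaluate[AbsoluteTiming[Do[shared, {1000}];]]

local = Range[10];
DistributeDefinitions[local];
(* each subkernel reads its own copy, no main-kernel callback *)
ParallelEvaluate[AbsoluteTiming[Do[local, {1000}];]]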





A more interesting question for me is: why is ParallelMap so slow even without SetSharedVariable? The following observation is not an answer, but it is too long for a comment.


Consider this simplified example:


n = 1500;
data = RandomInteger[{1, n}, 10000];

dp = Dispatch@Table[i -> RandomReal[1, 1000], {i, n}];

Map[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming
ParallelMap[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming


On my machine, the Map version takes about 5 seconds, regardless of the size of n (at least for n between 100 and 10000). However, the ParallelMap version depends strongly on the value of n: for n=100 it's much faster than Map, but for n=2000 it already takes a bit longer. If I remove the Dispatch@, the parallel version becomes fast again.
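A quick way to reproduce this scan is a small helper like the one below (a sketch; the name benchmark and the chosen values of n are mine, not from the original timings):

benchmark[n_] := (
  data = RandomInteger[{1, n}, 10000];
  dp = Dispatch@Table[i -> RandomReal[1, 1000], {i, n}];
  {n,
   First@AbsoluteTiming[Map[Select[# /. dp, # < 0.1 &] &, data];],
   First@AbsoluteTiming[ParallelMap[Select[# /. dp, # < 0.1 &] &, data];]})

TableForm[benchmark /@ {100, 500, 1000, 2000},
 TableHeadings -> {None, {"n", "Map (s)", "ParallelMap (s)"}}]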


ParallelEvaluate[dp]; is not slow, which suggests that it is not the transfer of dp to the parallel kernels that takes the time.
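Two quick checks that point the same way (a sketch): time the distribution step on its own, and confirm that the subkernels really hold full copies of dp.

AbsoluteTiming[DistributeDefinitions[dp];]  (* the transfer itself is quick *)
ParallelEvaluate[ByteCount[dp]]             (* each subkernel holds a full copy *)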




Update: evaluating Rule on parallel kernels is very slow


Finally I managed to come up with a smaller and more enlightening test case for this slowdown. In a fresh kernel, evaluate:


LaunchKernels[]

data = Table[Rule[1, 1], {1000}];


AbsoluteTiming[Table[data, {1000}];] (* very fast *)
(* ==> {0.001288, Null} *)

DistributeDefinitions[data] (* ParallelEvaluate doesn't auto-distribute *)

ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.939300, Null}, {0.947640, Null}, {0.941881, Null}, {0.931997, Null},
{0.925231, Null}, {0.930565, Null}, {0.930977, Null}, {0.931430, Null}} *)


This is very slow, and it is the parallel kernels that take so long to evaluate it!


Now let's change data to


data = Table[rule[1, 1], {1000}];

DistributeDefinitions[data]

ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.000197, Null}, {0.000212, Null}, {0.000347, Null}, {0.000192, Null},
{0.000220, Null}, {0.000327, Null}, {0.000331, Null}, {0.000197, Null}} *)


This is now very fast. So the culprit is Rule. But why? Is Rule overloaded in the parallel kernels? ParallelEvaluate[Information[Rule]] reveals nothing special.
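A few other quick checks one can run on the subkernels (a sketch; nothing unusual turns up):

ParallelEvaluate[Attributes[Rule]]   (* {Protected, SequenceHold} on every kernel *)
ParallelEvaluate[DownValues[Rule]]   (* no user-level definitions attached *)
ParallelEvaluate[UpValues[Rule]]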




Does anyone have any ideas what might be going on here?




Update on 2014-03-04:


I received a reply about this from WRI support and they pointed out that the issue is mentioned in the documentation. It's the third entry under "Possible Issues" at DistributeDefinitions.


The example there describes the same problem with InterpolatingFunction objects. Quoting the relevant part of the documentation, without repeating the example code:



Certain objects with an internal state may not work efficiently when distributed. ... Alternatively, reevaluate the data on all subkernels.




For the example above, the workaround is as simple as


ParallelEvaluate[data = data;]

This is very similar to @Michael's workaround in the other answer, but it's not even necessary to recreate the expressions from scratch. It's sufficient to just re-assign the variable to itself (and re-evaluate it in the process).
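Putting the workaround together with the earlier Rule test case (a sketch of the expected sequence, assuming data has already been distributed as above):

ParallelEvaluate[data = data;];  (* re-evaluate the existing copy on each subkernel *)
ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* the timings should now be comparable to the fast rule[1, 1] case *)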


