
bugs - Why is this parallel evaluation with Dispatch[] so slow?


Can you efficiently parallelize this? The parallel versions are much slower than the sequential version, and I'm not sure why. Does SetSharedVariable allow simultaneous reads from different kernels? It appears that it doesn't, even though the documentation says you should use CriticalSection to make separate reads and writes atomic and thread-safe.


dispatch = Dispatch@Table[i -> RandomReal[1, 500], {i, 20000}];

Map[Select[# /. dispatch, # < .1 &] &,
 RandomInteger[{1, 20000}, 10000]] (* < 3 secs *)

ParallelMap[Select[# /. dispatch, # < .1 &] &,
 RandomInteger[{1, 20000}, 10000]] (* 50 secs *)

SetSharedVariable[dispatch];
ParallelMap[Select[# /. dispatch, # < .1 &] &,
 RandomInteger[{1, 20000}, 10000],
 DistributedContexts -> None] (* longer than 2 minutes *)

Answer



To answer your actual question, SetSharedVariable does not allow simultaneous reads. It forces the variable to be evaluated on the main kernel, effectively disabling the parallelization.
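A quick way to see this (a sketch of my own, not from the original post) is to compare reads of a shared variable with reads of a locally distributed copy on the subkernels; every access to the shared variable is a callback to the main kernel:


LaunchKernels[];
shared = Range[100]; SetSharedVariable[shared];    (* reads go through the main kernel *)
local = Range[100]; DistributeDefinitions[local];  (* each subkernel gets its own copy *)

ParallelEvaluate[AbsoluteTiming[Do[shared, {100}];]] (* 100 main-link round trips per kernel *)
ParallelEvaluate[AbsoluteTiming[Do[local, {100}];]]  (* purely local reads *)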





A more interesting question for me is: why is ParallelMap so slow even when SetSharedVariable is not used? The following observation is not an answer, but it is too long for a comment.


Consider this simplified example:


n = 1500;
data = RandomInteger[{1, n}, 10000];

dp = Dispatch@Table[i -> RandomReal[1, 1000], {i, n}];

Map[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming
ParallelMap[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming


On my machine, the Map version takes about 5 seconds regardless of the size of n (at least for n between 100 and 10000). The ParallelMap version, however, depends strongly on the value of n: for n=100 it is much faster than Map, but for n=2000 it already takes a bit longer than Map. If I remove the Dispatch@, the parallel version becomes fast again.
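For reference, here is a rough timing sweep over several values of n (my own sketch, not part of the original post); the exact numbers will of course vary between machines:


(* measure how the ParallelMap timing scales with n *)
Do[
 data = RandomInteger[{1, n}, 10000];
 dp = Dispatch@Table[i -> RandomReal[1, 1000], {i, n}];
 Print[{n, First@AbsoluteTiming[
     ParallelMap[Select[# /. dp, # < 0.1 &] &, data];]}],
 {n, {100, 500, 1000, 2000}}]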


ParallelEvaluate[dp]; is not slow, which suggests that the time is not spent transferring dp to the parallel kernels.
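One way to separate the two costs (again just a sketch, using the definitions above) is to time the transfer and a single replacement independently:


AbsoluteTiming[DistributeDefinitions[dp];]                    (* cost of shipping dp to the subkernels *)
ParallelEvaluate[AbsoluteTiming[Select[1 /. dp, # < 0.1 &];]] (* one lookup on each subkernel *)
AbsoluteTiming[Select[1 /. dp, # < 0.1 &];]                   (* the same lookup on the main kernel *)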




Update: evaluating Rule on parallel kernels is very slow


Finally, I managed to come up with a smaller and more enlightening test case for this slowdown. In a fresh kernel, evaluate:


LaunchKernels[]

data = Table[Rule[1, 1], {1000}];


AbsoluteTiming[Table[data, {1000}];] (* very fast *)
(* ==> {0.001288, Null} *)

DistributeDefinitions[data] (* ParallelEvaluate doesn't auto-distribute *)

ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.939300, Null}, {0.947640, Null}, {0.941881, Null}, {0.931997, Null},
{0.925231, Null}, {0.930565, Null}, {0.930977, Null}, {0.931430, Null}} *)


This is very slow, and it is the parallel kernels that take so long to evaluate it!


Now let's change data to


data = Table[rule[1, 1], {1000}];

DistributeDefinitions[data]

ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.000197, Null}, {0.000212, Null}, {0.000347, Null}, {0.000192, Null},
{0.000220, Null}, {0.000327, Null}, {0.000331, Null}, {0.000197, Null}} *)


This is now very fast. So the culprit is Rule. But why? Is Rule overloaded in the parallel kernels? ParallelEvaluate[Information[Rule]] reveals nothing special.
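One could also probe the subkernels directly for any extra definitions attached to Rule (a sketch of my own; reading the values of a Protected symbol is allowed):


ParallelEvaluate[{Attributes[Rule], DownValues[Rule], UpValues[Rule], FormatValues[Rule]}]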




Does anyone have any ideas what might be going on here?




Update on 2014-03-04:


I received a reply about this from WRI support and they pointed out that the issue is mentioned in the documentation. It's the third entry under "Possible Issues" at DistributeDefinitions.


The example there describes the same problem with InterpolatingFunction objects. Quoting the relevant part of the documentation, without repeating the example code:



Certain objects with an internal state may not work efficiently when distributed. ... Alternatively, reevaluate the data on all subkernels.




For the example above the workaround is as simple as


ParallelEvaluate[data = data;]

This is very similar to @Michael's workaround in the other answer, but it is not even necessary to recreate the expressions from scratch. It is sufficient to re-assign the variable to itself (and thereby re-evaluate it on each subkernel).
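Presumably (I have not re-timed the original benchmark here) the same trick applies to the Dispatch example at the top: distribute dispatch, re-evaluate it once on every subkernel, and then keep ParallelMap from re-sending it by using DistributedContexts -> None:


DistributeDefinitions[dispatch];
ParallelEvaluate[dispatch = dispatch;]; (* re-evaluate the Dispatch table locally *)
ParallelMap[Select[# /. dispatch, # < .1 &] &,
  RandomInteger[{1, 20000}, 10000],
  DistributedContexts -> None] // AbsoluteTiming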

