Can you efficiently parallelize this? The parallel versions are much slower than the sequential version, and I'm not sure why. Does SetSharedVariable allow simultaneous reads from different kernels? It appears that it doesn't, even though the documentation says you should use CriticalSection to make separate reads and writes atomic and thread-safe.
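The pattern I mean from the documentation is roughly this (a minimal sketch; total and lock are just placeholder names, not part of my code below):
SetSharedVariable[total];
total = 0;
ParallelDo[
 CriticalSection[{lock}, total += i], (* only one subkernel at a time updates total *)
 {i, 100}]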
dispatch = Dispatch@Table[i -> RandomReal[1, 500], {i, 20000}];
Map[Select[# /. dispatch, # < .1 &] &,
RandomInteger[{1, 20000}, 10000]] (* < 3 secs *)
ParallelMap[Select[# /. dispatch, # < .1 &] &,
RandomInteger[{1, 20000}, 10000]] (* 50 secs *)
SetSharedVariable[dispatch]; ParallelMap[
Select[# /. dispatch, # < .1 &] &, RandomInteger[{1, 20000}, 10000],
DistributedContexts -> None] (* longer than 2 minutes *)
Answer
To answer your actual question: SetSharedVariable does not allow simultaneous reads. It forces the variable to be evaluated on the main kernel, effectively disabling the parallelization.
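You can see the per-read round trip directly. Here is a minimal sketch (shared and local are placeholder names) comparing repeated reads of a shared variable against reads of a distributed local copy:
SetSharedVariable[shared];
shared = RandomReal[1, 100];
local = RandomReal[1, 100];
DistributeDefinitions[local]; (* give each subkernel its own copy *)
ParallelEvaluate[AbsoluteTiming[Do[shared, {1000}];]] (* every read is serviced by the main kernel *)
ParallelEvaluate[AbsoluteTiming[Do[local, {1000}];]]  (* reads a local copy; fast *)
On a typical setup the shared-variable timing should be orders of magnitude larger, since every single read serializes through the link to the main kernel.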
A more interesting question for me is: why is ParallelMap so slow when not using SetSharedVariable? The following observation is not an answer, but it's too long for a comment.
Consider this simplified example:
n = 1500;
data = RandomInteger[{1, n}, 10000];
dp = Dispatch@Table[i -> RandomReal[1, 1000], {i, n}];
Map[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming
ParallelMap[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming
On my machine, the Map version takes about 5 seconds, regardless of the size of n (at least for n between 100 and 10000). However, the ParallelMap version depends strongly on the value of n: for n=100 it's much faster than Map, but for n=2000 it already takes a bit longer. If I remove the Dispatch@, it is faster.
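That is, the same test with the Dispatch@ wrapper removed (a sketch; timings will vary by machine):
dp = Table[i -> RandomReal[1, 1000], {i, n}]; (* same rules as before, minus Dispatch@ *)
ParallelMap[Select[# /. dp, # < 0.1 &] &, data]; // AbsoluteTiming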
ParallelEvaluate[dp]; is not slow, which suggests that it is not the transfer of dp to the parallel kernels that takes the time.
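One way to separate the transfer cost from the evaluation cost is to time the transfer by itself and confirm the data arrived; a sketch:
AbsoluteTiming[DistributeDefinitions[dp];] (* the one-off transfer to the subkernels *)
ParallelEvaluate[ByteCount[dp]]            (* confirm each subkernel holds a full-size copy *)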
Update: evaluating Rule on parallel kernels is very slow
Finally I managed to come up with a smaller and more enlightening test case for this slowdown. In a fresh kernel, evaluate:
LaunchKernels[]
data = Table[Rule[1, 1], {1000}];
AbsoluteTiming[Table[data, {1000}];] (* very fast *)
(* ==> {0.001288, Null} *)
DistributeDefinitions[data] (* ParallelEvaluate doesn't auto-distribute *)
ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.939300, Null}, {0.947640, Null}, {0.941881, Null}, {0.931997, Null},
{0.925231, Null}, {0.930565, Null}, {0.930977, Null}, {0.931430, Null}} *)
This is very slow, and it is the parallel kernels that take so long to evaluate it!
Now let's change data to use the undefined symbol rule instead of the built-in Rule:
data = Table[rule[1, 1], {1000}];
DistributeDefinitions[data]
ParallelEvaluate[AbsoluteTiming[Table[data, {1000}];]]
(* ==>
{{0.000197, Null}, {0.000212, Null}, {0.000347, Null}, {0.000192, Null},
{0.000220, Null}, {0.000327, Null}, {0.000331, Null}, {0.000197, Null}} *)
This is now very fast. So the culprit is Rule. But why? Is Rule overloaded in the parallel kernels? ParallelEvaluate[Information[Rule]] reveals nothing special.
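A couple of further checks along the same lines (a sketch; I'm not claiming these reveal the cause):
ParallelEvaluate[Attributes[Rule]]         (* should match the main kernel's {Protected, SequenceHold} *)
ParallelEvaluate[Length[DownValues[Rule]]] (* any definitions attached to Rule on the subkernels? *)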
Does anyone have any ideas what might be going on here?
Update on 2014-03-04:
I received a reply about this from WRI support, and they pointed out that the issue is mentioned in the documentation: it's the third entry under "Possible Issues" at DistributeDefinitions.
The example there describes the same problem with InterpolatingFunction objects. Quoting the relevant part of the documentation, without repeating the example code:
Certain objects with an internal state may not work efficiently when distributed. ... Alternatively, reevaluate the data on all subkernels.
For the example above, the workaround is as simple as
ParallelEvaluate[data = data;]
This is very similar to @Michael's workaround in the other answer, but it's not even necessary to recreate the expressions from scratch. It's sufficient to re-assign the variable to itself (and re-evaluate it in the process).
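The same re-evaluation trick should apply to the original dispatch example; a sketch, assuming the Dispatch table's internal state is what gets lost in transfer:
DistributeDefinitions[dispatch];
ParallelEvaluate[dispatch = dispatch;]; (* re-evaluate so each subkernel rebuilds the internal state *)
ParallelMap[Select[# /. dispatch, # < .1 &] &,
 RandomInteger[{1, 20000}, 10000],
 DistributedContexts -> None] (* None, so auto-distribution doesn't overwrite the rebuilt copies *)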