I'm trying to parallelize a very simple piece of code that performs a stochastic gradient descent optimization (on a dual-core CPU with hyperthreading, so 4 parallel kernels).
I use ParallelMap to compute the gradient over a randomly sampled subset of variables. I am puzzled by the fact that the exact same code suffers a dramatic performance drop when it is used inside a function rather than run directly.
These are the definitions, where SGDgrad and SGDgradPar differ only in the use of Map vs. ParallelMap:
MyDer[strenghts_, efforts_, i_] := Module[{sum},
  sum = strenghts.efforts;
  2*strenghts[[i]]^2*efforts[[i]]/sum -
   strenghts[[i]] ((strenghts^2).(efforts^2))/sum^2
  ]
SGDgrad[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  Map[# -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
   rsamp],
  {Length@strenghts, Length@strenghts[[1]]}]
SGDgradPar[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
   rsamp, Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}];
As a test, this can be run on a small (50 samples) random dataset:
strenghts = RandomReal[1, {20, 10000}];
efforts = RandomReal[1, {20, 10000}];
rsamp = RandomSample[
   Flatten[Table[{i, j}, {i, Length@strenghts}, {j, Length@strenghts[[1]]}], 1],
   50];
I run the same code both as functions (SGDgrad and SGDgradPar) and as plain lines of code. For the parallelized version there is a 20x performance difference!
SGDgrad[strenghts, efforts, rsamp]; // AbsoluteTiming
SGDgradPar[strenghts, efforts, rsamp]; // AbsoluteTiming
{0.001407, Null}
{0.547823, Null}
SparseArray[
  Map[# -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
   rsamp],
  {Length@strenghts, Length@strenghts[[1]]}]; // AbsoluteTiming
SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
   rsamp, Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}]; // AbsoluteTiming
{0.001153, Null}
{0.026422, Null}
If I make the dataset bigger (50000 samples):
rsamp = RandomSample[
   Flatten[Table[{i, j}, {i, Length@strenghts}, {j, Length@strenghts[[1]]}], 1],
   50000];
Then the timings for the same four operations, in the same order, are:
{1.20238, Null}
{1.87731, Null}
{1.20966, Null}
{1.01134, Null}
The advantage of parallelization is very small (which I find strange) and appears only when batches are large. That advantage is completely lost if I parallelize inside a function rather than outside.
Can someone explain these differences? Am I doing something wrong?
I have already tried distributing the definitions of the variables and MyDer, and nothing changes.
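For concreteness, by distributing the definitions I mean the standard call, along these lines:

DistributeDefinitions[strenghts, efforts, MyDer];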
Answer
Here is a much smaller example demonstrating the difference:
In[17]:= ParallelMap[# ->
    MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
   rsamp, Method -> "CoarsestGrained"]; // AbsoluteTiming
Out[17]= {0.065301, Null}
In[13]:= With[{strenghts = strenghts, efforts = efforts},
   ParallelMap[# ->
      MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &,
     rsamp, Method -> "CoarsestGrained"]
   ]; // AbsoluteTiming
Out[13]= {0.488711, Null}
When you use a function (or With above), strenghts and efforts are inlined into the pure function that you are mapping. These arrays are sent to the subkernels once for each evaluation: the literal value of the arrays is part of the expression describing the pure function, so the pure function becomes a huge expression that is slow to transfer.
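A quick way to make that size difference visible (using the 20 x 10000 test arrays from the question; f1 and f2 are just illustrative names) is to compare the two mapped functions with ByteCount:

f1 = # -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &;  (* refers to the arrays by name *)
f2 = With[{strenghts = strenghts, efforts = efforts},
   # -> MyDer[strenghts[[All, #[[2]]]], efforts[[All, #[[2]]]], #[[1]]] &];   (* arrays inlined as literal values *)
ByteCount /@ {f1, f2}
(* f2 is larger by roughly the combined size of the two arrays, i.e. a few MB here *)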
When you use variables, they are sent (distributed, see DistributeDefinitions) once and re-used multiple times. The function that you are mapping does not contain the literal value of these arrays; it only contains references to them. The function is still sent to the subkernels once for each evaluation, but this time it is a small expression that does not take long to transfer.
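If the parallel call has to live inside a function, one possible workaround is sketched below: copy the arguments into global symbols (gStrenghts and gEfforts are illustrative names, nothing special) and distribute those once, so that the mapped pure function again contains only small symbolic references:

SGDgradPar2[strenghts_, efforts_, rsamp_] := (
  gStrenghts = strenghts;  (* globals: the pure function below carries only these symbols *)
  gEfforts = efforts;
  DistributeDefinitions[gStrenghts, gEfforts];  (* one transfer to each subkernel *)
  SparseArray[
   ParallelMap[# ->
      MyDer[gStrenghts[[All, #[[2]]]], gEfforts[[All, #[[2]]]], #[[1]]] &,
    rsamp, Method -> "CoarsestGrained"],
   {Length@gStrenghts, Length@gStrenghts[[1]]}]
  )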
The advantage of parallelization is very small (which I find strange) and appears only when batches are large. That advantage is completely lost if I parallelize inside a function rather than outside.
In Mathematica, parallelization involves explicitly transferring data between the main kernel and the subkernels. This transfer is expensive (i.e. it takes a long time). Parallelization is only worth it if the computation takes considerably longer than the data transfer.
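A rough way to see that trade-off (a small sketch with made-up workloads; heavy and data are illustrative names, and absolute timings will vary by machine):

data = RandomReal[1, 10^4];
(* cheap per-element work: communication overhead dominates, ParallelMap often loses *)
Map[Sin, data]; // AbsoluteTiming
ParallelMap[Sin, data]; // AbsoluteTiming
(* heavier per-element work: computation dominates and the subkernels pay off *)
heavy[x_] := Total@Table[Sin[x + k], {k, 1000}];
Map[heavy, data]; // AbsoluteTiming
ParallelMap[heavy, data, Method -> "CoarsestGrained"]; // AbsoluteTiming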