
parallelization - ParallelMap performs badly inside function



I'm trying to parallelize a very simple code that performs a stochastic gradient descent optimization (on a dual core cpu w. hyperthreading, so 4 parallel kernels).


I use ParallelMap to compute the gradient over a randomly sampled subset of variables. I am puzzled that the EXACT SAME definition performs dramatically worse when used inside a function than outside one.


These are the definitions; SGDgrad and SGDgradPar differ only in the use of Map vs. ParallelMap:


MyDer[strenghts_, efforts_, i_] := Module[{sum},
  sum = strenghts.efforts;
  2*strenghts[[i]]^2*efforts[[i]]/sum -
    strenghts[[i]] ((strenghts^2).(efforts^2))/sum^2
  ]

SGDgrad[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  Map[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp],
  {Length@strenghts, Length@strenghts[[1]]}]

SGDgradPar[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
   Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}]


As a test, this can be run on small random data (50 samples):


strenghts = RandomReal[1, {20, 10000}];
efforts = RandomReal[1, {20, 10000}];

rsamp = RandomSample[Flatten[Table[{i, j}, {i, Length@strenghts}, {j,
Length@strenghts[[1]]}], 1], 50];

I run the same code both as a function (SGDgrad and SGDgradPar) and as a plain line of code. For the parallelized version there is a 20x performance difference!


SGDgrad[strenghts, efforts, rsamp]; // AbsoluteTiming

SGDgradPar[strenghts, efforts, rsamp]; // AbsoluteTiming

{0.001407, Null}
{0.547823, Null}

SparseArray[
Map[# ->
MyDer[strenghts[[All, #[[2]]]],
efforts[[All, #[[2]]]], #[[1]]] &, rsamp], {Length@strenghts,
Length@strenghts[[1]]}]; // AbsoluteTiming

SparseArray[
ParallelMap[# ->
MyDer[strenghts[[All, #[[2]]]],
efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
Method -> "CoarsestGrained"], {Length@strenghts,
Length@strenghts[[1]]}]; // AbsoluteTiming

{0.001153, Null}
{0.026422, Null}


If I make the dataset bigger (50000 samples):


rsamp = RandomSample[Flatten[Table[{i, j}, {i, Length@strenghts}, {j, 
Length@strenghts[[1]]}], 1], 50000];

Then timings for the same operations are:


{1.20238, Null}
{1.87731, Null}

{1.20966, Null}
{1.01134, Null}

The speedup from parallelization is very small (which I find strange) and appears only when batches are large. It vanishes completely if I parallelize inside a function rather than outside...


Can someone explain these differences? Am I doing something wrong?


I have already tried distributing the definitions of the variables and MyDer, and nothing changes.



Answer



Here is a much smaller example demonstrating the difference:


In[17]:= ParallelMap[# ->
     MyDer[strenghts[[All, #[[2]]]],
       efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
   Method -> "CoarsestGrained"]; // AbsoluteTiming

Out[17]= {0.065301, Null}

In[13]:= With[{strenghts = strenghts, efforts = efforts},
ParallelMap[# ->
MyDer[strenghts[[All, #[[2]]]],
efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
Method -> "CoarsestGrained"]
]; // AbsoluteTiming


Out[13]= {0.488711, Null}

When you use a function (or With, as above), strenghts and efforts are inlined into the pure function that you are mapping: the literal values of the arrays become part of the expression describing that function. The pure function thus becomes a huge expression, and it is sent to the subkernels once for each evaluation, which is slow.


When you use global variables instead, they are sent to the subkernels once (distributed; see DistributeDefinitions) and re-used across evaluations. The mapped function then contains only references to the arrays, not their literal values. It is still sent to the subkernels once per evaluation, but this time it is a small expression that transfers quickly.
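Based on this, one way to avoid the inlining is to let the mapped function refer to the global symbols directly and distribute them once. This is a sketch, not code from the original post, and SGDgradParShared is a hypothetical name:

```mathematica
(* Sketch: keep the big arrays out of the mapped pure function by
   referring to the global symbols and distributing them once.
   SGDgradParShared is a hypothetical name. *)
DistributeDefinitions[MyDer, strenghts, efforts];

SGDgradParShared[rsamp_] :=
 SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
   Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}]
```

Here the arrays cross the link only once, at distribution time; each subsequent ParallelMap transfers just the small function expression and rsamp.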





The speedup from parallelization is very small (which I find strange) and appears only when batches are large. It vanishes completely if I parallelize inside a function rather than outside...



In Mathematica, parallelization requires explicitly transferring data between the main kernel and the subkernels. This transfer is expensive (i.e. it takes a long time), so parallelization is only worth it if the computation takes considerably longer than the data transfer.
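A minimal illustration of this trade-off (my own example, not from the original post; exact timings are machine-dependent):

```mathematica
(* For cheap per-element work, the transfer overhead can make
   ParallelMap slower than plain Map. Timings are machine-dependent. *)
data = RandomReal[1, 10^6];
AbsoluteTiming[Map[Sqrt, data];]          (* cheap work, no transfer  *)
AbsoluteTiming[ParallelMap[Sqrt, data];]  (* data must cross the link *)
```

With per-element work this cheap, the parallel version typically loses; it only pays off when each element's computation is expensive relative to its transfer cost.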


