parallelization - ParallelMap performs badly inside function



I'm trying to parallelize a very simple piece of code that performs a stochastic gradient descent optimization (on a dual-core CPU with hyperthreading, so 4 parallel kernels).


I use ParallelMap to compute the gradient over a randomly sampled subset of variables. I am puzzled by the fact that the EXACT SAME code suffers a dramatic performance drop when it is run inside a function rather than outside one.


These are the definitions; SGDgrad and SGDgradPar differ only in the use of Map vs. ParallelMap:


MyDer[strenghts_, efforts_, i_] := Module[{sum},
  sum = strenghts.efforts;
  2*strenghts[[i]]^2*efforts[[i]]/sum -
   strenghts[[i]] ((strenghts^2).(efforts^2))/sum^2
  ]

SGDgrad[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  Map[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp],
  {Length@strenghts, Length@strenghts[[1]]}]

SGDgradPar[strenghts_, efforts_, rsamp_] :=
 SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
   Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}]


As a test, this can be run on small random data (50 samples):


strenghts = RandomReal[1, {20, 10000}];
efforts = RandomReal[1, {20, 10000}];

rsamp = RandomSample[
   Flatten[Table[{i, j}, {i, Length@strenghts}, {j, Length@strenghts[[1]]}], 1],
   50];

I run the same code both as a function (SGDgrad and SGDgradPar) and as a plain line of code. For the parallelized version there is a 20x performance difference!


SGDgrad[strenghts, efforts, rsamp]; // AbsoluteTiming

SGDgradPar[strenghts, efforts, rsamp]; // AbsoluteTiming

{0.001407, Null}
{0.547823, Null}

SparseArray[
Map[# ->
MyDer[strenghts[[All, #[[2]]]],
efforts[[All, #[[2]]]], #[[1]]] &, rsamp], {Length@strenghts,
Length@strenghts[[1]]}]; // AbsoluteTiming

SparseArray[
ParallelMap[# ->
MyDer[strenghts[[All, #[[2]]]],
efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
Method -> "CoarsestGrained"], {Length@strenghts,
Length@strenghts[[1]]}]; // AbsoluteTiming

{0.001153, Null}
{0.026422, Null}


If I make the dataset bigger (50000 samples):


rsamp = RandomSample[
   Flatten[Table[{i, j}, {i, Length@strenghts}, {j, Length@strenghts[[1]]}], 1],
   50000];

Then timings for the same operations are:


{1.20238, Null}
{1.87731, Null}

{1.20966, Null}
{1.01134, Null}

The advantage of parallelization is very small (which I find strange) and appears only when batches are large. That advantage is completely lost if I parallelize inside a function rather than outside...


Can someone explain these differences? Am I doing something wrong?


I have already tried distributing the definitions of the variables and MyDer, and nothing changes.



Answer



Here is a much smaller example demonstrating the difference:


In[17]:= ParallelMap[# ->
      MyDer[strenghts[[All, #[[2]]]],
       efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
     Method -> "CoarsestGrained"]; // AbsoluteTiming

Out[17]= {0.065301, Null}

In[13]:= With[{strenghts = strenghts, efforts = efforts},
     ParallelMap[# ->
        MyDer[strenghts[[All, #[[2]]]],
         efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
      Method -> "CoarsestGrained"]
     ]; // AbsoluteTiming

Out[13]= {0.488711, Null}

When you use a function (or With, as above), strenghts and efforts are inlined into the pure function that you are mapping: the literal values of the arrays become part of the expression describing the function. The pure function thus becomes a huge expression, and that expression is sent to the subkernels once for each evaluation, which is slow.
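One way to see this (a sketch; f1 and f2 are names introduced here for illustration, and ByteCount measures in-memory size, a rough proxy for transfer cost) is to compare the sizes of the two expressions being mapped:

f1 = # -> MyDer[strenghts[[All, #[[2]]]],
     efforts[[All, #[[2]]]], #[[1]]] &;
f2 = With[{strenghts = strenghts, efforts = efforts},
   # -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &];

(* f1 holds only symbol references and is a few hundred bytes;
   f2 has both 20 x 10000 arrays inlined, so it is a few megabytes *)
ByteCount /@ {f1, f2}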


When you use global variables instead, they are sent (distributed; see DistributeDefinitions) once and re-used multiple times. The function that you are mapping does not contain the literal values of these arrays, only references to them. The function is still sent to the subkernels once for each evaluation, but this time it is a small expression that is quick to transfer.
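Accordingly, a possible workaround is sketched below (SGDgradPar2 is a hypothetical name, not from the original post): keep the large arrays in global variables, distribute them once, and have the function passed to ParallelMap refer to them by name.

(* Distribute the large arrays and MyDer to the subkernels once *)
DistributeDefinitions[strenghts, efforts, MyDer];

(* The mapped function refers to the globals by name, so the expression
   sent to the subkernels for each evaluation stays small *)
SGDgradPar2[rsamp_] :=
 SparseArray[
  ParallelMap[# -> MyDer[strenghts[[All, #[[2]]]],
      efforts[[All, #[[2]]]], #[[1]]] &, rsamp,
   Method -> "CoarsestGrained"],
  {Length@strenghts, Length@strenghts[[1]]}]

SGDgradPar2[rsamp]; // AbsoluteTiming

Note that the distributed values are copies: if strenghts or efforts change in the main kernel (as they would during an SGD loop), they must be re-distributed.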





The advantage of parallelization is very small (which I find strange) and appears only when batches are large. That advantage is completely lost if I parallelize inside a function rather than outside...



In Mathematica, parallelization involves explicitly transferring data between the main kernel and the subkernels. This transfer is expensive (i.e. it takes a long time). Parallelization is only worth it if the computation takes considerably longer than the data transfer.
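The overhead can be seen in isolation with a toy comparison (a sketch; the sizes are arbitrary): cheap per-item work, where transfer dominates, versus expensive per-item work, simulated here with Pause, where the subkernels have enough to do.

(* Cheap per-item work: ParallelMap is often slower than Map,
   because transferring the data costs more than the computation *)
Map[#^2 &, Range[10^5]]; // AbsoluteTiming
ParallelMap[#^2 &, Range[10^5]]; // AbsoluteTiming

(* Expensive per-item work: with 4 subkernels the parallel version
   should finish in roughly a quarter of the time *)
Map[(Pause[0.01]; #) &, Range[100]]; // AbsoluteTiming
ParallelMap[(Pause[0.01]; #) &, Range[100]]; // AbsoluteTiming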


