
parallelization - Effective parallel processing of large items


When trying to parallelize a task that involves large items, I cannot achieve efficient parallelization. To demonstrate this, we launch some kernels:


LaunchKernels[12];


I am facing this problem with real-world data that I cannot share here. Instead, we generate some random data and adjust the following parameters to a) reflect the problem and b) give reasonable timings:


itemsize = 20000; (* The size of an individual item *)
numberofitems = 200; (* Number of items to process *)
difficulty = 500; (* Processing difficulty: the part of an individual item that is actually processed *)

Now let's generate the random values:


randomValues = Parallelize@Table[
   Transpose@{RandomReal[1, itemsize], RandomReal[1, itemsize]},
   {numberofitems}
];


An individual item is 320 kB, and the full dataset is 64 MB in size:


ByteCount /@ {randomValues[[1]], randomValues}
(* {320152, 64032072} *)

Now we compare Map and ParallelMap with an arbitrary function that takes a reasonable amount of time per item, such as FindCurvePath.


map = Map[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues
]; // AbsoluteTiming


(* {11.9619, Null} *)

pmap = ParallelMap[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues,
   Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming

(* {23.6492, Null} *)


Surprisingly, the parallel version is twice as slow. Watching the CPU usage of the main kernel vs. the subkernels shows that most of the evaluation time is spent at the beginning in the main kernel; the actual processing in the subkernels then takes less than 2 seconds.
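
To see how much of this is pure transfer cost, we can map a trivial function over the same data; the timing should then be dominated by shipping the items to the subkernels (a rough check, since some per-item framework overhead remains):


ParallelMap[Length, randomValues,
   Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming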


Note that I intentionally made the items larger than what is actually processed, so the full items of 320 kB (2 × 20 000 random reals each) need to be distributed to the subkernels. If we reduce the item size to the part that is actually processed, things change drastically:


pmap2 = ParallelMap[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues[[All, 1 ;; difficulty]], (* only use a small part of the items *)
   Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming

(* {2.03152, Null} *)


Now we get a performance improvement as expected. The result is the same:


map === pmap === pmap2
(* True *)

Apparently, the distribution of the large items to the subkernels is the bottleneck. Note that, unlike in this demonstration, my real-world application does need all the data contained in the items.


I did not find any way to improve the parallel performance. Changing the method to "FinestGrained" or "CoarsestGrained" performs even worse. Any ideas on how to make parallel processing of large items efficient?
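
For completeness, an apparent alternative would be to ship the dataset once with DistributeDefinitions and parallelize over indices; here is a sketch of that approach (note that it still copies the full 64 MB to every subkernel, so it does not remove the transfer):


DistributeDefinitions[randomValues, difficulty, numberofitems];
ParallelTable[
   FindCurvePath[randomValues[[i, 1 ;; difficulty]]],
   {i, numberofitems}
]; // AbsoluteTiming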



Answer



Serializing the data to ByteArray objects seems to overcome the data-transfer bottleneck. The necessary functions, BinarySerialize and BinaryDeserialize, were introduced in version 11.1.
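
To see what actually gets shipped after serialization, compare an item with its serialized form; the ByteArray is a single flat blob rather than a structured expression (an illustrative check, not part of the benchmark below):


ByteCount /@ {randomValues[[1]], BinarySerialize[randomValues[[1]]]}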


Here is a simple function implementing a ParallelMap that serializes the data before transferring it to the subkernels and has each subkernel deserialize its items before processing:



ParallelMapSerialized[f_, data_, opts___] := ParallelMap[
   f[BinaryDeserialize@#] &, (* each subkernel deserializes its item before applying f *)
   BinarySerialize /@ data,  (* the main kernel serializes every item to a ByteArray *)
   opts
]

Running the benchmark again:


map = Map[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues
]; // AbsoluteTiming

(* {9.60715, Null} *)

pmap = ParallelMap[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues,
   Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming


(* {17.5937, Null} *)

pmapserialized = ParallelMapSerialized[
   FindCurvePath[#[[1 ;; difficulty]]] &,
   randomValues,
   Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming

(* {1.85387, Null} *)


pmap === pmap2 === pmapserialized
(* True *)

Serialization led to a performance increase of almost 10-fold compared to plain ParallelMap, and of roughly 5-fold compared to serial processing.
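
If the results themselves were large, the transfer back to the main kernel could become a bottleneck in the same way. A variant that also serializes the results on the way back might look like this (an untested sketch; ParallelMapSerialized2 is a hypothetical name, not from the benchmark above):


ParallelMapSerialized2[f_, data_, opts___] := BinaryDeserialize /@ ParallelMap[
   BinarySerialize[f[BinaryDeserialize@#]] &, (* serialize each result before sending it back *)
   BinarySerialize /@ data,
   opts
]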

