When trying to parallelize a task that involves large items, I cannot achieve efficient parallelization. To demonstrate this, let's first launch some subkernels:
LaunchKernels[12];
I am facing this problem with real-world data that I cannot share here. Instead, we generate some random data and adjust the following parameters to (a) reflect the problem and (b) get reasonable timings:
itemsize = 20000; (* The size of an individual item *)
numberofitems = 200; (* Number of items to process *)
difficulty = 500; (* Processing difficulty: the part of an individual item that is actually processed *)
Now let's generate the random values:
randomValues = Parallelize@Table[
Transpose@{RandomReal[1, itemsize], RandomReal[1, itemsize]},
{numberofitems}
];
An individual item is 320 kB in size; the full dataset is 64 MB:
ByteCount /@ {randomValues[[1]], randomValues}
(* {320152, 64032072} *)
Now we compare Map and ParallelMap with an arbitrary function that takes a reasonable amount of time to process, such as FindCurvePath.
map = Map[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues
]; // AbsoluteTiming
(* {11.9619, Null} *)
pmap = ParallelMap[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues,
Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming
(* {23.6492, Null} *)
Surprisingly, the parallel version is twice as slow. Watching the CPU usage of the main kernel vs. the subkernels shows that most of the evaluation time is spent at the beginning in the main kernel; the actual processing in the subkernels then finishes in less than 2 seconds.
Note that I intentionally made the items larger than what is actually processed, so the full items of 320 kB each (2 × 20 000 random reals) need to be distributed to the subkernels. If we reduce the item size to the amount that is actually processed, things change drastically:
pmap2 = ParallelMap[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues[[All, 1 ;; difficulty]], (* only use a small part of the items *)
Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming
(* {2.03152, Null} *)
Now we get the expected performance improvement. The result is the same:
map === pmap === pmap2
(* True *)
Apparently, the distribution of the large items to the subkernels is the bottleneck. Note that unlike in this demonstration, my real-world application does need all the data that is present in the items.
I did not find any way to improve the parallel performance. Changing the Method to "FinestGrained" or "CoarsestGrained" performs even worse. Any ideas how to make parallel processing efficient?
Answer
Serialization of the data to a ByteArray object seems to overcome the data-transfer bottleneck. The necessary functions BinarySerialize and BinaryDeserialize were introduced in version 11.1.
Here is a simple function implementing a ParallelMap that serializes the data before transferring it to the subkernels and has the subkernels deserialize it before processing:
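As a quick illustration (a minimal sketch, not part of the benchmark below): BinarySerialize turns an expression into a ByteArray, and BinaryDeserialize restores the original expression exactly:

```mathematica
(* serialize an expression into a compact ByteArray *)
ba = BinarySerialize[{1.5, 2.5, 3.5}];
Head[ba]  (* ByteArray *)

(* deserializing restores the original expression *)
BinaryDeserialize[ba] === {1.5, 2.5, 3.5}  (* True *)
```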
ParallelMapSerialized[f_, data_, opts___] := ParallelMap[
  f[BinaryDeserialize@#] &, (* each subkernel deserializes an item before applying f *)
  BinarySerialize /@ data,  (* serialize every item in the main kernel first *)
  opts
]
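Usage mirrors ParallelMap; any options (such as Method) are passed through via opts. A tiny example call (hypothetical toy data, just to show the signature):

```mathematica
(* f receives the deserialized item, so this behaves like ParallelMap[Total, ...] *)
ParallelMapSerialized[Total, {{1, 2}, {3, 4}}]
(* {3, 7} *)
```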
Running the benchmark again:
map = Map[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues
]; // AbsoluteTiming
(* {9.60715, Null} *)
pmap = ParallelMap[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues,
Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming
(* {17.5937, Null} *)
pmapserialized = ParallelMapSerialized[
FindCurvePath[#[[1 ;; difficulty]]] &,
randomValues,
Method -> "ItemsPerEvaluation" -> 10
]; // AbsoluteTiming
(* {1.85387, Null} *)
pmap === pmap2 === pmapserialized
(* True *)
Serialization led to an almost 10-fold performance increase compared to plain ParallelMap, and a roughly 5-fold increase compared to serial processing.