
parallelization - Launching remote slave kernels on HPC causes slowdown


I had an earlier post about how to run Mathematica on an HPC cluster (running PBSPro) by having a single MathKernel launch, say, 190 slaves remotely, so that I only use one MathKernel license. I now have this working, and the schematic of my notebook looks like:


(*First launch all the remote kernels*)



Needs["SubKernels`RemoteKernels`"];
numCore=10; (*change to num cores per node*)
PBSKernel[host_String]:=
LaunchKernels[RemoteMachine[host,"ssh -x -f -l `3` `1` /path/to/math -mathlink -linkmode Connect `4` -linkname '`2`' -subkernel -noinit",numCore]];
hostfile = Environment["PBS_NODEFILE"];

If[hostfile =!= $Failed, (* if we're running in a batch job *)
hosts = Import[hostfile, "List"];
$ConfiguredKernels = Join[$ConfiguredKernels, PBSKernel /@ hosts];
]


Export["kernels.txt", ParallelTable[{$MachineName, $KernelID}, {$KernelCount}], "Table"];

(*now they are launched evaluate something over them, first creating some output files,
one for each node, postfixed by the node name*)
$runningLogFile = "log.data";
ParallelEvaluate[
  If[FileExistsQ[$runningLogFile <> $MachineName],
    Get[$runningLogFile <> $MachineName],
    Export[$runningLogFile <> $MachineName, "", "Text"]
  ]
]
DistributeDefinitions[...];
ParallelMap[func, Tuples[{Range[1/10, 1, 1/10], Range[0, 59]}]];


where


func[{x_?NumericQ, y_?IntegerQ}] :=
  Block[{},
    g[r_, s_, t_] := (* do calc, essentially an integral, although it does depend on some previously defined external helper functions *);

    (* memoise g and write each new value to this node's log file, in a form Get can reload *)
    gwrite[r_, s_, t_] := gwrite[r, s, t] =
      g[r, s, t] /. v_ :> (
        PutAppend[Unevaluated[g[r, s, t] = v;], $runningLogFile <> $MachineName];
        v);

    (* write to file inside CriticalSection to stop kernels clashing *)
    CriticalSection[{logfileLock},
      Do[gwrite[tf, x, y], {tf, taufStart, taufEnd, intv}]];
  ]


Now this all works fine, in the sense of giving me the expected data, but I have noticed a massive slowdown in computation speed compared with my previous method, whereby I launched 19 MathKernels, one per compute node, and each of those in turn launched 10 slaves locally. This is despite being assigned exactly the same resources, i.e. 19 nodes, each with 10 cores and the same memory. The only difference is that I am now using one MathKernel instead of 19 to launch the slaves. The old method did my test portion of parameter space in 20 minutes, versus over 4 hours for the current method, which is a drastic slowdown that is too much for my longer jobs.


What could be causing this slowdown? Some ideas:


1) Remote slave kernels need to communicate with the master kernel and this is causing some bottleneck. I can't see why this would be a problem for me, as my func above is just an integral that should not depend on anything the other kernels are doing. (One way to reduce that traffic is sketched after this list.)


2) Somehow not all the remote cores are getting used? I know they are being launched, as they appear in the kernels.txt file at the start of the calculation and $KernelCount is 190, but is my method of employing ParallelMap actually using them all? Is there a way I can check this? (One possible check is sketched after this list.) I also know all the nodes are being used, as I append $MachineName to each data file and get 19 of them, as expected.


3) Anything else? Can MathLink just not handle this many remote kernels successfully?
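On idea 1: even though func itself is independent, every ParallelMap item still involves a round trip between the single master and a remote kernel over ssh, so with 190 remote kernels the default scheduling may spend much of its time in latency. A minimal sketch of one thing worth trying, using ParallelMap's Method option to hand each kernel one large batch up front (nothing here is from the original notebook beyond func and the tuple list):


(* batch the 600 tuples into one chunk per kernel instead of scheduling them in small pieces *)
DistributeDefinitions[func];
ParallelMap[func, Tuples[{Range[1/10, 1, 1/10], Range[0, 59]}],
  Method -> "CoarsestGrained"];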
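On idea 2: one rough way to check whether all 190 kernels are actually doing work is to tag each result with the kernel that produced it and tally the tags; a very uneven tally would point to idle or starved kernels. This is only an illustrative sketch, not part of the original notebook:


tuples = Tuples[{Range[1/10, 1, 1/10], Range[0, 59]}];
tagged = ParallelMap[{$KernelID, $MachineName, func[#]} &, tuples];
Tally[tagged[[All, 1 ;; 2]]] (* how many tuples each {kernel ID, machine} pair handled *)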


If I keep exactly the same script, clone it 19 times, split the list of Tuples equally between the 19 new scripts, and feed these to 19 nodes, each running its own MathKernel that launches 10 local slaves, then things massively speed up and I am back to 20 minutes.
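For comparison, a minimal sketch of what each of those 19 cloned scripts might contain, assuming a hypothetical NODE_INDEX environment variable (1 to 19) set by the submission script; the variable names here are illustrative and not from the original notebook:


allTuples = Tuples[{Range[1/10, 1, 1/10], Range[0, 59]}];
chunkSize = Ceiling[Length[allTuples]/19];
chunks    = Partition[allTuples, chunkSize, chunkSize, 1, {}]; (* 19 roughly equal slices *)
nodeIndex = ToExpression[Environment["NODE_INDEX"]];           (* hypothetical: 1..19 *)

LaunchKernels[10];              (* 10 local slave kernels on this node *)
DistributeDefinitions[func];
ParallelMap[func, chunks[[nodeIndex]]];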


EDIT:


After reviewing the output logs there is one odd message, although I'm not sure how relevant it is:


StringForm["From `1`:",  Parallel`Kernels`kernel[Parallel`Kernels`Private`bk[remoteKernel[SubKernels`RemoteKernels`Private`lk[LinkObject["49195@10.141.0.3,46878@10.141.0.3", 97, 97], "node012", {"node012", "ssh -x -f -l `3` `1` math -mathlink -linkmode Connect `4` -linkname '`2`' -subkernel -noinit"}, SubKernels`RemoteKernels`Private`speed$1327]], Parallel`Kernels`Private`id$1376, Parallel`Kernels`Private`name$1376], Parallel`Kernels`Private`ek[Parallel`Kernels`Private`nev$1377, Parallel`Kernels`Private`pb$1377, Parallel`Kernels`Private`rd$1377], Parallel`Kernels`Private`sk[Parallel`Kernels`Private`q$1378, Parallel`Kernels`Private`n0$1378, Parallel`Kernels`Private`n1$1378]]]

"DeleteFile::nffil: File not found during DeleteFile[/panfs/panasas01.panfs.cluster/username/out.datanode012]."

