I'm using ParallelTable[] to calculate a function over a range of my parameters ($\omega,\ell$). This seems to be working well (in terms of speed increase) except for some strange precision issues.
I set $MinPrecision = 40 at the start of the notebook, and define my working precision, accuracy goal, and precision goal (used for some computations later in the notebook) as
wp=$MinPrecision;
ac=$MinPrecision-8;
pg=wp/2;
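With $MinPrecision = 40, these evaluate to wp = 40, ac = 32, and pg = 20.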
The rest of my code defines some helper functions before going on to the main function (the one I feed into ParallelTable). This main function uses the helper functions along with NDSolve and NIntegrate to do some computations, and it's to these that I pass WorkingPrecision -> wp, AccuracyGoal -> ac, PrecisionGoal -> pg.
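For concreteness, the options are passed to those functions in the usual way; the sketch below uses a placeholder integrand rather than my actual function:
NIntegrate[Exp[-r], {r, 30, 40},
 WorkingPrecision -> wp, AccuracyGoal -> ac, PrecisionGoal -> pg]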
I have used DistributeDefinitions on all my variables, helper functions, and the main function, and even on $MinPrecision. All my initial conditions for NDSolve also have N[..., wp] wrapped around them, but when I use ParallelTable[...] I still get errors:
Precision::precsm: Requested precision 39.153977328439204` is smaller than $MinPrecision.
I don't get any such error when I simply use Do[...] or Table[...] on my main function, and indeed the results differ at the 33rd decimal place, which is roughly the decimal place corresponding to the precision quoted above. I have no idea where this number comes from, unfortunately; it doesn't look like any of my inputs or outputs.
I just don't understand how this does not crop up with the non-parallelized forms, but does crop up with the parallelized one.
Minimal Example
This is much simpler than my code, but I think it still captures the setup and the problem:
Definitions:
M = 1;
$MinPrecision = 40;
wp = $MinPrecision;
ac = $MinPrecision - 8;
pg = wp/2;
rinf = 15000;
The main function to be parallelized:
dGenBessE[\[Omega]_?NumericQ, l_?IntegerQ] :=
 Block[{\[CapitalPhi]out, init, dinit},
  init = 0.00006630728036817007679447778124486601253323 +
     6.913102762021489976135937610105907096265*10^-6 I;
  dinit = -6.958226432502243329110910813935678705519*10^-7 +
     6.631148430876236565520382557577187147081*10^-6 I;
  \[CapitalPhi]out[\[Omega], l] = \[CapitalPhi] /.
    Block[{$MaxExtraPrecision = 100},
      NDSolve[{
        \[CapitalPhi]''[r] + (2 (r - M))/(r (r - 2 M)) \[CapitalPhi]'[r] +
          ((\[Omega]^2 r^2)/(r - 2 M)^2 - (l (l + 1))/(r (r - 2 M))) \[CapitalPhi][r] == 0,
        \[CapitalPhi][rinf] == N[init, wp],
        \[CapitalPhi]'[rinf] == N[dinit, wp]},
       \[CapitalPhi], {r, 30, 40},
       WorkingPrecision -> wp, AccuracyGoal -> ac,
       PrecisionGoal -> pg, MaxSteps -> \[Infinity]]][[1]];
  Print["For \[Omega]=", \[Omega], " and l=", l, ": "];
  Print["Kernel ID: ", $KernelID];
  Print["Precision of init: ", Precision[init]];
  Print["Precision of dinit: ", Precision[dinit]];
  Print["\[CapitalPhi]out at 35=", N[\[CapitalPhi]out[\[Omega], l][35], wp]];
  Print["Precision of \[CapitalPhi]out: ",
   Precision[\[CapitalPhi]out[\[Omega], l][35]]];
  ]
Parallelization prerequisites:
LaunchKernels[4]
DistributeDefinitions[M, rinf, wp, ac, pg, $MinPrecision,dGenBessE];
Attempt to run it with Parallelize for two different values:
Parallelize[{dGenBessE[1/10, 0], dGenBessE[1/10, 1]}] // AbsoluteTiming
Result
For me this leads to a
Precision::precsm: Requested precision 38.95475956978393` is smaller than $MinPrecision. Using $MinPrecision instead.
Answer
I think your problem arises because the value of $MinPrecision is not distributed correctly (if I remember correctly, none of the variables in the System` context are distributed automatically), so we have to do this by hand:
ParallelEvaluate[$MinPrecision = 40]
Parallelize[{dGenBessE[1/10, 0], dGenBessE[1/10, 1]}] // AbsoluteTiming
This should now work without problems.
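To double-check that the setting has actually reached the subkernels, you can query it on each of them; with four kernels launched this should return {40, 40, 40, 40}:
ParallelEvaluate[$MinPrecision]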
Also, personally I would write the last example as
ParallelMap[dGenBessE[1/10, #]&,{0,1}] // AbsoluteTiming
or
Parallelize@Scan[dGenBessE[1/10, #] &, {0, 1}]
depending on whether you need to return values or not.
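The same fix carries over to the full ParallelTable scan over \[Omega] and l from the question; the ranges below are only placeholders for your actual parameter grid:
ParallelTable[dGenBessE[\[Omega], l], {\[Omega], 1/10, 1/2, 1/10}, {l, 0, 2}]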