performance tuning - Parallel function evaluation for minimal value


I am asking myself whether the evaluation of a function f in search of a minimal value over a discrete set of points can be parallelized (a brute-force scan for a potential minimum over a rectangular region with a specified resolution). Is this possible, or is it simply unsuitable for parallel computing due to the need to always compare against a reference value, which has to happen in the main kernel? As far as I understood from


Why won't Parallelize speed up my code?


the forced evaluation in the main kernel caused by SetSharedVariable can lead to a significant loss of speed, which I think is what happens in my horribly parallelized evaluation (see below). Any suggestions? I am pretty sure I am just not seeing the obvious perspective. I don't want to use NMinimize; I only want to rapidly scan (in parallel, if possible) a rectangular region with a specified resolution and pick out the minimal value. Sorry if this is a duplicate; I was not able to find an answer. Thanks.


Minimal example:


Function:



f = Sin[x - z + Pi/4] + (y - 2)^2 + 13;


Sequential evaluation with Do:


Clear[fmin]
n = 10^1*2;
fmin = f /. {x -> 0, y -> 0, z -> 0};
fmin // N
start = DateString[]
Do[
 ftemp = f /. {x -> xp, y -> yp, z -> zp};
 If[ftemp < fmin, fmin = ftemp];
 , {xp, 0, Pi, Pi/n}
 , {yp, -2, 4, 6/n}
 , {zp, -Pi, Pi, 2*Pi/n}
 ]
end = DateString[]
DateDifference[start, end, {"Minute", "Second"}]
fmin // N

Horribly parallelized evaluation:


Clear[fmin]
n = 10^1*2;
fmin = f /. {x -> 0, y -> 0, z -> 0};
fmin // N
SetSharedVariable[fmin];
start = DateString[]
ParallelDo[
 ftemp = f /. {x -> xp, y -> yp, z -> zp};
 If[ftemp < fmin, fmin = ftemp];
 , {xp, 0, Pi, Pi/n}
 , {yp, -2, 4, 6/n}
 , {zp, -Pi, Pi, 2*Pi/n}
 ]
end = DateString[]
DateDifference[start, end, {"Minute", "Second"}]
fmin // N

Answer



Don't compare against a single (shared) main-kernel variable (fmin) from each subkernel. Instead, let each kernel find the smallest value among the points it has checked, keeping its own private fmin. You then have $KernelCount candidates for the minimum; finally, select the smallest of these.
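
A minimal sketch of the per-kernel idea, reusing the f and grid from the question: ParallelTable distributes the outer xp values across the subkernels, each iteration returns the minimum over one x-slice, and a single Min in the main kernel picks the overall winner. No shared variable is touched during the parallel phase.

```mathematica
f = Sin[x - z + Pi/4] + (y - 2)^2 + 13;
n = 10^1*2;
(* each entry of the ParallelTable is the local minimum over one x-slice *)
fmin = Min[ParallelTable[
    Min[Table[f /. {x -> xp, y -> yp, z -> zp},
      {yp, -2, 4, 6/n}, {zp, -Pi, Pi, 2*Pi/n}]],
    {xp, 0, Pi, Pi/n}]] // N
```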


ParallelCombine is made for precisely this type of approach. It may be a good idea to use Method -> "CoarsestGrained", so that each kernel receives one large chunk of points rather than many small ones.
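
As a sketch of the ParallelCombine route (the names g and pts are mine, introduced for illustration; the function and grid are those of the question): each kernel computes the minimum of its chunk of points, and the main kernel combines the per-kernel results with Min.

```mathematica
g[{xp_, yp_, zp_}] := Sin[xp - zp + Pi/4] + (yp - 2)^2 + 13;
n = 10^1*2;
(* flatten the 3D grid of sample points into a single list *)
pts = Flatten[Table[{xp, yp, zp},
    {xp, 0, Pi, Pi/n}, {yp, -2, 4, 6/n}, {zp, -Pi, Pi, 2*Pi/n}], 2];
(* each kernel minimizes over one coarse chunk of pts;
   Min combines the per-kernel candidates in the main kernel *)
fmin = ParallelCombine[Min[g /@ #] &, pts, Min,
    Method -> "CoarsestGrained"] // N
```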

