
performance tuning - Parallel function evaluation for minimal value


I am asking myself whether the evaluation of a function f in search of a minimal value over a discrete set of values can be parallelized (a brute-force scan for a potential minimal value over a rectangular region with a specified resolution). Is it possible, or is it simply unsuitable for parallel computing, due to the need to always compare against a reference value, which has to happen in the main kernel? As far as I understood it in


Why won't Parallelize speed up my code?


the forced evaluation in the main kernel caused by SetSharedVariable can lead to a significant loss of speed, which I think is what happens in my horribly parallelized evaluation (see below). Any suggestions? I am pretty sure I am just not seeing the obvious perspective. I don't want to use NMinimize; I only want to scan a rectangular region rapidly (in parallel, if possible) with a specified resolution and pick out the minimal value. Sorry if this is a duplicate; I was not able to find an answer. Thanks.


Minimal example:


Function:



f = Sin[x - z + Pi/4] + (y - 2)^2 + 13;


Sequential evaluation with Do:


Clear[fmin]
fmin
n = 10^1*2;
fmin = f /. {x -> 0, y -> 0, z -> 0};
fmin // N
start = DateString[]
Do[
 ftemp = f /. {x -> xp, y -> yp, z -> zp};
 If[ftemp < fmin, fmin = ftemp];
 , {xp, 0, Pi, Pi/n}
 , {yp, -2, 4, 6/n}
 , {zp, -Pi, Pi, 2*Pi/n}
 ]
end = DateString[]
DateDifference[start, end, {"Minute", "Second"}]
fmin // N

Horribly parallelized evaluation:


Clear[fmin]

fmin
n = 10^1*2;
fmin = f /. {x -> 0, y -> 0, z -> 0};
fmin // N
SetSharedVariable[fmin];
start = DateString[]
ParallelDo[
 ftemp = f /. {x -> xp, y -> yp, z -> zp};
 If[ftemp < fmin, fmin = ftemp];
 , {xp, 0, Pi, Pi/n}
 , {yp, -2, 4, 6/n}
 , {zp, -Pi, Pi, 2*Pi/n}
 ]
end = DateString[]
DateDifference[start, end, {"Minute", "Second"}]
fmin // N

Answer



Don't compare against a single (shared) main-kernel variable (fmin) from each kernel. Instead, let each kernel find the smallest of the points it has checked, keeping its own private fmin. You'll then have $KernelCount candidates for the minimum; finally, select the smallest of these.
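One minimal sketch of this idea (assuming the same f and n as in the question): split the outermost x-iterator across the kernels with ParallelTable, so that each kernel scans its own slice of the grid and returns only its local minimum, and then combine the candidates with Min in the main kernel. The name localMins is just for illustration; no shared variable is needed.

localMins = ParallelTable[
   Min[Table[f /. {x -> xp, y -> yp, z -> zp},
     {yp, -2, 4, 6/n}, {zp, -Pi, Pi, 2*Pi/n}]],
   {xp, 0, Pi, Pi/n}];
fmin = Min[localMins]

Since nothing is written back to the main kernel during the scan, the per-iteration communication overhead of the SetSharedVariable approach disappears.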


ParallelCombine is made for precisely this type of approach. It may be a good idea to use Method -> "CoarsestGrained".
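For example, a sketch of how this could look (not a benchmarked solution): build the list of grid points first, then let ParallelCombine hand each kernel one large chunk, evaluate Min over each chunk, and combine the per-kernel results with Min. The helper g is a hypothetical name introduced here to apply the rule-based f to a point.

g[xp_, yp_, zp_] := f /. {x -> xp, y -> yp, z -> zp};
points = Flatten[
   Table[{xp, yp, zp},
    {xp, 0, Pi, Pi/n}, {yp, -2, 4, 6/n}, {zp, -Pi, Pi, 2*Pi/n}], 2];
fmin = ParallelCombine[Min[g @@@ #] &, points, Min,
   Method -> "CoarsestGrained"]

With "CoarsestGrained", each kernel receives a single large batch of points, which suits this workload: every evaluation costs roughly the same, so there is no benefit to fine-grained scheduling, and the communication overhead stays minimal.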

