
performance tuning - How to reduce the InterpolatingFunction building overhead?


I want a linear interpolation from the following example list:


list = {{0.0005023, 22.24}, {0.01457, 21.47}, {0.04922, 19.79}, 
{0.07484, 18.7}, {0.104, 17.55}, {0.1331, 16.52}, {0.1632, 15.49},
{0.1888, 14.52}, {0.2215, 13.31}, {0.2506, 12.16}, {0.3024, 10.01},
{0.3435, 8.304}, {0.3943, 6.036}, {0.4098, 5.329}, {0.4726, 2.384}};

The easiest way is to use:


Interpolation[list, InterpolationOrder -> 1]


but my list will be changing a lot, and the InterpolatingFunction takes a lot of time to build:


Timing[
 Table[Interpolation[list, InterpolationOrder -> 1][q],
  {q, 0.0006, 0.4, 0.00001}];
]

is 10× slower than:


test = Interpolation[list, InterpolationOrder -> 1];
Timing[Table[test[q], {q, 0.0006, 0.4, 0.00001}];]


How can I remove the overhead?




EDIT (following JxB's comment)


This compiled version is 5 times faster than the original, but I don't think Partition is being compiled (it shows up among all the Lists when I inspect the FullForm), and there is also a CopyTensor call that doesn't look good:


Compile[{{list, _Real, 2}, {value, _Real, 0}},
 Module[{temp},
  temp = Select[
     Partition[list, 2, 1],
     #[[1, 1]] <= value && #[[2, 1]] > value &
     ][[1]];
  temp[[1, 2]] +
   (value - temp[[1, 1]])/(temp[[2, 1]] - temp[[1, 1]])*
    (temp[[2, 2]] - temp[[1, 2]])
  ]
 ]
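To use the compiled version in a loop, the Compile expression above can first be bound to a name (here compLin, a name I'm introducing just for illustration) so that compilation happens once, not per call:

```mathematica
compLin = Compile[{{list, _Real, 2}, {value, _Real, 0}},
   Module[{temp},
    (* find the adjacent pair of points bracketing the query value *)
    temp = Select[Partition[list, 2, 1],
       #[[1, 1]] <= value && #[[2, 1]] > value &][[1]];
    (* linear interpolation between the two bracketing points *)
    temp[[1, 2]] +
     (value - temp[[1, 1]])/(temp[[2, 1]] - temp[[1, 1]])*
      (temp[[2, 2]] - temp[[1, 2]])
    ]];

compLin[list, 0.25]
```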

Any suggestions? (I don't want to compile to C.)



Answer



You can use a binary search with Compile. I did not manage to inline it (Compile complained endlessly about type mismatches), so I wrote the binary search directly into the compiled function. The binary-search code itself corresponds to the bsearchMin function from this answer.


Clear[linterp];
linterp =
  Compile[{{lst, _Real, 2}, {pt, _Real}},
   Module[{pos = -1, x = lst[[All, 1]], y = lst[[All, 2]], n0 = 1,
     n1 = Length[lst], m = 0},
    While[n0 <= n1,
     m = Floor[(n0 + n1)/2];
     If[x[[m]] == pt,
      While[x[[m]] == pt && m < Length[lst], m++];
      pos = If[m == Length[lst], m, m - 1];
      Break[];
     ];
     If[x[[m]] < pt, n0 = m + 1, n1 = m - 1]
    ];
    If[pos == -1, pos = If[x[[m]] < pt, m, m - 1]];
    Which[
     pos == 0, y[[1]],
     pos == Length[x], y[[-1]],
     True,
     y[[pos]] + (y[[pos + 1]] - y[[pos]])/(x[[pos + 1]] - x[[pos]])*
       (pt - x[[pos]])
    ]
   ],
   CompilationTarget -> "C"];
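As a quick sanity check (using the list from the question), linterp should agree with the built-in linear interpolation up to machine precision; the difference below should be essentially zero:

```mathematica
test = Interpolation[list, InterpolationOrder -> 1];

(* compare the compiled function against Interpolation at a sample point *)
linterp[list, 0.25] - test[0.25]
```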

This is about 20 times faster in my benchmarks:


AbsoluteTiming[
 Table[Interpolation[list, InterpolationOrder -> 1][q],
  {q, 0.0006, 0.4, 0.00001}];
]


{1.453,Null}




AbsoluteTiming[
Table[linterp[list,q],{q,0.0006,0.4,0.00001}];
]


{0.063,Null}
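Since the question asks to avoid compiling to C: dropping the CompilationTarget -> "C" option compiles the same code to the default Wolfram Virtual Machine instead. In my experience this is somewhat slower than the C target, but it still avoids the per-call InterpolatingFunction construction cost (a sketch with the same body as linterp above; I have not benchmarked this variant here):

```mathematica
linterpWVM =
  Compile[{{lst, _Real, 2}, {pt, _Real}},
   Module[{pos = -1, x = lst[[All, 1]], y = lst[[All, 2]], n0 = 1,
     n1 = Length[lst], m = 0},
    (* binary search for the interval containing pt *)
    While[n0 <= n1,
     m = Floor[(n0 + n1)/2];
     If[x[[m]] == pt,
      While[x[[m]] == pt && m < Length[lst], m++];
      pos = If[m == Length[lst], m, m - 1];
      Break[];
     ];
     If[x[[m]] < pt, n0 = m + 1, n1 = m - 1]
    ];
    If[pos == -1, pos = If[x[[m]] < pt, m, m - 1]];
    (* clamp outside the data range, interpolate linearly inside *)
    Which[
     pos == 0, y[[1]],
     pos == Length[x], y[[-1]],
     True,
     y[[pos]] + (y[[pos + 1]] - y[[pos]])/(x[[pos + 1]] - x[[pos]])*
       (pt - x[[pos]])
    ]
   ]];  (* no CompilationTarget option: runs in the WVM *)

linterpWVM[list, 0.25]
```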


