
performance tuning - AbsoluteTiming affected by surrounding code


In the process of simplifying code posted for another question, I noticed that AbsoluteTiming returned different results for the same code being timed, Range[0, 99], when the surrounding code was different. The difference was consistent.


What is going on? Is Map caching and Do-AppendTo not? Is a call to Range[0, 99] close enough in cost to retrieving the value from the cache that the sum of the timings is only off by ~0.01?


I changed the order, Map first and then Do-AppendTo, and Map was still faster by ~0.01.


I also tried substituting Range[0, Mod[i++, 10000]] to negate any caching effect, with the same results. I timed different code as well and observed the same difference.
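For reference, here is a minimal sketch of that cache-busting variant (the counter i, its initialization, and the name times3 are my additions, not part of the original test):

iterations = 10000;
i = 0;
times3 = Table[
   (* a different upper limit on each call, so no two calls can share a cached result *)
   AbsoluteTiming[Range[0, Mod[i++, 10000]]][[1]],
   {iterations}];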




iterations = 10000;

(* collect one AbsoluteTiming value per call using Do and AppendTo *)
times1 = {};
Do[
  AppendTo[times1, AbsoluteTiming[Range[0, 99]][[1]]],
  {iterations}];

(* collect the same timings using Map *)
times2 = Map[
  AbsoluteTiming[Range[0, 99]][[1]] &,
  Range[1, iterations]];



PrintStats[times_] :=
Print["Sum: ", Fold[Plus, 0., times], " Min: ", Min[times],
" Max: ", Max[times], " Mean: ", Mean[times], " StdDev: ",
StandardDeviation[times]]

PrintStats[times1];
ListPlot[times1, PlotRange -> All]
Histogram[times1]

PrintStats[times2];

ListPlot[times2, PlotRange -> All]
Histogram[times2]

Summary Results of Five Runs:


Do-AppendTo:


Sum: 0.042083  Min: 3.*10^-6  Max: 0.000029  Mean: 4.*10^-6      StdDev: 0.*10^-7
Sum: 0.039829  Min: 2.*10^-6  Max: 0.000019  Mean: 4.*10^-6      StdDev: 0.*10^-7
Sum: 0.041112  Min: 2.*10^-6  Max: 0.000030  Mean: 4.*10^-6      StdDev: 0.*10^-7
Sum: 0.042261  Min: 0.        Max: 0.000043  Mean: 4.2261*10^-6  StdDev: 1.07875*10^-6
Sum: 0.040704  Min: 3.*10^-6  Max: 0.000025  Mean: 4.*10^-6      StdDev: 0.*10^-7


Map:


Sum: 0.030643  Min: 2.*10^-6  Max: 0.000012  Mean: 3.*10^-6      StdDev: 0.*10^-7
Sum: 0.030974  Min: 2.*10^-6  Max: 0.000014  Mean: 3.*10^-6      StdDev: 0.*10^-7
Sum: 0.030379  Min: 0.        Max: 0.000012  Mean: 3.0379*10^-6  StdDev: 5.08615*10^-7
Sum: 0.030504  Min: 2.*10^-6  Max: 0.000012  Mean: 3.*10^-6      StdDev: 0.*10^-7
Sum: 0.030612  Min: 2.*10^-6  Max: 0.000012  Mean: 3.*10^-6      StdDev: 0.*10^-7

Detailed Results of First Run:


[ListPlot and Histogram of times1 and times2 from the first run]




Answer



As Szabolcs pointed out, my attempts at micro-benchmarking were running into the resolution of the timer on my system. My attempts were also incorrect in another way, I might add, but more on that later.


I was able to see the light only after scratching my head over Szabolcs' comment for a number of days while working on a performance package, when I finally came across $TimeUnit here. That gave me the idea to use Pause[10*$TimeUnit] as the code being tested in each case, since Pause[10*$TimeUnit] is clearly above the resolution of the timer on my system. In case you are curious, $TimeUnit is 1/100 on my system (2011 MBP 17" running Lion).
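If you want to check this on your own machine, something like the following should confirm that the pause sits well above the timer's resolution (exact values will differ):

$TimeUnit
(* 1/100 on my system *)

AbsoluteTiming[Pause[10*$TimeUnit]]
(* roughly {0.1, Null} -- ten timer ticks, comfortably measurable *)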


For those of you who want to play along at home, here is the updated micro-benchmarking code (note that it is still incorrect):


PrintHeader[] :=
  Print["        Total    Mean      Min       Max       StdDev"];

PrintStats[doTimes_List, mapTimes_List] :=
  (
   Print["Do    ", " ", doTotal = Total[doTimes], " ",
    doMean = Mean[doTimes], " ", Min[doTimes], " ", Max[doTimes],
    " ", StandardDeviation[doTimes]];
   Print["Map   ", " ", mapTotal = Total[mapTimes], " ",
    mapMean = Mean[mapTimes], " ", Min[mapTimes], " ", Max[mapTimes],
    " ", StandardDeviation[mapTimes]];
   Print["Do-Map", " ", doTotal - mapTotal, " ", doMean - mapMean,
    "\n"];
   )

PrintHeader[];

iterations = 1000;
Do[(
   (* Do-AppendTo version *)
   doTimes = {};
   Do[
    AppendTo[doTimes, AbsoluteTiming[Pause[10*$TimeUnit]][[1]]],
    {iterations}];

   (* Map version *)
   mapTimes = Map[
     AbsoluteTiming[Pause[10*$TimeUnit]][[1]] &,
     Range[1, iterations]];

   PrintStats[doTimes, mapTimes];
   ), {5}];



Results


        Total    Mean      Min       Max       StdDev
Do      100.339  0.100339  0.100027  0.101277  0.000330
Map     100.343  0.100343  0.100035  0.101294  0.000330
Do-Map  -0.004   -4.*10^-6

Do      100.351  0.100351  0.100036  0.101276  0.000327
Map     100.338  0.100338  0.100035  0.101247  0.000330
Do-Map  0.013    0.000013

Do      100.328  0.100328  0.100025  0.101328  0.000319
Map     100.346  0.100346  0.100027  0.101280  0.000332
Do-Map  -0.018   -0.000018

Do      100.347  0.100347  0.100029  0.101238  0.000330
Map     100.332  0.100332  0.100021  0.101235  0.000338
Do-Map  0.015    0.000015

Do      100.330  0.100330  0.100023  0.101258  0.000318
Map     100.327  0.100327  0.100028  0.101238  0.000324
Do-Map  0.003    3.*10^-6

The results are as expected: there is no longer a consistent difference between the Do and Map implementations.
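If you want something more formal than eyeballing the Do-Map rows, a two-sample t-test on the last pair of runs is one option. This is my addition rather than part of the original benchmark, and TTest needs a reasonably recent Mathematica version:

(* null hypothesis: the Do and Map timings share the same mean;
   a large p-value means no significant difference was detected *)
TTest[{doTimes, mapTimes}]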


Incorrect Micro Benchmarking Method


So why were my two implementations of micro-benchmarking above incorrect? I wanted to capture a timing value for each call of the code being tested over a large number of iterations, so that I could perform various analytics on them (sum, mean, min, max, std dev, plotting, histograms, ...).



I thought AbsoluteTiming was incredibly precise. When I evaluated the simple expression 1, AbsoluteTiming[1] returned {0.000014, 1}. 0.000014 is way more precise than $TimeUnit on my system, which is 0.01, right? Wrong!


Let's look at two different ways of micro-benchmarking the expression 1 for 1000000 iterations. But first, let's establish a base case to help us understand the results.




Base Case


iterations = 1000000;
oneTime = AbsoluteTiming[1][[1]];
expectation = oneTime*iterations;

Print["oneTime: ", oneTime, " expectation for ", iterations, ": ", expectation];


Results:


oneTime: 0.000014   expectation for 1000000: 14.



Collection of Individual AbsoluteTiming Values


individualTimes = Map[AbsoluteTiming[1][[1]] &, Range[1, iterations]];

Print["Total: ", Total[individualTimes], " Min: ",
 Min[individualTimes], " Max: ", Max[individualTimes], " Mean: ",
 Mean[individualTimes], " StdDev: ",
 StandardDeviation[individualTimes]];

Results:


Total: 2.26236  Min: 0.  Max: 0.000167  Mean: 2.26236*10^-6  StdDev: 6.79631*10^-7

The first thing we notice is that the Total (2.26236) is less than our expectation (14.). Why the difference? First look at the Min (0.) and Max (0.000167): they lie on either side of our oneTime (0.000014), which is good. If a significant number of the individual times are less than our oneTime value, that could explain the difference. It just so happens that Length[Select[individualTimes, # < oneTime &]] returned 999939.


Does anyone know how the Min can be 0.? Is it because the expression was cached? Even if it was cached, it still would have taken some time. Maybe the time it took is less than what AbsoluteTiming can observe. Hmmmm.
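One way to probe this is to look at the distinct values AbsoluteTiming actually reports; this is a sketch I would try, and the exact tallies are machine-dependent:

(* the measurements cluster on a handful of discrete tick values *)
Sort[Tally[individualTimes]]

(* count the measurements that are exactly zero *)
Count[individualTimes, 0.]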




One AbsoluteTiming Value for All Iterations


totalTime = AbsoluteTiming[Do [1, {iterations}]][[1]];


Print["totalTime: ", totalTime];

Results:


totalTime: 0.015782

Whoa, wait a minute. The totalTime (0.015782) is nowhere close to the Total of the individualTimes (2.26236), not to mention our expectation (14.)! I must have messed something up. Okay, let me think about this: could it be that AbsoluteTiming is not as precise as I originally thought? Ding, ding, ding, give that man a cigar!


As the chaps at Wolfram say here, AbsoluteTiming does not have precision beyond $TimeUnit. So oneTime and all of the individually collected timing values were prone to error resulting from the resolution of the system's timer. Thanks again, Szabolcs!


The single timing value for all iterations is subject to the same error, but since AbsoluteTiming is called only once for the full 1000000 iterations, the error is amortized across all of them and has a much smaller effect per call.
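A rough back-of-the-envelope bound, assuming the worst case is one full timer tick of error paid once for the entire run:

(* worst-case timer error amortized per call over 1000000 iterations *)
N[$TimeUnit/1000000]
(* 1.*10^-8 second per call, versus errors up to ~$TimeUnit per call above *)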


Okay, yeah, I know, I should rerun the One AbsoluteTiming Value for All Iterations case with iterations = 1000000000 to get a more accurate value. That gives totalTime: 12.440098, which makes one call to 1 take 1.2440098*10^-8 seconds, a much, much smaller number than $TimeUnit. Poor AbsoluteTiming, it never stood a chance. ;)
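Putting that lesson into a helper: here is a sketch (the name timeAvg and the 0.1-second target are my choices, nothing built in) that doubles the iteration count until the total run comfortably exceeds the timer resolution, then reports the per-call average:

(* HoldFirst keeps expr unevaluated so the body, not its result, is timed *)
SetAttributes[timeAvg, HoldFirst];
timeAvg[expr_] := Module[{n = 1, t},
  (* double n until the whole run takes at least 0.1 s, i.e. ~10 timer ticks *)
  While[(t = First@AbsoluteTiming[Do[expr, {n}]]) < 0.1, n *= 2];
  t/n]

Evaluating timeAvg[1] should land in the same ballpark as the 1.2440098*10^-8 seconds measured above.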


