
performance tuning - Considerations when determining efficiency of Mathematica code


I have two segments of code that do the same thing, and I want to determine which is more efficient.


What are the considerations when determining efficiency of Mathematica code?



  • Correctness/Equality of code segments

  • AbsoluteTiming vs Timing ... Why?

  • Clearing the cache


  • Memory footprint (speed vs size) ... Any suggestions on how to measure this?

  • More?


Any useful packages out there to assist in this?




Hypothetical Code Segment 1


numbers = {}; For[i = 0, i < 100, i++, AppendTo[numbers, i]]; numbers



Hypothetical Code Segment 2



Range[0, 99]



Testing Code


(* Test Equality *)
Print["Equality: ",
 (numbers = {}; For[i = 0, i < 100, i++, AppendTo[numbers, i]]; numbers) ==
  Range[0, 99]]

(* Timing Comparison *)

iterations = 10000;

times = Map[{
AbsoluteTiming[
numbers = {}; For[i = 0, i < 100, i++, AppendTo[numbers, i]]; numbers
][[1]],
AbsoluteTiming[
Range[0, 99]
][[1]]
} &, Range[1, iterations]];

{times1, times2} = Transpose[times];

PrintStats[times_] :=
 Print["Sum: ", Fold[Plus, 0., times], " Min: ", Min[times],
  " Max: ", Max[times], " Mean: ", Mean[times], " StdDev: ",
  StandardDeviation[times]]

PrintStats[times1];
ListPlot[times1, PlotRange -> All]

Histogram[times1]

PrintStats[times2];
ListPlot[times2, PlotRange -> All]
Histogram[times2]

Results:


(Plots and histograms of the two timing distributions appeared here.)



Answer



First off, Timing isn't as reliable as AbsoluteTiming for this purpose, because it counts only the CPU time the kernel spends computing and ignores everything else (waiting, Pause, time in other processes). Here is a particularly telling example. Keep in mind that neither function tracks rendering or formatting of output; both measure purely time spent computing in the kernel.



AbsoluteTiming[x = Accumulate[Range[10^6]]; Pause[x[[1]]]; resA = x + 3;]

==> {1.045213, Null}

Timing[x = Accumulate[Range[10^6]]; Pause[x[[1]]]; resB = x + 3;]

==> {0.031200, Null}

These are identical calculations, but Timing ignores the Pause, so its result is way off.


Now let's set up a toy example. Timing tests like yours are what I would typically do first when looking at efficiency.



f[x_Integer?Positive] := Accumulate[Range[x]]

g[x_Integer?Positive] :=
 Block[{result = ConstantArray[0, x], i},
  result[[1]] = 1;
  For[i = 2, i <= x, i++, result[[i]] = result[[i - 1]] + i];
  result
 ]

The AbsoluteTiming is quite different for these two approaches. Clearly the built-in function is preferable in this case.



AbsoluteTiming[resf = f[10^6];]

==> {0.015600, Null}

AbsoluteTiming[resg = g[10^6];]

==> {3.432044, Null}

And of course, we should test that these produce equivalent results.


resf == resg


==> True

Now I will mention that there are times when Equal will return False even though the results agree to the precision we care about; with machine-precision numbers, roundoff can make otherwise equivalent computations differ in the last few digits. This may be acceptable if we are only interested in low-precision, ball-park results.
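In that situation, one option is to compare with an explicit tolerance instead of exact Equal. A minimal sketch (approxEqual is an illustrative helper I'm introducing here, not a built-in, and the tolerance is arbitrary):

    approxEqual[a_List, b_List, tol_ : 10^-8] :=
     Dimensions[a] === Dimensions[b] && Max[Abs[a - b]] < tol

    (* compare the numericized results from above *)
    approxEqual[N[resf], N[resg]]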


As for memory consumption, I hope someone else might elaborate on this part. One way to test it is with MemoryInUse.


m1 = MemoryInUse[];
f[10^6];
MemoryInUse[] - m1

==> 8001424


m1 = MemoryInUse[];
g[10^6];
MemoryInUse[] - m1

==> 24000656

Again, the system function wins hands down.
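As a complement to diffing MemoryInUse, the size of a result by itself can be read off directly with the built-in ByteCount. A quick sketch, using f and g from above:

    (* ByteCount reports the storage required by an expression itself *)
    ByteCount[f[10^6]]   (* packed array: about 8 bytes per machine integer *)
    ByteCount[g[10^6]]   (* unpacked list: several times larger *)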


Edit:


The reason the second method showed such a substantial increase in MemoryInUse is because it doesn't produce a packed array. If we pack the output, it uses the same memory as the first. This tells me that MemoryInUse only tells us how much memory the result uses and nothing about the amount of memory used in intermediate computations.



m1 = MemoryInUse[];
Developer`ToPackedArray@g[10^6];
MemoryInUse[] - m1

==> 8001472
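To capture the memory used during the computation rather than just by the result, one option is the one-argument form of the built-in MaxMemoryUsed (assuming a recent enough version that supports it), which evaluates an expression and reports the peak number of bytes in use during that evaluation:

    (* peak kernel memory during evaluation, including intermediates *)
    MaxMemoryUsed[g[10^6];]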

Edit 2: Here is a function I put together that I'm sure can be made more effective and efficient. It uses a binary search technique with MemoryConstrained to find the amount of memory requested when evaluating an expression.


SetAttributes[memBinarySearch, HoldFirst]

memBinarySearch[expr_, min_, max_] :=
 Block[{low = min, high = max, med},
  med = IntegerPart[(min + max)/2];
  While[True,
   If[MemoryConstrained[expr, med] === $Aborted,
    low = med,  (* not enough memory: search upward *)
    high = med  (* enough memory: search downward *)
   ];
   med = IntegerPart[low + (high - low)/2];
   If[Equal @@ Round[{low, med, high}, 2], Break[]]
  ];
  med
 ]

Here it is applied to f and g from above...


memBinarySearch[f[10^6], 1, 10^9]

==> 16000295

memBinarySearch[g[10^6], 1, 10^9]


==> 62499999

Note that memBinarySearch is only accurate to about 2 bytes; because of the rounding in the convergence test (and the IntegerPart truncation), it does not converge to the exact byte count requested.

