timing - Throwaway memoization makes built-ins faster?


Yesterday I got into an argument with @UnchartedWorks in the comment thread here. At first glance he had simply posted a duplicate of Marius' answer, but with some unnecessary memoization:


unitize[x_] := unitize[x] = Unitize[x]
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt]

and proposed the following test to justify his claim that his approach is faster:


SeedRandom[1];

n = -1;
data = RandomChoice[Range[0, 10], {10^8, 3}];

AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]

AbsoluteTiming[pick[data, unitize@data[[All, n]], 1] // Length]

(*
  {7.3081, 90913401}
  {5.87919, 90913401}
*)

A significant difference. Naturally, I was skeptical. The evaluation queue for his pick is (I believe) as follows:



  1. pick is inert, so evaluate the arguments.

  2. data is just a list, 1 is inert, data[[All, n]] quickly evaluates to a list

  3. unitize@data[[All, n]] writes a large DownValue...

  4. ...calling Unitize@data[[All, n]] in the process, returning the unitized list.

  5. Another large DownValue of the form pick[data] = *pickedList* is created (data here is, of course, meant in its evaluated form), never to be called again (unless, for some reason, we explicitly type pick[data]). A toy demonstration of this stray definition follows after this list.

  6. The *pickedList* is returned.
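
To make step 5 concrete, here is a toy demonstration (my own illustration, not code from the thread). Note that the cached rule is keyed on a single argument while every call supplies three, so the cache can never even be hit on a later call:


unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];

pick[{1, 2, 3}, {0, 1, 1}, 1]
(* {2, 3} *)

DownValues[pick]
(* {HoldPattern[pick[{1, 2, 3}]] :> {2, 3},
    HoldPattern[pick[xs_, sel_, patt_]] :> (pick[xs] = Pick[xs, sel, patt])} *)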



What about the evaluation queue for Pick[data, Unitize@data[[All, n]], 1]?



  1. Pick is inert.

  2. data becomes an inert list, 1 is inert, data[[All, n]] quickly evaluates to an inert list.

  3. Nothing happens here.

  4. Unitize@data[[All, n]] returns the unitized list.

  5. Nothing happens here either.

  6. The same step as before is taken to get us the picked list.



So, clearly pick has more things to do than Pick.


To test this out, I ran the following code:


Quit[]

$HistoryLength = 0;

Table[
  Clear[pick, unitize, data];
  unitize[x_] := unitize[x] = Unitize[x];
  pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];

  data = RandomChoice[Range[0, 10], {i*10^7, 3}];
  {Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
   pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First},
  {i, 5}]

Much to my surprise, pick is consistently faster!



{{0.482837, 0.456147},
 {1.0301, 0.90521},
 {1.46596, 1.35519},
 {1.95202, 1.8664},
 {2.4317, 2.37112}}

How can I protect myself from this black magic and make a representative test? Or should I embrace the black magic: is this real and a valid way to speed things up?


Update re: answer by Szabolcs


Reversing the order of the two timings like so:


{pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
 Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First}

gave me the following result:




{{0.466251, 0.497084},
{1.18016, 1.17495},
{1.34997, 1.42752},
{1.80211, 1.93181},
{2.25766, 2.39347}}

Once again, regardless of the order of evaluation, pick comes out faster in almost every run. Caching could be a suspect, and as mentioned in the comment thread of the other question, I did try throwing a ClearSystemCache[] between the pick and Pick timings, but that didn't change anything.
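
For completeness, that attempt looked roughly like this (a reconstruction from memory, not the exact code I ran):


First@AbsoluteTiming[pick[data, unitize@data[[All, -1]], 1];]
ClearSystemCache[];
First@AbsoluteTiming[Pick[data, Unitize@data[[All, -1]], 1];]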


Szabolcs suggested that I throw out the memoization and just use wrapper functions. I presume he meant this:


unitize[x_] := Unitize[x];

pick[xs_, sel_, patt_] := Pick[xs, sel, patt];

As before, on a fresh kernel I set $HistoryLength to 0 and run the Table loop. I get this:


{{0.472934, 0.473249},
{0.954632, 0.96373},
{1.42848, 1.43364},
{1.91283, 1.90989},
{2.37743, 2.40031}}

i.e. nearly equal results: sometimes one is faster, sometimes the other (the left column is pick, the right is Pick). The wrapper functions perform just as well as Pick in a fresh kernel.



I try again with the memoization as quoted at the top of this question:


{{0.454302, 0.473273},
{0.93477, 0.947996},
{1.35026, 1.4196},
{1.79587, 1.90001},
{2.24727, 2.38676}}

The memoized pick and unitize perform consistently better out of a fresh kernel. Of course, they use roughly twice the memory along the way.
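
A rough way to see that extra memory (my own addition, assuming the memoized run above has just finished):


ByteCount[data]
(* bytes held by the original array *)

ByteCount[DownValues[unitize]] + ByteCount[DownValues[pick]]
(* a comparable amount again, locked up in the cached definitions *)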



Answer



Cause of the speed-up



This is definitely not memoization. The reason for the observed speed-up is that for large arrays (e.g. 10^8 elements), memory clean-up operations can take a noticeable amount of time. If one doesn't free the memory, some operations run a bit faster.


Here is a simple example:


Let's create a large array, perform a calculation, and immediately discard the array:


AbsoluteTiming[
Total[ConstantArray[0, 10^8]];
]


{0.422509, Null}




It takes 0.42 seconds. Let's now do the same thing, but keep the array in memory:


AbsoluteTiming[
Total[garbage = ConstantArray[0, 10^8]];
]


{0.366755, Null}



This evaluation is a bit faster.


Let's check how long it takes to remove the large array:



AbsoluteTiming[
Remove[garbage]
]


{0.061982, Null}



Note that 0.06 seconds is roughly the difference between the two timings above. This example shows that if we keep the large array instead of removing it, our code can run faster, because we don't need to spend time freeing memory.
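
As a quick sanity check (my addition, not part of the original answer), MemoryInUse confirms that the array really is retained until it is explicitly removed:


before = MemoryInUse[];
garbage = ConstantArray[0, 10^8];
MemoryInUse[] - before
(* roughly 800 MB: 10^8 machine integers at 8 bytes each *)

Remove[garbage];
MemoryInUse[] - before
(* back near zero *)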


Your example


In the example you provide, removing the result of Unitize@data[[All, n]] from memory takes some time. If one saves this array in a redundant variable, one avoids the immediate memory clean-up, and the evaluation seems faster. In the case of pseudo-memoization, the Clear[pick, unitize] command takes extra time to free that memory, but this command is placed outside the AbsoluteTiming[] scope. That is why "memoization" seems to speed up the calculation.
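
One can make that hidden cost visible by timing the Clear call itself after a memoized run. This is my own sketch in the spirit of the answer's argument, not code from the original:


unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];
data = RandomInteger[{0, 10}, {10^8, 3}];
pick[data, unitize@data[[All, -1]], 1];

AbsoluteTiming[Clear[pick, unitize];]
(* this is the clean-up time that the original test never measured *)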



How to make a representative test?


You should put the Clear[pick, unitize] step inside the timed loop. This test shows that the pseudo-memoization technique is actually slower than the built-in functions:


Table[
  Clear[data];
  data = RandomInteger[{0, 10}, {i*10^7, 3}];
  {
    Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First
    ,
    Clear[pick, unitize];
    unitize[x_] := unitize[x] = Unitize[x];
    pick[xs_, sel_, patt_] := pick[xs, sel, patt] = Pick[xs, sel, patt];
    pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First
  },
  {i, 5}]

(*
  {{0.534744, 0.469538},
   {1.03776, 1.05842},
   {1.58536, 1.65404},
   {2.10422, 2.11284},
   {2.48129, 2.71405}}
*)

Technical note: as noted by Carl Woll in the comments, if one wants to measure the symbol-removal time using the following code:


In[1] := garbage = ConstantArray[0, 10^8];
In[2] := AbsoluteTiming[Remove[garbage]]

one should first set $HistoryLength to zero; otherwise the Out[1] variable will retain the contents of the large array. If Out[1] retains the data, Remove[garbage] only deletes the reference, not the data itself. Deleting a reference takes almost no time, so it does not reflect the deletion time of the large data.
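
Put together, the corrected measurement would look like this (assuming a fresh kernel):


$HistoryLength = 0; (* keep Out[1] from holding a second reference to the array *)
garbage = ConstantArray[0, 10^8];
AbsoluteTiming[Remove[garbage]]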

