
performance tuning - With versus Function


I've seen people compare Block, Module, and With as scoping constructs and try to figure out when to choose which. I've also seen posts about Function versus # versus down-value functions. I already understand the differences between those, but I still find myself hesitating between Function and With more often...


Does With have any advantage over Function in any case?


I am talking about constructions like these:


With[{sth=inj}, code code sth code]

versus


(code code # code)&[inj]


or


Function[sth, code code sth code][inj]

That is, apart from the fact that With's syntax is better suited for defining constants, since the variable name and its value sit "together", and other such questions of elegance.


Both are scoping constructs, and Function, among its other uses, can inject code just as With does. If With's variables are set with :=, you can always use the Function version with Hold attributes instead.


I know this doesn't hold in the infrequent situation where you would use a With with, e.g., two :=s and two =s interleaved, which can't be emulated by a single function attribute.
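
Something like this is what I mean (a sketch; the := declarations rely on the undocumented form mentioned in the answer below, and I'm assuming that mixing them with = behaves the way each form does individually):

With[{a = 1 + 1, b := 2 + 2, c = 3 + 3}, Hold[a, b, c]]
(* ==> Hold[2, 2 + 2, 6] *)

Holding only the middle argument is not expressible with HoldFirst, HoldRest, or HoldAll.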


In the (not very extensive) tests I've done, With doesn't seem to be faster.
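
(For reference: TimingAverage is not a built-in symbol. A helper roughly along these lines, a sketch and not necessarily the exact definition behind the numbers below, averages AbsoluteTiming over many repetitions:)

SetAttributes[TimingAverage, HoldAll];
TimingAverage[expr_] := Module[{n = 10^5},
  First[AbsoluteTiming[Do[expr, {n}]]]/n]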


In[127]:= TimingAverage[
With[{sum = 2 + 8},
{sum*x, sum*y}]]

Out[127]= 3.42178*10^-6

In[124]:= TimingAverage[
Function[{sum}, {sum*x, sum*y}][2 + 8]
]

Out[124]= 3.36266*10^-6


In[122]:= Module[{lalala},
lalala[sum_] := {sum*x, sum*y};
TimingAverage[
lalala[2 + 8]
]
]

Out[122]= 2.88582*10^-6

In[123]:= TimingAverage[
{#*x, #*y} &[2 + 8]
]

Out[123]= 2.67792*10^-6

Just out of curiosity: this very same test seems to be consistent with the common knowledge that Module is slower than Block, which in turn is slower than With.


In[119]:= TimingAverage[
Module[{sum = 2 + 8},
{sum*x, sum*y}]]


Out[119]= 7.19833*10^-6

In[120]:= TimingAverage[
Block[{sum = 2 + 8},
{sum*x, sum*y}]]

Out[120]= 4.58145*10^-6

As for the holding version of the tests, we get marginally better results with With:


In[129]:= TimingAverage[
Function[{sum}, {sum*x, sum*y}, HoldFirst][2 + 8]
]

Out[129]= 4.22668*10^-6


In[128]:= TimingAverage[
With[{sum := 2 + 8},
{sum*x, sum*y}]]


Out[128]= 3.8681*10^-6


In[131]:= Module[{lalala},
SetAttributes[lalala, HoldFirst];
lalala[sum_] := {sum*x, sum*y};
TimingAverage[
lalala[2 + 8]
]
]


Out[131]= 3.45039*10^-6

EDIT


As to style and elegance, I guess that functions are better suited when the expressions you're injecting are big and the code you're injecting them into is small, and With is better the other way around. Any thoughts on all this? Am I missing something?



Answer



To my mind, the differences are significant if somewhat obscure. The very big difference in evaluation was already described by @Sal. Here are several more subtle ones, which may nevertheless bite you. Functions go first.


Functions





  • Can come in two forms, Function[x, x^2] or Function[#^2] (the latter is equivalent to #^2 &), which are not always equivalent. The differences:



    • Functions with named arguments are scoping constructs and, as such, resolve variable collisions by renaming variables in inner scoping constructs when needed. Functions with Slot-s are not quite scoping constructs (they are in some respects but not in others; an example of the difference is below), and because of that they have some speed advantage.

    • Functions with named arguments represent a leaky functional abstraction (see the bottom of that answer). This matters because you can never be sure you won't run into trouble when passing such a function as an argument. Functions with slots are fine in this regard, but cannot always be nested.

    • Functions with slots have a form that takes an arbitrary number of arguments, such as Function[Null, Plus[##]]. Functions with named arguments have no such form.

    • Functions with slots can be made recursive, which can be a very powerful tool in some cases.
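
For example, #0 refers to the slot function itself, so a throwaway recursive factorial can be written as (a minimal sketch):

If[#1 <= 1, 1, #1 #0[#1 - 1]] &[5]
(* ==> 120 *)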




  • Functions with slots, not being full-fledged scoping constructs, have substitution semantics similar to replacement rules, in that they do not care about inner scoping constructs and possible name collisions.





Example:


With[{x=a},x+#]&[x]

(*
==> 2 a
*)

but



Function[{inj},With[{x=a},x+inj]][x]

(*
==> a+x
*)

(we could have used Module or another Function in place of With here). Which behavior is preferable depends on the situation, but more often than not the former is used unintentionally and leads to subtle bugs. In any case, one should be aware of this.




  • As mentioned, Function-s with slots can take an arbitrary number of arguments.





  • Functions can carry attributes. For example, this function will sort its arguments: Function[Null, {##}, Orderless]
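
For instance, a quick check of the sorting behavior:

Function[Null, {##}, Orderless][3, 1, 2]
(* ==> {1, 2, 3} *)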




  • Because functions can carry attributes, they can hold the arguments passed to them, for example Function[expr, Head[Unevaluated[expr]], HoldAll], and can also inject unevaluated arguments into their body. Functions with slots can do that for an arbitrary number of arguments as well; here is an example:
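
The linked example is not reproduced here, but a minimal slot-based version might look like this (a sketch using only the attribute mechanism just described):

Function[Null, Hold[##], HoldAll][1 + 1, 2 + 2]
(* ==> Hold[1 + 1, 2 + 2] *)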




  • Because of their SubValue-looking form of invocation, Function[x, x^2][y], and the fact that SubValues cannot be forced to hold outer groups of arguments, the call semantics of Function-s with Hold-attributes cannot easily be emulated by other means. This tells us that Function-s are very special objects, for which an exception was probably made in the main evaluation sequence.
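
A sketch of the contrast (h is a throwaway symbol used only for illustration; its attributes apply to its first argument group, but not to the outer one):

SetAttributes[h, HoldAll];
h[a_][b_] := Hold[a, b];
h[1 + 1][2 + 2]
(* ==> Hold[1 + 1, 4], the outer argument was evaluated anyway *)

Function[arg, Hold[arg], HoldAll][2 + 2]
(* ==> Hold[2 + 2] *)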





  • Because Function-s can carry Hold-attributes, they can implement pass-by-reference semantics. In particular, they can change the values of variables passed to them: a = 1; Function[x, x = 2, HoldAll][a]; a then gives 2.




  • Because of their evaluation semantics (elaborated by @Sal), Function-s can be used to implement currying.
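
A minimal currying sketch (plus is just a hypothetical name):

plus = Function[x, Function[y, x + y]];
plus[2]
(* ==> Function[y, 2 + y] *)
plus[2][3]
(* ==> 5 *)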




With


Ok, now time for With:





  • With is always a scoping construct. This means it cares about inner scoping constructs and renames their variables in case of conflicts. This is a good feature most of the time, but when it is not, there are ways to disable the renaming.
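
A standard illustration of the renaming (assuming y has no global value):

With[{x = y}, Function[y, x + y]]
(* ==> Function[y$, y + y$], the inner y is renamed so the injected y is not captured *)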




  • With normally does evaluate the right-hand sides of its variable declarations. I recently learned (from @Szabolcs) that there is a syntax which keeps them unevaluated, With[{a := Print[1]}, Hold[a]], but it is undocumented and it is not clear whether it is reliable.




  • By its nature, With will always require a fixed number of arguments





  • With cannot normally change the values of its "variables" (constants, really), unless, again, an undocumented form of it is used: a = 1; With[{b := a}, b = 3]; a then gives 3.




  • In principle, the core With functionality is nothing special, in the sense that it can be emulated with top-level code.
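
For instance, here is a rough, purely illustrative emulation of the injection for a single sym = value declaration (myWith is a hypothetical helper, and the renaming of inner scoping constructs is deliberately ignored):

ClearAll[myWith];
SetAttributes[myWith, HoldAll];
myWith[{sym_Symbol = val_}, body_] :=
  ReleaseHold[Hold[body] /. HoldPattern[sym] -> val]

myWith[{z = 2 + 8}, {z*x, z*y}]
(* ==> {10 x, 10 y} *)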




  • With can be used on the r.h.s. of delayed rules. This usage allows it to share variables between its body and a condition, e.g. x_ :> With[{y = x^2}, {x, y} /; y < 10]. This is a powerful feature, and one variation of it (the Trott-Strzebonski technique; a reference to the original source and some explanation can be found e.g. here) is a particularly powerful device in rule-based programming. This is a language feature and cannot easily be emulated (well, perhaps unless one uses RuleCondition). Function-s cannot be used in a similar fashion.
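
For instance, applying the rule above (a minimal check):

Cases[Range[5], x_ :> With[{y = x^2}, {x, y} /; y < 10]]
(* ==> {{1, 1}, {2, 4}, {3, 9}} *)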




Some conclusions



While Function and With have somewhat similar semantics regarding the way they bind their variables, these constructs are different. Even among Function-s themselves, functions with slots are substantially different from functions with named arguments, the main difference being that the former are not really full-fledged scoping constructs (more like macros, as was also noted in other answers), while the latter are.


Returning to With vs Function viewed as injecting devices: the sets of such use cases for the two have a significant overlap. In other words, in many such cases they can be used interchangeably. However, my personal inclination is to mostly use With as an injecting device, and to reach for Function for such purposes only in relatively special circumstances. One should also keep in mind the differences outlined above; sometimes those effects can lead to subtle errors.


Finally, from the viewpoint of programming paradigms, With plays well both with functional programming and with rules, while Function plays well mainly with functional constructs, so With seems to have a somewhat wider domain of applicability.

