numerics - Why is MainEvaluate being used when LinearSolve can be compiled?


According to this question, LinearSolve can be compiled. However, CompilePrint[] shows a call to MainEvaluate[], yet no warning is generated. The presence of MainEvaluate[] suggests that LinearSolve is not actually compilable, but the absence of any warning is surprising, so something more subtle appears to be going on. Consider the following.


In[1]:= SetSystemOptions[
   "CompileOptions" -> "CompileReportExternal" -> True];

In[2]:= << CompiledFunctionTools`

In[3]:= v2 = Compile[{{m, _Real, 2}, {v, _Real, 1}},
   LinearSolve[m, v]
   ];


In[4]:= CompilePrint[v2]

Out[4]= "
2 arguments
3 Tensor registers
Underflow checking off
Overflow checking off

Integer overflow checking on
RuntimeAttributes -> {}

T(R2)0 = A1
T(R1)1 = A2
Result = T(R1)2

1 T(R1)2 = MainEvaluate[ Hold[LinearSolve][ T(R2)0, T(R1)1]]
2 Return
"


No warnings are generated, yet the CompilePrint output contains a call to MainEvaluate[], and I am not sure why.


There is a much clearer warning that compilation fails when one passes options to LinearSolve inside Compile. Consider the following:


In[5]:= v3 = Compile[{{m, _Real, 2}, {v, _Real, 1}},
   LinearSolve[m, v, Method -> "Cholesky"]
   ]

During evaluation of In[5]:= Compile::extscalar: Method->Cholesky cannot be compiled and will be evaluated externally. The result is assumed to be of type Integer. >>

During evaluation of In[5]:= Compile::exttensor: LinearSolve[m,v,Method->Cholesky] cannot be compiled and will be evaluated externally. The result is assumed to be a rank 2 tensor of type Real. >>

Also, CompilePrint[] gives the following:


In[6]:= CompilePrint[v3]

Out[6]= "
2 arguments
1 Integer register
3 Tensor registers
Underflow checking off
Overflow checking off
Integer overflow checking on
RuntimeAttributes -> {}

T(R2)0 = A1
T(R1)1 = A2
Result = T(R2)2

1   T(R2)2 = MainEvaluate[ Function[{m, v}, LinearSolve[m, v, Method -> Cholesky]][ T(R2)0, T(R1)1]]
2   Return
"

Questions:



  • If LinearSolve can't be compiled, why is there no warning in the default case? Is there something more subtle going on (e.g. are some parts of the process compiled)?

  • If so, how can one use the Method option within the compiled function while ensuring that whatever can be compiled actually is?




Answer



acl already posted the crucial information needed to solve this conundrum (namely, the definition of Internal`CompileValues[LinearSolve]), but wished to delete his post since he had not interpreted it as giving the complete answer. I therefore re-post the observation below, along with a summary of what it means.


The input,


Internal`CompileValues[];  (* trigger autoloading of the definitions *)
ClearAttributes[Internal`CompileValues, ReadProtected];  (* make them readable *)
Internal`CompileValues[LinearSolve]

yields:


HoldPattern[Internal`CompileValues[LinearSolve]] :> {
  HoldPattern[
    LinearSolve[
      System`CompileDump`x_?(Internal`TensorTypeQ[Real, {_, _}]),
      System`CompileDump`b_?(Internal`TensorTypeQ[Real, {_}])]
  ] :> _?(Internal`TensorTypeQ[Real, {_}]),
  HoldPattern[
    LinearSolve[
      System`CompileDump`x_?(Internal`TensorTypeQ[Complex, {_, _}]),
      System`CompileDump`b_?(Internal`TensorTypeQ[Complex, {_}])]
  ] :> _?(Internal`TensorTypeQ[Complex, {_}])
}

Briefly put, this tells us that when the compiler sees a function call like LinearSolve[x, b], it knows that:



  • when x is a real matrix and b is a real vector, the result is a real vector

  • when x is a complex matrix and b is a complex vector, the result is a complex vector


As a result of this knowledge, the compiler can determine what type of register is needed to store the return value of LinearSolve in these two cases. This matters when further operations are carried out on the result: without type information, every subsequent operation on LinearSolve's return value would have to be performed by the interpreter via MainEvaluate for full generality, but because the type of the result is known in advance, such operations can be compiled instead.

However, since LinearSolve is a highly optimized top-level function, compilation offers no benefit outside of this scenario. Knowing the return type is of no value when LinearSolve[x, b] is the entire body of the compiled function, since in that case the operation may as well have been performed by the interpreter anyway.
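One can see this in action by performing further arithmetic on the solution vector. In the following sketch (a hypothetical variation on v2 above, not from the original post), the result of LinearSolve is squared and totalled; in its CompilePrint output one should find that only the LinearSolve call itself goes through MainEvaluate, while the subsequent squaring and Total appear as ordinary compiled instructions acting on a typed tensor register:

v4 = Compile[{{m, _Real, 2}, {v, _Real, 1}},
   Total[LinearSolve[m, v]^2]  (* result type is known, so this part compiles *)
   ];

CompilePrint[v4]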


As regards why LinearSolve[x, b, Method -> m] produces a message: the definition of Internal`CompileValues[LinearSolve] makes no provision for matching LinearSolve calls in which a Method is specified; it handles only the two-argument form LinearSolve[x, b].
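The effect can be illustrated with a simplified stand-in for the stored pattern (the real patterns above also test the tensor types of the arguments, which is omitted here): the two-argument call matches, while a call carrying a Method option does not, so the latter is handed to MainEvaluate wholesale:

MatchQ[Hold[LinearSolve[m, v]], Hold[LinearSolve[_, _]]]
(* True *)

MatchQ[Hold[LinearSolve[m, v, Method -> "Cholesky"]], Hold[LinearSolve[_, _]]]
(* False *)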


Conclusion



Just because Internal`CompileValues[func] is defined for some function func, one cannot assume that func can be called from compiled code without a MainEvaluate call. It means only that the compiler has information about func which it can incorporate into the compilation process as a whole.
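As a final sanity check, one can time the compiled wrapper against a direct call. Since v2 merely forwards to LinearSolve through MainEvaluate, one should expect it to be no faster than the top-level call (if anything, slightly slower due to the call-out overhead). A minimal sketch, with matrix sizes chosen arbitrarily (RepeatedTiming requires version 10.2 or later):

mat = RandomReal[1, {200, 200}];
vec = RandomReal[1, 200];

RepeatedTiming[LinearSolve[mat, vec];]  (* direct top-level call *)
RepeatedTiming[v2[mat, vec];]           (* compiled wrapper: MainEvaluate call-out *)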

