parallelization - Are built-in Mathematica functions already parallelized?


I've been noticing something strange since updating to Mathematica 8: occasionally the MathKernel uses up to 800% CPU in Activity Monitor on OS X (I have 8 cores). I have no Parallel calls whatsoever, and this is in a single kernel, not across multiple kernels. My code is pretty much just Interpolation, Map, Do loops, and plotting routines.



I'm curious if some of the built-in Mathematica routines are in fact already parallel, and if so, which ones?



Answer



Natively multi-threaded functions


A lot of functions are internally multi-threaded (image processing, numerical functions, etc.). For instance:


In[1]:= a = Image[RandomInteger[{0, 255}, {10000, 10000}], "Byte"];

In[2]:= SystemOptions["ParallelOptions"]

Out[2]= {"ParallelOptions" -> {"AbortPause" -> 2., "BusyWait" -> 0.01,
  "MathLinkTimeout" -> 15., "ParallelThreadNumber" -> 4,
  "RecoveryMode" -> "ReQueue", "RelaunchFailedKernels" -> False}}

In[3]:= ImageResize[a, {3723, 3231}, Resampling -> "Lanczos"]; // AbsoluteTiming

Out[3]= {1.2428834, Null}

In[4]:= SetSystemOptions["ParallelOptions" -> {"ParallelThreadNumber" -> 1}]


Out[4]= "ParallelOptions" -> {"AbortPause" -> 2., "BusyWait" -> 0.01,
"MathLinkTimeout" -> 15., "ParallelThreadNumber" -> 1,
"RecoveryMode" -> "ReQueue", "RelaunchFailedKernels" -> False}

In[5]:= ImageResize[a, {3723, 3231}, Resampling -> "Lanczos"]; // AbsoluteTiming

Out[5]= {2.7461943, Null}
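
If you run experiments like this, it is worth restoring the thread count afterwards. A minimal sketch, not part of the original session, that saves the current setting before changing it and restores it later:

(* read the current setting before changing it *)
origThreads = "ParallelThreadNumber" /. ("ParallelOptions" /. SystemOptions["ParallelOptions"]);

SetSystemOptions["ParallelOptions" -> {"ParallelThreadNumber" -> 1}];

(* ... run single-threaded timing experiments ... *)

(* restore the saved setting *)
SetSystemOptions["ParallelOptions" -> {"ParallelThreadNumber" -> origThreads}];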

Functions calling optimized libraries



Mathematica also benefits from multi-threaded libraries such as Intel MKL:


In[1]:= a = RandomReal[{1, 2}, {5000, 5000}];

In[2]:= b = RandomReal[1, {5000}];

In[3]:= SystemOptions["MKLThreads"]

Out[3]= {"MKLThreads" -> 4}

In[4]:= LinearSolve[a, b]; // AbsoluteTiming


Out[4]= {4.9585104, Null}

In[5]:= SetSystemOptions["MKLThreads" -> 1]

Out[5]= "MKLThreads" -> 1

In[6]:= LinearSolve[a, b]; // AbsoluteTiming

Out[6]= {8.5545926, Null}


Note, however, that the same function may not be multi-threaded for every type of input.
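
For example (a hedged illustration, not from the original answer; exact timings vary by machine and version), LinearSolve on an exact integer system uses exact arithmetic and typically stays single-threaded, while the machine-precision version of the same system can dispatch to the multi-threaded MKL routines:

aExact = RandomInteger[{1, 10}, {200, 200}];
bExact = RandomInteger[{1, 10}, 200];

LinearSolve[aExact, bExact]; // AbsoluteTiming        (* exact arithmetic *)
LinearSolve[N[aExact], N[bExact]]; // AbsoluteTiming  (* machine precision, can use MKL *)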


Compiled functions


CompiledFunction objects, and any other functions that automatically use Compile, can also be multi-threaded by giving the Parallelization option to Compile.
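
A minimal sketch (the function body and data size are arbitrary choices): Parallelization -> True takes effect when the compiled function is also Listable and is applied to a list, in which case the evaluation is split across threads:

(* a listable, parallelized compiled function *)
cf = Compile[{{x, _Real}}, Sin[x] + Sqrt[1. + x^2],
   RuntimeAttributes -> {Listable}, Parallelization -> True];

data = RandomReal[1, 10^7];
cf[data]; // AbsoluteTiming   (* evaluated over the whole list, using multiple threads *)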


Caution




  1. Timing multi-threaded functions with AbsoluteTiming can sometimes be inaccurate (see the sketch after this list).





  2. The performance gain is usually not directly proportional to the number of threads; it depends on many different factors.




  3. Increasing the number of threads (with SetSystemOptions) beyond what your CPU supports (physical or logical cores) is not a good idea; $ProcessorCount reports the number of cores visible to the kernel (see the sketch below).
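
A hedged sketch touching on points 1 and 3 (the matrix size is arbitrary, and how Timing accounts for threaded library code can vary by platform): Timing reports CPU time, which on many builds accumulates across threads, while AbsoluteTiming reports wall-clock time, so the two can diverge for multi-threaded operations; $ProcessorCount gives a natural upper bound for "ParallelThreadNumber".

m = RandomReal[{1, 2}, {3000, 3000}];

m.m; // Timing          (* CPU time; can exceed the wall-clock time below *)
m.m; // AbsoluteTiming  (* wall-clock time *)

$ProcessorCount         (* number of cores visible to the kernel *)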



