performance tuning - Are there rules of thumb for knowing when RandomVariate is more efficient than RandomReal?


Please consider the following:


From a fresh Mathematica kernel, RandomVariate is more efficient for NormalDistribution, while RandomReal is more efficient for uniformly distributed noise.


RandomReal[NormalDistribution[0, 1], 100]; // Timing


{0.00535, Null}



RandomVariate[NormalDistribution[0, 1], 100]; // Timing



{0.000069, Null}



RandomReal[{0, 1}, 100]; // Timing


{0.00004, Null}



RandomVariate[UniformDistribution[], 100]; // Timing



{0.005236, Null}



But if I re-evaluate, I get Timing results that are much more similar:


RandomReal[NormalDistribution[0, 1], 100]; // Timing


{0.000051, Null}



RandomVariate[NormalDistribution[0, 1], 100]; // Timing



{0.000052, Null}



RandomReal[{0, 1}, 100]; // Timing


{0.00003, Null}



RandomVariate[UniformDistribution[], 100]; // Timing



{0.000058, Null}



Does caching the distribution definition really matter that much?


Obviously RandomVariate has the advantage that it can generate data from mixed (not only fully continuous or fully discrete) distributions. So it is more general. But if one is generating random numbers from standard distributions like the normal or the Poisson, is there any advantage – performance or otherwise – to using RandomVariate instead of RandomReal or RandomInteger?
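(To factor out the one-time loading cost when comparing the two, one approach — a sketch, not part of the original question — is to evaluate each function once as a warm-up and then average over many runs with RepeatedTiming:

RandomVariate[NormalDistribution[0, 1], 10]; (* warm-up: triggers symbol loading *)
RandomReal[NormalDistribution[0, 1], 10];

First@RepeatedTiming[RandomVariate[NormalDistribution[0, 1], 10^6];]
First@RepeatedTiming[RandomReal[NormalDistribution[0, 1], 10^6];]

RepeatedTiming repeats the evaluation and reports an average, so a single slow first call cannot dominate the measurement.)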



Answer



In general you should use RandomVariate for distributions and RandomReal for uniform reals. RandomVariate often calls RandomReal or RandomInteger under the hood, but this varies from distribution to distribution. Once the necessary symbols have been loaded on first evaluation, any timing differences should be negligible.


RandomVariate is intended to give you the flexibility of not having to think about whether a distribution is continuous, discrete, or mixed; it has also been optimized for each distribution in the system. You can always use RandomInteger or RandomReal if the type is known ahead of time (and is not mixed or fuzzy in some way, e.g. EmpiricalDistribution). But again, most of the overhead is in initializing the generator, so if you are generating a large number of random values you should not notice a big difference in timings once both RandomVariate and RandomReal/RandomInteger have been evaluated.
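For instance (an illustrative sketch, not from the original answer), a discrete distribution such as PoissonDistribution can be sampled either way; RandomVariate picks integer output automatically, while RandomInteger requires you to know the distribution is discrete:

RandomVariate[PoissonDistribution[4], 5]  (* e.g. {3, 5, 4, 2, 6} *)
RandomInteger[PoissonDistribution[4], 5]  (* same type of output *)

Both return machine integers here; the choice is a matter of whether you want Mathematica to infer the output type for you.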

