
numerical integration - More efficient method to compute moments of the Johnson SB distribution


Here is a very specific feature request. I need


Mean[JohnsonDistribution["SB", γ, δ, 0, 1]]

When I evaluate, e.g.,


Table[N[Mean[JohnsonDistribution["SB", γ, δ, 0, 1]]],
{γ, -10, 10}, {δ, 1, 10}] // TableForm

I get several messages of the form



NIntegrate::ncvb: "NIntegrate failed to converge to prescribed accuracy after
9 recursive bisections in Statistics`DistributionPropertiesDump`x$10142 near
{Statistics`DistributionPropertiesDump`x$10142} = {1.}. NIntegrate obtained
0.9960487815187503` and 0.03645902871201284` for the integral and error estimates."

which seemingly indicates that Mathematica tries to find the mean using numerical integration.


I could live with this somehow, but I also need several higher moments of that distribution (I want to estimate a distribution from them, since the maximum-likelihood method seems to be too sensitive to fluctuations in my empirical data). With higher moments the situation is even worse: when I try to fit a distribution by the method of moments, Mathematica keeps computing for so long that I had to abort it, and since I need very many instances of this computation, that is impractical.


So much for the motivation. Now here's what I've been able to find out about this:


In the Appendix to the initial paper where Johnson introduced his distributions, he gives a series expansion for the mean of his SB distribution, based on papers by Mordell from 1920 and 1933 (I could not find any online copy of the older one). Johnson also expresses the higher moments in terms of derivatives of the mean with respect to one of the parameters, and gives a sort of algorithm to compute them. His formula for the mean reads

$$\sqrt{2\pi}\,\delta\,e^{-\gamma^2/2}\int_{-\infty}^{\infty}\frac{e^{\pi i\psi t^2-2\pi\nu t}}{1+e^{-2\pi t}}\,dt,$$

where $\psi=2\pi i\delta^2$ and $\nu=\gamma\delta$.
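
As a sanity check of the formula as reconstructed here (with $\psi=2\pi i\delta^2$ the exponent $\pi i\psi t^2$ is just $-2\pi^2\delta^2 t^2$, so the integrand is real), one can compare it with Mathematica's built-in value; the helper name meanSB is only for illustration:

(* reconstructed Johnson mean formula, evaluated numerically *)
meanSB[γ_?NumericQ, δ_?NumericQ] :=
 Sqrt[2 Pi] δ Exp[-γ^2/2] NIntegrate[
   Exp[-2 Pi^2 δ^2 t^2 - 2 Pi γ δ t]/(1 + Exp[-2 Pi t]),
   {t, -Infinity, Infinity}]

(* compare with the built-in mean for, e.g., γ = 1, δ = 2
   (the built-in computation may still emit NIntegrate warnings) *)
{meanSB[1, 2], N[Mean[JohnsonDistribution["SB", 1, 2, 0, 1]]]}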


I cannot judge how efficient his procedures might be, but I do know that in recent years there has been an explosion of interest in Mordell integrals among number theorists, who have produced a lot of new information about them.



In particular, in Theorem 2.1 here one may find the following:

$$\frac{1}{2}\int_{-\infty}^{\infty}\frac{e^{\pi i\tau w^2-2\pi z w}}{\cosh(\pi w)}\,dw=\eta(z,\tau)+\frac{e^{\pi i z^2/\tau}}{\sqrt{-i\tau}}\,\eta\!\left(\frac{z}{\tau},-\frac{1}{\tau}\right),$$

where $$\eta(z,\tau)=\sum_{n=0}^{\infty}\left(\frac{-4}{n}\right)e^{\pi i n z}\,e^{\pi i n^2\tau/4},$$
with $\left(\frac{-4}{n}\right)$ the Jacobi symbol.
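
To illustrate how fast this series converges, here is a small sketch of the partial theta series, truncated; the helper names chi and partialTheta and the cutoff nMax are only for illustration, and chi[n] simply writes out the character $\left(\frac{-4}{n}\right)$ explicitly:

(* the character (-4/n): +1 for n ≡ 1 (mod 4), -1 for n ≡ 3 (mod 4), 0 for even n *)
chi[n_Integer] := Which[Mod[n, 4] == 1, 1, Mod[n, 4] == 3, -1, True, 0]

(* truncated partial theta series η(z, τ) *)
partialTheta[z_, τ_, nMax_: 12] :=
 Sum[chi[n] Exp[Pi I n z] Exp[Pi I n^2 τ/4], {n, 0, nMax}]

(* with τ = ψ = 2 Pi I δ^2, as in the mean formula above, the n-th term is
   suppressed by Exp[-Pi^2 n^2 δ^2/2]; already for δ = 1 very few terms matter *)
Table[N[Exp[-Pi^2 n^2/2]], {n, 1, 4}]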


This partial theta series has very good convergence properties; at least, I am sure it would give a much more efficient way to handle the moments of the Johnson SB distribution than the numerical integration that Mathematica seemingly uses now.


Would it be possible to incorporate this?



Answer



Using a real value with the desired precision for the last parameter:


(* k-th moments for γ = -10, ..., 10 and δ = 1, ..., 10; the last parameter
   is given as a 20-digit real rather than the exact number 1 *)
mom[k_Integer] := Outer[Moment[JohnsonDistribution["SB", ##, 0,
     SetPrecision[1., 20]], k] &, Range[-10, 10], Range[1, 10]]
mom[1] // MatrixForm

(image of the resulting matrix of first moments)
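
The same trick should then also cover the higher moments needed for a method-of-moments fit, for instance (the parameter values γ = 2, δ = 3 are arbitrary):

(* first four moments for one choice of γ and δ *)
Table[Moment[JohnsonDistribution["SB", 2, 3, 0, SetPrecision[1., 20]], k],
 {k, 1, 4}]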


