
calculus and analysis - How to efficiently find moments of a multinormal distribution?


Update: Starting from V10.0 the built-in Moment is fast enough for practical use.




I have a multinormal distribution with covariance matrix $\sigma$ and zero mean. I want to find the moment


$$ E[x_1^{r_1}x_2^{r_2}\cdots x_n^{r_n}] =\,? $$


Of course, there is a built-in function Moment, but it is quite slow for high-order moments:


Moment[MultinormalDistribution[{0, 0}, {{σ[1, 1], σ[1, 2]}, {σ[2, 1], σ[2, 2]}}], 
{20, 20}]; // AbsoluteTiming // First



1.604658



Is there a more efficient method? It is related to this question and my answer there.


I think that Isserlis' theorem is helpful. It says


$$ E[x_1x_2\cdots x_n] = \sum\prod E[x_ix_j] $$ if $n$ is even, and $0$ if $n$ is odd; by definition $E[x_ix_j]=\sigma_{ij}$. Here the notation $\sum\prod$ means summing over all distinct ways of partitioning $x_1,x_2,\ldots, x_n$ into unordered pairs.


For example, $x_1,x_2,x_3,x_4$ can be split into pairs as $$ (x_1,x_2),(x_3,x_4);\quad (x_1,x_3),(x_2,x_4);\quad (x_1,x_4),(x_2,x_3). $$ Therefore,


$$ E[x_1x_2x_3x_4]=\sigma _{1,2} \sigma _{3,4}+\sigma _{1,3} \sigma _{2,4}+\sigma _{1,4} \sigma _{2,3}. $$


If we want to calculate $E[x_1^2x_2x_3]$, we can set two indices equal (e.g. $x_4=x_1$) in the previous equation:


$$ E[x_1^2x_2x_3] = \sigma _{1,1} \sigma _{2,3} + 2 \sigma _{1,2} \sigma _{1,3}. $$


More examples:



$$ E[x_1^2x_2^2] = \sigma _{1,1} \sigma _{2,2} + 2 \sigma _{1,2}^2, $$


$$ E[x_1^3x_2] = 3 \sigma _{1,1} \sigma _{1,2}, $$


$$ E[x_1^4] = 3 \sigma _{1,1}^2. $$


For six variables there are 15 terms:


$$ E[x_1x_2x_3x_4x_5x_6]=\sigma _{1,2} \sigma _{3,4} \sigma _{5,6}+\sigma _{1,2} \sigma _{3,5} \sigma _{4,6}+\sigma _{1,2} \sigma _{3,6} \sigma _{4,5}+\sigma _{1,4} \sigma _{2,5} \sigma _{3,6}+\sigma _{1,5} \sigma _{2,4} \sigma _{3,6}+\sigma _{1,4} \sigma _{2,6} \sigma _{3,5}+\sigma _{1,6} \sigma _{2,4} \sigma _{3,5}+\sigma _{1,5} \sigma _{2,6} \sigma _{3,4}+\sigma _{1,6} \sigma _{2,5} \sigma _{3,4}+\sigma _{1,3} \sigma _{2,4} \sigma _{5,6}+\sigma _{1,4} \sigma _{2,3} \sigma _{5,6}+\sigma _{1,3} \sigma _{2,5} \sigma _{4,6}+\sigma _{1,5} \sigma _{2,3} \sigma _{4,6}+\sigma _{1,3} \sigma _{2,6} \sigma _{4,5}+\sigma _{1,6} \sigma _{2,3} \sigma _{4,5}. $$


As you can see, this statistical problem is just a combinatorics problem: find all ways to put $r_1$ balls of type $1$, $r_2$ balls of type $2$, ..., and $r_n$ balls of type $n$ into bins, where each bin takes exactly $2$ balls.
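This pair-partition combinatorics is easy to brute-force for small orders. Below is an illustrative sketch in Python (not part of the original Mathematica code; the names `pairings` and `isserlis_moment` are my own): it enumerates all pairings of the labeled copies of the variables and tallies the resulting $\sigma_{ij}$ monomials.

```python
from collections import Counter

def pairings(idx):
    """Yield every way to split a list of labeled items into unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        pair = tuple(sorted((first, rest[k])))
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [pair] + tail

def isserlis_moment(r):
    """E[x1^r1 ... xn^rn] by Isserlis' theorem: a Counter mapping a sorted
    tuple of (i, j) index pairs (a monomial in sigma_ij) to its coefficient."""
    idx = [i + 1 for i, ri in enumerate(r) for _ in range(ri)]
    if len(idx) % 2:                 # odd total degree: the moment vanishes
        return Counter()
    out = Counter()
    for p in pairings(idx):
        out[tuple(sorted(p))] += 1   # repeated copies merge into coefficients
    return out
```

For example, `isserlis_moment([2, 1, 1])` reproduces $E[x_1^2x_2x_3]=\sigma_{11}\sigma_{23}+2\sigma_{12}\sigma_{13}$.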



Answer



Explicit formula


wolfies' answer gave me the idea that one can derive an explicit formula. Here it is!


$$ E(x_1^{r_1}x_2^{r_2}\cdots x_n^{r_n}) = \sum_{(p)}\prod_{i}\frac{r_i!\,\sigma_{ii}^{p_{ii}}}{(2p_{ii})!!}\prod_{i<j}\frac{\sigma_{ij}^{p_{ij}}}{p_{ij}!}, $$

where the sum runs over all non-negative integer values of $p_{ii}$ and $p_{ij}$ satisfying the constraints


$$ \sum_{j=1}^{i-1}p_{ji}+2p_{ii}+\sum_{j=i+1}^{n}p_{ij}=r_i, \quad i=1,2,\ldots,n. $$
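Assuming I read the constraints correctly, the formula can be sketched directly in Python (an illustration only, not the answer's Mathematica implementation; the hypothetical `explicit_moment` below enumerates the off-diagonal $p_{ij}$ and solves the constraints for the diagonal $p_{ii}$):

```python
from collections import Counter
from fractions import Fraction
from itertools import product
from math import factorial

def explicit_moment(r):
    """Sum over all p_ii, p_ij satisfying the constraints; returns a Counter
    mapping a tuple of ((i, j), exponent) entries (the monomial in sigma_ij)
    to its integer coefficient."""
    n = len(r)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    out = Counter()
    for pvals in product(*[range(min(r[i], r[j]) + 1) for i, j in pairs]):
        rem = list(r)                      # what is left for the diagonal
        for (i, j), p in zip(pairs, pvals):
            rem[i] -= p
            rem[j] -= p
        if any(v < 0 or v % 2 for v in rem):
            continue                       # constraint cannot be met
        coeff = Fraction(1)
        key = []
        for i, v in enumerate(rem):
            pii = v // 2
            # r_i! / (2 p_ii)!!  with (2p)!! = 2^p p!
            coeff *= Fraction(factorial(r[i]), 2**pii * factorial(pii))
            if pii:
                key.append(((i + 1, i + 1), pii))
        for (i, j), p in zip(pairs, pvals):
            coeff /= factorial(p)          # 1 / p_ij!
            if p:
                key.append(((i + 1, j + 1), p))
        assert coeff.denominator == 1      # each term is an integer count
        out[tuple(sorted(key))] = int(coeff)
    return out
```

For example, `explicit_moment([2, 2])` gives $\sigma_{11}\sigma_{22}+2\sigma_{12}^2$, matching the Isserlis examples above.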


Implementation


moment2[x_List] := With[{n = Length[x]},
  With[{σii = Table[σ[i, i], {i, n}],
    σij = Join @@ Table[σ[i, j], {i, n}, {j, i + 1, n}],
    pij = Table[Unique[], {n (n - 1)/2}],
    pos = n (n - 1)/2 - (n - Min[##] + 1) (n - Min[##])/2 + Abs[# - #2] &},
   With[{pii = Table[x[[i]] - Sum[If[i == j, 0, pij[[pos[i, j]]]], {j, n}], {i, n}]/2,
     lim = Sequence @@ Join @@ Table[{pij[[pos[i, j]]],
        x[[i]] - Sum[If[i == k, 0, pij[[pos[i, k]]]], {k, j - 1}],
        0, -If[j == n, 2, 1]}, {i, n}, {j, i + 1, n}]},
    With[{arg = Times @@ ((x! σii^pii)/(2 pii)!!) Times @@ (σij^pij/pij!)},
     If[Length[{lim}] == 0, arg, Sum[arg, lim]]]]]]

moment2[{19, 20, 21}] // Hash // AbsoluteTiming
moment[{19, 20, 21}] // Hash // AbsoluteTiming


{0.036750, 4700900427412246901}


{2.762643, 4700900427412246901}




It is very fast and requires no memory for memoization (for large moments, moment takes a huge amount of memory). The leaf count is as small as in wolfies' answer.


Derivation of the formula


One variable


At the beginning, let us consider the simplest case with one variable, $E(x_1^{r_1})$. The moment generating function is


$$ m(t_1) = \exp\left(\frac{1}{2}\sigma_{11}t_1^2\right). $$


The first derivatives are


$$ \frac{\partial m}{\partial t_1}(t_1) = t_1 \sigma_{11}\exp\left(\frac{1}{2}\sigma_{11}t_1^2\right), $$


$$ \frac{\partial^2 m}{\partial t_1^2}(t_1) = (t_1^2 \sigma_{11}^2+\sigma_{11})\exp\left(\frac{1}{2}\sigma_{11}t_1^2\right), $$


$$ \frac{\partial^3 m}{\partial t_1^3}(t_1) = (t_1^3 \sigma_{11}^3+3t_1\sigma_{11}^2)\exp\left(\frac{1}{2}\sigma_{11}t_1^2\right), $$



$$ \frac{\partial^4 m}{\partial t_1^4}(t_1) = (t_1^4 \sigma_{11}^4+6t_1^2\sigma_{11}^3+3\sigma_{11}^2)\exp\left(\frac{1}{2}\sigma_{11}t_1^2\right) $$


and so on. To calculate the moment we then set $t_1=0$. For the fourth moment we have


$$ \frac{\partial^4 m}{\partial t_1^4}(0) = 3\sigma_{11}^2. $$


This process can be represented by the following scheme


[image: scheme of the differentiation process for one variable]


We can take the derivative of:




  1. The exponential factor. It increases the power of $t_{1}$ by 1 and multiplies by $\sigma_{11}$. It is represented by the upward arrows (all of them have the multiplication factor $\sigma_{11}$).





  2. The pre-exponential polynomial. It decreases the power of $t_1$ by 1 and multiplies by the current power of $t_1$. It is represented by the downward arrows (their multiplication factors correspond to the vertical position).




At the end we must return to the zero vertical position; all other terms disappear after the substitution $t_1=0$. To obtain the moment we sum the products of the factors along all possible paths.


Example for $r_1=10$:


[image: path diagram for $r_1=10$]


One can check that we obtain the known result


$$ E(x_1^{r_1}) = \left\{\begin{array}{ll} (r_1-1)!!\sigma_{11}^{r_1/2} & \text{if }r_1\text{ is even},\\ 0 & \text{if }r_1\text{ is odd}. \end{array}\right. $$


For odd $r_1$ there are simply no paths.
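As a sanity check (my addition, not part of the original answer), one can compare $(r_1-1)!!\,\sigma_{11}^{r_1/2}$ against a crude numerical integration of $x^{r}$ times the Gaussian density; the step size and cutoff below are ad hoc:

```python
import math

def gauss_moment(r, sigma2=1.0, h=1e-3, cutoff=12.0):
    """Midpoint-rule integral of x^r times the N(0, sigma2) density."""
    s = math.sqrt(sigma2)
    norm = math.sqrt(2 * math.pi * sigma2)
    total, x = 0.0, -cutoff * s
    while x < cutoff * s:
        xm = x + h / 2                       # midpoint of the cell
        total += xm**r * math.exp(-xm * xm / (2 * sigma2)) / norm * h
        x += h
    return total

def closed_form(r, sigma2=1.0):
    """(r - 1)!! * sigma11^(r/2) for even r, 0 for odd r."""
    if r % 2:
        return 0.0
    out = 1.0
    for k in range(r - 1, 0, -2):
        out *= k
    return out * sigma2 ** (r // 2)
```

For instance, `gauss_moment(4)` comes out close to `closed_form(4)`, i.e. $3\sigma_{11}^2$ with $\sigma_{11}=1$.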



Two variables


With two variables the moment generating function is


$$ m(t_1,t_2) = \exp\left(\frac{1}{2}\sigma_{11}t_1^2+\frac{1}{2}\sigma_{22}t_2^2+\sigma_{12}t_1t_2\right). $$


The moment $E(x_1^{r_1}x_2^{r_2})$ can be calculated with


$$ E(x_1^{r_1}x_2^{r_2}) = \frac{\partial^{r_1+r_2} m}{\partial t_1^{r_1}\partial t_2^{r_2}}\Bigg|_{\substack{t_1=0,\\t_2=0}} $$


For definiteness we will:


$~~\rm A.$ take all derivatives with respect to $t_1$,


$~~\rm B.$ then take all derivatives with respect to $t_2$.


At the stage $\rm A$ there are three possibilities:





  1. Increase the power of $t_{1}$ by 1 and multiply by $\sigma_{11}$.




  2. Increase the power of $t_{2}$ by 1 and multiply by $\sigma_{12}$.




  3. Decrease the power of $t_1$ by 1 and multiply by the current power of $t_1$.





After the stage $\rm A$ we substitute $t_1=0$.


At the stage $\rm B$ there are only two possibilities:




  1. Increase the power of $t_{2}$ by 1 and multiply by $\sigma_{22}$.




  2. Decrease the power of $t_2$ by 1 and multiply by the current power of $t_2$.





We don't need to track the production of powers of $t_1$ here, because such terms are killed by the final substitution $t_1=0$, $t_2=0$.


Let us consider $E(x_1^6x_2^4)$. If at the stage $\rm A$ we always choose case 1 or 3 (never 2), then the computation can be represented as the product of two full diagrams


[image: the two full diagrams for $t_1$ and $t_2$]


If at the stage $\rm A$ we choose case 2 twice, then it can be represented as the product of the diagrams


[image: the two diagrams with case 2 chosen twice]


The first diagram is smaller by 2 because we consume 2 of the 6 derivatives with respect to $t_1$ to produce $t_2$. The binomial coefficient $\binom{6}{2}$ is the number of possible placements of case 2. The second diagram starts from position 2 because we produce $t_2^2$ at the stage $\rm A$.


One can show that the sum over the diagram with $r_2$ derivatives and initial position $k_2$ is $$ \left\{\begin{array}{ll} \frac{r_2!}{(r_2-k_2)!!}\sigma_{22}^{(r_2-k_2)/2} & \text{if }r_2-k_2\text{ is even},\\ 0 & \text{if }r_2-k_2\text{ is odd}. \end{array}\right. $$
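This diagram sum can be verified by brute force (a Python sketch with invented names, not part of the original answer): `diagram_sum` enumerates all up/down paths with exactly the factors described above.

```python
from collections import Counter

def diagram_sum(r2, k2):
    """Sum over all paths of r2 steps that start at height k2 and end at 0.
    An up-step gains one power of sigma22; a down-step (allowed only at
    height > 0) multiplies by the current height. Returns a Counter mapping
    power of sigma22 -> total coefficient."""
    def rec(steps, h):
        if steps == 0:
            return Counter({0: 1}) if h == 0 else Counter()
        out = Counter()
        for p, c in rec(steps - 1, h + 1).items():      # up-step: one sigma22
            out[p + 1] += c
        if h > 0:
            for p, c in rec(steps - 1, h - 1).items():  # down-step: factor h
                out[p] += c * h
        return out
    return rec(r2, k2)
```

For example, `diagram_sum(4, 2)` returns `Counter({1: 12})`, matching $\frac{4!}{2!!}\sigma_{22}^{(4-2)/2}=12\,\sigma_{22}$.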


Therefore, the term $\sigma_{11}^{p_{11}}\sigma_{12}^{p_{12}}\sigma_{22}^{p_{22}}$ in the moment $E(x_1^{r_1}x_2^{r_2})$ has the coefficient


$$ \frac{(2p_{11})!}{(2p_{11})!!}\frac{r_1!}{(p_{12})!(r_1-p_{12})!}\frac{r_2!}{(r_2-p_{12})!!} =\\ \frac{r_1!}{(2p_{11})!!}\frac{1}{p_{12}!}\frac{r_2!}{(2p_{22})!!} $$


where I used $2p_{11}+p_{12}=r_1$ and $p_{12}+2p_{22}=r_2$. This formula tells us the form of the general formula that I wrote at the beginning. One can check that the formula has the same form for any number of variables. I don't write the general derivation here because it is much more complicated; instead, I give an example of the diagrams in the case of three variables.
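The resulting two-variable coefficient is easy to code up as a quick check (a Python sketch with my own naming, not the answer's code):

```python
from math import factorial

def dfact_even(p):
    """Double factorial of an even number: (2p)!! = 2^p * p!."""
    return 2**p * factorial(p)

def coeff2(r1, r2, p12):
    """Coefficient of sigma11^p11 sigma12^p12 sigma22^p22 in E[x1^r1 x2^r2],
    with p11 = (r1 - p12)/2 and p22 = (r2 - p12)/2."""
    if p12 > min(r1, r2) or (r1 - p12) % 2 or (r2 - p12) % 2:
        return 0
    p11, p22 = (r1 - p12) // 2, (r2 - p12) // 2
    # r1!/(2 p11)!! * r2!/(2 p22)!! / p12!  (each division is exact)
    return (factorial(r1) // dfact_even(p11)
            * factorial(r2) // dfact_even(p22)
            // factorial(p12))
```

For example, `coeff2(2, 2, 2)` gives the coefficient $2$ of $\sigma_{12}^2$ in $E[x_1^2x_2^2]$.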



Three variables


Let us consider $E(x_1^7x_2^6x_3^5)$ and the coefficient of $\sigma_{11}^2\sigma_{12}\sigma_{13}^2\sigma_{22}^2\sigma_{23}\sigma_{33}$.


[image: diagrams for the three-variable example]


The coefficient is


4!/4!! Multinomial[1, 2, 4] 5!/4!! Multinomial[1, 5] 5!/2!!


1701000


The general formula returns the same


moment2[{7, 6, 5}]


... + 1701000 σ[1, 1]^2 σ[1, 2] σ[1, 3]^2 σ[2, 2]^2 σ[2, 3] σ[3, 3] + ...
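The same arithmetic can be redone in a few lines of Python (just a cross-check of the numbers above; the hand-rolled `multinomial` mimics Mathematica's Multinomial, and the double factorials $4!!=8$, $2!!=2$ are spelled out):

```python
from math import factorial

def multinomial(*ks):
    """Multinomial coefficient (k1 + k2 + ...)! / (k1! k2! ...)."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

# 4!/4!! * Multinomial[1, 2, 4] * 5!/4!! * Multinomial[1, 5] * 5!/2!!
coeff = (factorial(4) // 8) * multinomial(1, 2, 4) \
    * (factorial(5) // 8) * multinomial(1, 5) \
    * (factorial(5) // 2)
print(coeff)  # 1701000
```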

If anybody knows this formula, please write where it is published!

