
matrix - Product of $N$ $2 \times 2$ matrices and subsequently solving an equation dependent on the product


I have $N$ $2 \times 2$ matrices, each containing one variable, energy, which may be complex. I want to multiply them all together into a single matrix M and then find all the energies for which M[[2,1]] + M[[2,2]] == 0.


What I have done so far is the following:


I generate the matrices and put them all inside a list called matrixlist. Each one still contains the variable 'energy'.


M = Fold[Dot, IdentityMatrix[2], matrixlist]; 

ev = FindRoot[M[[2, 1]] + M[[2, 2]] == 0, {energy, 1.1}]

This procedure sometimes finds one of the complex energies.


The problem I'm having is with memory. As soon as I multiply more than, say, 20 matrices, my PC gives up (kernel shutdown). I would like to handle $N= 100$.


Any idea how to find all the complex solutions for energy within a reasonable time? Maybe it helps to know that all the matrices are unitary (though all are different and the energy dependence is complicated).


Edit: sorry for the mistakes; I am new to SE, thank you for welcoming me! Here is the relevant code:


V = x^2 - 0.1 x^4 I - 0.1 I + 1;
h = 1;
dischalf = Table[{x - h/2, V}, {x, 0, 10, h}];
U[j_] := dischalf[[j + 1, 2]];
k[j_] := Sqrt[2 (energy - U[j])];
B[j_, i_] := 1/(2 k[j]) {
    {(k[j] + k[i]) Exp[I (k[j] + k[i]) h/2], (k[j] - k[i]) Exp[I (k[j] - k[i]) h/2]},
    {(k[j] - k[i]) Exp[-I (k[j] - k[i]) h/2], (k[j] + k[i]) Exp[-I (k[j] + k[i]) h/2]}};
SmatrixlistHalb = Reverse[Table[B[j + 1, j], {j, 0, Length[dischalf] - 2}]];
STransferMatrixH = Fold[Dot, IdentityMatrix[2], SmatrixlistHalb];

Timing[FindRoot[
  STransferMatrixH[[2, 1]] + STransferMatrixH[[2, 2]] == 0, {energy, 0.5}]]

This calculation takes ~15 s on my PC. But I would like to decrease the step size h to 0.1, which is a bit too much for my good old machine.
To see better what the matrices look like, here is the matrix $B[1,2]$: $B[1,2]= \left( \begin{array}{cc} \frac{e^{\frac{1}{2} i h (k(1)+k(2))} (k(1)+k(2))}{2 k(1)} & \frac{e^{\frac{1}{2} i h (k(1)-k(2))} (k(1)-k(2))}{2 k(1)} \\ \frac{e^{-\frac{1}{2} i h (k(1)-k(2))} (k(1)-k(2))}{2 k(1)} & \frac{e^{-\frac{1}{2} i h (k(1)+k(2))} (k(1)+k(2))}{2 k(1)} \\ \end{array} \right)$


If I can help you help me in any way, let me know!



Answer



General idea


A general rule of thumb for reducing the time and memory needed for a computation is to use inexact (floating-point) numerics instead of exact or symbolic approaches, and to do so early. Of course, one might worry about accuracy, but there's no guarantee that feeding an exact, symbolic problem to FindRoot will be more accurate than feeding an inexact one.
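As a minimal illustration of "use numbers early" (the names mats and numericFirst are mine, not from the OP's code): substituting a number for energy before taking the matrix product keeps every entry a machine complex number, whereas the purely symbolic product swells rapidly with the number of factors:

```mathematica
(* hypothetical illustration: symbolic product swells, numeric-first stays small *)
mats = Table[{{Cos[j energy], Sin[j energy]}, {-Sin[j energy], Cos[j energy]}}, {j, 12}];
symbolic = Fold[Dot, IdentityMatrix[2], mats];   (* large symbolic expressions *)
numericFirst[e_?NumericQ] :=
  Fold[Dot, IdentityMatrix[2], mats /. energy -> e];
ByteCount[symbolic]            (* grows quickly with the number of factors *)
ByteCount[numericFirst[0.3]]   (* stays small: a 2x2 machine-complex matrix *)
```

The _?NumericQ guard makes the substitution happen only for actual numbers, the same pattern used throughout below.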



For a problem $f(x) = 0$, from $dy = f'(x_0) \; dx$, the norm of the error $|dy|$ in the residual $f(x_0^*)$ of the best floating-point approximation $x_0^*$ to an exact solution $x_0$ will be roughly at most the norm of the Jacobian (derivative) $|f'(x_0)|$ times the uncertainty $|dx|$ in $x_0$ from the floating-point approximation. The uncertainty $|dx|$ is at most 0.5 ulp (unit in the last place), which is approximately $|x_0|\,\varepsilon/2$, where $\varepsilon$ is $MachineEpsilon. So a residual that satisfies $$|f(x_0^*)| \le |f'(x_0^*)|\, |x_0|\,\varepsilon/2$$ represents a solution that is probably pretty good.


By this measure, for the example problem, machine precision does a pretty good job, so compiling some of the computation will speed things up further. Otherwise, if more accuracy were needed, one could use arbitrary-precision numbers and replace the compiled functions with regular Mathematica functions.


OP's code


The OP's code takes 10+ seconds and seems to have a large residual, but the residual is not large relative to the magnitude of the derivative (~10^51).


AbsoluteTiming[
 FindRoot[
  STransferMatrixH[[2, 1]] + STransferMatrixH[[2, 2]] == 0, {energy, 0.5}]]
STransferMatrixH[[2, 1]] + STransferMatrixH[[2, 2]] /. Last[%]



FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>



(*
{10.7866, {energy -> 1.78929 - 0.154278 I}}
4.15384*10^35 + 9.96921*10^35 I
*)

Speed up: Compile the objective function


We compile the objective function, objCF, for speed and provide a _?NumericQ-protected interface, obj, so that the compiled function won't evaluate symbolically. (I did the same for the Jacobian, but it turns out not to help that much.)


Clear[obj, jac];

With[{mat = Dot @@ SmatrixlistHalb}, (* this takes a long time if h is small *)
 objCF = Compile[{{energy, _Complex}},
   Evaluate[mat[[2, 1]] + mat[[2, 2]]]];
 obj[e0_?NumericQ] := objCF[e0];
 jacCF = Compile[{{energy, _Complex}},
   Evaluate@D[{mat[[2, 1]] + mat[[2, 2]]}, {{energy}}]];
 jac[e0_?NumericQ] := jacCF[e0];
 ];


The objective function alone gives the same result, albeit in much less time:


(sol = FindRoot[obj[e0] == 0, {e0, 1/2}]) // AbsoluteTiming
obj[e0] /. sol


FindRoot::lstol: ...



(*
{0.045566, {e0 -> 1.78929 - 0.154278 I}}
-6.64614*10^35 + 6.64614*10^35 I

*)

By supplying the Jacobian, the root-finding is faster and more accurate:


(sol = FindRoot[obj[e0] == 0, {e0, 0.5}, Jacobian -> jac[e0]]) // AbsoluteTiming
obj[e0] /. sol

FindRoot::lstol: ...


(*
{0.046047, {e0 -> 1.78929 - 0.154278 I}}
-5.81537*10^35 + 3.32307*10^35 I

*)

Using less memory


Computing the matrices for a given value of energy and only then taking their product saves memory, and it doesn't take any longer. Again I tried supplying the Jacobian, but again it didn't help much. Code note: the form Fold[Dot, matCF[e0]] works in V10 and later; for earlier versions of Mathematica, use the OP's Fold[Dot, IdentityMatrix[2], matCF[e0]].


Clear[obj2, jac2];
With[{
  matlist = SmatrixlistHalb,
  dmatlist = D[SmatrixlistHalb, energy]},
 matCF = Compile[{{energy, _Complex}}, matlist];
 dmatCF = Compile[{{energy, _Complex}}, dmatlist];
 obj2[e0_?NumericQ] := With[{mat2 = Fold[Dot, matCF[e0]]},
   mat2[[2, 1]] + mat2[[2, 2]]];
 jac2[e0_?NumericQ] := With[{m = matCF[e0], dm = dmatCF[e0]},
   With[{j = Sum[Fold[Dot, ReplacePart[m, i -> dm[[i]]]], {i, Length@m}]},
    {{j[[2, 1]] + j[[2, 2]]}}]];
 ];

We get similar results:


(sol = FindRoot[obj2[e0] == 0, {e0, 1/2}]) // AbsoluteTiming

obj2[e0] /. sol


FindRoot::lstol: ...



(*
{0.047256, {e0 -> 1.78929 - 0.154278 I}}
1.24615*10^35 + 0. I
*)


Numerical stability


The large magnitude of the Jacobian at the solution,


absJ = Abs@ jac[e0] /. {e0 -> 1.7892907120093438` - 0.15427772564884512` I}
absDX = Abs[e0] /. {e0 -> 1.7892907120093438` - 0.15427772564884512` I}
(*
{{4.04292*10^51}}
1.79593
*)

makes it hard to get an exact root at machine precision. One should expect a residual of approximately at most



absJ * absDX * $MachineEpsilon / 2
(* {{8.06111*10^35}} *)

which is what we see in the results of FindRoot. Thus it is a happy accident that obj has an apparently exact root, but obj2 does not:


obj[e0] /. {e0 -> 1.7892907120093438` - 0.15427772564884512` I}
obj2[e0] /. {e0 -> 1.7892907120093438` - 0.15427772564884512` I}
(*
0. + 0. I
2.4923*10^35 + 3.32307*10^35 I
*)
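If a smaller residual were truly needed, one could follow the arbitrary-precision route mentioned earlier: drop the compiled functions and redo the search at higher WorkingPrecision. Here is a hedged sketch only (objHP is my own name, and it assumes the potential has been re-entered with exact coefficients such as 1/10 instead of 0.1, since machine numbers inside SmatrixlistHalb would otherwise contaminate the precision):

```mathematica
(* sketch: arbitrary-precision refinement of the machine-precision root;
   assumes SmatrixlistHalb was rebuilt from exact (rational) coefficients *)
objHP[e0_?NumericQ] :=
  With[{m = Fold[Dot, IdentityMatrix[2], SmatrixlistHalb /. energy -> e0]},
   m[[2, 1]] + m[[2, 2]]];
FindRoot[objHP[e0] == 0,
 {e0, SetPrecision[1.78929 - 0.154278 I, 32]},
 WorkingPrecision -> 32]
```

Starting from the machine-precision root found above, FindRoot then only needs to polish it, so the uncompiled objective is evaluated few times.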
