
Solving "Lyapunov-like" equation AX + X'B = C


Is there some way I can solve the following equation with $d \times d$ matrices in Mathematica in reasonable time?


$$AX+X'B=C$$



My solution below calls LinearSolve on a $d^2 \times d^2$ matrix, which is too expensive for my case (my $d$ is 1000).


(* kmat[n] is the commutation matrix: kmat[n].vec[x] == vec[Transpose[x]] *)
kmat[n_] := Module[{mat1, mat2, pos, poses},
  mat1 = Array[{#1, #2} &, {n, n}];
  mat2 = Transpose[mat1];
  pos[{row_, col_}] := row + (col - 1)*n;
  poses = Flatten[MapIndexed[{pos[#1], pos[#2]} &, mat2, {2}], 1];
  Normal[SparseArray[# -> 1 & /@ poses]]
  ];

(* column-major vectorization and its inverse *)
unvec[Wf_, rows_] := Transpose[Flatten /@ Partition[Wf, rows]];
vec[x_] := Flatten[Transpose[x]];
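For reference, kmat[n] plays the role of the commutation matrix, i.e. it should map vec[X] to vec[Transpose[X]]. A quick check on a small random matrix (nTest and xTest are just illustrative names):

nTest = 4;
xTest = RandomReal[{-1, 1}, {nTest, nTest}];
Norm[kmat[nTest].vec[xTest] - vec[Transpose[xTest]]]  (* should be exactly 0 *)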


solveLyapunov2[a_, b_, c_] := Module[{dims, ii, x0, X},
  dims = Length[a];
  ii = IdentityMatrix[dims];
  (* vec[a.X] == KroneckerProduct[ii, a].vec[X] and
     vec[Transpose[X].b] == KroneckerProduct[Transpose[b], ii].kmat[dims].vec[X] *)
  x0 = LinearSolve[
    KroneckerProduct[ii, a] +
     KroneckerProduct[Transpose[b], ii].kmat[dims], vec[c]];
  X = unvec[x0, dims];
  Print["error is ", Norm[a.X + Transpose[X].b - c]];
  X
  ]

a = RandomReal[{-3, 3}, {3, 3}];
b = RandomReal[{-3, 3}, {3, 3}];
c = RandomReal[{-3, 3}, {3, 3}];
X = solveLyapunov2[a, b, c]

Edit (Sep 30): An approximate solution would be useful as well. In my application $C$ is the gradient and $X$ is the preconditioned gradient, so I'm looking for something that is much better than the "default" solution $X_0=C$.



Answer



General matrices



For the desired matrix sizes I have doubts that a direct numerical solution would be feasible. Here is simplified code using sparse matrices.


(* build the n^2 x n^2 system in sparse form and time the linear solve *)
tmSylvester[n_] := Module[{a, b, c, sA, sB, sC, sAB},
  a = RandomReal[{-3, 3}, {n, n}];
  b = RandomReal[{-3, 3}, {n, n}];
  c = RandomReal[{-3, 3}, {n, n}];
  sA = SparseArray[Table[{(i - 1) n + l, (k - 1) n + l} -> a[[i, k]], {i, n}, {k, n}, {l, n}] // Flatten];
  sB = SparseArray[Table[{(l - 1) n + j, (k - 1) n + l} -> b[[k, j]], {k, n}, {j, n}, {l, n}] // Flatten];
  sAB = sA + sB;
  sC = SparseArray[Table[{(i - 1) n + j} -> c[[i, j]], {i, n}, {j, n}] // Flatten];
  First[Timing[LinearSolve[sAB, sC];]]]
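If the solution itself is needed rather than just the timing, the same sparse system can be solved and reshaped back into an $n \times n$ matrix. Here is a sketch along the lines of the construction above (the names solveTSylvesterSparse, a0, b0, c0, x0 are purely illustrative); since the unknown vector is stored row-major, Partition restores the matrix:

solveTSylvesterSparse[a_, b_, c_] := Module[{n = Length[a], sA, sB, sC},
  sA = SparseArray[Table[{(i - 1) n + l, (k - 1) n + l} -> a[[i, k]], {i, n}, {k, n}, {l, n}] // Flatten];
  sB = SparseArray[Table[{(l - 1) n + j, (k - 1) n + l} -> b[[k, j]], {k, n}, {j, n}, {l, n}] // Flatten];
  sC = SparseArray[Table[{(i - 1) n + j} -> c[[i, j]], {i, n}, {j, n}] // Flatten];
  Partition[Normal[LinearSolve[sA + sB, sC]], n]]

{a0, b0, c0} = RandomReal[{-3, 3}, {3, 12, 12}];
x0 = solveTSylvesterSparse[a0, b0, c0];
Norm[a0.x0 + Transpose[x0].b0 - c0]  (* residual of AX + X'B == C, should be ~0 *)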


Now, let us plot the timing:


ListLogPlot[Table[{n,tmSylvester[n]},{n,10,120,10}],Joined->True,PlotTheme->{"Frame","Monochrome"}, FrameLabel->{"Matrix Size","Time(s)"}]

[Log-scale plot of tmSylvester timings for matrix sizes n = 10 … 120]


Even with a very optimistic extrapolation, it is unlikely that the $n=1000$ calculation would be routinely possible. There are, however, experts here who might be able to tune the linear solver further.
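To make the extrapolation concrete, one can fit a power law to the measured timings. This is only a rough sketch (the fitted exponent and the predicted time depend on hardware, and any timing that comes out as exactly 0. should be dropped or re-measured at larger n before taking logarithms):

timings = Table[{n, tmSylvester[n]}, {n, 10, 120, 10}];
fit = FindFit[Log[timings], logt0 + p logn, {logt0, p}, logn];
(* extrapolated solve time in seconds for n = 1000 *)
Exp[logt0 + p Log[1000.]] /. fit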


Nonsingular matrices


According to F. M. Dopico, J. González, D. Kressner, and V. Simoncini, "Projection methods for large-scale T-Sylvester equations", Mathematics of Computation (2015), under the usual existence conditions the following two equations have the same unique solution:


$$\left(B^{-T} A\right) X - X \left(A^{-T} B\right) = B^{-T} C - B^{-T} C^{T} A^{-T} B,$$ $$AX + X^{T} B = C,$$ where $A^{-T}\equiv(A^{-1})^T$.


Therefore, we can use the built-in LyapunovSolve:



(* time the LyapunovSolve call for the transformed equation *)
tmDopico[n_] := Module[{a, b, c},
  a = RandomReal[{-3, 3}, {n, n}];
  b = RandomReal[{-3, 3}, {n, n}];
  c = RandomReal[{-3, 3}, {n, n}];
  First[Timing[
    LyapunovSolve[Transpose[Inverse[b]].a, -Transpose[Inverse[a]].b,
      Transpose[Inverse[b]].c -
       Transpose[Inverse[b]].Transpose[c].Transpose[Inverse[a]].b];]]]

Let us check the timing:


ListLogPlot[Table[{n,tmDopico[n]},{n,50,1000,50}],Joined->True,PlotTheme->{"Frame","Monochrome"}, FrameLabel->{"Matrix size","Time(s)"}]

[Log-scale plot of tmDopico timings for matrix sizes n = 50 … 1000]



The method should therefore have $\mathcal{O}(n^3)$ scaling under favorable conditions.
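For completeness, the same transformation can be wrapped into a function that returns the solution rather than the timing. This is only a sketch (solveTSylvester, a1, b1, c1, x1 are illustrative names), and it assumes $A$ and $B$ are invertible and that the existence conditions above hold:

solveTSylvester[a_, b_, c_] := Module[{ia = Transpose[Inverse[a]], ib = Transpose[Inverse[b]]},
  LyapunovSolve[ib.a, -ia.b, ib.c - ib.Transpose[c].ia.b]]

{a1, b1, c1} = RandomReal[{-3, 3}, {3, 8, 8}];
x1 = solveTSylvester[a1, b1, c1];
Norm[a1.x1 + Transpose[x1].b1 - c1]  (* residual of AX + X'B == C, should be small *)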


