
performance tuning - What is the fastest way to find an integer-valued row echelon form for a matrix with integer entries?


Let me begin by saying that this is my first post on StackExchange. I apologize in advance if I unwittingly break any of its unwritten rules of etiquette.



Recently, I've been trying to understand an algorithm by coding it up in Mathematica. One of the main steps in the algorithm requires me to find an integer-valued row echelon form for a matrix with integer entries. The input matrices satisfy several constraints: they represent sparse, underdetermined (never inconsistent or trivial) homogeneous linear systems. The number of columns may only take particular values fixed by the algorithm but, for the purposes of this question, it is enough to know that the number of columns is always much, much larger than the number of rows. The number of rows is also typically somewhat large, and some rows may be duplicates or linear combinations of other rows. Another key point is that the integer entries can generically be quite large. Therefore Compile was not an option for me (however, if there is some magic way to do multi-precision arithmetic with Compile, please let me know!).


My main motivation for asking this question is that this is the first time it has actually mattered to me how efficient my Mathematica code is, and I am having a very difficult time optimizing my solution attempts. First, a word about data structures. I represent my sparse arrays using their rule-based form on a row-by-row basis. As a representative example, consider the following 15 x 100 matrix:


{{{1} -> 1, {21} -> -1, {62} -> -1, {92} -> 1},
 {{2} -> 1, {22} -> -1, {64} -> -1, {94} -> 1},
 {{3} -> 1, {23} -> -1, {65} -> -1, {95} -> 1},
 {{2} -> 1, {22} -> -1, {64} -> -1, {94} -> 1},
 {{4} -> 1, {24} -> -1, {67} -> -1, {97} -> 1},
 {{5} -> 1, {25} -> -1, {68} -> -1, {98} -> 1},
 {{3} -> 1, {23} -> -1, {65} -> -1, {95} -> 1},
 {{5} -> 1, {25} -> -1, {68} -> -1, {98} -> 1},
 {{6} -> 1, {26} -> -1, {69} -> -1, {99} -> 1},
 {{55} -> -2, {56} -> 1987, {62} -> -8, {63} -> -2, {65} -> -4, {66} -> 1987, {72} -> -4, {73} -> -2, {82} -> -4, {95} -> 2},
 {{58} -> -2, {59} -> 1987, {64} -> -8, {65} -> -2, {68} -> -4, {69} -> 1987, {74} -> -4, {75} -> -2, {84} -> -4, {98} -> 2},
 {{59} -> -2, {60} -> 1987, {65} -> -8, {66} -> -2, {69} -> -4, {70} -> 1987, {75} -> -4, {76} -> -2, {85} -> -4, {99} -> 2},
 {{5} -> -2, {6} -> 1987, {13} -> -2, {16} -> 1987, {22} -> -4, {23} -> -2, {32} -> 4, {45} -> -2, {55} -> 4, {56} -> -3974, {62} -> 16, {63} -> 4, {64} -> -8, {65} -> 4, {66} -> -3974, {72} -> 8, {73} -> 4, {82} -> 8, {94} -> 8},
 {{8} -> -2, {9} -> 1987, {15} -> -2, {19} -> 1987, {24} -> -4, {25} -> -2, {34} -> 4, {48} -> -2, {58} -> 4, {59} -> -3974, {64} -> 16, {65} -> 4, {67} -> -8, {68} -> 4, {69} -> -3974, {74} -> 8, {75} -> 4, {84} -> 8, {97} -> 8},
 {{9} -> -2, {10} -> 1987, {16} -> -2, {20} -> 1987, {25} -> -4, {26} -> -2, {35} -> 4, {49} -> -2, {59} -> 4, {60} -> -3974, {65} -> 16, {66} -> 4, {68} -> -8, {69} -> 4, {70} -> -3974, {75} -> 8, {76} -> 4, {85} -> 8, {98} -> 8}}

My main motivation for using this representation of my data is that it seems to make the implementation of an efficient pivoting strategy incredibly simple. I'm not overly wedded to this choice of data structure and would be happy to switch to something else if it allows for a better solution to the problem.
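To illustrate why this representation makes pivoting cheap, here is a small sketch of my own (the helper names toSparse, toRules, and pivotColumn are illustrative, not from the code above): because the rules in each row are kept sorted by column, the pivot column of a row is simply the index of its first rule, and the representation round-trips cleanly through SparseArray.

```mathematica
(* Round-tripping between the row-by-row rule representation and SparseArray.
   Helper names are illustrative, not from the post. *)
toSparse[rows_, n_] := SparseArray[#, n] & /@ rows;  (* one SparseArray per row *)
toRules[row_SparseArray] := Most[ArrayRules[row]];   (* Most drops the default {_} -> 0 rule *)
pivotColumn[row_] := row[[1, 1, 1]];                 (* first rule's column index = pivot *)

rows = {{{1} -> 2, {3} -> -1}, {{2} -> 5, {4} -> 7}};
pivotColumn /@ rows            (* {1, 2} *)
toRules /@ toSparse[rows, 5]   (* recovers rows *)
```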


One obvious question is: Why not just use built-in functions? It seems to me that, at least naively, the out-of-the-box solution based on RowReduce


IntegerRowEchelon1[L_, n_] :=
 Module[{temp},
  temp = DeleteCases[RowReduce[SparseArray[#, n] & /@ L], 0 Range[n]];
  Drop[ArrayRules[#], -1] & /@
   Expand[(LCM @@ Denominator[Flatten[temp]]) temp]]

is overkill in the extreme. First of all, it seems impossible to force RowReduce to deliver a row echelon form as opposed to a reduced row echelon form. The back-solving step of the elimination process is unnecessary for me and should therefore be avoided. Furthermore, since I am working over the integers, I have to waste time clearing denominators to turn the reduced row echelon form that RowReduce delivers into acceptable integer-valued output. So far, however, I haven't been able to improve on the above solution.



The essence of my question to you, the experts, is whether or not I should just go with the ridiculous work-around IntegerRowEchelon1 or if there is a more natural lower-level solution that would work as fast or faster. There are clearly many computations done by IntegerRowEchelon1 that could in principle be avoided but, on the other hand, RowReduce has undoubtedly been highly optimized by the developers at WRI. So it is not obvious to me whether or not one can do significantly better.


Finally, let me share with you two solution attempts which lose to IntegerRowEchelon1 by factors of roughly 2 and 3 respectively. I tested them both on 1000 15 x 100 matrices with 35% of their entries filled in using RandomInteger, and verified the correctness of the output by applying IntegerRowEchelon1 to both the input and the output. These solution attempts are based on a classical algorithm due to Camille Jordan. Since this algorithm was not widely known until the work of Erwin Bareiss circa 1968, it is usually called the Bareiss one-step fraction-free Gaussian elimination algorithm. (The original paper, E. H. Bareiss, "Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination", is googleable if you want to have a look at it: search "Erwin Bareiss Sylvester's Identity", first link. I also got quite a bit out of the Google Books preview of Chapter 9 of "Algorithms for Computer Algebra" by Geddes et al.: http://books.google.es/books?id=9fOUwkkRxT4C&printsec=frontcover&hl=es&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false) The algorithm is exposited in many textbooks on computer algebra and is probably the simplest one that does what I want.
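For readers unfamiliar with the method, here is a minimal dense-matrix sketch of Bareiss one-step fraction-free elimination (my own toy implementation, not one of the attempts below; it assumes the leading minors are nonsingular, so there is no pivoting or rank handling). The point of the algorithm is that every division by the previous pivot is exact, so all intermediate entries stay integers.

```mathematica
(* Bareiss one-step fraction-free elimination on a dense integer matrix.
   Toy sketch: assumes nonzero leading pivots (no pivoting, no rank drops). *)
bareissEchelon[m0_] :=
 Module[{m = m0, n = Length[m0], p = 1},
  Do[
   Do[
    (* the previous pivot p divides this combination exactly *)
    m[[i]] = (m[[k, k]] m[[i]] - m[[i, k]] m[[k]])/p,
    {i, k + 1, n}];
   p = m[[k, k]],
   {k, 1, n - 1}];
  m]

bareissEchelon[{{2, 1, 3}, {4, 5, 6}, {7, 8, 9}}]
(* {{2, 1, 3}, {0, 6, 0}, {0, 0, -9}}; the bottom-right entry is Det = -9 *)
```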


Although I couldn't find an answer in the documentation, I would guess that this algorithm is what Mathematica is using to forward-solve when you select the method "OneStepRowReduction." Needless to say, I tried forcing RowReduce to use this method but it appears to make no difference whatsoever, in either the timing or in the output. Of course, I've also considered the possibility that the reason I can't beat Mathematica is that it is using an algorithm more efficient than Bareiss's. For now, I will proceed assuming that Bareiss's algorithm is at least no worse than the algorithm(s) upon which RowReduce is ultimately based. For those of you with insider information, any pearls of wisdom you can share about how RowReduce actually works would be more than welcome.


My first attempt at coding Bareiss's algorithm looks pretty ugly but only loses to IntegerRowEchelon1 by a factor of about 1.9:


IntegerRowEchelon2[L_] := (Aold = SortBy[L, #[[1, 1, 1]] &];
  rnum = Length[Aold]; i = 1; lastpivot = 1; Anew = Aold; flag = 0;
  While[i <= rnum - 1,
   If[Aold[[i, 1, 1]] =!= Aold[[i + 1, 1, 1]], i++; lastpivot = 1;
    Continue[]];
   For[a = i + 1, a <= rnum, a++,
    If[Aold[[i, All, 1]] =!= Aold[[a, All, 1]],
     tempRRi = Rule[#, 0] & /@
       Complement[Aold[[a, All, 1]], Aold[[i, All, 1]]];
     tempRRa = Rule[#, 0] & /@
       Complement[Aold[[i, All, 1]], Aold[[a, All, 1]]];
     Aold[[a]] = Sort[Join[Aold[[a]], tempRRa]];
     Aold[[i]] = Sort[Join[Aold[[i]], tempRRi]];
     Anew[[i]] = Aold[[i]];
     Anew[[a]] =
      If[Aold[[a, 1, 1]] === Aold[[i, 1, 1]], Drop[Aold[[a]], 1],
       Aold[[a]]],
     Anew[[a]] =
      If[Aold[[a, 1, 1]] === Aold[[i, 1, 1]], Drop[Aold[[a]], 1],
       Aold[[a]]]];
    Which[Aold[[a, 1, 1]] === Aold[[i, 1, 1]],
     Anew[[a, 1 ;; Length[Aold[[a]]] - 1, 2]] =
      Drop[1/lastpivot (Aold[[i, 1, 2]] Aold[[a, 1 ;; Length[Aold[[a]]], 2]] -
         Aold[[i, 1 ;; Length[Aold[[a]]], 2]] Aold[[a, 1, 2]]), 1],
     True,
     Anew[[a, 1 ;; Length[Aold[[a]]], 2]] =
      1/lastpivot Aold[[i, 1, 2]] Aold[[a, 1 ;; Length[Aold[[a]]], 2]]];
    If[Anew[[a, 1 ;; Length[Aold[[a]]] - 1, 2]] =!=
      0 Range[Length[Aold[[a]]] - 1],
     Anew[[{i, a}]] = DeleteCases[#, {x_} -> 0] & /@ Anew[[{i, a}]];
     Aold[[{i, a}]] = DeleteCases[#, {x_} -> 0] & /@ Aold[[{i, a}]],
     flag = 1; Anew = Delete[Anew, a]; rnum--; Break[]]];
   If[flag == 1, flag = 0; If[i != Anew[[i + 1, 1, 1, 1]], i++];
    Aold = Anew; Continue[]]; lastpivot = Aold[[i, 1, 2]]; i++;
   Anew = SortBy[Anew, #[[1, 1, 1]] &]; Aold = Anew]; Anew)

I was sure that my second attempt was going to perform much better, but it turned out to lose to IntegerRowEchelon1 by a factor of about 2.8:


IntegerRowEchelon3[L_] :=
 (A = SortBy[L /. Rule[{x_}, y_] :> List[x, y], #[[1, 1]] &];
  rnum = Length[A]; i = 1; lastpivot = 1;
  While[i < rnum,
   If[A[[i, 1, 1]] != A[[i + 1, 1, 1]], i++; lastpivot = 1;
    Continue[]];
   pivotrow = A[[i]];
   (activerow = A[[#]];
      A[[#]] =
       If[Length[#] == 1,
          {#[[1, 1]], #[[1, 2]]/lastpivot},
          {#[[1, 1]], (#[[1, 2]] + #[[2, 2]])/lastpivot}] & /@
        SplitBy[Sort[
          Join[Map[{#[[1]], pivotrow[[1, 2]] #[[2]]} &,
            If[activerow[[1, 1]] == pivotrow[[1, 1]],
             Drop[activerow, 1], activerow]],
           Map[{#[[1]], -If[activerow[[1, 1]] == pivotrow[[1, 1]],
                activerow[[1, 2]], 0] #[[2]]} &,
            Drop[pivotrow, 1]]]], #[[1]] &]) & /@ Range[i + 1, rnum];
   A = SortBy[DeleteCases[DeleteCases[A, {_, 0}, {2}], {}], #[[1, 1]] &];
   lastpivot = pivotrow[[1, 2]]; rnum = Length[A]; i++];
  A /. List[x_, y_] :> Rule[{x}, y])

At least to me, IntegerRowEchelon3 looks much nicer than IntegerRowEchelon2, but it's way slower! This is quite counter-intuitive to me because, in coding IntegerRowEchelon3, I made a concerted effort to strip out as much fluff as possible and be as functional as possible, which I thought was "good Mathematica programming style". It was very disappointing to discover that IntegerRowEchelon3 loses so badly to IntegerRowEchelon1. I must admit that I feel a bit lost at this point about issues of performance tuning and look forward to seeing alternative solutions to my problem. To be clear, at the end of the day speed is what matters, and I would be more than happy to accept "high-level" solutions that beat IntegerRowEchelon1 as well.
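For anyone who wants to reproduce the comparison, something along these lines can generate rule-form test matrices like the ones I benchmarked (randomRuleMatrix is an illustrative name of mine; the 15 x 100 shape and 35% fill follow the test described above, while the entry bound 10^6 is just a placeholder):

```mathematica
(* Random test matrices in the rule-based row format used above.
   randomRuleMatrix is an illustrative helper, not from my actual test code. *)
randomRuleMatrix[m_, n_, p_, max_] :=
 Table[
  Most[ArrayRules[
    SparseArray[
     RandomChoice[{p, 1 - p} -> {1, 0}, n]*
      RandomInteger[{-max, max}, n], n]]],
  {m}]

mats = Table[randomRuleMatrix[15, 100, 0.35, 10^6], {100}];
First@AbsoluteTiming[IntegerRowEchelon1[#, 100] & /@ mats;]
```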



Answer




Actually what you want is HermiteDecomposition. It is the integer-ring analogue of RowReduce; the latter, while working largely over the integers for the forward elimination, actually produces an echelon form over the rational field.


As for efficiency, you'll have to experiment to see if it meets your needs. If not, feel free to post or send examples so I'll have something concrete to test.


--- edit ---


Here is some testing. I defined your input to be m1 and will not repeat that definition below. First I turn it into a bona fide Mathematica SparseArray object.


m2 = Flatten[MapIndexed[{#2[[1]], #1[[1, 1]]} -> #1[[2]] &, m1, {2}], 1];
mat = SparseArray[m2];

Now we get the HNF.


In[455]:= Timing[{uu, hh} = HermiteDecomposition[mat];]


Out[455]= {0.01, Null}

We have the unimodular conversion matrix (as uu) and the Hermite normal form (as hh); HermiteDecomposition returns the conversion matrix first. I'll show the former.


In[457]:= uu

Out[457]= {{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2, -1988,
0, -1, -994}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2, 0, 0, -1}, {0,
0, 0, 0, 0, -2, 0, 0, 1987, -2, 0, 0, -1, 0, 0}, {0, 0, 0, 0, 0, 0,
0, 0, 0, -1, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0,
0, -1, -994, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0,
0}, {0, -1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, -1, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, -1, 0, 1, 0, 0, 0,
0, 0, 0, 0}}

Here is a substantially larger example.


randomSparseIntegerMatrix[m_, n_, p_, max_] :=
 RandomChoice[{p, 1 - p} -> {1, 0}, {m, n}]*
  RandomInteger[{-max, max}, {m, n}]

I create a 50 x 250 matrix with entries up to 10^20 in magnitude, and sparsity of around 80%.


biggermat = randomSparseIntegerMatrix[50, 250, 1/5, 10^20];

In[465]:= Timing[{uubig, hhbig} = HermiteDecomposition[biggermat];]

Out[465]= {1.68, Null}


Hope that helps to give some idea of possibilities for using HermiteDecomposition.
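If the rest of your pipeline expects the question's rule-based row format, the nonzero rows of the normal form can be converted back with a one-liner (toRuleRows is my name for this hypothetical glue; hh here is the second element returned by HermiteDecomposition above, which holds the Hermite normal form):

```mathematica
(* Convert the nonzero rows of the Hermite normal form back into the
   rule-based row format from the question. Illustrative helper. *)
toRuleRows[h_] :=
 Most[ArrayRules[#]] & /@ DeleteCases[Normal[h], {0 ..}]

rowRules = toRuleRows[hh];  (* one sorted rule list per nonzero row *)
```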


--- end edit ---


I have several sigmoidal fits to 3 different datasets, with mean fit predictions plus the 95% confidence limits (not symmetrical around the mean) and the actual data. I would now like to show these different 2D plots projected in 3D as in but then using proper perspective. In the link here they give some solutions to combine the plots using isometric perspective, but I would like to use proper 3 point perspective. Any thoughts? Also any way to show the mean points per time point for each series plus or minus the standard error on the mean would be cool too, either using points+vertical bars, or using spheres plus tubes. Below are some test data and the fit function I am using. Note that I am working on a logit(proportion) scale and that the final vertical scale is Log10(percentage). (* some test data *) data = Table[Null, {i, 4}]; data[[1]] = {{1, -5.8}, {2, -5.4}, {3, -0.8}, {4, -0.2}, {5, 4.6}, {1, -6.4}, {2, -5.6}, {3, -0.7}, {4, 0.04}, {5, 1.0}, {1, -6.8}, {2, -4.7}, {3, -1.