Say I want to multiply a reasonably large list of symbolic matrices.
(* tab: a list of 12 symbolic 4x4 matrices; the nn-th matrix is filled with the symbols a<nn>, b<nn>, ..., p<nn> *)
tab =
  Partition[#, 4] & /@
    Table[
      Symbol[CharacterRange["a", "z"][[nm]] <> ToString[nn]],
      {nn, 1, 12}, {nm, 1, 16}];
tab[[3]]
{{a3, b3, c3, d3}, {e3, f3, g3, h3}, {i3, j3, k3, l3}, {m3, n3, o3, p3}}
Applying Dot
directly on the list is pretty slow.
Dot@@tab // AbsoluteTiming // First
2.761965
However, because of associativity I can split the multiplication in a way that Dot
is always called with a small number of arguments.
(* Repeatedly split the list into blocks whose size is its smallest prime factor
   and Dot each block, so Dot is only ever called on a few matrices at a time. *)
FDot =
  Dot @@
    Nest[
      (Dot @@@ Partition[#, Divisors[Length@#][[2]]]) &,
      #,
      Total[Last /@ FactorInteger@Length@#]] &;
FDot@tab // AbsoluteTiming // First
0.002115
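As a quick sanity check (not part of the original post), the grouped product agrees symbolically with the plain one; comparing on a short sublist keeps the expanded entries small:

Expand[FDot[tab[[;; 4]]]] === Expand[Dot @@ tab[[;; 4]]]
(* should return True *)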
This is $3$ orders of magnitude faster.
So, can anybody explain to me why Mathematica computes $$ ((a \cdot b) \cdot (c \cdot d)) \cdot ((e \cdot f) \cdot (g \cdot h)) $$ faster than $$ a \cdot b \cdot c \cdot d \cdot e \cdot f \cdot g \cdot h \quad? $$
This is a follow-up to this question.
Update
I revisited the problem and came up with a more elegant implementation of the faster Dot
function.
(* FastDot[a] and FastDot[a, b] reduce to a plain Dot; the three-or-more-argument
   rule peels off two matrices at a time and recurses on the rest. *)
FastDot[a_, b___] := a.b;
FastDot[a_, b_, rest__] := a.b.FastDot[rest];
FastDot@@tab // AbsoluteTiming // First
0.006545
Answer
For a wide variety of applications, the cost of a scalar multiplication is rarely linear in the complexity of the multiplicands. Furthermore, the complexity of a product is usually larger than the complexity of its inputs. This ranges from the simple case of multiplying two $n$-bit integers to get a $2n$-bit product, to the horrible case of multiplying two sparse polynomials with $n$ terms each and getting a polynomial with $n^2$ terms.
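To make the polynomial case concrete (a throwaway illustration, not from the original answer): two 4-term polynomials whose exponents are spread out enough that no cross terms collide multiply out to $4^2$ terms:

(* two sparse 4-term polynomials; all 4*4 cross terms have distinct exponents *)
p = 1 + x + x^10 + x^100;
q = 1 + x^2 + x^20 + x^200;
Length[MonomialList[Expand[p q]]]  (* 16 = 4^2 *)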
The product of two matrices, of course, involves lots of scalar multiplications, and correspondingly the product matrix has entries of greater complexity than the inputs. In your example, the scalar sums involved in taking a matrix product increase the complexity of the matrix entries as well.
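One way to see this on the matrices from the question (an illustrative check, not from the original post) is to watch the symbolic size of the running left-to-right product grow:

(* Symbolic size of the partial products tab[[1]], tab[[1]].tab[[2]], ...
   built strictly left to right; the size of each entry grows by roughly a
   factor of four per step, so later multiplications in the chain get much
   more expensive. *)
LeafCount /@ FoldList[Dot, tab[[1]], tab[[2 ;; 6]]]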
Since the costs aren't linear, the cost of chained multiplications can vary wildly depending on the order. A simple method that often gets nearly optimal performance is to try and balance the products: e.g. by always multiplying the two least complex terms of the list.
Grouping terms as you've done achieves this. Doing the products in order, however, is pretty much the worst case scenario.
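For illustration, here is a minimal sketch of that heuristic for the matrix case; `balancedDot` and the use of `LeafCount` as the complexity measure are my additions, not part of the original answer. Since matrix multiplication is non-commutative, only adjacent factors are merged:

(* Repeatedly multiply the adjacent pair of factors whose combined symbolic
   size (LeafCount) is smallest, until one matrix remains.  Merging only
   adjacent factors keeps the non-commutative order intact. *)
balancedDot[mats_List] := Module[{m = mats, i},
  While[Length[m] > 1,
   (* position of the adjacent pair with the smallest combined complexity *)
   i = First@Ordering[
      Table[LeafCount[m[[k]]] + LeafCount[m[[k + 1]]], {k, Length[m] - 1}],
      1];
   m = ReplacePart[Drop[m, {i + 1}], i -> m[[i]].m[[i + 1]]]];
  First[m]]

balancedDot[tab] // AbsoluteTiming // First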
Addendum: the sparse polynomial example maybe wasn't such a good choice, because in the typical case, the cost of computing the product is linear in the complexity of the output. But I think the main idea is still relevant in this case because you're doing the additions too.
Also, in the case of sparse polynomials, usually the game is to try and maintain the sparseness as long as possible, before products have so many terms they have to be considered dense. Generally there isn't much you can do to maintain sparsity, but what little you can do in a generic way amounts again to balanced products.