Suppose I have an $m\times n$ matrix $A$ (real for simplicity). Then SingularValueDecomposition[A]
yields 3 matrices $U$, $\Sigma$ and $V$ such that
$A = U\Sigma V^\top = u_1 \sigma_1 v_1^\top + u_2 \sigma_2 v_2^\top + \cdots$,
where $U = [u_1\;\; u_2 \ldots]$ (each vector treated as a column matrix).
Successive approximations of $A$ are given by partial sums of the rank-1 matrices $u_1 \sigma_1 v_1^\top$, $u_2 \sigma_2 v_2^\top$, etc. I wanted to make a function to compute these but found myself doing all sorts of messy manipulations due to Mathematica's matrix structure (and/or my lack of knowledge of how to use it).
Here is my tentative function. It takes as input any matrix A and an optional argument n saying how many rank-1 matrices should be summed. This function is not meant to be numerically fast or anything; I might use it as an educational tool to examine different approximations of A, and so on. The point is the Sum in the function below, where I would like to know: is there a more efficient (or perhaps cleaner) way of doing the matrix multiplications?
RankOneApprox[A_, n_: 1] := Block[{U, Sig, V},
  {U, Sig, V} = SingularValueDecomposition[A];
  Sum[Sig[[i, i]] ({U[[All, i]]}\[Transpose].{V[[All, i]]}), {i, 1, n}]];
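For concreteness, the expansion itself can be checked directly from the SVD factors; the following snippet uses made-up random data and is only an illustration of the identity above:
(* Illustration with random data: summing sigma_i u_i v_i^T over all i reproduces A up to rounding. *)
A = RandomReal[{-1, 1}, {3, 4}];
{U, Sig, V} = SingularValueDecomposition[A];
Chop[A - Sum[Sig[[i, i]] Outer[Times, U[[All, i]], V[[All, i]]], {i, Min[Dimensions[A]]}]]  (* zero matrix *)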
Answer
How about the following:
RankOneApprox[A_, n_: 1] := Module[
  {U, Sig, V},
  {U, Sig, V} = SingularValueDecomposition[A];
  (* keep only the first n singular values, padding the diagonal with zeros back to full size *)
  U.DiagonalMatrix[Diagonal[Sig[[1 ;; n]]], 0, Length[A]].ConjugateTranspose[V]
]
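As a quick sanity check with made-up data (not from the original answer), the 2-norm error of the rank-n truncation should equal the (n+1)-th singular value, per the Eckart-Young theorem:
(* Illustrative check only: the error of the best rank-2 approximation in the 2-norm
   equals the third singular value. *)
A = RandomReal[{-1, 1}, {4, 4}];
Chop[Norm[A - RankOneApprox[A, 2]] - SingularValueList[A][[3]]]  (* 0 *)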
Here I take the first n diagonal elements of Sig. That's equivalent to summing the projectors you wrote out explicitly. With these n singular values, I then form a new matrix that has zeros everywhere else, and insert that into the definition of the SVD. When n equals the dimension of A, you get the original SVD back.
This assumes A to be a square matrix; if desired, it would be pretty easy to extend to rectangular matrices. Here I just want to illustrate how one uses matrix manipulations to avoid writing explicit sums. Also, your original function has the Transpose in a syntactically incorrect spot, and I've fixed that in the above version.
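For illustration only, here is one way such a rectangular-matrix extension might look. This is a sketch, not part of the original answer, and RankOneApproxRect is just a name chosen here:
(* Sketch of a rectangular-matrix variant: keep the leading k x k block of Sig,
   pad it with zeros back to Sig's rectangular shape, and recombine the factors.
   Assumes k <= Min[Dimensions[A]]. *)
RankOneApproxRect[A_, k_: 1] := Module[{U, Sig, V},
  {U, Sig, V} = SingularValueDecomposition[A];
  U.PadRight[Sig[[;; k, ;; k]], Dimensions[Sig]].ConjugateTranspose[V]
]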
Edit
As mentioned by R.M in the comment, there is another way of calling SingularValueDecomposition that already weeds out the smallest singular values for you: SingularValueDecomposition[A, n].
However, this returns condensed versions of all the matrices; see also the documentation, where an example is given (under Applications) that does something very similar to this question. One has to be a little careful about the dimensionality of the factors in the SVD formula, because the unitary matrices in the condensed form are no longer represented by square matrices (in particular, the syntax Dot @@ SingularValueDecomposition in the comment won't work because the conjugate transposition was omitted). Fortunately, to construct the corresponding matrix approximation, we can keep the same syntax as in the function above:
RankOneApprox[A_, n_: 1] := Module[
{U, Sig, V},
{U, Sig, V} = SingularValueDecomposition[A, n];
U.Sig.ConjugateTranspose[V]
]
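As a quick illustration with made-up data (not from the original answer), the condensed factors are rectangular, yet the product still has the full dimensions of A and rank n:
(* Illustrative check: with the condensed SVD the factors are no longer square,
   but U.Sig.ConjugateTranspose[V] still produces a full-size rank-n matrix. *)
A = RandomReal[{-1, 1}, {5, 4}];
Dimensions /@ SingularValueDecomposition[A, 2]    (* rectangular U and V, 2 x 2 Sig *)
{Dimensions[#], MatrixRank[#]} &[RankOneApprox[A, 2]]  (* {{5, 4}, 2} *)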