Orthonormalization of non-Hermitian matrix eigenvectors


When using Orthogonalize[] one can specify which definition of "inner product" is to be used. For example, Orthogonalize[vectors,Dot] presupposes that all values are real.
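For instance, with real vectors this works as expected (a small illustration; the vectors are arbitrary):

Orthogonalize[{{1., 2.}, {3., 4.}}, Dot]

{{0.447214, 0.894427}, {0.894427, -0.447214}}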


When dealing with non-Hermitian matrices, the "inner product" (apparently) needs to be defined as $|z|^2 = z\,z$ (without complex conjugation), just like with real numbers.


Consider this matrix:


mat = {{0. + 1.002 I, -1}, {-1, -I}}


The following code does not orthonormalize the eigenvectors of my matrix, as can be seen:


vecs = Orthogonalize[N[Eigenvectors[mat]]];
vecs[[1]].vecs[[2]]

-2.90378*10^-15 + 0.999 I
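They are, however, orthonormal with respect to the default inner product, which conjugates its first argument (a quick check):

Chop[Conjugate[vecs[[1]]].vecs[[2]]]

0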

The Dot option does not work; it presents me with an error:



Orthogonalize::ornfa: The second argument Dot is not an inner product function, which always should return a number or symbol. If its two numeric arguments are the same, it should return a non-negative real number.




which is not surprising, since I have complex entries.


How can I force Orthogonalize[] to use the real-number inner product definition, without Dot?


I'm using the words "inner product", but feel free to correct me as it seems they're not appropriate. Maybe I should say "involution"?


EDIT


As there's a bounty on this question, I'm leaving it open until the last day it's effective. Thanks for the answers.


After talking with my supervisor, it turns out we'll have to "manually" orthonormalize eigenvectors that share the same eigenvalue, since the eigenvectors are otherwise already orthonormal, as in this case:


[Mathematica graphics]
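A minimal sketch of that per-eigenspace approach (the function name and the tolerance-based grouping are mine; a real implementation may need a more careful degeneracy test):

orthonormalizeDegenerate[m_, tol_: 10^-8] := Module[{vals, vecs, groups},
 {vals, vecs} = Eigensystem[N[m]];
 (* pair eigenvalues with eigenvectors and group numerically coincident eigenvalues *)
 groups = GatherBy[Transpose[{vals, vecs}], Round[First[#], tol] &];
 (* orthonormalize within each eigenspace; linear combinations there remain eigenvectors *)
 Flatten[Orthogonalize[#[[All, 2]]] & /@ groups, 1]
 ]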



Answer



The way I am interpreting this question, it has nothing to do with the vectors in question being eigenvectors of any particular matrix. If this interpretation is wrong, the question needs to be clarified.



The next point is that orthogonality requires an actual scalar product, and for a complex vector space this rules out the Dot product because Dot[{I, 0}, {I, 0}] == -1, which is obviously not positive.
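A one-line demonstration, with the Hermitian inner product shown for contrast:

Dot[{I, 0}, {I, 0}] (* -1: not positive, so Dot is not a scalar product *)
Conjugate[{I, 0}].{I, 0} (* 1: the Hermitian inner product is positive definite *)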


Therefore, the only way that it could make sense to speak of orthogonalization with respect to the Dot product for complex vectors is that you might wish for some reason to apply the orthogonalization algorithm (e.g., Gram-Schmidt) to a set of vectors with complex entries.


Doing this is completely legitimate, but it will not lead to vectors that are orthogonal with respect to the Dot product because the Dot product is not a scalar product. It just doesn't make sense to use the term "orthogonality" in this case.


Here is a function that performs the algorithm as described:


orthoDotize[vecs_] := Module[{s, a, ortho},
 (* build an array of formal symbols with the same shape as vecs *)
 s = Array[a, Dimensions[vecs]];
 (* run the orthogonalization algorithm on the formal symbols *)
 ortho = Orthogonalize[s];
 (* substitute the actual (possibly complex) entries back in *)
 ortho /. Thread[Flatten[s] -> Flatten[vecs]]
 ]


This function has the property that its output satisfies the expected Euclidean orthogonality relations when the vectors in the list vecs are real. If they are not real, then the dot product after "pseudo-orthogonalization" can have imaginary parts:


mat = {{0. + 1.002 I, -1}, {-1, -I}};
evecs = N[Eigenvectors[mat]];
ovecs = orthoDotize[evecs]


{{0.722734 + 0. I, -2.69948*10^-16 + 0.691127 I}, {3.67452*10^-15 + 0.691127 I, 0.722734 - 3.56028*10^-15 I}}



Chop[ovecs[[1]].ovecs[[2]]]



0. + 0.999001 I
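For contrast, the same vectors do satisfy the standard Hermitian orthogonality relation (a quick check using ovecs from above):

Chop[Conjugate[ovecs[[1]]].ovecs[[2]]]

0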



Edit: a possible cause of confusion


However, as I mentioned in my comment to the question (March 28), it could also be that there is a mathematical misunderstanding of a different kind here: equating orthogonality with biorthogonality.


As explained on this MathWorld page, we can define left and right eigenvectors of the matrix mat; here the two sets coincide because mat is symmetric. To get these (generally different) sets of eigenvectors, you can do


eR = Eigenvectors[mat];
eL = Eigenvectors[Transpose[mat]];

The last line follows from



$x_L M = \lambda x_L \iff M^T x_L^T = \lambda x_L^T$

i.e., the rows of Eigenvectors[Transpose[mat]] are the left (row) eigenvectors of mat.
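As a quick sanity check that the rows of eL satisfy the left eigenvector equation (assuming, as holds for this matrix, that Eigenvalues and Eigenvectors order their results consistently):

vals = Eigenvalues[mat];
Chop[eL[[1]].mat - vals[[1]] eL[[1]]]

{0, 0}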


Then the following holds:


eL.Transpose[eR] // Chop


{{0.0446879, 0}, {0, 0.0446879}}



The appearance of the diagonal matrix here means that the rows of eL (the left eigenvectors) are orthogonal to the rows of eR (the right eigenvectors) under the plain Dot product: the (i, j) entry of eL.Transpose[eR] is the dot product of the i-th left eigenvector with the j-th right eigenvector. This biorthogonality is automatically true, and there is no need to do any further orthogonalization.


Edit 2


In case it needs further clarification: for any symmetric matrix mat we have that eL == eR. The diagonal matrix above is then just eR.Transpose[eR], whose (1, 2) entry is eR[[1]].eR[[2]], and therefore eR[[1]].eR[[2]] == 0. That's why there is no need for any further orthogonalization in the example given in the question.



Edit 3


If mat is not symmetric, then its (right) eigenvectors are not orthogonal in the sense of the Dot product. Forming any kind of linear combination of those eigenvectors with the intention of orthogonalizing them will lead to new vectors which in general are no longer eigenvectors (unless the vectors in question share the same eigenvalue). So the orthogonalization idea is either trivial (for symmetric matrices) or violates the eigenvector property (for general non-symmetric matrices with a non-degenerate spectrum).
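To see the second point concretely, here is a small sketch with a made-up non-symmetric matrix:

m = {{1., 2.}, {0., 3.}}; (* non-symmetric, distinct eigenvalues *)
w = Orthogonalize[Eigenvectors[m]]; (* Gram-Schmidt mixes the two eigenvectors *)
Chop[m.w[[2]] - Eigenvalues[m][[2]] w[[2]]]

{-1.41421, -1.41421}

The nonzero result shows that w[[2]] is no longer an eigenvector of m.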

