I am trying to work with vector notation without defining vector components explicitly.
$Assumptions = (x | y | z) \[Element] Vectors[3]
The vectors x, y, and z are unit vectors and mutually orthogonal. To set this up I'm using the UpSet command:
x.x^=1; x.y^=0; x.z^=0;
y.x^=0; y.y^=1; y.z^=0;
z.x^=0; z.y^=0; z.z^=1;
Most of the needed vector operations work correctly in this case, for example:
TensorProduct[x, y].(2 y + z) // TensorExpand
evaluates correctly to 2 x. However, (x, y, z) is not a "complete" coordinate system without defining "handedness", so the cross product doesn't work.
Cross[x, y]
should evaluate to z, Cross[z, y] to -x, and so on for a right-handed coordinate system. How do I implement this?
I am using Mathematica 9, but I can upgrade to 10 if needed.
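A brute-force workaround would be to UpSet the cross products by hand as well (a sketch only; it doesn't handle cross products of general linear combinations, which is why I'm looking for a better way):

```mathematica
(* right-handed basis: manual UpValues attached to x, y, z *)
Cross[x, y] ^= z;  Cross[y, x] ^= -z;
Cross[y, z] ^= x;  Cross[z, y] ^= -x;
Cross[z, x] ^= y;  Cross[x, z] ^= -y;
Cross[x, x] ^= 0;  Cross[y, y] ^= 0;  Cross[z, z] ^= 0;
```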
edit:
In response to one of the comments: I'm working with vector fields, say s and p, defined in terms of x, y, z. I need to calculate matrix elements such as s.T.p, where T is an operator, for example e.z with e the Levi-Civita tensor. For e.z I can simply define the operator as T = -TensorProduct[x, y] + TensorProduct[y, x], but for more complex expressions this gets complicated.
Answer
You can combine the best of both worlds: symbolic tensors and vectors on one hand, and explicit vectors on the other. Explicit vectors are necessary for most vector algebra operations, unless you want to rely heavily on UpValues defined for all those operations and all the symbols you're using. It's cleaner to let Mathematica's matrix algebra take over whenever symbolic simplifications don't get anywhere.
So here is what I'd suggest:
First, keep the $Assumptions that you defined, so that TensorExpand can simplify whenever possible.
Then I define just two functions that take care of all the rest: tensorExpand (with lower-case spelling) is an extension of TensorExpand that post-processes the result by temporarily replacing x, y, and z by their canonical unit-vector counterparts. This allows things like Cross and Dot to work without any UpSet definitions. When that's complete, you have a simplified expression that contains 3D vectors, matrices, and potentially higher-rank tensors. Whatever those may be, I can always convert each of them to SparseArray objects and extract their ArrayRules. From those, it's easy to read off how each can be rewritten as a linear combination of tensor products of the basis vectors. This is done in the function toSymbols.
Clear[x, y, z];
$Assumptions = (x | y | z) ∈ Vectors[3];

tensorExpand[expr_] :=
 Simplify[
    TensorExpand[expr] /. Thread[{x, y, z} -> IdentityMatrix[3]]] /.
   l_List :> SparseArray[l] /. s_SparseArray :> toSymbols[ArrayRules@s]

toSymbols[ruleList_] :=
 Total[ruleList /.
   HoldPattern[Rule[a_, b_]] :>
    Times[Apply[TensorProduct, a /. Thread[{1, 2, 3} -> {x, y, z}]], b]]
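To see how toSymbols reads off the basis decomposition, note what ArrayRules returns for an explicit matrix: each position list maps directly to a tensor product of basis vectors via the rule {1, 2, 3} -> {x, y, z}, while the final catch-all rule contributes 0. For example:

```mathematica
ArrayRules[SparseArray[{{0, 2, 0}, {0, 0, 0}, {0, 0, 1}}]]
(* ==> {{1, 2} -> 2, {3, 3} -> 1, {_, _} -> 0} *)
```

so toSymbols would turn this matrix into 2 x⊗y + z⊗z.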
Here are some tests:
tensorExpand[TensorProduct[x, y]]
(* ==> x⊗y *)
tensorExpand[TensorProduct[x, y].(2 y + z)]
(* ==> 2 x *)
tensorExpand[Cross[x, y]]
(* ==> z *)
tensorExpand[Cross[x, y].(2 y + z)]
(* ==> 1 *)
tensorExpand[Cross[x, {a, b, c}]]
(* ==> -c y + b z *)
As you can see, the use of component vectors in the intermediate calculations is completely invisible in the end results. You always get back an expression that is written in terms of the symbolic basis vectors, so you can do further manipulations on them if needed.
Edit: adding LeviCivitaTensor etc.
The handling of LeviCivitaTensor requested in the question also works without any problems:
ϵ = LeviCivitaTensor[3];
tensorExpand[ϵ]
(*
==> x⊗y⊗z - x⊗z⊗y - y⊗x⊗z + y⊗z⊗x + z⊗x⊗y - z⊗y⊗x
*)
tensorExpand[ϵ.z]
(* ==> x⊗y - y⊗x *)
By the way, this tells me that you had a sign error in your version of T = ϵ.z.
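This also means matrix elements of the kind mentioned in the question's edit can be computed directly. A sketch (the fields s, p and the scalar coefficients a, b, c, d here are hypothetical examples, not from the question):

```mathematica
s = a x + b y;  (* example vector fields in the symbolic basis *)
p = c y + d z;
tensorExpand[s.(ϵ.z).p]
```

Given the ϵ.z result above, this should reduce to the scalar a c.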
When mixing symbolic vectors and explicit arrays, one has to watch out for cases where the symbols get pulled into the array when you don't use the Dot multiplication. But I don't think this issue arises in your question.
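To illustrate that pitfall with a minimal example: multiplying a symbolic vector by an explicit list with Times (rather than Dot) threads the symbol into the list elementwise, producing a list that tensorExpand would then misinterpret as explicit components:

```mathematica
x {1, 0, 0}
(* ==> {x, 0, 0} *)
```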