mersenneforum.org > Math Tensor Analysis books

 2006-12-07, 22:37 #1 Damian     May 2005 Argentina 2×3×31 Posts Tensor Analysis books Which book is best for starting to study tensor analysis? I have Levi-Civita's "The Absolute Differential Calculus", but it is too abstract for me. I would prefer one with more numerical examples and graphics, if possible. Thanks in advance, Damian.
 2006-12-11, 19:23 #2 ewmayer ∂2ω=0     Sep 2002 República de California 5×2,351 Posts We used this one in my graduate-level differential geometry class, but I suspect you might also find it somewhat "abstract" for your taste, even though Fomenko and Novikov are well-known relativity physicists and try to keep things "physically grounded" whenever possible. If differential geometry were easy, general relativity would be a high school subject.
2006-12-12, 08:57   #3
xilman
Bamboozled!

"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

11647₁₀ Posts

Quote:
 Originally Posted by ewmayer We used this one in my graduate-level differential geometry class, but I suspect you might also find it somewhat "abstract" for your taste, even though Fomenko and Novikov are well-known relativity physicists and try to keep things "physically grounded" whenever possible. If differential geometry were easy, general relativity would be a high school subject.
I very much like Misner, Thorne & Wheeler's Gravitation, but that text is very much more than a book on tensor calculus.

It does have a lot of pretty pictures and sound physical interpretations of objects which are often treated as very abstract mathematical constructs.

Paul

 2006-12-12, 17:03 #4 mfgoode Bronze Medalist     Jan 2004 Mumbai, India 4004₈ Posts Tensor Analysis Damian, I'm not in the same league as ewmayer or xilman, nor can I ever measure up to them, but I would advise the Schaum's Outline Series volume 'Theory and Problems of Vector Analysis and an Introduction to Tensor Analysis'. There are about 60 pages at the end devoted to tensor analysis, which I think is sufficient to move on to perhaps those recommended by our colleagues. It has 480 solved problems, so you can get a better idea of the subject. It is by Murray R. Spiegel, PhD. It is widely available in U.S. libraries, and in New York, I can assure you, as I picked up my copy on sale there for a throwaway price. My copy is collecting dust on my shelves. I do not profess to have gone through or even understood it, but I know a good book when I see one. Mally Last fiddled with by mfgoode on 2006-12-12 at 17:04
 2006-12-14, 00:16 #5 Damian     May 2005 Argentina 272₈ Posts Thanks for the replies. I'm downloading the Misner, Thorne & Wheeler Gravitation book. A newbie question: does the covariant/contravariant concept have anything to do with the transpose of a vector? I ask because I see that a_i*b^i gives the dot product (that is, the same as multiplying one vector by the transpose of the other), and it is also the same result as the contraction of tensors (the summation convention). Another question: how can I use TeX tags in these posts? Thanks in advance, Damian.
 2006-12-14, 09:42 #6 mfgoode Bronze Medalist     Jan 2004 Mumbai, India 2²·3³·19 Posts High Brow. I may be terribly wrong, Damian. I think you are jumping the gun, but it all depends on your level. A buddy of mine was studying this book on gravitation for his PhD thesis. As xilman says, it is more than just tensor calculus, and likewise what ewmayer says about differential geometry. The climb to tensors is long and tedious and requires a good foundation in modern geometry. This sounds simple, but I don't want to discourage you. The covariant curvature tensor is of fundamental importance in Einstein's general theory of relativity. Contravariance has to do with curvilinear coordinate systems. The two are related, with the latter coming before the former. All the best, Mally
2006-12-14, 15:46   #7
Xyzzy

Aug 2002

2³×1,069 Posts

Quote:
 Another question: how can I use tex tags in these posts?
http://www.mersenneforum.org/showthread.php?t=4576

 2006-12-14, 16:08 #8 Damian     May 2005 Argentina 2·3·31 Posts Thanks. What I meant was: if I have two tensors $A$ and $B$, then the tensor contraction $A_i B^i$ equals the dot product of two vectors, which itself equals the product of the row vector $A^t$ with the column vector $B$. Is this coincidental, or is there a connection between covariance/contravariance and the transpose of matrices? I guess the answer is that it is coincidental, because I can have a rank-3 tensor, and how would I define its transpose, since it is "similar" to a three-dimensional matrix? Thanks, Damian.
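Damian's observation does check out numerically. A minimal sketch in plain Python (the component values are made up for illustration), showing that the contraction $A_i B^i$ under the summation convention is exactly the same sum of products as the dot product, i.e. the 1×1 matrix product of a row vector with a column vector:

```python
# Minimal sketch (hypothetical component values): the contraction
# A_i B^i with the summation convention is the dot product, i.e. the
# 1x1 result of (row vector) times (column vector).
A = [1.0, 2.0, 3.0]  # covariant components A_i
B = [4.0, 5.0, 6.0]  # contravariant components B^i

contraction = sum(a * b for a, b in zip(A, B))  # implied sum over i
print(contraction)  # 32.0
```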
2006-12-14, 17:35   #9
xilman
Bamboozled!

"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

10110101111111₂ Posts

Quote:
 Originally Posted by Damian Thanks. What I meant was: if I have two tensors $A$ and $B$, then the tensor contraction $A_i B^i$ equals the dot product of two vectors, which itself equals the product of the row vector $A^t$ with the column vector $B$. Is this coincidental, or is there a connection between covariance/contravariance and the transpose of matrices? I guess the answer is that it is coincidental, because I can have a rank-3 tensor, and how would I define its transpose, since it is "similar" to a three-dimensional matrix? Thanks, Damian.
In the special case of the Euclidean metric (Lorentz metric in GR) and Cartesian coordinates, the process of raising indices is the same as transposition. This is because the metric tensor, g, has an especially simple form --- the Euclidean metric is just the identity matrix, and the Lorentz metric is the Euclidean metric with a single sign change in the x_0 component.

Paul
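The point about the Lorentz metric can be checked numerically. A minimal sketch in plain Python (the diag(-1, 1, 1, 1) sign convention and the component values are assumptions for illustration): raising an index with this metric only flips the sign of the x_0 component, and with the Euclidean (identity) metric it changes nothing at all, which is why in Cartesian coordinates it looks like plain transposition.

```python
# Sketch: raising an index, x^j = g^{ji} x_i, with the Lorentz metric
# diag(-1, 1, 1, 1) (sign convention assumed; this matrix is its own
# inverse) just flips the sign of the x_0 component.
g_inv = [[-1, 0, 0, 0],
         [ 0, 1, 0, 0],
         [ 0, 0, 1, 0],
         [ 0, 0, 0, 1]]
x_lower = [2.0, 3.0, 5.0, 7.0]           # covariant components x_i
x_upper = [sum(g_inv[j][i] * x_lower[i]  # contract over the index i
               for i in range(4))
           for j in range(4)]
print(x_upper)  # [-2.0, 3.0, 5.0, 7.0]
```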

2006-12-14, 19:57   #10
Damian

May 2005
Argentina

2×3×31 Posts

Quote:
 Originally Posted by xilman In the special case of the Euclidean metric (Lorentz metric in GR) and Cartesian coordinates, the process of raising indices is the same as transposition. This is because the metric tensor, g, has an especially simple form --- the Euclidean metric is just the identity matrix, and the Lorentz metric is the Euclidean metric with a single sign change in the x_0 component. Paul
OK, but take for example this tensor formula:
$A_{ij}x^iy^j$

The vector-related formula would be
$x^t A y$

because it would be inconsistent to write
$A x^t y^t$.

Why does that happen? (Why do I have to transpose only one vector, and put it before the matrix?)

2006-12-14, 20:19   #11
ewmayer
∂2ω=0

Sep 2002
República de California

5·2,351 Posts

Quote:
 Originally Posted by Damian OK, but take for example this tensor formula $A_{ij}x^iy^j$ The vector-related formula would be $x^t A y$ because it would be inconsistent to write $A x^t y^t$. Why does that happen? (Why do I have to transpose only one vector, and put it before the matrix?)
The difference is that matrix-vector multiplication has conventions about how to loop over the rows and columns. To get a scalar result from the vector-vector product of two length-n vectors x and y (which could give either an n×n result or a 1×1, i.e. a scalar, depending on the order of the operands), those conventions make it necessary to have the row vector on the left of the product and the column vector on the right. If one's convention is that vectors without transpose superscripts denote column vectors, that means x^t y gives a scalar. Similarly, for a 3-way product of x, y, and an n×n matrix A to yield a scalar, one must order things as (row vector)·A·(column vector). In your example, x^t A y is the only way for a matrix-vector product of A, x^t (a row vector), and y (a column vector) to be both well-defined and yield a scalar result. Note that even though matrix multiplication does not commute in general, it *is* associative: you could first calculate (x^t A) and right-multiply the resulting row vector by y, or first calculate (A y) and then left-multiply the resulting column vector by x^t; in either case the result is the same scalar.

The tensor index notation replaces this row/column-based convention with a different one, based on implied summation over a repeated index. This leads to a less visually intuitive procedure than the above, but again it is unambiguous and (at least for vectors and matrices) completely equivalent to conventional matrix multiplication. In A_{ij}x^iy^j, the fact that you can do the index sum over either i (equivalent to x^t A) or j (== A y) first simply reflects the associativity of matrix multiplication.
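The three equivalent orders of evaluation described above can be sketched in plain Python (the matrix and vector entries are made up): the direct double sum A_{ij} x^i y^j, the grouping (x^t A) y, and the grouping x^t (A y) all produce the same scalar.

```python
# Sketch: A_{ij} x^i y^j computed three ways; the agreement of the
# two groupings reflects the associativity of matrix multiplication.
A = [[1, 2], [3, 4]]   # hypothetical 2x2 matrix A_{ij}
x = [5, 6]             # components x^i
y = [7, 8]             # components y^j

# Direct double sum over both indices.
s_direct = sum(A[i][j] * x[i] * y[j] for i in range(2) for j in range(2))
# Sum over i first: the row vector x^t A, then contract with y.
xtA = [sum(x[i] * A[i][j] for i in range(2)) for j in range(2)]
s_left = sum(xtA[j] * y[j] for j in range(2))
# Sum over j first: the column vector A y, then contract with x^t.
Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
s_right = sum(x[i] * Ay[i] for i in range(2))

print(s_direct, s_left, s_right)  # 433 433 433
```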
