**Tensor product**

In mathematics, the **tensor product**, denoted by $\otimes$, may be applied in different contexts to vectors, matrices, tensors, vector spaces, algebras, topological vector spaces, and modules. In each case the significance of the symbol is the same: the most general bilinear operation. In some contexts, this product is also referred to as the outer product. The term "tensor product" is also used in relation to monoidal categories.

**Example:**

:$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \otimes \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} & a_{12} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} \\ a_{21} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} & a_{22} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{bmatrix}$

Resultant rank = 4, resultant dimension = 4×4.
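The block pattern in the example above is easy to compute directly. A minimal pure-Python sketch (the function name `kron` is our own choice for illustration, not from the text):

```python
def kron(A, B):
    """Kronecker (tensor) product of two matrices given as nested lists.

    Each entry a_ij of A is replaced by the block a_ij * B, so an
    m x n matrix tensored with a p x q matrix yields an mp x nq matrix.
    """
    p, q = len(B), len(B[0])
    return [
        [A[i // p][j // q] * B[i % p][j % q]
         for j in range(len(A[0]) * q)]
        for i in range(len(A) * p)
    ]

A = [[1, 2],
     [3, 4]]
B = [[0, 5],
     [6, 7]]
C = kron(A, B)
# C is 4 x 4: its top-left 2 x 2 block is 1*B, its top-right block is 2*B, etc.
```

For the 2×2 inputs above, `C` is the 4×4 matrix of all products `a_ij * b_kl`, arranged exactly as in the displayed equation.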

Here rank denotes the tensor rank (the number of requisite indices), while dimension counts the number of degrees of freedom in the resulting array; the matrix rank is 1.

A representative case is the Kronecker product of any two rectangular arrays, considered as matrices. A dyadic product is the special case of the tensor product between two vectors of the same dimension.

**Tensor product of vector spaces**

The tensor product "V" ⊗ "W" of two vector spaces "V" and "W" over a field "K" can be defined by the method of "generators and relations".

To construct "V" ⊗ "W", one begins with the set of ordered pairs in the Cartesian product "V" × "W". For the purposes of this construction, regard this Cartesian product as a set rather than a vector space. The free vector space on "V" × "W" is defined by taking the vector space in which the elements of "V" × "W" are a basis. Symbolically,

:$F(V \times W) = \left\{ \sum_{i=1}^n \alpha_i e_{(v_i \times w_i)} \mid n \in \mathbb{N},\ \alpha_i \in K,\ (v_i \times w_i) \in V \times W \right\},$

where we have used the symbol "e"_{(v × w)} to emphasize that these are taken to be linearly independent for distinct ("v" × "w") ∈ "V" × "W".

The tensor product arises by defining the following three equivalence relations in "F"("V" × "W"):

* $e_{(v_1+v_2) \times w} \sim e_{v_1 \times w} + e_{v_2 \times w}$
* $e_{v \times (w_1+w_2)} \sim e_{v \times w_1} + e_{v \times w_2}$
* $c\,e_{v \times w} \sim e_{(cv) \times w} \sim e_{v \times (cw)}$

where "v", "v"_{i}, "w", and "w"_{i} are vectors from "V" and "W" (respectively), and "c" is from the underlying field "K". Denoting by "R" the subspace of "F"("V" × "W") generated by these three equivalence relations (that is, spanned by the differences of the related elements), the **tensor product of the two vector spaces** "V" and "W" is then the quotient space

:$V \otimes W = F(V \times W) / R$
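In coordinates, the quotient map sends a generator e_{(v × w)} to the outer-product matrix of the coordinate vectors of "v" and "w", and the three relations above become ordinary bilinearity identities. A small sketch under that identification (the helper names are ours, chosen for illustration):

```python
def tensor(v, w):
    """Image of the generator e_(v x w) in V ⊗ W, in coordinates:
    the outer product of the two coordinate vectors."""
    return [[vi * wj for wj in w] for vi in v]

def add(s, t):
    """Entrywise sum of two coefficient matrices in V ⊗ W."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(s, t)]

v1, v2, w = [1, 2], [3, -1], [4, 5, 6]
# First relation: e_((v1+v2) x w) ~ e_(v1 x w) + e_(v2 x w)
# becomes an equality after passing to the quotient.
lhs = tensor([a + b for a, b in zip(v1, v2)], w)
rhs = add(tensor(v1, w), tensor(v2, w))
assert lhs == rhs
```

The other two relations can be checked the same way by swapping the roles of the factors or scaling one of them.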

It is also called the **tensor product space** of "V" and "W" and is a vector space (which can be verified by directly checking the vector space axioms). The **tensor product of two elements** "v" and "w" is the equivalence class ("e"_{(v × w)} + "R") of "e"_{(v × w)} in "V" ⊗ "W", denoted "v" ⊗ "w". This notation can somewhat obscure the fact that tensors are always cosets: manipulations performed via the representatives ("v", "w") must always be checked not to depend on the particular choice of representative.

The space "R" is mapped to zero in "V" ⊗ "W", so that the above three equivalence relations become equalities in the tensor product space:

* $(v_1+v_2) \otimes w - (v_1 \otimes w + v_2 \otimes w) = 0$

* $v \otimes (w_1+w_2) - (v \otimes w_1 + v \otimes w_2) = 0$

* $cv \otimes w = v \otimes cw = c(v \otimes w)$

Given bases {v_{i}} and {w_{j}} for "V" and "W" respectively, the tensors {v_{i} ⊗ w_{j}} form a basis for "V" ⊗ "W" (generally ordered so that v_{i} ⊗ w_{j+1} comes before v_{i+1} ⊗ w_{j}). The dimension of the tensor product therefore is the product of the dimensions of the original spaces; for instance **R**^{m} ⊗ **R**^{n} will have dimension "mn".

Elements of "V" ⊗ "W" are sometimes referred to as **tensors**, although this term refers to many other related concepts as well. [*See* tensor *and* tensor (intrinsic definition).] An element of "V" ⊗ "W" of the form "v" ⊗ "w" is called a **pure** or **simple tensor**. In general, an element of the tensor product space is not a pure tensor, but rather a finite linear combination of pure tensors. To wit, if "v"_{1} and "v"_{2} are linearly independent, and "w"_{1} and "w"_{2} are also linearly independent, then "v"_{1} ⊗ "w"_{1} + "v"_{2} ⊗ "w"_{2} cannot be written as a pure tensor.

**Characterization by a universal property**

The tensor product is characterized by a universal property. Consider the problem of mapping the Cartesian product "V" × "W" into a vector space "X" via a bilinear map "ψ". The tensor product construction "V" ⊗ "W", together with the natural bilinear embedding map "φ" : "V" × "W" → "V" ⊗ "W" given by

:$\varphi(u, w) = u \otimes w,$

is the "universal" solution to this problem in the following sense. For any other such pair ("X", "ψ"), where "X" is a vector space and "ψ" a bilinear mapping "V" × "W" → "X", there exists a "unique" linear map

:$T : V \otimes W \rightarrow X$

such that

:$\psi = T \circ \varphi.$
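Concretely, for finite-dimensional spaces the unique linear map "T" is determined by the values of "ψ" on basis pairs: representing an element of "V" ⊗ "W" by its coefficient matrix "M" over the basis {v_{i} ⊗ w_{j}}, "T" sums M[i][j]·ψ(v_{i}, w_{j}). A sketch (the helper names `lift` and `phi` are illustrative, not from the text):

```python
def lift(psi, dim_v, dim_w):
    """Turn a scalar-valued bilinear map psi (given on basis pairs) into
    the unique linear map T on V ⊗ W, with elements of V ⊗ W represented
    as coefficient matrices over the basis {v_i ⊗ w_j}."""
    def T(M):
        return sum(M[i][j] * psi(i, j)
                   for i in range(dim_v) for j in range(dim_w))
    return T

def phi(i, j, dim_v, dim_w):
    """Coefficient matrix of the pure tensor v_i ⊗ w_j (the embedding map)."""
    return [[1 if (a, b) == (i, j) else 0 for b in range(dim_w)]
            for a in range(dim_v)]

# An example bilinear map given by its values on basis pairs.
psi = lambda i, j: (i + 1) * (j + 1)
T = lift(psi, 2, 3)
# T ∘ φ recovers ψ, exactly as the universal property requires.
assert all(T(phi(i, j, 2, 3)) == psi(i, j) for i in range(2) for j in range(3))
```

Uniqueness follows because the pure tensors v_{i} ⊗ w_{j} span "V" ⊗ "W", so "T" is forced on a spanning set.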

As with any universal property, this characterizes the tensor product uniquely up to unique isomorphism.

An immediate consequence is the identification of the space of bilinear maps from "V" × "W" to "X",

:$L^2(V \times W, X),$

with the space of linear maps

:$L(V \otimes W, X),$

obtained by sending "ψ" to "T".

**As a functor**

The tensor product also operates on linear maps between vector spaces. Specifically, given two linear maps "S" : "V" → "X" and "T" : "W" → "Y" between vector spaces, the **tensor product of the two linear maps** "S" and "T" is a linear map

:$S \otimes T : V \otimes W \rightarrow X \otimes Y$

defined by

:$(S \otimes T)(v \otimes w) = S(v) \otimes T(w).$

In this way, the tensor product becomes a bifunctor from the category of vector spaces to itself, covariant in both arguments. The Kronecker product of two matrices is the matrix of the tensor product of the two corresponding linear maps under a standard choice of bases of the tensor products (see the article on Kronecker products).
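In coordinates, the defining identity can be checked directly: applying the Kronecker product of the matrices of "S" and "T" to the flattened coordinates of "v" ⊗ "w" gives the flattened coordinates of "S"("v") ⊗ "T"("w"). A sketch with plain lists (helper names are ours):

```python
def mat_vec(M, x):
    """Matrix-vector product over nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def kron_mat(A, B):
    """Kronecker product of two matrices as nested lists."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def flatten_outer(v, w):
    """Coordinates of the pure tensor v ⊗ w in the basis {v_i ⊗ w_j}."""
    return [vi * wj for vi in v for wj in w]

S = [[1, 2], [0, 1]]          # matrix of a map V -> X
T = [[2, 0], [1, 3]]          # matrix of a map W -> Y
v, w = [1, -1], [2, 5]
lhs = mat_vec(kron_mat(S, T), flatten_outer(v, w))
rhs = flatten_outer(mat_vec(S, v), mat_vec(T, w))
assert lhs == rhs             # (S ⊗ T)(v ⊗ w) = S(v) ⊗ T(w)
```

Since pure tensors span the tensor product, agreement on them determines "S" ⊗ "T" everywhere.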

**More than two vector spaces**

The construction and the universal property of the tensor product can be extended to allow for more than two vector spaces. For example, suppose that "V"_{1}, "V"_{2}, and "V"_{3} are three vector spaces. The tensor product "V"_{1} ⊗ "V"_{2} ⊗ "V"_{3} is defined along with a trilinear mapping from the direct product

:$\varphi : V_1 \times V_2 \times V_3 \to V_1 \otimes V_2 \otimes V_3$

so that any trilinear map "F" from the direct product to a vector space "W",

:$F : V_1 \times V_2 \times V_3 \to W,$

factors uniquely as

:$F = L \circ \varphi$

where "L" is a linear map. The tensor product is uniquely characterized by this property, up to a unique isomorphism.
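The trilinear embedding φ can be made concrete in coordinates: the tensor product of three vectors has components u[i]·v[j]·w[k], and trilinearity in each slot can be verified directly. A sketch (the function name is our own):

```python
from itertools import product

def tensor3(u, v, w):
    """Components of u ⊗ v ⊗ w, stored as a dict indexed by (i, j, k)."""
    return {(i, j, k): u[i] * v[j] * w[k]
            for i, j, k in product(range(len(u)), range(len(v)), range(len(w)))}

u, v, w = [1, 2], [3, 4], [5, 6, 7]
t = tensor3(u, v, w)

# Trilinearity in the first slot: (u + u') ⊗ v ⊗ w = u ⊗ v ⊗ w + u' ⊗ v ⊗ w.
u2 = [10, -1]
s = tensor3([a + b for a, b in zip(u, u2)], v, w)
assert all(s[k] == t[k] + tensor3(u2, v, w)[k] for k in t)
```

The same componentwise picture underlies the associativity isomorphisms discussed next: grouping the factors differently never changes the array of products.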

This construction is related to repeated tensor products of two spaces. For example, if "V"_{1}, "V"_{2}, and "V"_{3} are three vector spaces, then there are (natural) isomorphisms

:$V_1 \otimes V_2 \otimes V_3 \cong V_1 \otimes (V_2 \otimes V_3) \cong (V_1 \otimes V_2) \otimes V_3.$

More generally, the tensor product of an arbitrary indexed family "V"_{"i"}, "i" ∈ "I", is defined to be universal with respect to multilinear mappings of the direct product $\prod_{i \in I} V_i$.

**Tensor powers and braiding**

Let "n" be a non-negative integer. The "n"th **tensor power** of the vector space "V" is the "n"-fold tensor product of "V" with itself. That is,

:$V^{\otimes n} \stackrel{\mathrm{def}}{=} \underbrace{V \otimes \cdots \otimes V}_{n}.$

A permutation σ of the set {1, 2, …, "n"} determines a mapping of the "n"th Cartesian power of "V",

:$\sigma : V^n \to V^n,$

defined by

:$\sigma(v_1, v_2, \dots, v_n) = (v_{\sigma 1}, v_{\sigma 2}, \dots, v_{\sigma n}).$

Let

:$\varphi : V^n \to V^{\otimes n}$

be the natural multilinear embedding of the Cartesian power of "V" into the tensor power of "V". Then, by the universal property, there is a unique isomorphism

:$\tau_\sigma : V^{\otimes n} \to V^{\otimes n}$

such that

:$\varphi \circ \sigma = \tau_\sigma \circ \varphi.$

The isomorphism τ_{σ} is called the **braiding map** associated to the permutation σ.

**Tensor product of two tensors**

A "tensor on "V"" (generally abbreviated to simply a "tensor") is an element of a vector space of the form

:$T^r_s(V) = \underbrace{V \otimes \dots \otimes V}_{r} \otimes \underbrace{V^* \otimes \dots \otimes V^*}_{s} = V^{\otimes r} \otimes (V^*)^{\otimes s}$

for non-negative integers "r" and "s". There is a general formula for the components of a (tensor) product of two (or more) tensors. For example, if "F" and "G" are two covariant tensors of rank "m" and "n" respectively (i.e. "F" ∈ "T"_{m}^{0} and "G" ∈ "T"_{n}^{0}), then the components of their tensor product are given by

:$(F \otimes G)_{i_1 i_2 \dots i_{m+n}} = F_{i_1 i_2 \dots i_m}\, G_{i_{m+1} i_{m+2} \dots i_{m+n}}.$

[*Analogous formulas also hold for contravariant tensors, as well as tensors of mixed variance. Although in many cases, such as when there is an inner product defined, the distinction is irrelevant.*] In this example, it is assumed that there is a chosen basis "B" of the vector space "V", and the basis on any tensor space "T"_{s}^{r} is tacitly assumed to be the standard one (this basis is described in the article on Kronecker products).

Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Note that in the tensor product, the factor "F" consumes the first rank("F") indices, and the factor "G" consumes the next rank("G") indices, so

:$\mathrm{rank}(F \otimes G) = \mathrm{rank}(F) + \mathrm{rank}(G).$
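The component formula can be checked directly with tensors stored as dicts keyed by index tuples (a representation we choose purely for illustration):

```python
from itertools import product

def tensor_product(F, G, m, n, dim):
    """Components of F ⊗ G for covariant tensors of ranks m and n:
    (F ⊗ G)_{i1..i(m+n)} = F_{i1..im} * G_{i(m+1)..i(m+n)}."""
    return {idx: F[idx[:m]] * G[idx[m:]]
            for idx in product(range(dim), repeat=m + n)}

dim = 2
# F: a rank-1 covariant tensor; G: a rank-2 covariant tensor.
F = {(i,): i + 1 for i in range(dim)}
G = {(i, j): 10 * i + j for i, j in product(range(dim), repeat=2)}
FG = tensor_product(F, G, 1, 2, dim)

# Each component is an ordinary product of components of the factors,
# and the index tuples have length m + n = rank(F) + rank(G).
assert FG[(1, 0, 1)] == F[(1,)] * G[(0, 1)]
assert all(len(idx) == 3 for idx in FG)
```

Here "F" consumes the first index and "G" the remaining two, matching the rank-additivity formula above.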

**Example**

Let **U** be a tensor of type (1,1) with components "U"^{α}_{β}, and let **V** be a tensor of type (1,0) with components "V"^{γ}. Then

:$U^\alpha{}_\beta\, V^\gamma = (U \otimes V)^\alpha{}_\beta{}^\gamma$

and

:$V^\mu\, U^\nu{}_\sigma = (V \otimes U)^{\mu\nu}{}_\sigma.$

The tensor product inherits all the indices of its factors.
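The index bookkeeping in this example can be mimicked with nested arrays: a type (1,1) tensor "U" with entries U[α][β] and a type (1,0) tensor "V" with entries V[γ] combine entrywise (illustrative code, not from the text):

```python
U = [[1, 2], [3, 4]]      # U^alpha_beta, a type (1,1) tensor
V = [5, 6]                # V^gamma, a type (1,0) tensor

# (U ⊗ V)^alpha_beta^gamma = U^alpha_beta * V^gamma:
# the product tensor carries all three indices of its factors.
UV = [[[U[a][b] * V[g] for g in range(2)]
       for b in range(2)]
      for a in range(2)]
assert UV[1][0][1] == U[1][0] * V[1]
```

Reordering the factors, as in the second displayed identity, simply reorders the index positions of the resulting array.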

See also: Classical treatment of tensors

**Kronecker product of two matrices**

With matrices this operation is usually called the "Kronecker product", a term used to make clear that the result has a particular block structure imposed upon it, in which each element of the first matrix is replaced by the second matrix, scaled by that element. For matrices $U$ and $V$ this is:

:$U \otimes V = \begin{bmatrix} u_{11}V & u_{12}V & \cdots \\ u_{21}V & u_{22}V & \\ \vdots & & \ddots \end{bmatrix} = \begin{bmatrix} u_{11}v_{11} & u_{11}v_{12} & \cdots & u_{12}v_{11} & u_{12}v_{12} & \cdots \\ u_{11}v_{21} & u_{11}v_{22} & & u_{12}v_{21} & u_{12}v_{22} & \\ \vdots & & \ddots & & & \\ u_{21}v_{11} & u_{21}v_{12} & & & & \\ u_{21}v_{21} & u_{21}v_{22} & & & & \\ \vdots & & & & & \end{bmatrix}.$

**Tensor product of multilinear maps**

Given multilinear maps $f(x_1, \dots, x_k)$ and $g(x_1, \dots, x_m)$, their tensor product is the multilinear function

:$(f \otimes g)(x_1, \dots, x_{k+m}) = f(x_1, \dots, x_k)\, g(x_{k+1}, \dots, x_{k+m}).$

**Tensor product of Hilbert spaces**

The tensor product of two Hilbert spaces is another Hilbert space, which is defined as described below.

**Definition**

The discussion so far has been purely algebraic. In light of the extra structure on Hilbert spaces, one would like to introduce an inner product, and therefore a topology, on the tensor product that arises naturally from those of the factors. Let "H"_{1} and "H"_{2} be two Hilbert spaces with inner products $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$, respectively. Construct the tensor product of "H"_{1} and "H"_{2} as vector spaces as explained above. We can turn this vector space tensor product into an inner product space by defining

:$\langle \phi_1 \otimes \phi_2, \psi_1 \otimes \psi_2 \rangle = \langle \phi_1, \psi_1 \rangle_1 \, \langle \phi_2, \psi_2 \rangle_2 \quad \mbox{for all } \phi_1, \psi_1 \in H_1 \mbox{ and } \phi_2, \psi_2 \in H_2$

and extending by linearity. That this inner product is the natural one is justified by the identification of scalar-valued bilinear maps on "H"_{1} × "H"_{2} with linear functionals on their vector space tensor product. Finally, take the completion under this inner product. The resulting Hilbert space is the tensor product of "H"_{1} and "H"_{2}.

**Properties**

If "H"_{1} and "H"_{2} have orthonormal bases {φ_{"k"}} and {ψ_{"l"}}, respectively, then {φ_{"k"} ⊗ ψ_{"l"}} is an orthonormal basis for "H"_{1} ⊗ "H"_{2}.

**Examples and applications**

The following examples show how tensor products arise naturally.
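In finite dimensions the defining identity ⟨φ₁ ⊗ φ₂, ψ₁ ⊗ ψ₂⟩ = ⟨φ₁, ψ₁⟩₁ ⟨φ₂, ψ₂⟩₂ can be verified by writing the pure tensors out in the product basis. A sketch with real scalars and the standard dot product, for simplicity (helper names are ours):

```python
def dot(x, y):
    """Standard real inner product."""
    return sum(a * b for a, b in zip(x, y))

def tensor_vec(x, y):
    """Coordinates of the pure tensor x ⊗ y in the product basis."""
    return [a * b for a in x for b in y]

phi1, psi1 = [1.0, 2.0], [3.0, -1.0]
phi2, psi2 = [0.5, 4.0], [2.0, 2.0]

# Inner product on the tensor product vs. the product of inner products.
lhs = dot(tensor_vec(phi1, phi2), tensor_vec(psi1, psi2))
rhs = dot(phi1, psi1) * dot(phi2, psi2)
assert abs(lhs - rhs) < 1e-12
```

In finite dimensions the space is already complete, so no completion step is needed; for infinite-dimensional factors the completion is essential, as the L² example below shows.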

Given two measure spaces "X" and "Y", with measures μ and ν respectively, one may look at L^{2}("X" × "Y"), the space of functions on "X" × "Y" that are square integrable with respect to the product measure μ × ν. If "f" is a square integrable function on "X", and "g" is a square integrable function on "Y", then we can define a function "h" on "X" × "Y" by "h"("x", "y") = "f"("x") "g"("y"). The definition of the product measure ensures that all functions of this form are square integrable, so this defines a bilinear mapping L^{2}("X") × L^{2}("Y") → L^{2}("X" × "Y"). Linear combinations of functions of the form "f"("x") "g"("y") are also in L^{2}("X" × "Y"). It turns out that the set of linear combinations is in fact dense in L^{2}("X" × "Y"), if L^{2}("X") and L^{2}("Y") are separable. This shows that L^{2}("X") ⊗ L^{2}("Y") is isomorphic to L^{2}("X" × "Y"), and it also explains why we need to take the completion in the construction of the Hilbert space tensor product.

Similarly, we can show that L^{2}("X"; "H"), denoting the space of square integrable functions "X" → "H", is isomorphic to L^{2}("X") ⊗ "H" if this space is separable. The isomorphism maps "f"("x") ⊗ φ ∈ L^{2}("X") ⊗ "H" to "f"("x")φ ∈ L^{2}("X"; "H"). We can combine this with the previous example and conclude that L^{2}("X") ⊗ L^{2}("Y") and L^{2}("X" × "Y") are both isomorphic to L^{2}("X"; L^{2}("Y")).

Tensor products of Hilbert spaces arise often in quantum mechanics. If some particle is described by the Hilbert space "H"_{1}, and another particle is described by "H"_{2}, then the system consisting of both particles is described by the tensor product of "H"_{1} and "H"_{2}. For example, the state space of a quantum harmonic oscillator is L^{2}(**R**), so the state space of two oscillators is L^{2}(**R**) ⊗ L^{2}(**R**), which is isomorphic to L^{2}(**R**^{2}). Therefore, the two-particle system is described by wave functions of the form φ("x"_{1}, "x"_{2}). A more intricate example is provided by the Fock spaces, which describe a variable number of particles.

**Relation with the dual space**

In the discussion on the universal property, replacing "X" by the underlying scalar field of "V" and "W" yields that the space $(V \otimes W)^\star$ (the dual space of $V \otimes W$, containing all linear functionals on that space) is naturally identified with the space of all bilinear functionals on $V \times W$. In other words, every bilinear functional is a functional on the tensor product, and vice versa.

Whenever $V$ and $W$ are finite dimensional, there is a natural isomorphism between $V^\star \otimes W^\star$ and $(V \otimes W)^\star$, whereas for vector spaces of arbitrary dimension we only have an inclusion $V^\star \otimes W^\star \subset (V \otimes W)^\star$. So the tensors of the linear functionals are bilinear functionals. This gives us a new way to look at the space of bilinear functionals, as a tensor product itself.

**Types of tensors, e.g., alternating**

Linear subspaces of the bilinear operators (or in general, multilinear operators) determine natural quotient spaces of the tensor space, which are frequently useful. See wedge product for the first major example. Another would be the treatment of algebraic forms as symmetric tensors.

**Over more general rings**

The notation $\otimes_R$ refers to a tensor product of modules over a ring "R".

**Tensor product for computer programmers**

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as $\circ.\times$ (for example $A \circ.\times B$ or $A \circ.\times B \circ.\times C$). In J the tensor product is the dyadic form of **\*/** (for example **a */ b** or **a */ b */ c**). Note that J's treatment also allows the representation of some tensor fields, as **a** and **b** may be functions instead of constants; the result is then a derived function, and if **a** and **b** are differentiable, then **a */ b** is differentiable.

However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, Matlab), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).

**See also**

* Outer product

* Tensor product of modules
* Tensor product of R-algebras
* Tensor product of fields
* Topological tensor product
* Tensor product of line bundles
* Tensor product of graphs
* Tensor product of quadratic forms

* Dyadic product