Theorems and definitions in linear algebra

This article collects the main theorems and definitions in linear algebra.

Vector spaces

" A vector space( or linear space) "V" over a number field² F consists of a set on which two operations (called addition and scalar multiplication, respectively) are defined so, that for each pair of elements x, y, in "V" there is a unique element x + y in "V", and for each element a in F and each element x in "V" there is a unique element ax in "V", such that the following conditions hold."

*(VS 1) "For all x, y in "V", x+y=y+x (commutativity of addition)."
*(VS 2) "For all x, y, z in "V", (x+y)+z=x+(y+z) (associativity of addition)."
*(VS 3) "There exists an element in "V" denoted by 0 such that x+0=x for each x in "V"."
*(VS 4) "For each element x in "V" there exists an element y in "V" such that x+y=0."
*(VS 5) "For each element x in "V", 1x=x."
*(VS 6) "For each pair of element a in F and each pair of elements x,y in "V", a(x+y)=ax+ay.
*(VS 7) "For each element a in F and each pair of elements x,y in "V", a(x+y)=ax+ay."
*(VS 8) "For each pair of elements a,b in F and each pair of elements x in "V", (a+b)x=ax+bx."

Vector spaces

Subspaces

Linear combinations

Systems of linear equations

Linear dependence

Linear independence

Bases

Dimension

Linear transformations and matrices

Linear transformations

Null spaces

Ranges

The matrix representation of a linear transformation

Composition of linear transformations

Matrix multiplication

Invertibility

Isomorphisms


=The change-of-coordinates matrix=

Change of coordinate matrix
Clique
Coordinate vector relative to a basis
Dimension theorem
Dominance relation
Identity matrix
Identity transformation
Incidence matrix
Inverse of a linear transformation
Inverse of a matrix
Invertible linear transformation
Isomorphic vector spaces
Isomorphism
Kronecker delta
Left-multiplication transformation
Linear operator
Linear transformation
Matrix representing a linear transformation
Nullity of a linear transformation
Null space
Ordered basis
Product of matrices
Projection on a subspace
Projection on the x-axis
Range
Rank of a linear transformation
Reflection about the x-axis
Rotation
Similar matrices
Standard ordered basis for F^n
Standard representation of a vector space with respect to a basis
Zero transformation

P.S.
coefficient of the differential equation, differentiability of complex functions, vector space of functions, differential operator, auxiliary polynomial, power of a complex number, exponential function.

=2.1 N(T) and R(T) are subspaces=

Let V and W be vector spaces and let T: V→W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.


=2.2 R(T) = span(T(β))=

Let V and W be vector spaces, and let T: V→W be linear. If β = {v_1, v_2, ..., v_n} is a basis for V, then
::::::R(T) = span(T(β)) = span({T(v_1), T(v_2), ..., T(v_n)}).

=2.3 Dimension theorem=

Let V and W be vector spaces, and let T: V→W be linear. If V is finite-dimensional, then
::::::nullity(T) + rank(T) = dim(V).
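As an illustrative check (not in the original article), the dimension theorem can be verified for T = L_A on a sample matrix A: the rank is found by exact row reduction, and the nullity must be dim(V) minus that rank.

```python
# Sketch: checking nullity(T) + rank(T) = dim(V) for T = L_A : F^4 -> F^3,
# with rank computed by row reduction over exact fractions.
from fractions import Fraction

def rank(M):
    """Row-reduce a list of rows (lists of Fractions) and count pivots."""
    rows = [r[:] for r in M]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

A = [[Fraction(v) for v in row] for row in
     [[1, 2, 3, 4],
      [2, 4, 6, 8],     # a multiple of row 1, so rank(A) = 2
      [0, 1, 1, 0]]]

n = 4                    # dim(V) for T : F^4 -> F^3
rk = rank(A)             # rank(T) = rank(A)
nullity = n - rk         # the dimension theorem gives the nullity
assert rk == 2 and nullity + rk == n
print("rank =", rk, "nullity =", nullity)
```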


=2.4 one-to-one ⇔ N(T) = {0}=

Let V and W be vector spaces, and let T: V→W be linear. Then T is one-to-one if and only if N(T)={0}.


=2.5 one-to-one ⇔ onto ⇔ rank(T) = dim(V)=

Let V and W be vector spaces of equal (finite) dimension, and let T: V→W be linear. Then the following are equivalent.
:(a) T is one-to-one.
:(b) T is onto.
:(c) rank(T) = dim(V).


=2.6 exactly one T with T(v_i) = w_i=

Let V and W be vector spaces over F, and suppose that {v_1, v_2, ..., v_n} is a basis for V. For w_1, w_2, ..., w_n in W, there exists exactly one linear transformation T: V→W such that T(v_i) = w_i for i = 1, 2, ..., n.

Corollary. Let V and W be vector spaces, and suppose that V has a finite basis {v_1, v_2, ..., v_n}. If U, T: V→W are linear and U(v_i) = T(v_i) for i = 1, 2, ..., n, then U = T.

=2.7 L(V, W) is a vector space=

Let V and W be vector spaces over a field F, and let T, U: V→W be linear.
:(a) For all a ∈ F, aT + U is linear.
:(b) Using the operations of addition and scalar multiplication in the preceding definition, the collection of all linear transformations from V to W is a vector space over F.

=2.8 linearity of the matrix representation=

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively, and let T, U: V→W be linear transformations. Then
:(a) [T+U]_β^γ = [T]_β^γ + [U]_β^γ and
:(b) [aT]_β^γ = a[T]_β^γ for all scalars a.

=2.9 composition of linear transformations is linear=

Let V, W, and Z be vector spaces over the same field F, and let T: V→W and U: W→Z be linear. Then UT: V→Z is linear.

=2.10 laws of composition=

Let V be a vector space. Let T, U_1, U_2 ∈ L(V). Then
:(a) T(U_1 + U_2) = TU_1 + TU_2 and (U_1 + U_2)T = U_1T + U_2T;
:(b) T(U_1U_2) = (TU_1)U_2;
:(c) TI = IT = T;
:(d) a(U_1U_2) = (aU_1)U_2 = U_1(aU_2) for all scalars a.


=2.11 [UT]_α^γ = [U]_β^γ [T]_α^β=

Let V, W, and Z be finite-dimensional vector spaces with ordered bases α, β, γ, respectively. Let T: V→W and U: W→Z be linear transformations. Then
::::::[UT]_α^γ = [U]_β^γ [T]_α^β.

Corollary. Let V be a finite-dimensional vector space with an ordered basis β. Let T, U ∈ L(V). Then [UT]_β = [U]_β [T]_β.

=2.12 laws of matrix arithmetic=

Let A be an m×n matrix, B and C be n×p matrices, and D and E be q×m matrices. Then
:(a) A(B+C) = AB + AC and (D+E)A = DA + EA.
:(b) a(AB) = (aA)B = A(aB) for any scalar a.
:(c) I_mA = A = AI_n.
:(d) If V is an n-dimensional vector space with an ordered basis β, then [I_V]_β = I_n.

Corollary. Let A be an m×n matrix, B_1, B_2, ..., B_k be n×p matrices, C_1, C_2, ..., C_k be q×m matrices, and a_1, a_2, ..., a_k be scalars. Then
::::::A(Σ_{i=1}^k a_iB_i) = Σ_{i=1}^k a_iAB_i
and
::::::(Σ_{i=1}^k a_iC_i)A = Σ_{i=1}^k a_iC_iA.

=2.13 columns of a matrix product=

Let A be an m×n matrix and B be an n×p matrix. For each j (1 ≤ j ≤ p) let u_j and v_j denote the jth columns of AB and B, respectively. Then
:(a) u_j = Av_j;
:(b) v_j = Be_j, where e_j is the jth standard vector of F^p.
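A quick numerical sketch (not in the original article) of both parts of the theorem, using plain Python lists as matrices:

```python
# Sketch: the j-th column of AB equals A applied to the j-th column of B,
# and the j-th column of B equals B applied to the j-th standard vector.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def col(M, j):
    return [row[j] for row in M]

A = [[1, 2], [3, 4], [5, 6]]       # 3x2
B = [[1, 0, 2], [-1, 3, 1]]        # 2x3
AB = matmul(A, B)

for j in range(3):
    assert col(AB, j) == matvec(A, col(B, j))   # (a) u_j = A v_j

e0 = [1, 0, 0]                                  # first standard vector of F^3
assert matvec(B, e0) == col(B, 0)               # (b) v_j = B e_j
print("columns of AB agree with A applied to the columns of B")
```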


=2.14 [T(u)]_γ = [T]_β^γ [u]_β=

Let V and W be finite-dimensional vector spaces having ordered bases β and γ, respectively, and let T: V→W be linear. Then, for each u ∈ V, we have
::::::[T(u)]_γ = [T]_β^γ [u]_β.

=2.15 properties of the left-multiplication transformation=

Let A be an m×n matrix with entries from F. Then the left-multiplication transformation L_A: F^n→F^m is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for F^n and F^m, respectively, then we have the following properties.
:(a) [L_A]_β^γ = A.
:(b) L_A = L_B if and only if A = B.
:(c) L_{A+B} = L_A + L_B and L_{aA} = aL_A for all a ∈ F.
:(d) If T: F^n→F^m is linear, then there exists a unique m×n matrix C such that T = L_C. In fact, C = [T]_β^γ.
:(e) If E is an n×p matrix, then L_{AE} = L_AL_E.
:(f) If m = n, then L_{I_n} = I_{F^n}.


=2.16 A(BC) = (AB)C=

Let A,B, and C be matrices such that A(BC) is defined. Then A(BC)=(AB)C; that is, matrix multiplication is associative.
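Associativity can be confirmed numerically; the following sketch (not in the original article) multiplies small random integer matrices both ways:

```python
# Sketch: numerically confirming A(BC) = (AB)C for random integer matrices
# of compatible sizes, where the products are exact.
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

random.seed(0)

def rand(m, n):
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(m)]

A, B, C = rand(2, 3), rand(3, 4), rand(4, 2)
assert matmul(A, matmul(B, C)) == matmul(matmul(A, B), C)
print("A(BC) == (AB)C")
```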

=2.17 T^{-1} is linear=

Let V and W be vector spaces, and let T: V→W be linear and invertible. Then T^{-1}: W→V is linear.


=2.18 [T^{-1}]_γ^β = ([T]_β^γ)^{-1}=

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively. Let T: V→W be linear. Then T is invertible if and only if [T]_β^γ is invertible. Furthermore, [T^{-1}]_γ^β = ([T]_β^γ)^{-1}.

Lemma. Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V)=dim(W).

Corollary 1. Let V be a finite-dimensional vector space with an ordered basis β, and let T: V→V be linear. Then T is invertible if and only if [T]_β is invertible. Furthermore, [T^{-1}]_β = ([T]_β)^{-1}.

Corollary 2. Let A be an n×n matrix. Then A is invertible if and only if L_A is invertible. Furthermore, (L_A)^{-1} = L_{A^{-1}}.


=2.19 V is isomorphic to W ⇔ dim(V) = dim(W)=

Let V and W be finite-dimensional vector spaces (over the same field). Then V is isomorphic to W if and only if dim(V) = dim(W).

Corollary. Let V be a vector space over F. Then V is isomorphic to F^n if and only if dim(V) = n.

=2.20 L(V, W) is isomorphic to M_{m×n}(F)=

Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively, and let β and γ be ordered bases for V and W, respectively. Then the function Φ: L(V, W)→M_{m×n}(F), defined by Φ(T) = [T]_β^γ for T ∈ L(V, W), is an isomorphism.

Corollary. Let V and W be finite-dimensional vector spaces of dimensions n and m, respectively. Then L(V, W) is finite-dimensional of dimension mn.

=2.21 Φ_β is an isomorphism=

For any finite-dimensional vector space V with ordered basis β, Φ_β is an isomorphism.

=2.22 the change-of-coordinates matrix Q = [I_V]_{β'}^β=

Let β and β' be two ordered bases for a finite-dimensional vector space V, and let Q = [I_V]_{β'}^β. Then
:(a) Q is invertible.
:(b) For any v ∈ V, [v]_β = Q[v]_{β'}.


=2.23 [T]_{β'} = Q^{-1} [T]_β Q=

Let T be a linear operator on a finite-dimensional vector space V, and let β and β' be two ordered bases for V. Suppose that Q is the change of coordinate matrix that changes β'-coordinates into β-coordinates. Then
::::::[T]_{β'} = Q^{-1} [T]_β Q.

Corollary. Let A ∈ M_{n×n}(F), and let γ be an ordered basis for F^n. Then [L_A]_γ = Q^{-1}AQ, where Q is the n×n matrix whose jth column is the jth vector of γ.
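A concrete 2×2 sketch (not in the original article): take T = L_A on R², β the standard basis, and β' = {(1,1), (1,0)}, and compare Q⁻¹AQ with the matrix computed directly from T on the β'-vectors.

```python
# Sketch: checking [T]_{beta'} = Q^{-1} [T]_beta Q for T = L_A on R^2,
# where Q's columns are the beta'-vectors written in standard coordinates.
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):                       # inverse of a 2x2 matrix via the adjugate
    d = F(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[F(2), F(1)], [F(0), F(3)]]   # [T]_beta in the standard basis
Q = [[F(1), F(1)], [F(1), F(0)]]   # columns: (1,1) and (1,0)
Tb = matmul(matmul(inv2(Q), A), Q)

# Directly: T(1,1) = (3,3) = 3*(1,1) and T(1,0) = (2,0) = 2*(1,0),
# so [T]_{beta'} should be diag(3, 2).
assert Tb == [[F(3), F(0)], [F(0), F(2)]]
print("[T] in the new basis:", Tb)
```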


=2.27 solutions have derivatives of all orders=

Any solution to a homogeneous linear differential equation with constant coefficients has derivatives of all orders; that is, if x is a solution to such an equation, then x^{(k)} exists for every positive integer k.


=2.28 the solution set equals N(p(D))=

The set of all solutions to a homogeneous linear differential equation with constant coefficients coincides with the null space of p(D), where p(t) is the auxiliary polynomial of the equation.

Corollary. The set of all solutions to a homogeneous linear differential equation with constant coefficients is a subspace of C^∞.

=2.29 derivative of the exponential function=

For any exponential function f(t)=e^{ct}, f'(t)=ce^{ct}.

=2.30 {e^{-a_0t}} is a basis of the solution space=

The solution space of the differential equation
::::y' + a_0y = 0
is of dimension 1 and has {e^{-a_0t}} as a basis.

Corollary. For any complex number c, the null space of the differential operator D-cI has {e^{ct}} as a basis.
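As a quick numerical sketch (not in the original article), a central finite difference confirms that f(t) = e^{-a_0t} satisfies y' + a_0y = 0; the value a_0 = 1.5 is an arbitrary sample:

```python
# Sketch: finite-difference check that f(t) = exp(-a0*t) solves y' + a0*y = 0.
# a0 = 1.5 is an arbitrary sample coefficient.
import math

a0 = 1.5
f = lambda t: math.exp(-a0 * t)
h = 1e-6                                     # step for the central difference

for t in [0.0, 0.5, 2.0]:
    fprime = (f(t + h) - f(t - h)) / (2 * h)  # approximates f'(t)
    assert abs(fprime + a0 * f(t)) < 1e-6     # y' + a0*y is (numerically) 0
print("exp(-a0*t) solves y' + a0*y = 0 at the sampled points")
```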

=2.31 e^{ct} is a solution=

Let p(t) be the auxiliary polynomial for a homogeneous linear differential equation with constant coefficients. For any complex number c, if c is a zero of p(t), then e^{ct} is a solution to the differential equation.


=2.32 dim(N(p(D))) = n=

For any differential operator p(D) of order n, the null space of p(D) is an n-dimensional subspace of C^∞.

Lemma 1. The differential operator D − cI: C^∞ → C^∞ is onto for any complex number c.

Lemma 2. Let V be a vector space, and suppose that T and U are linear operators on V such that U is onto and the null spaces of T and U are finite-dimensional. Then the null space of TU is finite-dimensional, and
:::::dim(N(TU)) = dim(N(T)) + dim(N(U)).

Corollary. The solution space of any nth-order homogeneous linear differential equation with constant coefficients is an n-dimensional subspace of C^∞.

=2.33 exponentials with distinct exponents are linearly independent=

Given n distinct complex numbers c_1, c_2,...,c_n, the set of exponential functions {e^{c_1t},e^{c_2t},...,e^{c_nt}} is linearly independent.

Corollary. For any nth-order homogeneous linear differential equation with constant coefficients, if the auxiliary polynomial has n distinct zeros c_1, c_2, ..., c_n, then {e^{c_1t},e^{c_2t},...,e^{c_nt}} is a basis for the solution space of the differential equation.

Lemma. For a given complex number c and positive integer n, suppose that (t − c)^n is the auxiliary polynomial of a homogeneous linear differential equation with constant coefficients. Then the set
:::β = {e^{ct}, te^{ct}, ..., t^{n-1}e^{ct}}
is a basis for the solution space of the equation.

=2.34 general solution of a homogeneous linear differential equation=

Given a homogeneous linear differential equation with constant coefficients and auxiliary polynomial
:::::(t − c_1)^{n_1}(t − c_2)^{n_2}···(t − c_k)^{n_k},
where n_1, n_2, ..., n_k are positive integers and c_1, c_2, ..., c_k are distinct complex numbers, the following set is a basis for the solution space of the equation:
:::{e^{c_1t}, te^{c_1t}, ..., t^{n_1-1}e^{c_1t}, ..., e^{c_kt}, te^{c_kt}, ..., t^{n_k-1}e^{c_kt}}.

Elementary matrix operations and systems of linear equations

Elementary matrix operations

Elementary matrix

Rank of a matrix

Matrix inverses

Systems of linear equations

Determinants

"If"::::::::: A = egin{pmatrix}a & b \c & d \end{pmatrix}"is a" 2×2" matrix with entries form a field F, then we define the determinant of A, denoted "det("A")" or |A|, to be the scalar ad-bc."

*Theorem 1: linear function for a single row.
*Theorem 2: nonzero determinant ⇔ invertible matrix

Theorem 1:" The function "det: M2×2("F")" → F is a linear function of each row of a "2×2" matrix when the other row is held fixed. That is, if u,v, and w are in "F²" and k is a scalar, then":::::::detegin{pmatrix}u + kv\w\end{pmatrix}=detegin{pmatrix}u\w\end{pmatrix}+ kdetegin{pmatrix}v\w\end{pmatrix}

"and"

:::::::detegin{pmatrix}w\u + kv\end{pmatrix}=detegin{pmatrix}w\u\end{pmatrix}+ kdetegin{pmatrix}w\v\end{pmatrix}

Theorem 2:"Let A in "M2×2("F")". Then thee deter minant of A is nonzero if and only if A is invertible. Moreover, if A is invertible, then" ::::::::A^{-1}=frac{1}{det(A)}egin{pmatrix}A_{22}&-A_{12}\-A_{21}&A_{11}\end{pmatrix}

Diagonalization

Characteristic polynomial of a linear operator/matrix

=5.1 diagonalizable ⇔ a basis of eigenvectors=

A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if there exists an ordered basis β for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable, β = {v_1, v_2, ..., v_n} is an ordered basis of eigenvectors of T, and D = [T]_β, then D is a diagonal matrix and D_{jj} is the eigenvalue corresponding to v_j for 1 ≤ j ≤ n.


=5.2 λ is an eigenvalue ⇔ det(A − λI_n) = 0=

Let A ∈ M_{n×n}(F). Then a scalar λ is an eigenvalue of A if and only if det(A − λI_n) = 0.

=5.3 characteristic polynomial=

Let A ∈ M_{n×n}(F).
:(a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient (−1)^n.
:(b) A has at most n distinct eigenvalues.

=5.4 v is an eigenvector for λ ⇔ v ∈ N(T − λI)=

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. A vector v ∈ V is an eigenvector of T corresponding to λ if and only if v ≠ 0 and v ∈ N(T − λI).

=5.5 eigenvectors for distinct eigenvalues are linearly independent=

Let T be a linear operator on a vector space V, and let λ_1, λ_2, ..., λ_k be distinct eigenvalues of T. If v_1, v_2, ..., v_k are eigenvectors of T such that λ_i corresponds to v_i (1 ≤ i ≤ k), then {v_1, v_2, ..., v_k} is linearly independent.

=5.6 the characteristic polynomial splits=

The characteristic polynomial of any diagonalizable linear operator splits.

=5.7 1 ≤ dim(E_λ) ≤ m=

Let T be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of T having multiplicity m. Then 1 ≤ dim(E_λ) ≤ m.


=5.8 S = S_1 ∪ S_2 ∪ ... ∪ S_k is linearly independent=

Let T be a linear operator on a vector space V, and let λ_1, λ_2, ..., λ_k be distinct eigenvalues of T. For each i = 1, 2, ..., k, let S_i be a finite linearly independent subset of the eigenspace E_{λ_i}. Then S = S_1 ∪ S_2 ∪ ... ∪ S_k is a linearly independent subset of V.

=5.9 test for diagonalizability=

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits. Let λ_1, λ_2, ..., λ_k be the distinct eigenvalues of T. Then
:(a) T is diagonalizable if and only if the multiplicity of λ_i is equal to dim(E_{λ_i}) for all i.
:(b) If T is diagonalizable and β_i is an ordered basis for E_{λ_i} for each i, then β = β_1 ∪ β_2 ∪ ... ∪ β_k is an ordered basis for V consisting of eigenvectors of T.

Test for diagonalization

Inner Product Spaces

Inner product, standard inner product on F^n, conjugate transpose, adjoint, Frobenius inner product, complex/real inner product space, norm, length, conjugate linear, orthogonal, perpendicular, unit vector, orthonormal, normalizing.

=6.1 properties of the inner product=

Let V be an inner product space. Then for x, y, z ∈ V and c ∈ F, the following statements are true.
:(a) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩.
:(b) ⟨x, cy⟩ = c̄⟨x, y⟩.
:(c) ⟨x, 0⟩ = ⟨0, x⟩ = 0.
:(d) ⟨x, x⟩ = 0 if and only if x = 0.
:(e) If ⟨x, y⟩ = ⟨x, z⟩ for all x ∈ V, then y = z.

=6.2 properties of the norm=

Let V be an inner product space over F. Then for all x, y ∈ V and c ∈ F, the following statements are true.
:(a) ‖cx‖ = |c|·‖x‖.
:(b) ‖x‖ = 0 if and only if x = 0. In any case, ‖x‖ ≥ 0.
:(c) (Cauchy–Schwarz inequality) |⟨x, y⟩| ≤ ‖x‖·‖y‖.
:(d) (Triangle inequality) ‖x + y‖ ≤ ‖x‖ + ‖y‖.
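These properties are easy to spot-check for the standard inner product on R³; the following sketch (not in the original article) uses arbitrary sample vectors:

```python
# Sketch: checking norm properties (a), (c), (d) for the standard inner
# product on R^3, with arbitrary sample vectors.
import math

def ip(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(ip(x, x))

x, y = [1.0, -2.0, 3.0], [4.0, 0.0, -1.0]
c = -2.5
assert abs(norm([c * a for a in x]) - abs(c) * norm(x)) < 1e-12      # (a)
assert abs(ip(x, y)) <= norm(x) * norm(y) + 1e-12                    # (c)
assert norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y)      # (d)
print("Cauchy-Schwarz and triangle inequalities hold for the samples")
```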

Orthonormal basis, Gram–Schmidt process, Fourier coefficients, orthogonal complement, orthogonal projection.

=6.3 span of an orthogonal subset=

Let V be an inner product space and S = {v_1, v_2, ..., v_k} be an orthogonal subset of V consisting of nonzero vectors. If y ∈ span(S), then
::::::y = Σ_{i=1}^k (⟨y, v_i⟩ / ‖v_i‖²) v_i.

=6.4 Gram–Schmidt process=

Let V be an inner product space and S = {w_1, w_2, ..., w_n} be a linearly independent subset of V. Define S' = {v_1, v_2, ..., v_n}, where v_1 = w_1 and
::::::v_k = w_k − Σ_{j=1}^{k−1} (⟨w_k, v_j⟩ / ‖v_j‖²) v_j.
Then S' is an orthogonal set of nonzero vectors such that span(S') = span(S).
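The process of Theorem 6.4 translates directly into code. This sketch (not in the original article) applies it to three linearly independent vectors in R³ and checks pairwise orthogonality:

```python
# Sketch of the Gram-Schmidt process from Theorem 6.4:
# v_k = w_k - sum_{j<k} (<w_k, v_j>/||v_j||^2) v_j.

def ip(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(S):
    out = []
    for w in S:
        v = list(w)
        for u in out:
            coef = ip(w, u) / ip(u, u)            # <w_k, v_j> / ||v_j||^2
            v = [a - coef * b for a, b in zip(v, u)]
        out.append(v)
    return out

S = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
V = gram_schmidt(S)
for i in range(len(V)):
    for j in range(i):
        assert abs(ip(V[i], V[j])) < 1e-12        # pairwise orthogonal
print("orthogonal set:", V)
```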

=6.5 orthonormal basis=

Let V be a nonzero finite-dimensional inner product space. Then V has an orthonormal basis β. Furthermore, if β = {v_1, v_2, ..., v_n} and x ∈ V, then
::::::x = Σ_{i=1}^n ⟨x, v_i⟩ v_i.

Corollary. Let V be a finite-dimensional inner product space with an orthonormal basis β = {v_1, v_2, ..., v_n}. Let T be a linear operator on V, and let A = [T]_β. Then for any i and j, A_{ij} = ⟨T(v_j), v_i⟩.

=6.6 orthogonal projection onto a subspace=

Let W be a finite-dimensional subspace of an inner product space V, and let y ∈ V. Then there exist unique vectors u ∈ W and z ∈ W⊥ such that y = u + z. Furthermore, if {v_1, v_2, ..., v_k} is an orthonormal basis for W, then
::::::u = Σ_{i=1}^k ⟨y, v_i⟩ v_i.

Corollary. In the notation of Theorem 6.6, the vector u is the unique vector in W that is "closest" to y; that is, for any x ∈ W, ‖y − x‖ ≥ ‖y − u‖, and this inequality is an equality if and only if x = u.
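A sketch (not in the original article) of the decomposition y = u + z, taking W to be the xy-plane in R³ so the orthonormal basis is obvious, and spot-checking the "closest vector" corollary:

```python
# Sketch: orthogonal projection of y onto W = span{e1, e2} (the xy-plane
# in R^3), and the closest-vector property of Theorem 6.6's corollary.
import math

def ip(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(ip(x, x))

basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # orthonormal basis for W
y = [3.0, -4.0, 5.0]

u = [0.0, 0.0, 0.0]
for v in basis:                               # u = sum <y, v_i> v_i
    c = ip(y, v)
    u = [a + c * b for a, b in zip(u, v)]

z = [a - b for a, b in zip(y, u)]             # z = y - u lies in W-perp
assert all(abs(ip(z, v)) < 1e-12 for v in basis)

# u is closer to y than other sampled vectors x in W
for x in ([1.0, 1.0, 0.0], [2.0, -5.0, 0.0]):
    assert norm([a - b for a, b in zip(y, x)]) >= norm(z)
print("projection u =", u)
```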

=6.7 properties of orthonormal sets=

Suppose that S = {v_1, v_2, ..., v_k} is an orthonormal set in an n-dimensional inner product space V. Then
:(a) S can be extended to an orthonormal basis {v_1, v_2, ..., v_k, v_{k+1}, ..., v_n} for V.
:(b) If W = span(S), then S_1 = {v_{k+1}, v_{k+2}, ..., v_n} is an orthonormal basis for W⊥ (using the preceding notation).
:(c) If W is any subspace of V, then dim(V) = dim(W) + dim(W⊥).

Least squares approximation, minimal solutions to systems of linear equations.

=6.8 linear functionals are represented by inner products=

Let V be a finite-dimensional inner product space over F, and let g: V→F be a linear transformation. Then there exists a unique vector y ∈ V such that g(x) = ⟨x, y⟩ for all x ∈ V.

=6.9 definition of the adjoint T*=

Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Then there exists a unique function T*: V→V such that ⟨T(x), y⟩ = ⟨x, T*(y)⟩ for all x, y ∈ V. Furthermore, T* is linear.


=6.10 [T*]_β = ([T]_β)*=

Let V be a finite-dimensional inner product space, and let β be an orthonormal basis for V. If T is a linear operator on V, then
:::::[T*]_β = ([T]_β)*.

=6.11 properties of the adjoint=

Let V be an inner product space, and let T and U be linear operators on V. Then
:(a) (T + U)* = T* + U*;
:(b) (cT)* = c̄T* for any c ∈ F;
:(c) (TU)* = U*T*;
:(d) T** = T;
:(e) I* = I.

Corollary. Let A and B be n×n matrices. Then
:(a) (A + B)* = A* + B*;
:(b) (cA)* = c̄A* for any c ∈ F;
:(c) (AB)* = B*A*;
:(d) A** = A;
:(e) I* = I.

=6.12 least squares approximation=

Let A ∈ M_{m×n}(F) and y ∈ F^m. Then there exists x_0 ∈ F^n such that (A*A)x_0 = A*y and ‖Ax_0 − y‖ ≤ ‖Ax − y‖ for all x ∈ F^n.

Lemma 1. let "A "∈ Mm×n("F"), x∈Fn, and y∈Fm. Then :::::langle Ax, y angle _m =langle x, A*y angle _n

Lemma 2. Let "A "∈ Mm×n("F"). Then rank("A*A")=rank("A").

Corollary (of Lemma 2). If A is an m×n matrix such that rank(A) = n, then A*A is invertible.
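A worked sketch of Theorem 6.12 (not in the original article): over R the normal equations read (AᵗA)x_0 = Aᵗy, here fitting a line y ≈ c_0 + c_1·t through three sample points with exact fractions.

```python
# Sketch: least squares via the normal equations (A^t A) x0 = A^t y,
# fitting a line c0 + c1*t to the points (0,0), (1,1), (2,3).
from fractions import Fraction as F

A = [[F(1), F(0)], [F(1), F(1)], [F(1), F(2)]]   # columns: 1 and t
y = [F(0), F(1), F(3)]

# Form A^t A (2x2) and A^t y (2-vector)
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Aty = [sum(A[k][i] * y[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x0 = [(Aty[0] * AtA[1][1] - AtA[0][1] * Aty[1]) / d,
      (AtA[0][0] * Aty[1] - Aty[0] * AtA[1][0]) / d]
print("least-squares line: y =", x0[0], "+", x0[1], "* t")
```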

=6.13 minimal solutions to systems of linear equations=

Let A ∈ M_{m×n}(F) and b ∈ F^m. Suppose that Ax = b is consistent. Then the following statements are true.
:(a) There exists exactly one minimal solution s of Ax = b, and s ∈ R(L_{A*}).
:(b) The vector s is the only solution to Ax = b that lies in R(L_{A*}); that is, if u satisfies (AA*)u = b, then s = A*u.

Canonical forms

References

* Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. Linear Algebra, 4th edition. ISBN 7040167336.
* Lang, Serge. Linear Algebra, 3rd edition, Undergraduate Texts in Mathematics. ISBN 0387964126.


Wikimedia Foundation. 2010.
