Gradient
In vector calculus, the gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is the greatest rate of change. A generalization of the gradient for functions on a Euclidean space which have values in another Euclidean space is the Jacobian. A further generalization, for a function from one Banach space to another, is the Fréchet derivative.
Interpretations of the gradient
Consider a room in which the temperature is given by a scalar field T, so at each point (x, y, z) the temperature is T(x, y, z) (we will assume that the temperature does not change in time). Then, at each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly. The magnitude of the gradient will determine how fast the temperature rises in that direction.
Consider a hill whose height above sea level at a point (x, y) is H(x, y). The gradient of H at a point is a vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.
The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a
dot product. Consider again the example with the hill and suppose that the steepest slope on the hill is 40%. If a road goes directly up the hill, then the steepest slope on the road will also be 40%. If, instead, the road goes around the hill at an angle to the uphill direction (the gradient vector), then it will have a shallower slope. For example, if the angle between the road and the uphill direction, projected onto the horizontal plane, is 60°, then the steepest slope along the road will be 20%, which is 40% times the cosine of 60°.
This observation can be stated mathematically as follows. If the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector. More precisely, when H is differentiable, the dot product of the gradient of H with a given unit vector is equal to the directional derivative of H in the direction of that unit vector.
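A quick numerical sketch of this relation follows, using NumPy and a hypothetical quadratic height function chosen only for illustration: the slope in a unit direction is the dot product of the gradient with that direction, and it falls off as the cosine of the angle to the uphill direction.

```python
import numpy as np

# A hypothetical hill height function, chosen only for illustration.
def H(x, y):
    return 100.0 - 0.5 * x**2 - 0.25 * y**2

def grad_H(x, y):
    # Analytic partial derivatives of H.
    return np.array([-1.0 * x, -0.5 * y])

p = np.array([2.0, 4.0])
g = grad_H(*p)

uphill = g / np.linalg.norm(g)        # unit vector in the steepest-ascent direction
steepest_slope = np.dot(g, uphill)    # equals the magnitude of the gradient

# A direction making a 60 degree angle with the uphill direction.
angle = np.deg2rad(60)
c, s = np.cos(angle), np.sin(angle)
road = np.array([[c, -s], [s, c]]) @ uphill

slope_along_road = np.dot(g, road)
print(steepest_slope, slope_along_road, steepest_slope * np.cos(angle))
# The slope along the road equals the steepest slope times cos(60°), i.e. half of it.
```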
Definition
The gradient (or gradient vector field) of a scalar function f(x) with respect to a vector variable x = (x_1, \dots, x_n) is denoted by \nabla f or \vec{\nabla} f, where \nabla (the nabla symbol) denotes the vector differential operator, del. The notation \operatorname{grad}(f) is also used for the gradient. The gradient of f is defined to be the vector field whose components are the partial derivatives of f. That is:
:\nabla f = \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right).
Here the gradient is written as a row vector, but it is often taken to be a column vector. When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only.
Expressions for the gradient in 3 dimensions
The form of the gradient depends on the coordinate system used.
In Cartesian coordinates, the above expression expands to
:\nabla f(x, y, z) = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right)
which is often written using the standard versors i, j, k as
:\frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k}
In cylindrical coordinates, the gradient is given by (Schey 1992, pp. 139–142)
:\nabla f(\rho, \theta, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{\partial f}{\partial z}\mathbf{e}_z
where \theta is the azimuthal angle, z is the axial coordinate, and \mathbf{e}_\rho, \mathbf{e}_\theta and \mathbf{e}_z are unit vectors pointing along the coordinate directions.
In spherical coordinates (Schey 1992, pp. 139–142)
:\nabla f(r, \theta, \phi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \phi}\mathbf{e}_\phi
where \phi is the azimuth angle and \theta is the zenith angle.
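As a consistency check on the cylindrical expression above, the following SymPy sketch (the test field f = x^2 y + z is a hypothetical choice, not taken from the text) verifies that it agrees with the Cartesian gradient once the unit vectors e_ρ, e_θ, e_z are written out in Cartesian components.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
rho, theta = sp.symbols('rho theta', positive=True)

# Hypothetical test field, chosen only for illustration.
f_cart = x**2 * y + z

# Cartesian gradient.
grad_cart = sp.Matrix([sp.diff(f_cart, v) for v in (x, y, z)])

# The same field written in cylindrical coordinates.
f_cyl = f_cart.subs({x: rho * sp.cos(theta), y: rho * sp.sin(theta)})

# Orthonormal cylindrical basis vectors expressed in Cartesian components.
e_rho = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])
e_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta), 0])
e_z = sp.Matrix([0, 0, 1])

grad_cyl = (sp.diff(f_cyl, rho) * e_rho
            + sp.diff(f_cyl, theta) / rho * e_theta
            + sp.diff(f_cyl, z) * e_z)

# Rewrite the Cartesian gradient in cylindrical variables and compare.
diff = grad_cart.subs({x: rho * sp.cos(theta), y: rho * sp.sin(theta)}) - grad_cyl
print(sp.simplify(diff))  # zero vector
```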
Example
For example, the gradient of the function in Cartesian coordinates
:f(x, y, z) = 2x + 3y^2 - \sin(z)
is
:\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right) = \left(2, 6y, -\cos(z)\right).
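This computation can be reproduced symbolically; a minimal sketch using SymPy (brought in purely as a convenience, not something the text relies on):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*x + 3*y**2 - sp.sin(z)

# The gradient is the vector of partial derivatives of f.
grad_f = [sp.diff(f, v) for v in (x, y, z)]
print(grad_f)  # [2, 6*y, -cos(z)]
```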
The gradient and the derivative or differential
Linear approximation to a function
The gradient of a function f from the Euclidean space \mathbb{R}^n to \mathbb{R} at any particular point x_0 in \mathbb{R}^n characterizes the best linear approximation to f at x_0. The approximation is as follows:
:f(x) \approx f(x_0) + (\nabla f)_{x_0} \cdot (x - x_0)
for x close to x_0, where (\nabla f)_{x_0} is the gradient of f computed at x_0, and the dot denotes the dot product on \mathbb{R}^n. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f at x_0.
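A short numerical illustration of this approximation, with a hypothetical test function f(x, y) = e^x sin y chosen only for this sketch: the error of the first-order formula is of second order in the displacement.

```python
import numpy as np

# Hypothetical test function and its analytic gradient, used only to
# illustrate the first-order (linear) approximation.
def f(p):
    x, y = p
    return np.exp(x) * np.sin(y)

def grad_f(p):
    x, y = p
    return np.array([np.exp(x) * np.sin(y), np.exp(x) * np.cos(y)])

x0 = np.array([0.3, 1.1])
dx = np.array([1e-3, -2e-3])

exact = f(x0 + dx)
linear = f(x0) + grad_f(x0) @ dx
print(exact - linear)  # error is O(|dx|^2), i.e. much smaller than |dx|
```

Doubling the displacement roughly quadruples the discrepancy, which is the signature of the quadratic remainder term in the Taylor expansion.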
The differential or (exterior) derivative
The best linear approximation to a function f : \mathbb{R}^n \to \mathbb{R} at a point x in \mathbb{R}^n is a linear map from \mathbb{R}^n to \mathbb{R} which is often denoted by \mathrm{d}f_x or Df(x) and called the differential or (total) derivative of f at x. The gradient is therefore related to the differential by the formula
:(\nabla f)_x \cdot v = \mathrm{d}f_x(v)
for any v \in \mathbb{R}^n. The function \mathrm{d}f, which maps x to \mathrm{d}f_x, is called the differential or exterior derivative of f and is an example of a differential 1-form.
If \mathbb{R}^n is viewed as the space of (length n) column vectors (of real numbers), then one can regard \mathrm{d}f as the row vector
:\mathrm{d}f = \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right)
so that \mathrm{d}f_x(v) is given by matrix multiplication. The gradient is then the corresponding column vector, i.e., \nabla f = (\mathrm{d}f)^T.
Gradient as a derivative
Let "U" be an
open set in R"n". If the function "f":"U" → R is differentiable, then the differential of "f" is the (Fréchet) derivative of "f". Thus ∇"f" is a function from "U" to the space R"n", R) such that:lim_{h o 0} frac{|f(x+h)-f(x) - abla f(x)cdot h{|h = 0where • is the dot product.As a consequence, the usual properties of the derivative hold for the gradient:
Linearity: The gradient is linear in the sense that if f and g are two real-valued functions differentiable at the point a \in \mathbb{R}^n, and \alpha and \beta are two constants, then \alpha f + \beta g is differentiable at a, and moreover
:\nabla\left(\alpha f + \beta g\right)(a) = \alpha \nabla f(a) + \beta \nabla g(a).
Product rule: If f and g are real-valued functions differentiable at a point a \in \mathbb{R}^n, then the product rule asserts that the product (fg)(x) = f(x)g(x) of the functions f and g is differentiable at a, and
:\nabla(fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).
Chain rule: Suppose that f : A \to \mathbb{R} is a real-valued function defined on a subset A of \mathbb{R}^n, and that f is differentiable at a point a. There are two forms of the chain rule applying to the gradient. First, suppose that the function g is a parametric curve; that is, a function g : I \to \mathbb{R}^n maps a subset I \subset \mathbb{R} into \mathbb{R}^n. If g is differentiable at a point c \in I such that g(c) = a, then
:(f \circ g)'(c) = \nabla f(a) \cdot g'(c).
More generally, if instead I \subset \mathbb{R}^k, then the following holds:
:D(f \circ g)(c) = (Dg(c))^T \nabla f(a)
where (Dg)^T denotes the transpose of the Jacobian matrix.
For the second form of the chain rule, suppose that h : I \to \mathbb{R} is a real-valued function on a subset I of \mathbb{R}, and that h is differentiable at the point c = f(a) \in I. Then
:\nabla(h \circ f)(a) = h'(c) \nabla f(a).
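The product rule and the first form of the chain rule can be checked symbolically; a minimal sketch with SymPy, using hypothetical choices of f, g and the parametric curve:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Hypothetical functions chosen only for illustration.
f = x**2 * y
g = sp.sin(x) + y

grad = lambda h: sp.Matrix([sp.diff(h, x), sp.diff(h, y)])

# Product rule: grad(f*g) = f*grad(g) + g*grad(f).
print(sp.simplify(grad(f * g) - (f * grad(g) + g * grad(f))))  # zero vector

# First form of the chain rule, along the parametric curve (cos t, t^2):
curve = sp.Matrix([sp.cos(t), t**2])
composed = f.subs({x: curve[0], y: curve[1]})
lhs = sp.diff(composed, t)                                      # (f o curve)'(t)
rhs = grad(f).subs({x: curve[0], y: curve[1]}).dot(sp.diff(curve, t))
print(sp.simplify(lhs - rhs))                                   # 0
```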
Transformation properties
Although the gradient is defined in terms of coordinates, it is contravariant under the application of an orthogonal matrix to the coordinates. This is true in the sense that if A is an orthogonal matrix, then
:\nabla(f(Ax)) = A^T(\nabla f)(Ax) = A^{-1}(\nabla f)(Ax)
which follows by the chain rule above. A vector transforming in this way is known as a contravariant vector, and so the gradient is a special type of tensor.
The differential is more natural than the gradient because it is invariant under all coordinate transformations (or diffeomorphisms), whereas the gradient is only invariant under orthogonal transformations (because of the implicit use of the dot product in its definition). Because of this, it is common to blur the distinction between the two concepts using the notion of covariant and contravariant vectors. From this point of view, the components of the gradient transform covariantly under changes of coordinates, so it is called a covariant vector field, whereas the components of a vector field in the usual sense transform contravariantly. In this language the gradient is the differential, as a covariant vector field is the same thing as a differential 1-form. Unfortunately, this confusing language is confused further by differing conventions. Although the components of a differential 1-form transform covariantly under coordinate transformations, differential 1-forms themselves transform contravariantly (by pullback) under diffeomorphisms. For this reason differential 1-forms are sometimes said to be contravariant rather than covariant, in which case vector fields are covariant rather than contravariant.
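The transformation rule can be verified numerically; a sketch with NumPy, using a hypothetical scalar field and a random orthogonal matrix:

```python
import numpy as np

# A hypothetical scalar field and its analytic gradient (illustration only).
def f(p):
    x, y, z = p
    return x**2 * y + np.sin(z)

def grad_f(p):
    x, y, z = p
    return np.array([2 * x * y, x**2, np.cos(z)])

# A random orthogonal matrix A, obtained from a QR decomposition.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))

x0 = np.array([0.4, -1.2, 0.7])

# Central-difference gradient of the transformed field h(x) = f(Ax) at x0.
eps = 1e-6
h = lambda p: f(A @ p)
num_grad = np.array([(h(x0 + eps * e) - h(x0 - eps * e)) / (2 * eps) for e in np.eye(3)])

# Compare with A^T (grad f)(A x0), which equals A^{-1} (grad f)(A x0) since A is orthogonal.
print(np.allclose(num_grad, A.T @ grad_f(A @ x0), atol=1e-5))  # True
```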
Further properties and applications
Level sets
If the partial derivatives of f are continuous, then the dot product (\nabla f)_x \cdot v of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f.
Because the gradient is orthogonal to level sets, it can be used to construct a vector normal to a surface. Consider any manifold that is one dimension less than the space it is in (e.g., a surface in 3D, a curve in 2D, etc.). Let this manifold be defined by an equation, e.g. F(x, y, z) = 0 (i.e., move everything to one side of the equation). We have now turned the manifold into a level set. To find a normal vector, we simply need to find the gradient of the function F at the desired point.
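For instance, taking the unit sphere x^2 + y^2 + z^2 = 1 as a level set (a standard illustration, not drawn from the text), the gradient of F gives a normal vector parallel to the radius vector:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Unit sphere written as a level set F(x, y, z) = 0.
F = x**2 + y**2 + z**2 - 1

# The gradient of F is normal to the level set.
normal = sp.Matrix([sp.diff(F, v) for v in (x, y, z)])
point = {x: sp.Rational(1, 3), y: sp.Rational(2, 3), z: sp.Rational(2, 3)}  # lies on the sphere
print(normal.subs(point))  # (2/3, 4/3, 4/3): parallel to the radius vector, as expected
```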
Conservative vector fields
The gradient of a function is called a gradient field. A gradient field is always a conservative vector field: line integrals through a gradient field are path-independent and can be evaluated with the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a conservative vector field in a simply connected region is always the gradient of a function.
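A small numerical sketch of this path-independence (the potential f and the two paths below are hypothetical choices): integrating \nabla f along two different curves with the same endpoints gives the same value, namely f(b) - f(a), as the gradient theorem asserts.

```python
import numpy as np

# Hypothetical potential f and its gradient field (illustration only).
f = lambda p: p[0]**2 * p[1] + np.sin(p[1])
grad_f = lambda p: np.array([2 * p[0] * p[1], p[0]**2 + np.cos(p[1])])

def line_integral(field, path, n=20001):
    # Trapezoidal approximation of the integral of field(r(t)) . r'(t) over t in [0, 1].
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(s) for s in t])
    vals = np.array([field(p) for p in pts])
    dr = np.gradient(pts, t, axis=0)
    integrand = np.einsum('ij,ij->i', vals, dr)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
straight = lambda s: a + s * (b - a)
curved = lambda s: np.array([s**2, 2 * s**3])   # a different path with the same endpoints

print(line_integral(grad_f, straight), line_integral(grad_f, curved), f(b) - f(a))
# All three numbers agree (up to discretization error): the integral is path-independent.
```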
The gradient on Riemannian manifolds
For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field \nabla f such that for any vector field X,
:g(\nabla f, X) = \partial_X f, \qquad \text{i.e.,} \quad g_x((\nabla f)_x, X_x) = (\partial_X f)(x)
where g_x(\cdot, \cdot) denotes the inner product of tangent vectors at x defined by the metric g and \partial_X f (sometimes denoted X(f)) is the function that takes any point x \in M to the directional derivative of f in the direction X, evaluated at x. In other words, in a coordinate chart \varphi from an open subset of M to an open subset of \mathbb{R}^n, (\partial_X f)(x) is given by
:\sum_{j=1}^n X^{j}(\varphi(x)) \frac{\partial}{\partial x_j}(f \circ \varphi^{-1}) \Big|_{\varphi(x)},
where X^j denotes the jth component of X in this coordinate chart.
So, the local form of the gradient takes the form
:\nabla f = g^{ik} \frac{\partial f}{\partial x^k} \frac{\partial}{\partial x^i}.
Generalizing the case M = \mathbb{R}^n, the gradient of a function is related to its exterior derivative, since
:(\partial_X f)(x) = \mathrm{d}f_x(X_x).
More precisely, the gradient \nabla f is the vector field associated to the differential 1-form \mathrm{d}f using the musical isomorphism \sharp = \sharp^g \colon T^*M \to TM (called "sharp") defined by the metric g. The relation between the exterior derivative and the gradient of a function on \mathbb{R}^n is a special case of this in which the metric is the flat metric given by the dot product.
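As a concrete sketch of the local formula \nabla f = g^{ik} \frac{\partial f}{\partial x^k} \frac{\partial}{\partial x^i}, assume, purely for illustration, the flat metric of the plane written in polar coordinates, g = diag(1, ρ²), together with a hypothetical scalar field; the inverse metric converts the partial derivatives into the components of the gradient in the coordinate basis.

```python
import sympy as sp

rho, theta = sp.symbols('rho theta', positive=True)

# Flat metric of the plane in polar coordinates: g = diag(1, rho^2).
g = sp.diag(1, rho**2)
g_inv = g.inv()

# Hypothetical scalar field, chosen only for illustration.
f = rho**2 * sp.cos(theta)

coords = (rho, theta)
# Components of the gradient in the coordinate basis: (grad f)^i = g^{ik} df/dx^k.
grad_components = sp.simplify(g_inv * sp.Matrix([sp.diff(f, c) for c in coords]))
print(grad_components)  # Matrix([[2*rho*cos(theta)], [-sin(theta)]])
```

Rescaling the second component by ρ (to pass from the coordinate vector \partial/\partial\theta to the unit vector \mathbf{e}_\theta) recovers the (1/ρ) ∂f/∂θ term of the cylindrical formula given earlier.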
See also
*Gradient descent
*Curl
*Divergence
*Laplace operator
*Electrochemical gradient
*Level set
*Musical isomorphism
*Nabla
*Sobel operator
*Grade (slope)
*Slope
*Surface gradient
References
*Schey, H. M. (1992). Div, Grad, Curl, and All That: An Informal Text on Vector Calculus. 2nd ed. W. W. Norton.