Lyapunov stability
In mathematics, the notion of Lyapunov stability occurs in the study of dynamical systems. In simple terms, if all solutions of the dynamical system that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable. More strongly, if all solutions that start out near x_e converge to x_e, then x_e is asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behaviour of different but "nearby" solutions to differential equations.
Definition for continuous-time systems
Consider an autonomous nonlinear dynamical system
\dot{x} = f(x(t)), \;\;\;\; x(0) = x_0,
where x(t) \in \mathcal{D} \subseteq \mathbb{R}^n denotes the system state vector, \mathcal{D} an open set containing the origin, and f : \mathcal{D} \rightarrow \mathbb{R}^n continuous on \mathcal{D}. Without loss of generality, we may assume that the origin is an equilibrium.
# The origin of the above system is said to be Lyapunov stable if, for every \epsilon > 0, there exists a \delta = \delta(\epsilon) > 0 such that, if \|x(0)\| < \delta, then \|x(t)\| < \epsilon for every t \geq 0.
# The origin of the above system is said to be asymptotically stable if it is Lyapunov stable and if there exists \delta > 0 such that if \|x(0)\| < \delta, then \lim_{t \rightarrow \infty} x(t) = 0.
# The origin of the above system is said to be exponentially stable if it is asymptotically stable and if there exist \alpha, \beta, \delta > 0 such that if \|x(0)\| < \delta, then \|x(t)\| \leq \alpha \|x(0)\| e^{-\beta t} for t \geq 0.
Conceptually, the meanings of the above terms are the following:
# Lyapunov stability of an equilibrium means that solutions starting "close enough" to the equilibrium (within a distance \delta from it) remain "close enough" forever (within a distance \epsilon from it). Note that this must be true for "any" \epsilon that one may want to choose.
# Asymptotic stability means that solutions that start close enough not only remain close enough but also eventually converge to the equilibrium.
# Exponential stability means that solutions not only converge, but in fact converge faster than or at least as fast as a particular known rate \alpha \|x(0)\| e^{-\beta t}.
The trajectory "x" is (locally) "attractive" if
:\|y(t) - x(t)\| \rightarrow 0
as t \rightarrow \infty for all trajectories that start close enough, and "globally attractive" if this property holds for all trajectories.
That is, if "x" belongs to the interior of its stable manifold. It is "asymptotically stable" if it is both attractive and stable. (There are counterexamples showing that attractivity does not imply asymptotic stability. Such examples are easy to create using homoclinic connections.)
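As a rough numerical illustration of these definitions (not part of the original formulation), the following sketch contrasts an undamped oscillator, whose equilibrium at the origin is Lyapunov stable but not attractive, with a damped oscillator, whose equilibrium is asymptotically stable; the systems, initial condition and horizon are illustrative choices.
 import numpy as np
 from scipy.integrate import solve_ivp
 # Undamped oscillator x'' + x = 0: trajectories near 0 stay near 0 (Lyapunov stable).
 def undamped(t, x):
     return [x[1], -x[0]]
 # Damped oscillator x'' + x' + x = 0: trajectories near 0 also converge to 0 (asymptotically stable).
 def damped(t, x):
     return [x[1], -x[0] - x[1]]
 x0 = [0.1, 0.0]                          # start "close enough" to the equilibrium
 t_eval = np.linspace(0.0, 50.0, 2000)
 for name, rhs in [("undamped", undamped), ("damped", damped)]:
     sol = solve_ivp(rhs, (0.0, 50.0), x0, t_eval=t_eval, rtol=1e-8)
     norms = np.linalg.norm(sol.y, axis=0)
     print(name, "max ||x(t)|| =", norms.max(), "final ||x(t)|| =", norms[-1])
 # Expected: both runs stay within a small bound of the origin, but only the
 # damped system's final norm tends to zero.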
Definition for iterated systems
The definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using the language of metric spaces common in more mathematical texts.
Let (X, d) be a metric space and f\colon X \to X a continuous function. A point x \in X is said to be Lyapunov stable if, for each \epsilon > 0, there is a \delta > 0 such that for all y \in X, if
:d(x, y) < \delta
holds, then
:d(f^n(x), f^n(y)) < \epsilon
for all n \in \mathbb{N}.
We say that x is asymptotically stable if it belongs to the interior of its stable set, i.e. if there is a \delta > 0 such that
:\lim_{n \to \infty} d(f^n(x), f^n(y)) = 0
whenever d(x, y) < \delta.
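As a concrete check of this definition (the map below is an assumed example, not taken from the text), one can iterate f(x) = x/2 on the real line with the usual metric; its fixed point 0 is asymptotically stable, so points starting within a suitable \delta stay within any prescribed \epsilon and their iterates converge to 0.
 # Illustrative check for the map f(x) = x/2, whose fixed point 0 is asymptotically stable.
 def f(x):
     return 0.5 * x
 x_star, epsilon, delta = 0.0, 1e-2, 1e-3
 y = x_star + delta / 2        # a point within delta of the fixed point
 stays_close = True
 for n in range(100):
     y = f(y)
     if abs(y - x_star) >= epsilon:
         stays_close = False
 print("stayed within epsilon:", stays_close)            # expect True
 print("distance after 100 iterates:", abs(y - x_star))  # expect (numerically) 0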
Lyapunov stability theorems
The general study of the stability of solutions of differential equations is known as stability theory. Lyapunov stability theorems give only sufficient conditions for stability.
Lyapunov second theorem on stability
Consider a function V(x) : \mathbb{R}^n \rightarrow \mathbb{R} such that
* V(x) \ge 0, with equality if and only if x = 0 (positive definite)
* \dot{V}(x(t)) < 0 for x \ne 0 (negative definite)
Then V(x) is called a Lyapunov function candidate and the system is asymptotically stable in the sense of Lyapunov (i.s.L.). (Note that V(0) = 0 is required; otherwise V(x) = 1/(1+\|x\|) would "prove" that \dot{x}(t) = x is locally stable. An additional condition called "properness" or "radial unboundedness" is required in order to conclude global asymptotic stability.)
It is easier to visualise this method of analysis by thinking of a physical system (e.g. a vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored, then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems the concept of energy may not be applicable.
Lyapunov's realisation was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.
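As a small worked sketch of this test (the system and candidate below are illustrative choices, not taken from this article), the two conditions can be verified symbolically for the assumed system \dot{x} = -x^3 with candidate V(x) = x^2.
 # Hedged sketch: verify the Lyapunov conditions for the assumed example
 # xdot = -x**3 with candidate V(x) = x**2 (both illustrative choices).
 import sympy as sp
 x = sp.symbols('x', real=True)
 f = -x**3                              # dynamics: xdot = f(x)
 V = x**2                               # candidate: V >= 0, with V = 0 only at x = 0
 Vdot = sp.simplify(sp.diff(V, x) * f)  # chain rule: dV/dt = (dV/dx) * xdot
 print(Vdot)                            # -2*x**4, which is negative for all x != 0
 # Both conditions hold, so the origin of this example system is asymptotically stable.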
Stability for linear state space models
A linear state space model
:\dot{\textbf{x}} = A\textbf{x}
is asymptotically stable if
:A^{T}M + MA + N = 0
has a solution where N = N^{T} > 0 and M = M^{T} > 0 (positive definite matrices). (The relevant Lyapunov function is V(x) = x^TMx.)
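A minimal numerical sketch of this test, assuming an illustrative Hurwitz matrix A and the choice N = I (neither taken from the text): solve the Lyapunov equation with SciPy and confirm that the solution M is symmetric positive definite.
 import numpy as np
 from scipy.linalg import solve_continuous_lyapunov
 A = np.array([[0.0, 1.0],
               [-2.0, -3.0]])       # illustrative matrix with eigenvalues -1 and -2
 N = np.eye(2)                      # any symmetric positive definite choice works
 # solve_continuous_lyapunov(a, q) solves a @ X + X @ a.conj().T = q, so passing
 # a = A.T and q = -N yields A^T M + M A + N = 0.
 M = solve_continuous_lyapunov(A.T, -N)
 print("M symmetric:", np.allclose(M, M.T))
 print("M positive definite:", bool(np.all(np.linalg.eigvalsh(M) > 0)))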
Stability for systems with inputs
A system with inputs (or controls) has the form
:\dot{\textbf{x}} = \textbf{f}(\textbf{x}, \textbf{u})
where the (generally time-dependent) input u(t) may be viewed as a "control", "external input", "stimulus", "disturbance", or "forcing function". The study of such systems is the subject of control theory and is applied in control engineering. For systems with inputs, one must quantify the effect of inputs on the stability of the system. The two main approaches to this analysis are BIBO stability and input-to-state stability.
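As a loose illustration of this setting (the system and input below are assumed for the sketch, not taken from the text), one can simulate the scalar system \dot{x} = -x + u(t) driven by a bounded input and observe that the state remains bounded, which is the flavour of an input-to-state stability check.
 import numpy as np
 from scipy.integrate import solve_ivp
 def u(t):
     return np.sin(t)                               # bounded input, |u(t)| <= 1
 def rhs(t, x):
     return [-x[0] + u(t)]                          # assumed system xdot = -x + u(t)
 sol = solve_ivp(rhs, (0.0, 50.0), [5.0], t_eval=np.linspace(0, 50, 2000), rtol=1e-8)
 print("max |x(t)| =", np.max(np.abs(sol.y[0])))    # stays bounded for a bounded input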
Example
Consider an equation where, compared to the Van der Pol oscillator equation, the friction term is changed:
:\ddot{y} + y - \epsilon \left( \frac{\dot{y}^{3}}{3} - \dot{y} \right) = 0
Let
:x_{1} = y, \quad x_{2} = \dot{y}
so that the corresponding system is
:\dot{x}_{1} = x_{2}
:\dot{x}_{2} = -x_{1} + \epsilon \left( \frac{x_{2}^{3}}{3} - x_{2} \right)
Let us choose as a Lyapunov function
:V = \frac{1}{2} \left( x_{1}^{2} + x_{2}^{2} \right)
which is clearly positive definite. Its derivative is
:\dot{V} = x_{1} \dot{x}_{1} + x_{2} \dot{x}_{2}
:= x_{1} x_{2} - x_{1} x_{2} + \epsilon \left( \frac{x_{2}^{4}}{3} - x_{2}^{2} \right)
:= -\epsilon \left( x_{2}^{2} - \frac{x_{2}^{4}}{3} \right)
If the parameter \epsilon is positive, stability is asymptotic for x_{2}^{2} < 3.
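A quick numerical check of this example (the value of \epsilon and the initial condition below are illustrative choices): integrate the system and verify that V is non-increasing along the trajectory, up to solver tolerance.
 import numpy as np
 from scipy.integrate import solve_ivp
 eps = 0.5                                            # illustrative positive parameter
 def rhs(t, x):
     x1, x2 = x
     return [x2, -x1 + eps * (x2**3 / 3.0 - x2)]
 sol = solve_ivp(rhs, (0.0, 40.0), [0.5, 0.5],        # initial condition with x2**2 < 3
                 t_eval=np.linspace(0, 40, 4000), rtol=1e-8)
 V = 0.5 * (sol.y[0]**2 + sol.y[1]**2)                # Lyapunov function along the trajectory
 print("V(0) =", V[0], " V(T) =", V[-1])
 print("V non-increasing:", bool(np.all(np.diff(V) <= 1e-8)))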
Barbalat's lemma and stability of time-varying systems
Assume that f is a function of time only.
* Having \dot{f}(t) \to 0 does not imply that f(t) has a limit as t \to \infty.
* Having f(t) approach a limit as t \to \infty does not imply that \dot{f}(t) \to 0.
* Having f(t) lower bounded and decreasing (\dot{f} \le 0) implies that it converges to a limit, but does not say whether or not \dot{f} \to 0 as t \to \infty.
Barbalat's lemma says: If f(t) has a finite limit as t \to \infty and if \dot{f} is uniformly continuous (or \ddot{f} is bounded), then \dot{f}(t) \to 0 as t \to \infty.
Usually, it is difficult to analyze the "asymptotic" stability of time-varying systems because it is very difficult to find Lyapunov functions with a "negative definite" derivative.
We know that, in the case of autonomous (time-invariant) systems, if \dot{V} is negative semi-definite (NSD), it is still possible to determine the asymptotic behaviour by invoking invariant-set theorems. However, this flexibility is not available for "time-varying" systems. This is where "Barbalat's lemma" comes into the picture. It says:
:If V(x,t) satisfies the following conditions:
:# V(x,t) is lower bounded
:# \dot{V}(x,t) is negative semi-definite (NSD)
:# \dot{V}(x,t) is uniformly continuous in time (satisfied if \ddot{V} is finite)
:then \dot{V}(x,t) \to 0 as t \to \infty.
The following example is taken from page 125 of Slotine and Li's book "Applied Nonlinear Control".
Consider a non-autonomous system
:\dot{e} = -e + g \cdot w(t)
:\dot{g} = -e \cdot w(t)
This is non-autonomous because the input w is a function of time. Assume that the input w(t) is bounded.
Taking V = e^2 + g^2 gives \dot{V} = -2e^2 \le 0.
This says that V(t) \le V(0) (so V satisfies the first two conditions above), and hence e and g are bounded. But it does not say anything about the convergence of e to zero. Moreover, the invariant set theorem cannot be applied, because the dynamics are non-autonomous.
Using Barbalat's lemma:
:\ddot{V} = -4e(-e + g \cdot w).
This is bounded because e, g and w are bounded. This implies \dot{V} \to 0 as t \to \infty and hence e \to 0. This proves that the error converges.
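A brief simulation of this example (the bounded input w(t) = \sin(t) and the initial conditions are illustrative choices, not from the book): the error e(t) is driven to zero while g(t) merely remains bounded, as the lemma predicts.
 import numpy as np
 from scipy.integrate import solve_ivp
 def w(t):
     return np.sin(t)                        # an illustrative bounded input
 def rhs(t, x):
     e, g = x
     return [-e + g * w(t), -e * w(t)]
 sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0],
                 t_eval=np.linspace(0, 100, 5000), rtol=1e-8)
 e, g = sol.y
 print("final |e| =", abs(e[-1]))            # expect a value close to zero
 print("max |g|   =", np.max(np.abs(g)))     # bounded, but need not converge to zero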
References
* Lyapunov, A. M., "Stability of Motion", Academic Press, New York and London, 1966.
* Jean-Jacques E. Slotine and Weiping Li, "Applied Nonlinear Control", Prentice Hall, Upper Saddle River, NJ, 1991.
See also
* http://www.mne.ksu.edu/research/laboratories/non-linear-controls-lab