Method of undetermined coefficients

In mathematics, the method of undetermined coefficients, also known as the lucky guess method, is an approach to finding a particular solution to certain inhomogeneous ordinary differential equations and recurrence relations. It is closely related to the annihilator method, but instead of using a particular kind of differential operator (the annihilator) to find the best possible form of the particular solution, a "guess" is made as to the appropriate form, which is then tested by differentiating the resulting equation. For complex equations, the annihilator method or variation of parameters is less time-consuming to perform.

Undetermined coefficients is not as general a method as variation of parameters, since it only works for differential equations that follow certain forms.[1]

Description of the method

Consider a linear non-homogeneous ordinary differential equation of the form

a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = g(x).

The method consists of finding the general homogeneous solution yc for the complementary linear homogeneous differential equation

a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0,

and a particular integral yp of the linear non-homogeneous ordinary differential equation based on g(x). Then the general solution y to the linear non-homogeneous ordinary differential equation would be

y = yc + yp.[2]

If g(x) consists of a sum of two functions h(x) + w(x), let yp1 be the particular solution based on h(x) and yp2 the particular solution based on w(x). Then, by the superposition principle, the particular integral yp is

yp = yp1 + yp2.[2]
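This superposition can be checked symbolically. The sketch below uses SymPy (assuming it is available) with a hypothetical equation y' + y = e^x + x that is not taken from the text; the two particular solutions follow from the table of guesses later in this article:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example (not from the text): y' + y = e^x + x.
# Solve for each forcing term separately using the usual guesses:
y_p1 = sp.exp(x) / 2   # particular solution of y' + y = e^x  (guess C*e^x)
y_p2 = x - 1           # particular solution of y' + y = x    (guess A*x + B)

# By superposition, the sum should solve y' + y = e^x + x.
y_p = y_p1 + y_p2
residual = sp.simplify(sp.diff(y_p, x) + y_p - (sp.exp(x) + x))
print(residual)  # 0
```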

Typical forms of the particular integral

In order to find the particular integral, we need to 'guess' its form, with some coefficients left as unknowns to be solved for. The guess mirrors the form of g(x), since repeated differentiation of such functions reproduces terms of the same family. Below is a table of some typical functions and the solution to guess for them.

Function of x → Form for y
k e^{a x} → C e^{a x}
k x^n (n = 0, 1, 2, …) → K_n x^n + K_{n−1} x^{n−1} + ⋯ + K_1 x + K_0
k cos(a x) or k sin(a x) → K cos(a x) + M sin(a x)
k e^{a x} cos(b x) or k e^{a x} sin(b x) → e^{a x} (K cos(b x) + M sin(b x))
(Σ_{i=1}^n k_i x^i) e^{a x} cos(b x) or (Σ_{i=1}^n k_i x^i) e^{a x} sin(b x) → e^{a x} ((Σ_{i=1}^n Q_i x^i) cos(b x) + (Σ_{i=1}^n R_i x^i) sin(b x))

If a term in the above particular integral for y appears in the homogeneous solution, it is necessary to multiply by a sufficiently large power of x in order to make the two solutions linearly independent. If the function of x is a sum of terms in the above table, the particular integral can be guessed using a sum of the corresponding terms for y.[1]
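The mechanics of the method — substitute the guessed form into the left-hand side, equate to g(x), and solve a linear system for the unknown coefficients — can be automated symbolically. The sketch below uses SymPy (assuming it is available) on a hypothetical equation y'' + 3y' + 2y = e^{3x}, which is not from the text:

```python
import sympy as sp

x, C = sp.symbols('x C')

# Hypothetical equation (not from the text): y'' + 3y' + 2y = exp(3x).
# The forcing term k*e^{ax} suggests the guess C*e^{ax} from the table.
y_p = C * sp.exp(3 * x)

# Substitute the guess into the left-hand side of the ODE.
lhs = sp.diff(y_p, x, 2) + 3 * sp.diff(y_p, x) + 2 * y_p

# Equate to the right-hand side and solve for the undetermined coefficient.
sol = sp.solve(sp.Eq(lhs, sp.exp(3 * x)), C)
print(sol)  # [1/20]
```

Note that e^{3x} is not a root of the characteristic polynomial λ² + 3λ + 2 here, so no extra power of x is needed.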

Examples

Example 1

Find a particular integral of the equation

y'' + y = t \cos{t}.

The right side t cos t has the form

P_n(t) e^{\alpha t} \cos{\beta t}

where P_n(t) is a polynomial of degree n, with n = 1, α = 0, and β = 1.

Since α + iβ = i is a simple root of the characteristic equation

\lambda^2 + 1 = 0

we should try a particular integral of the form


\begin{align}
y_p &= t [F_1 (t) e^{\alpha t} \cos{\beta t} + G_1 (t) e^{\alpha t} \sin{\beta t}] \\
&= t [F_1 (t) \cos{t} + G_1 (t) \sin{t}]\\
&= t [(A_0 t + A_1) \cos{t} + (B_0 t + B_1) \sin{t}] \\
&= (A_0 t^2 + A_1 t) \cos{t} + (B_0 t^2 + B_1 t) \sin{t} .\\
\end{align}

Substituting yp into the differential equation, we have the identity


\begin{align}t \cos{t} &= y_p'' + y_p \\
&= [(A_0 t^2 + A_1 t) \cos{t} + (B_0 t^2 + B_1 t) \sin{t}]'' \\
&\quad + [(A_0 t^2 + A_1 t) \cos{t} + (B_0 t^2 + B_1 t) \sin{t}] \\
&= [2A_0 \cos{t} + 2(2A_0 t + A_1)(- \sin{t}) + (A_0 t^2 + A_1 t)(- \cos{t})] \\
&\quad +[2B_0 \sin{t} + 2(2B_0 t + B_1) \cos{t} + (B_0 t^2 + B_1 t)(- \sin{t})] \\
&\quad +[(A_0 t^2 + A_1 t) \cos{t} + (B_0 t^2 + B_1 t) \sin{t}] \\
&= [4B_0 t + (2A_0 + 2B_1)] \cos{t} + [-4A_0 t + (-2A_1 + 2B_0)] \sin{t}. \\
\end{align}

Comparing both sides, we have


\begin{array}{rrrrl}
&&4B_0&&=1\\
2A_0 &&& + 2B_1 &= 0 \\
-4A_0 &&&& = 0 \\
&-2A_1 &+ 2B_0 && = 0 \\
\end{array}

which has the solution A0 = 0, A1 = 1/4, B0 = 1/4, B1 = 0. We then have a particular integral

y_p = \frac {1} {4} t \cos{t} + \frac {1} {4} t^2 \sin{t}.
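The particular integral can be verified symbolically; a quick SymPy check (assuming SymPy is available) confirms that it satisfies the original equation:

```python
import sympy as sp

t = sp.symbols('t')

# Particular integral found above for y'' + y = t*cos(t).
y_p = sp.Rational(1, 4) * t * sp.cos(t) + sp.Rational(1, 4) * t**2 * sp.sin(t)

# The residual y_p'' + y_p - t*cos(t) should simplify to zero.
residual = sp.simplify(sp.diff(y_p, t, 2) + y_p - t * sp.cos(t))
print(residual)  # 0
```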
Example 2

Consider the following linear inhomogeneous differential equation:

\frac{dy}{dx} = y + e^x.

This is like the first example above, except that the inhomogeneous part (e^x) is not linearly independent of the general solution of the homogeneous part (c1e^x); as a result, we have to multiply our guess by a sufficiently large power of x to make it linearly independent.

Here our guess becomes:

yp = Axex.

By substituting this function and its derivative into the differential equation, one can solve for A:

\frac{d}{dx} \left( A x e^x \right) = A x e^x + e^x
A x e^x + A e^x = A x e^x + e^x
A = 1.

The general solution to this differential equation is thus:

y = c_1 e^x + x e^x.
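Again, the result can be checked symbolically with SymPy (assuming it is available), treating the arbitrary constant c1 as a symbol:

```python
import sympy as sp

x, c1 = sp.symbols('x c1')

# General solution found above for dy/dx = y + exp(x).
y = c1 * sp.exp(x) + x * sp.exp(x)

# The residual y' - y - exp(x) should simplify to zero for any c1.
residual = sp.simplify(sp.diff(y, x) - y - sp.exp(x))
print(residual)  # 0
```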
Example 3

Find the general solution of the equation:

\frac{dy}{dt} = t^2 - y

The forcing term f(t) = t^2 is a polynomial of degree 2, so we look for a particular solution of the same form,

y_p = A t^2 + B t + C, where
\frac{d y_p}{dt} = 2 A t + B

Plugging this particular integral, with constants A, B, and C, into the original equation yields

2 A t + B = t^2 - (A t^2 + B t + C).

Matching coefficients of like powers of t gives

t^2: 0 = 1 - A, so A = 1
t^1: 2A = -B, so B = -2
t^0: B = -C, so C = 2.

Substituting these constants,

y_p = t^2 - 2t + 2

To solve for the general solution,

y = yp + yc

where y_c is the homogeneous solution y_c = c_1 e^{-t}; therefore, the general solution is:

y = t^2 - 2t + 2 + c_1 e^{-t}
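As a final check, the general solution (with homogeneous part c_1 e^{-t}) can be verified against the original equation with SymPy (assuming it is available):

```python
import sympy as sp

t, c1 = sp.symbols('t c1')

# General solution found above for dy/dt = t**2 - y.
y = t**2 - 2*t + 2 + c1 * sp.exp(-t)

# The residual y' - (t**2 - y) should simplify to zero for any c1.
residual = sp.simplify(sp.diff(y, t) - (t**2 - y))
print(residual)  # 0
```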

References

  1. ^ a b Ralph P. Grimaldi (2000). "Nonhomogeneous Recurrence Relations". Section 3.3.3 of Handbook of Discrete and Combinatorial Mathematics. Kenneth H. Rosen, ed. CRC Press. ISBN 0-8493-0149-1.
  2. ^ a b Dennis G. Zill (2001). A first course in differential equations - The classic 5th edition. Brooks/Cole. ISBN 0-534-37388-7.
