Secant method

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function "f".

The method

The secant method is defined by the recurrence relation

: x_{n+1} = x_n - \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\, f(x_n).

As can be seen from the recurrence relation, the secant method requires two initial values, x_0 and x_1, which should ideally be chosen to lie close to the root.

Derivation of the method

Given x_{n−1} and x_n, we construct the line through the points (x_{n−1}, f(x_{n−1})) and (x_n, f(x_n)). Note that this line is a secant, or chord, of the graph of the function f. In point-slope form, it can be written as

: y - f(x_n) = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\, (x - x_n).

We now choose x_{n+1} to be the root of this line, so x_{n+1} is chosen such that

: f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\, (x_{n+1} - x_n) = 0.

Solving this equation gives the recurrence relation for the secant method.
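Explicitly, setting the secant line's value to zero and solving for x_{n+1} recovers the recurrence:

```latex
f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\,(x_{n+1} - x_n) = 0
\quad\Longrightarrow\quad
x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}.
```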

Convergence

The iterates x_n of the secant method converge to a root of f if the initial values x_0 and x_1 are sufficiently close to the root. The order of convergence is α, where

: \alpha = \frac{1 + \sqrt{5}}{2} \approx 1.618

is the golden ratio. In particular, the convergence is superlinear.

This result only holds under some technical conditions, namely that "f" be twice continuously differentiable and the root in question be simple (i.e., that it not be a repeated root).

If the initial values are not close to the root, then there is no guarantee that the secant method converges.

Comparison with other root-finding methods

The secant method does not require that the root remain bracketed, as the bisection method does, and hence it does not always converge. The false position method uses the same formula as the secant method. However, it does not apply the formula to x_{n−1} and x_n, like the secant method, but to x_n and to the last iterate x_k such that f(x_k) and f(x_n) have opposite signs. This means that the false position method always converges, provided the initial values bracket a root.

The recurrence formula of the secant method can be derived from the formula for Newton's method,

: x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},

by using the finite-difference approximation

: f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}.

If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against α ≈ 1.618). However, Newton's method requires the evaluation of both f and its derivative at every step, while the secant method only requires the evaluation of f. Therefore, the secant method may well be faster in practice. For instance, if we assume that evaluating f takes as much time as evaluating its derivative and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor α² ≈ 2.618) for the same cost as one step of Newton's method (decreasing it by a factor 2), so the secant method is faster under this cost model.

Generalizations

Broyden's method is a generalization of the secant method to more than one dimension.

Example code

The following C code was written for clarity rather than efficiency. It is designed to solve the same problem as the code in the Newton's method and false position method articles: finding the positive number x with cos(x) = x^3. This problem is transformed into a root-finding problem of the form f(x) = cos(x) − x^3 = 0.

In the code below, the secant method iterates until one of two conditions occurs:
# |x_{n+1} − x_n| < ε, for a given tolerance ε
# n > m, for a given maximum number of iterations m

#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

double SecantMethod(double xn_1, double xn, double e, int m)
{
    int n;
    double d;
    for (n = 1; n <= m; n++) {
        /* secant step: d = f(xn) * (xn - xn_1) / (f(xn) - f(xn_1)) */
        d = (xn - xn_1) / (f(xn) - f(xn_1)) * f(xn);
        if (fabs(d) < e)   /* stop when the step size falls below the tolerance */
            return xn;
        xn_1 = xn;
        xn = xn - d;
    }
    return xn;             /* iteration limit reached */
}

int main(void)
{
    printf("%0.15f\n", SecantMethod(0, 1, 5E-11, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614. The initial, intermediate, and final approximations are listed below; the leading digits stabilize toward the final answer as the iteration proceeds.

: x_0 = 0
: x_1 = 1
: x_2 = 0.685073357326045
: x_3 = 0.841355125665652
: x_4 = 0.870353470875526
: x_5 = 0.865358300319342
: x_6 = 0.865473486654304
: x_7 = 0.865474033163012
: x_8 = 0.865474033101614

External links

* [http://math.fullerton.edu/mathews/a2001/Animations/RootFinding/SecantMethod/SecantMethod.html Animations for the secant method]
* [http://twt.mpei.ac.ru/MAS/Worksheets/secant.mcd Secant method of zero (root) finding on Mathcad Application Server]
* [http://numericalmethods.eng.usf.edu/topics/secant_method.html Secant Method] Notes, PPT, Mathcad, Maple, Mathematica, Matlab at "Holistic Numerical Methods Institute"
* [http://math.fullerton.edu/mathews/n2003/SecantMethodMod.html Module for Secant Method by John H. Mathews]


Wikimedia Foundation. 2010.
