The Jenkins-Traub Algorithm is a standard in the field of numerical computation of polynomial roots. Because it was designed from the outset as a numerical method, rather than as an adaptation of an analytic formula, it is extremely robust, effectively minimizing the effects of computer round-off error. It is also extremely efficient compared with more general methods not written specifically for polynomial root-finding; in fact, the algorithm converges to polynomial roots at a rate better than quadratic. Furthermore, since its introduction over thirty years ago, the algorithm has been rigorously tested and has successfully proven its quality; as a result, it has gained wide distribution, as evidenced by its incorporation in commercial software products and by the posting on the NETLIB website of source code for programs based on the algorithm.
An Introduction to the Field of Root-finding
Given a function defined in terms of one or more independent variables, the roots (also called zeros) of the function are the values of the independent variable(s) for which the function equals [tex]0[/tex]:
[TEX]\displaystyle f(z_1, z_2, z_3, z_4, z_5, \ldots) = 0[/TEX]
Note that, in general, the z values are complex numbers, composed of real and imaginary components (the imaginary component indicated by the [TEX]i[/TEX]): [TEX]z = x + iy[/TEX].
Consider the following equation:
[TEX]\displaystyle 2z_{1} + 0.5z_{2}z_{3} + 3.14z_{4}^{2}+1.414\ln(z_{5})=23[/TEX]
Finding the values of [TEX]z_{1}, z_{2}, z_{3}, z_{4}[/TEX] and [TEX]z_{5}[/TEX] which satisfy this equation may not be possible by analytical methods, so the equation would be rearranged into the following form:
[TEX]\displaystyle 2z_{1} + 0.5z_{2}z_{3} + 3.14z_{4}^{2}+1.414\ln(z_{5}) - 23 = 0[/TEX]
which is of the form
[TEX]\displaystyle f(z_1, z_2, z_3, z_4, z_5) = 0[/TEX]
Once in this form (the standard form), solving the original equation becomes a matter of finding the [TEX]z_{1}, z_{2}, z_{3}, z_{4}[/TEX] and [TEX]z_{5}[/TEX] values for which f equals [tex]0[/tex], and well-developed fields of mathematics and computer science provide several root-finding techniques for solving such a problem. Note that since the theme of this article is polynomial root-finding, further examples will focus on single-variable equations, specifically, polynomials.
Analytical Root-finding Techniques
Consider the following quadratic equation:
[TEX]\displaystyle f(z) = z^{2} + z - 2[/TEX]
This is a second-degree polynomial: the highest power applied to the independent variable is 2. Consequently, the equation has two roots; in general, an n-degree polynomial equation has n roots (counted with multiplicity).
Because the equation is a simple polynomial, a first approach to finding its zeros might be to put the equation into the form [TEX](z - \alpha_{1})(z - \alpha_{2})[/TEX] and make a few educated guesses; a person could quickly determine that [TEX]\alpha_{1} = 1[/TEX] (a strictly real result) is one root of this equation. This root could then be divided out of the original polynomial
[TEX]\displaystyle \frac{(z^{2} + z -2)}{(z-1)} = (z+2) [/TEX]
to yield the second root: [TEX]\alpha_{2} = -2[/TEX].
In this simple example, the two roots were found easily; the second root was found immediately after the first one was known. However, in most real-world problems the roots of a quadratic are not found so easily. Furthermore, in problems where the original polynomial is of degree greater than two, the other roots do not follow immediately once one root is found; more sophisticated techniques are required.
Among the techniques available are analytical ones, that is, techniques that yield explicit, algebraic results. For example, in the case of a quadratic equation, explicit expressions for the two roots exist: once a quadratic equation is arranged into the standard form
[TEX]\displaystyle \bold{a}z^{2} + \bold{b}z + \bold{c} = 0[/TEX]
(a, b and c are known constants), the two roots are found via the Quadratic Formula:
[TEX]\displaystyle \alpha = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}[/TEX]
The quantity within the square root sign [TEX](b^2 - 4ac)[/TEX] is called the discriminant and, for real a, b and c, determines whether the solution is complex or strictly real. If the discriminant is less than [tex]0[/tex], the numerator contains an imaginary component and the roots are, therefore, complex. If the discriminant is greater than or equal to [tex]0[/tex], the solutions are real. In fact, if the discriminant is [tex]0[/tex], the two roots are real and equal:
[TEX]\displaystyle \alpha_{1} = \frac{-b + \sqrt{0}}{2a} = \frac{-b}{2a}[/TEX]
[TEX]\displaystyle \alpha_{2} = \frac{-b - \sqrt{0}}{2a} = \frac{-b}{2a}[/TEX]
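For readers who want to experiment, here is a minimal sketch of the Quadratic Formula in Python (the helper name quadratic_roots is illustrative, not from any particular library); using a complex square root lets the same code handle a negative discriminant:

```python
import cmath  # complex math: handles a negative discriminant gracefully

def quadratic_roots(a, b, c):
    """Return the two roots of a*z**2 + b*z + c = 0 via the Quadratic Formula."""
    disc = b * b - 4 * a * c
    sqrt_disc = cmath.sqrt(disc)  # complex square root, valid even for disc < 0
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

# The quadratic from the earlier example: z**2 + z - 2 = 0
print(quadratic_roots(1, 1, -2))  # ((1+0j), (-2+0j))
```

Note that this direct transcription of the textbook formula is exactly the kind of naive implementation cautioned against later in this article: when [TEX]b^2[/TEX] is much larger than [TEX]4ac[/TEX], one of the two computed roots can suffer badly from cancellation error.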
In addition to the Quadratic Formula for quadratic equations, analogous formulae exist for polynomials of degree three and four. However, for polynomials of degree five and higher, the Abel-Ruffini theorem shows that analytical solutions are not possible (except in special cases); only numerical solutions are available.
Numerical Root-finding Techniques
For the multitude of real-world problems that are not amenable to an analytical solution, numerical root-finding is a well-developed field offering a wide variety of tools from which to choose. Many computer programs written for the computation of roots are based on algorithms that make them applicable to general functions in addition to polynomials, and virtually all of them employ an iterative approach that terminates once the desired tolerance has been achieved.
For example, the bisection method is a simple and reliable method for computing roots of a function when they are known ahead of time to be real only. The algorithm starts with the assumption that a zero lies somewhere on a user-supplied interval [a,b]; in other words, the function value f(a) is of the opposite sign of f(b), so that, for a continuous function, the intermediate value theorem guarantees a zero between a and b. A point, m, in the middle of the interval [a,b] is then selected and the function is evaluated there: f(m). If the sign of f(m) is the same as the sign of f(a), the desired zero is not in the interval [a,m]; the interval is cut in half and the new interval becomes [m,b]. On the other hand, if the sign of f(m) is not the same as the sign of f(a), the desired zero is in the interval [a,m]; the interval is cut in half and the new interval becomes [a,m]. This process is repeated until either the interval shrinks to a desired tolerance or a value of f is found that is within an acceptable tolerance, in which case the value of z at this point is returned by the program as the root.
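As a concrete illustration, here is a minimal Python sketch of the bisection method just described (the function name bisect, the tolerance, and the iteration cap are illustrative choices, not part of any particular library):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a real root of f on [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) < tol:   # exact hit, or interval small enough
            return m
        if (fa > 0) == (fm > 0):       # same sign: the zero is not in [a, m]
            a, fa = m, fm
        else:                          # opposite sign: the zero is in [a, m]
            b, fb = m, fm
    return 0.5 * (a + b)

# Bracket the root z = 1 of z**2 + z - 2 on [0, 1.5]
print(bisect(lambda z: z * z + z - 2, 0.0, 1.5))  # approximately 1.0
```

Each iteration halves the interval, so the error bound shrinks by a factor of two per function evaluation: reliable, but slow compared with the methods described next.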
In addition to the bisection method, more elegant, faster-converging techniques are available: the False Position method, Brent's method, the Newton-Raphson method, the secant method, and others. The Newton-Raphson method uses the function and its derivative to converge quickly to a solution; it is well suited to situations in which the derivatives are either known or can be calculated at low computational cost. In other situations, the cost of computing derivatives may be too high. The secant method, by comparison, does not require derivatives, but the calculation of each iterate requires the two previous iterates, and it does not converge to a solution as quickly as the Newton-Raphson method. In some situations, this method, too, may be deemed impractical. Usually, when some thought is given to a problem, one technique can be identified as more appropriate than another. In fact, for some problems, one technique may fail to find a solution at all, whereas another will succeed. For other problems, several techniques may, indeed, be able to solve the problem, and the numerical analyst may simply select the most computationally efficient one.
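The following Python sketch contrasts the two methods just mentioned; the function names and stopping rules are illustrative assumptions, not a canonical implementation:

```python
def newton(f, fprime, z0, tol=1e-12, max_iter=50):
    """Newton-Raphson: uses the derivative for (locally) fast convergence."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / fprime(z)
        z -= step
        if abs(step) < tol:
            break
    return z

def secant(f, z0, z1, tol=1e-12, max_iter=50):
    """Secant method: approximates the derivative from the two previous iterates."""
    f0, f1 = f(z0), f(z1)
    for _ in range(max_iter):
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol:
            return z2
        z0, f0, z1, f1 = z1, f1, z2, f(z2)
    return z1

f = lambda z: z * z + z - 2
print(newton(f, lambda z: 2 * z + 1, 3.0))  # -> 1.0, using the derivative
print(secant(f, 3.0, 2.5))                  # -> 1.0, derivative-free
```

Both converge to the root z = 1 of the earlier example; Newton-Raphson pays for its speed with one derivative evaluation per step, while the secant method trades a slightly slower convergence rate for derivative-free iterations.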
The Jenkins-Traub Algorithm
The field of numerical root-finding is so well developed that the use of a good-quality numerical program is recommended whenever numerical results are sought, even when an analytical solution to a problem is known. In other words, even though an analytical solution to, say, the quartic equation exists, writing a computer program that simply implements the textbook formula is not recommended; computer round-off errors often render the results of such programs meaningless. The recommended course is to use a robust numerical program, based upon sound theory and an excellent algorithm, and coded to deal thoroughly with computer round-off errors. The Jenkins-Traub Algorithm is such an algorithm: a three-stage, extremely effective, globally convergent algorithm designed specifically for computing the roots of polynomials.
Stage One is the “No Shift” stage; its main purpose is to accentuate the smaller zeros. The search for a zero starts with an initial shift of [tex]0[/tex], which is held for a fixed number, M, of iterations (M is usually assigned the value 5 on the basis of numerical experience (1)).
Stage Two is the “Fixed Shift” stage; its purpose is to separate zeros of equal or almost equal magnitude. As a starting point in this stage, the following value is used:
[TEX]\displaystyle s = \beta e^{i\theta}[/TEX]
[TEX]\beta[/TEX] is a lower bound on the magnitudes of the probable zeros in the cluster. [TEX]\theta[/TEX] could be taken at random, since the cluster could be anywhere in the complex plane; in practice, however, [TEX]\theta[/TEX] is usually initialized to 49°, putting s near the middle of the first quadrant of the complex plane. If, after a certain number of iterations, s does not converge to a root, s is assigned a new value by increasing [TEX]\theta[/TEX] by 94°. Repeated attempts thus start the search for a root at points in all four quadrants of the complex plane before the search returns to the first quadrant; and since four attempts advance [TEX]\theta[/TEX] by 4 × 94° = 376°, each successive cycle starts at a point 16° away from the starting point of the preceding cycle.
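The shift-point schedule just described is easy to sketch in code. The following Python snippet generates only the successive Stage Two starting points; it does not perform the Jenkins-Traub iterations themselves, and the function name, the attempt count, and the sample value of [TEX]\beta[/TEX] are illustrative assumptions:

```python
import cmath

def fixed_shift_points(beta, attempts=8, theta0_deg=49.0, step_deg=94.0):
    """Yield successive Stage Two starting points s = beta * e^(i*theta).

    theta starts at 49 degrees and advances by 94 degrees after each failed
    attempt, sweeping the trial shifts through all four quadrants; after four
    attempts the angle returns to the first quadrant, 16 degrees past the
    previous cycle's starting point.
    """
    theta = cmath.pi * theta0_deg / 180.0
    step = cmath.pi * step_deg / 180.0
    for _ in range(attempts):
        yield beta * cmath.exp(1j * theta)
        theta += step

for s in fixed_shift_points(beta=0.5, attempts=4):
    print(f"{s.real:.4f} {s.imag:+.4f}i")
```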
Stage Three is the “Variable Shift” stage, in which the shift is updated at each iteration; the stage is terminated when the computed value of the polynomial at a possible zero is less than or equal to a specified bound.
In addition to the three fundamental stages around which it was developed, the Jenkins-Traub Algorithm incorporates several other techniques for making it as effective as possible. One of those techniques is deflation of the polynomial by synthetic division each time a root is found. Consider the following monic polynomial:
[TEX]\displaystyle z^n + \bold{a_{n-1}}z^{n-1}+ \bold{a_{n-2}}z^{n-2}+\bold{a_{n-3}}z^{n-3} + \ldots + \bold{a_{1}}z + \bold{a_0} = 0[/TEX]
The [TEX]\bold{a_i}[/TEX] are known constants and, in general, are complex. Now say the root [TEX]z_1[/TEX] ([TEX]z_1 = x + iy[/TEX]) has been found. Synthetic division would be employed to divide that root out of the original polynomial:
[TEX]\displaystyle \frac{z^n + \bold{a_{n-1}}z^{n-1}+ \bold{a_{n-2}}z^{n-2}+\bold{a_{n-3}}z^{n-3} + \ldots + \bold{a_0}}{z-z_1}[/TEX]
to yield
[TEX]\displaystyle \bold{b_{n-1}}z^{n-1}+ \bold{b_{n-2}}z^{n-2}+\bold{b_{n-3}}z^{n-3} + \ldots + \bold{b_0}[/TEX]
The [TEX]\bold{b_i}[/TEX] are new constants. The root-finding process is then repeated on this new, simpler polynomial. As each root is found, the polynomial becomes successively simpler, and each successive iteration of the algorithm involves, in general, fewer computations.
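Synthetic division by a linear factor is just Horner's scheme. Here is a minimal Python sketch (the function name deflate is illustrative); it reproduces the earlier example of dividing [TEX]z^{2} + z - 2[/TEX] by [TEX](z - 1)[/TEX]:

```python
def deflate(coeffs, root):
    """Divide a polynomial by (z - root) using synthetic division (Horner).

    coeffs lists the coefficients from the leading term down to the constant,
    e.g. z**2 + z - 2 is [1, 1, -2].  Returns the deflated coefficients and
    the remainder, which should be ~0 if root really is a root.
    """
    new_coeffs = [coeffs[0]]
    for c in coeffs[1:]:
        new_coeffs.append(c + root * new_coeffs[-1])
    remainder = new_coeffs.pop()   # ~0 up to round-off for a true root
    return new_coeffs, remainder

# Deflate z**2 + z - 2 by the root z = 1, leaving z + 2
print(deflate([1, 1, -2], 1))  # ([1, 2], 0)
```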
For polynomials whose coefficients are real only, when a complex root is found ([TEX]z_1 = x + iy[/TEX]), an additional benefit arises: that root’s complex conjugate is also a root ([TEX]z_2 = x - iy[/TEX]). In other words, two roots are computed at once and the polynomial can be deflated by two degrees in a single iteration of the algorithm. Furthermore, this deflation involves real-only, rather than complex, operations; the product of the two corresponding linear factors is a real quadratic:
[TEX]\displaystyle (z-z_1)(z - z_2) = z^2 - 2xz + (x^2 + y^2)[/TEX]
In this case, synthetic division is employed on the polynomial as follows:
[TEX]\displaystyle \frac{z^n + \bold{a_{n-1}}z^{n-1}+ \bold{a_{n-2}}z^{n-2}+\bold{a_{n-3}}z^{n-3} + \ldots + \bold{a_0}}{z^2 - 2xz + (x^2 + y^2)}[/TEX]
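A sketch of this real-arithmetic quadratic deflation follows; the recurrence simply extends synthetic division to a quadratic divisor (the function name and the example polynomial are illustrative assumptions, not taken from the algorithm's published code):

```python
def deflate_quadratic(coeffs, x, y):
    """Divide a real polynomial by z**2 - 2*x*z + (x**2 + y**2), the product
    of the conjugate-pair factors (z - (x+iy))(z - (x-iy)).

    coeffs lists real coefficients from the leading term down; only real
    arithmetic is used.  Returns the quotient coefficients (the two-term
    remainder, ~0 for a true root pair, is discarded).
    """
    p, q = -2.0 * x, x * x + y * y   # divisor written as z**2 + p*z + q
    quotient = []
    b1, b2 = 0.0, 0.0                # the two most recent quotient coefficients
    for c in coeffs[:-2]:
        b = c - p * b1 - q * b2      # synthetic-division recurrence
        quotient.append(b)
        b2, b1 = b1, b
    return quotient

# (z - (1+1j))(z - (1-1j))(z - 3) = z**3 - 5z**2 + 8z - 6; deflating by the
# conjugate pair x=1, y=1 should leave z - 3
print(deflate_quadratic([1.0, -5.0, 8.0, -6.0], 1.0, 1.0))  # [1.0, -3.0]
```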
In fact, for computing roots of polynomials which have only real coefficients, a modified version of the Jenkins-Traub Algorithm has been written which incorporates several features that take advantage of the characteristics of real-only polynomials to yield significant decreases in program execution time; “If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast.”(4)
The Jenkins-Traub Algorithm not only deflates the polynomial as roots are computed, it computes the roots roughly in order of increasing magnitude. This approach is taken because deflation of a polynomial can be unstable unless done by factors in order of increasing magnitude. By starting the search for roots with an initial guess of [tex]0[/tex], as is done in Stage One, roots are, indeed, computed roughly in order of increasing magnitude; consequently, the factors by which the polynomial is successively deflated come roughly in order of increasing magnitude, and the deflation of the polynomial is made quite stable.
As a quick check of the effectiveness of the Jenkins-Traub Algorithm, consider a contrived numerical example:
[TEX]\displaystyle f(z) = (z - 1.001)(z - 0.998)(z - 1.00002)(z - 0.99999)[/TEX]
The roots of this polynomial are very close together and should test how effectively the algorithm discerns near-multiple roots:
- 1.001
- 0.998
- 1.00002
- 0.99999
In addition, the polynomial is real-only so that readers may confirm the results with a free, ready-to-use, online Javascript program (URL given below).
Expanded, the test polynomial becomes
[TEX]\displaystyle z^{4} - 3.99901z^{3} + 5.9970279898z^{2} - 3.9970259795802z + 0.9990079897802004[/TEX]
and the solutions computed by the Javascript root-finder follow:
- 0.9980051486669109
- 0.9998618452719573
- 1.0001539207016332
- 1.0009890853594987
The program, in fact, did a very good job:
- no errors were returned,
- no erroneous imaginary components were found,
- four distinct roots were computed, and
- the computed roots are extremely close to the exact roots (round-off errors are unavoidable).
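Readers without access to the Javascript program can run a similar sanity check with any general-purpose root-finder. The Python snippet below uses numpy.roots, which computes zeros as eigenvalues of the companion matrix; it is not the Jenkins-Traub Algorithm, but it provides an independent cross-check of the results above:

```python
import numpy as np

# Coefficients of the expanded test polynomial, leading coefficient first
coeffs = [1.0, -3.99901, 5.9970279898, -3.9970259795802, 0.9990079897802004]

# numpy.roots finds zeros as eigenvalues of the companion matrix, a different
# method from Jenkins-Traub, but a convenient independent comparison
for root in np.roots(coeffs):
    print(root)
```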
Conclusion
Since its introduction in 1969 (in Michael Jenkins’ PhD thesis), the Jenkins-Traub Algorithm has been rigorously tested and has proven itself an excellent algorithm for the computation of the roots of polynomials. It is extremely robust, is globally convergent for any distribution of zeros, and converges at a better than quadratic rate. Attesting to its high quality is the fact that source code for programs (written in FORTRAN) based on the algorithm has been posted on NETLIB, a repository of high-quality numerical programs; the complex-coefficient version is posted as CPOLY (https://www.netlib.org/toms/419) and a real-coefficient-only version as RPOLY (https://www.netlib.org/toms/493). Translations into C++ and Javascript are also available; for example, Polynomial Root-finder (https://www.akiti.ca/PolyRootRe.html) is a Javascript translation of RPOLY and has been successfully tested with data published in Jenkins’ PhD thesis. In addition, the algorithm is incorporated in commercial software products (e.g., Mathematica). The proven effectiveness of the Jenkins-Traub Algorithm and its widespread distribution make it an extremely important tool in the toolbox of numerical analysts.
References
1. Anthony Ralston and Philip Rabinowitz, “A First Course in Numerical Analysis,” 2nd ed., McGraw-Hill Book Company, New York, 1978.
2. David Kahaner, Cleve Moler, and Stephen Nash, “Numerical Methods and Software,” Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1989.
3. William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling, “Numerical Recipes: The Art of Scientific Computing,” Cambridge University Press, New York, 1986. (3rd edition now available.)
4. Wikipedia, “Jenkins-Traub Algorithm for Polynomial Zeros.”
5. Jenkins, M.A., “Three-stage variable-shift iterations for the solution of polynomial equations with a posteriori error bounds for the zeros,” Ph.D. dissertation, Rep. CS 138, Computer Science Dept., Stanford University, Stanford, California, 1969.
This article was written by David Binner. You can visit his math site at https://akiti.ca. If you’d like to write for Math-Blog.com, please email us at submissions@math-blog.com.