From: dil@idaccr.org (David Lieberman)
Subject: Re: Roots of higher order polynomials
Date: 08 Jun 1999 10:40:53 -0400
Newsgroups: sci.math.research
Keywords: efficiency of Euclid's algorithm for g.c.d.

   In article <7j6799$5m3$1@manifold.math.ucdavis.edu>,
   Greg Kuperberg <greg@math.ucdavis.edu> wrote:
   >In article <030619990902577004%edgar@math.ohio-state.edu.nospam>,
   >G. A. Edgar <edgar@math.ohio-state.edu.nospam> wrote:
   >>Wouldn't the Euclidean algorithm (to find the greatest common
   >>divisor of two polynomials) be easier than evaluating
   >>a large determinant [of the resultant matrix]?
   >I think that if you are free to use a special sparse matrix method to
   >compute the determinant, the two algorithms are almost equivalent.


The Euclidean algorithm is far more efficient than computing the
resultant determinant. For one thing, you need only two rows of
storage rather than n+m; for another, when you subtract a multiple of
one polynomial from the other, you are simultaneously reducing many
rows of the resultant matrix.

The determinant computation is O((n+m)^3), while Euclid is O(n^2).
(In fact, there are improvements on Euclid, due to Aho, Hopcroft, and
Ullman, which make it O(n^(1 + epsilon)), i.e., O(n log^k(n)) for
some k, if I remember correctly.)
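
For concreteness, here is a rough sketch of the two-row computation
(Python; the dense coefficient-list representation, highest degree
first, and the function names are just for illustration, with the
coefficients taken in Q):

    from fractions import Fraction

    def strip(p):
        # drop leading zero coefficients; [0] is the zero polynomial
        i = 0
        while i < len(p) - 1 and p[i] == 0:
            i += 1
        return p[i:]

    def poly_rem(a, b):
        # remainder of a on division by b (b nonzero); e.g. x^2 - 1
        # is [1, 0, -1]
        a = strip(a)
        while len(a) >= len(b) and a != [0]:
            q = Fraction(a[0]) / Fraction(b[0])  # leading-term quotient
            for j in range(len(b)):              # subtract q * x^k * b
                a[j] -= q * b[j]
            a = strip(a)                         # degree strictly drops
        return a

    def poly_gcd(f, g):
        # classical Euclid, storing only the two current remainders
        # (assumes f, g not both zero)
        f, g = strip(f), strip(g)
        while g != [0]:
            f, g = g, poly_rem(f, g)
        return [Fraction(c) / Fraction(f[0]) for c in f]  # made monic

For example, poly_gcd([1, 0, -1], [1, -1]) returns [1, -1], i.e. the
gcd of x^2 - 1 and x - 1 is x - 1.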

Moreover, Euclid gives more information at the end: it identifies
the gcd rather than simply saying whether a nontrivial gcd exists.

More precisely, Euclid gives different information at the end.  This
is particularly noticeable when the ring of coefficients is not a
field, e.g., when one is dealing with multivariate polynomials.

The resultant determinant is still defined and is computed using only
operations in the coefficient ring, and it is quite useful, at least
in theory, for doing elimination.  It is, however, costly to compute.
The Euclidean algorithm does not compute this resultant, and in fact
requires one to work in the ring of fractions.  In the case of a
domain, Euclid plus bookkeeping allows one to compute the resultant
(see Cox, Little, and O'Shea).  What is known about the most
efficient way to evaluate the resultant in general?
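
By way of illustration, here is a sketch of the determinant route,
assuming integer coefficients and f, g of positive degree (names and
representation as in the sketch above): the Sylvester matrix is
assembled from shifted copies of the two coefficient lists, and its
determinant is computed fraction-free in the style of Bareiss, so
every intermediate value stays in the coefficient ring:

    def sylvester(f, g):
        # Sylvester matrix of f (degree n) and g (degree m):
        # m shifted copies of f stacked over n shifted copies of g,
        # giving an (n+m) x (n+m) matrix
        n, m = len(f) - 1, len(g) - 1
        rows = [[0] * i + f + [0] * (m - 1 - i) for i in range(m)]
        rows += [[0] * i + g + [0] * (n - 1 - i) for i in range(n)]
        return rows

    def det_bareiss(M):
        # fraction-free (Bareiss) determinant: each division below is
        # exact, so the computation never leaves Z
        M = [row[:] for row in M]
        n, sign, prev = len(M), 1, 1
        for k in range(n - 1):
            if M[k][k] == 0:                   # swap in a nonzero pivot
                for i in range(k + 1, n):
                    if M[i][k] != 0:
                        M[k], M[i] = M[i], M[k]
                        sign = -sign
                        break
                else:
                    return 0                   # singular matrix
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    M[i][j] = (M[i][j] * M[k][k]
                               - M[i][k] * M[k][j]) // prev
            prev = M[k][k]
        return sign * M[-1][-1]

    def resultant(f, g):
        return det_bareiss(sylvester(f, g))

For example, resultant([1, 0, -1], [1, -1]) is 0 (the shared root
x = 1), while resultant([1, -1], [1, -2]) is -1.
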
==============================================================================

From: djb@koobera.math.uic.edu (D. J. Bernstein)
Subject: Re: Roots of higher order polynomials
Date: 8 Jun 1999 18:42:30 GMT
Newsgroups: sci.math.research

> (In fact, there are improvements on Euclid, due to Aho, Hopcroft, and
> Ullman, which make it O(n^(1 + epsilon))

Actually, the improvement on Euclid's algorithm was published in 1938 by
D. H. Lehmer.

Knuth in 1971 observed that Lehmer's method, in conjunction with fast
multiplication, takes essentially linear time when the parameters are
chosen sensibly.

Schoenhage in 1971 chose parameters very carefully, and showed that one
could compute the continued fraction of a ratio of n-digit numbers in
time O(M(n) log n) where M(n) is the time to multiply n-digit numbers.
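
The object being computed is just the quotient sequence of Euclid's
algorithm. For comparison, a naive sketch (Python, with an
illustrative name) is one loop, quadratic in the digit length:

    def cf_quotients(a, b):
        # continued fraction (quotient sequence) of a/b, a >= b > 0,
        # by classical Euclid
        quotients = []
        while b:
            quotients.append(a // b)
            a, b = b, a % b
        return quotients

For example, cf_quotients(355, 113) is [3, 7, 16].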

The story doesn't end here, for two reasons. First, Euclid's algorithm
is critical for k[x] as well as Z; Schoenhage's method can be simplified
considerably for k[x]. Second, one can reduce the time of Schoenhage's
method by a big constant factor, especially in applications that only
need (e.g.) a selected pair of convergents.
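
Given the quotients, the convergents follow from the standard
recurrence p_k = a_k p_{k-1} + p_{k-2} and q_k = a_k q_{k-1} + q_{k-2};
a sketch of that step (again with illustrative names):

    def convergents(quotients):
        # all convergents p_k/q_k of a continued fraction
        out = []
        p0, q0 = 0, 1                # (p_{-2}, q_{-2})
        p1, q1 = 1, 0                # (p_{-1}, q_{-1})
        for a in quotients:
            p0, p1 = p1, a * p1 + p0
            q0, q1 = q1, a * q1 + q0
            out.append((p1, q1))
        return out

Here convergents([3, 7, 16]) is [(3, 1), (22, 7), (355, 113)]; an
application that needs only a selected pair can cut the computation
off early rather than produce the whole list.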

Moenck in 1973 stated a reasonably simple gcd algorithm for k[x] using
Schoenhage's method, and claimed incorrectly that the algorithm would
also work for Z. This is where Aho, Hopcroft, and Ullman show up: they
repeated and oversimplified Moenck's algorithm, producing an algorithm
that doesn't even work for polynomials.

Brent, Gustavson, and Yun in 1980 gave several useful speedups of
Moenck's algorithm. They also pointed out (but neither stated explicitly
nor proved) a generalization of Moenck's algorithm to compute any pair
of convergents. Montgomery in 1992 independently stated and proved a
similar generalization with some of the same speedups.

---Dan