The Similarities and Differences Between Polynomials and Integers
Abstract
The purpose of this paper is to examine the two domains of the
integers and the polynomials, in an attempt to understand the nature
of complexity in these very basic situations. Can we formalize the
integer algorithms which shed light on the polynomial domain, and
vice versa? When will the casting of one in the other speed up an
existing algorithm? Why do some problems not lend themselves to
this kind of speed-up?
We give several simple and natural theorems that show how prob-
lems in one domain can be embedded in the other, and we examine
the complexity-theoretic consequences of these embeddings. We also
prove several results on the impossibility of solving integer problems
by mimicking their polynomial counterparts.
1 Introduction
It is a fact frequently remarked upon that polynomials and integers share
a number of characteristics. Usually the Fast Fourier Transform is then
given as an example of this phenomenon, and the problem of multiplying
polynomials is shown to be embeddable as a problem in multiplying integers.

* Supported by NSF grants DMS-8807202 and CCR-9204630.
† Supported by NSF grant CCR-9207797.
In 1974, Borodin and Moenck [6] extended examples of Cabay [11],
Brown [10] and Collins [14], and examined the issue of fast modular trans-
forms (evaluating mod p for integer problems, evaluating at integer points for
polynomial problems) as a general technique for algorithmic speed-up in the
two domains. In 1982, shortly before the Lenstra, Lenstra, Lovasz [19] poly-
nomial time algorithm for polynomial factorization, Adleman and Odlyzko
gave a reduction from polynomial irreducibility testing and factorization to
integer primality testing and factorization respectively [2]. More recently,
Char, Geddes and Gonnet [12], and independently, Schonhage [30], gave a
fast algorithm for computing polynomial gcds which relies on computing in-
teger gcds, and then interpolating these values to find the polynomial gcd.
The idea of embedding polynomial arithmetic is also the basis for Schonhage's
fast algorithm for polynomial multiplication and division [28].
These ideas can only be carried so far, because of the problem with carries.
Can we formalize the integer algorithms which shed light on the polynomial
domain, and vice versa? When will the casting of one in the other be useful,
that is, when will it speed up an existing algorithm? Why do some problems
not lend themselves to this kind of speed-up?
We give several simple and natural theorems that show how problems in
one domain can be embedded in the other, and we examine the complexity-
theoretic consequences of these embeddings. Where integer algorithms are
unexpectedly faster than their polynomial analogues, we examine how the
structure of the problems benefits from the structure of the integers. Finally,
we prove several results on the impossibility of solving integer problems by
mimicking their polynomial counterparts.
comparing the complexity of the two domains, we will define the coefficient
size of a(x) to be c = max{log |a_i| : 0 ≤ i ≤ n}. Thus the size of a(x),
written [[a(x)]], is nc. With these bounds in mind, we have the following
table of well-known results. In the table below, let λ(n) = n log n log log n.
(The bound for polynomial division assumes that c ≥ log(n + 1).)
problem          integer a =                        polynomial a(x) =
                 a_{n-1}10^{n-1} + ... + a_0        a_{n-1}x^{n-1} + ... + a_0, |a_i| < 2^c
multiplication   λ(n) [31]                          λ(nc) [28]
division         λ(n) [15]                          λ(nc) [28, 32]
gcd              λ(n) log n [27]                    λ(nc) log n [3, 28]
irreducibility   n^{log log n} [1, 13]              polynomial in n, c [34]
factorization    2^{(1/4+ε)n} [26, 33]              n^{5+ε}, c [23]

Figure 1: Deterministic Time Complexity of Some Basic Problems
There are a number of things to note in this table. For the first two
problems (which happen to be the basic building blocks), the algorithms for
the integer version of the problems have comparable running times with the
algorithms for the polynomial versions. For the last two (these are problems
in which, in some sense, we seek to undo the effect of carries) the polynomial
problems have exponentially faster algorithms.
We will think of the variable assigned in the last line of Γ as its output.
Thus, for any ring R, Γ induces a map

    Γ_R : R^r → R.

In the next two theorems we will consider two invertible mappings between
Z and Z[x].
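The straight-line-program model above is easy to make concrete; the following sketch (the instruction encoding and names are ours, not from the text) evaluates a program over any ring whose elements support +, −, ×:

```python
def run_slp(program, inputs):
    """Evaluate a straight-line program over any ring.

    `inputs` is a list of r ring elements.  Each instruction (op, i, j),
    with op in {'+', '-', '*'}, appends op applied to values i and j of
    the growing value list.  The last value assigned is the output, so
    this realizes the map Gamma_R : R^r -> R.
    """
    vals = list(inputs)
    for op, i, j in program:
        a, b = vals[i], vals[j]
        vals.append(a + b if op == '+' else a - b if op == '-' else a * b)
    return vals[-1]

# a program Gamma computing (x0 + x1)^2; it runs unchanged over Z,
# or over Z[x] given any polynomial type supporting +, -, *
square_sum = [('+', 0, 1), ('*', 2, 2)]
print(run_slp(square_sum, [3, 4]))  # → 49
```

The same program text induces Γ_Z, Γ_{Z[x]}, and so on, which is exactly the ring-genericity the theorems below exploit.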
Theorem 3.2 Let Γ be a straight-line program as above. Let c ≥ 2 be a
natural number, and define φ_c : Z → Z[x] to be the map that takes the integer
a_{n-1}c^{n-1} + ... + a_0 in c-ary notation to the polynomial a_{n-1}x^{n-1} + ... + a_0.
Let φ_c^{-1}(p(x)) = p(c). Then φ_c^{-1} is a right inverse of φ_c and the following
diagram commutes:

              (φ_c)^r
    Z^r -----------------> (Z[x])^r
     |                        |
     | Γ_Z                    | Γ_{Z[x]}
     v        φ_c^{-1}        v
     Z  <----------------   Z[x]

Proof This follows immediately from the fact that (f(x) op g(x))|_a =
f(a) op g(a), where op ∈ {+, -, ×}. □
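Theorem 3.2 can be checked mechanically: running Γ on c-ary digit lists and then evaluating at c agrees with running Γ over Z. A minimal sketch (function names ours), with polynomials as low-to-high coefficient lists:

```python
def phi(a, c):
    # phi_c: a nonnegative integer in c-ary notation -> coefficient list
    digits = []
    while a:
        digits.append(a % c)
        a //= c
    return digits or [0]

def phi_inv(p, c):
    # the right inverse of phi_c: evaluate the polynomial at c
    return sum(d * c**i for i, d in enumerate(p))

def padd(p, q):
    return [x + y for x, y in zip(p, q)] + p[len(q):] + q[len(p):]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

a, b, c = 4821, 937, 10
# the diagram commutes: Gamma_Z = phi_c^{-1} o Gamma_{Z[x]} o phi_c
assert phi_inv(pmul(phi(a, c), phi(b, c)), c) == a * b
assert phi_inv(padd(phi(a, c), phi(b, c)), c) == a + b
```

Note that the intermediate coefficients may exceed c (the carries are "held"); evaluation at c is what finally performs them.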
Corollary 3.3 For any c ≥ 2, the complexity of computing Γ_Z on r n-digit
integers written in c-ary notation is no more than the complexity of computing
Γ_{Z[x]} on r polynomials of degree n with coefficients of absolute value less
than c, plus the time to evaluate a polynomial of degree no more than 2^j n with
coefficients of absolute value less than (2^{k-j}cn)^{2^j} at c.
Proof Clearly the translation of the r n-bit integers takes O(rn log c)
steps. If Γ_{Z[x]} is a k-step straight-line program with j multiplications, each
of the log c-bit coefficients may have grown in size. The worst case scenario
occurs when the k - j non-multiplication operations occur first, followed
by the j multiplications. In the first addition, the coefficients can at most
double. Thus after k - j additions the coefficients will be of size less than
2^{k-j}c, requiring at most log(c) + k - j bits per coefficient. The polynomials
involved will still be of degree n at most.

Now the first multiplication will yield at most 2(log(c) + k - j) + log n bits
per coefficient, and 2n terms. The next multiplication will yield (2(2(log(c) +
k - j) + log n)) + log 2n bits per coefficient, and 4n terms. By the jth multi-
plication, we will have:

    2(...(2(2(log(c) + k - j) + log n) + log 2n) + ...) + log 2^{j-1}n

bits per coefficient, with 2^j n terms. But

    2(...(2(2(log(c) + k - j) + log n) + log 2n) + ...) + log 2^{j-1}n =
    2^j(log(c) + k - j) + 2^{j-1} log n + 2^{j-2} log 2n + ... + log 2^{j-1}n <
    2^j(log(c) + k - j) + 2^j log n =
    2^j(log(c) + log(n) + k - j).

Thus the coefficients grow to a length of 2^j(log(cn) + k - j) bits at most.
There will be 2^j n such coefficients, and thus we have the bound of the
theorem. □
We can also embed any polynomial problem as an integer one, but we
have to work a little harder to do that. We need to consider how to represent
negative as well as positive coefficients of the polynomial. Fortunately there
is some work by Avizienis [4] on signed digit representations of integers which
handles this problem.

Define the map

    ρ_m(a_0, ..., a_{n-1}) = Σ_{i=0}^{n-1} a_i m^i

for |a_i| < m. We call the notation on the right balanced m-ary.
To go from ordinary m-ary notation to balanced m-ary is simply a digit-
by-digit copy, clearly in NC^0. To go the other way is a single subtraction
of m-ary integers, thus it is in AC^0. The advantage of balanced m-ary
notation is that it gives very efficient implementations of +, -, ×:
Theorem 3.4 (Borodin, Cook, Pippenger) [5] Under balanced m-ary no-
tation, addition and negation of integers can be implemented in NC^0, and
multiplication in NC^1.
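A minimal sketch of the conversions (names ours; the digits here are kept in the narrower range [-m/2, m/2], one common convention for balanced representations, which also makes the representation unique):

```python
def to_balanced(n, m):
    # signed-digit (balanced) base-m representation, digits in [-(m//2), m//2]
    digits = []
    while n:
        d = n % m
        if d > m // 2:
            d -= m          # shift the digit down and pass a carry of +1 upward
        digits.append(d)
        n = (n - d) // m    # exact division: d was chosen so m | (n - d)
    return digits or [0]

def from_balanced(digits, m):
    # evaluate the formal sum  sum a_i m^i
    return sum(d * m**i for i, d in enumerate(digits))

assert from_balanced(to_balanced(-1729, 10), 10) == -1729
print(to_balanced(1729, 10))  # → [-1, 3, -3, 2], i.e. 2·10^3 - 3·10^2 + 3·10 - 1
```

Negation is digit-wise negation and addition is digit-wise with bounded carry propagation, which is the source of the NC^0 bounds in Theorem 3.4.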
Balanced m-ary notation allows us to use integers to represent polyno-
mials in a way that easily represents both positive and negative coefficients.
Let Z_m denote the set of formal sums Σ_{i=0}^{n-1} a_i m^i. The following is a map
from polynomials to balanced m-ary representations of the integers:

    ψ_m : Σ_{i=0}^{n-1} a_i x^i → Σ_{i=0}^{n-1} a_i m^i.

It is not hard to see that this map is invertible on the set of polynomials with
coefficients bounded in absolute value by (m-1)/2. This provides the key to a
map between polynomials and integers that gives an analogue to Theorem
3.2.
Theorem 3.5 Let Γ be a straight-line program as above. Let c ≥ 2 be a
natural number and let Z_c[x] be the set of polynomials with integer coefficients
bounded by c in absolute value. Define ψ : Z[x] → Z to be the map that takes
the polynomial a_{n-1}x^{n-1} + ... + a_0 to the integer Σ_{i=0}^{n-1} a_i m^i, where
m = (cn2^{k+1-j})^{2^j}. Then (ψ)^{-1} exists, and is given by

    (ψ)^{-1}(Σ_{i=0}^{n-1} a_i m^i) = a_{n-1}x^{n-1} + ... + a_0.
Proof The idea is that we translate each polynomial to an integer, do the
computation, and translate back. We will do this via a series of maps, each
one of which is one-to-one. Let ψ = χ_m ∘ ψ_m, and let

    ψ_m(a_{n-1}x^{n-1} + ... + a_0) = Σ_{i=0}^{n-1} a_i m^i,

where m = (cn2^{k+1-j})^{2^j}. Furthermore, let

    (ψ_m)^{-1}(Σ a_i m^i) = a_n x^n + ... + a_0.

Here χ_m : Z_m → Z is the map defined by evaluating the formal sum. In order
to show that Γ_{Z[x]} and ψ_m^{-1}(χ_m^{-1}(Γ_Z(χ_m(ψ_m)))) commute, it suffices to show
(1) that (ψ_m)^{-1}Γ_{Z_m}ψ_m and Γ_{Z[x]} commute, and (2) that (χ_m)^{-1}Γ_Z χ_m and Γ_{Z_m}
commute. That the second diagram commutes is well-known from the work
of Avizienis [4].

To show that the first diagram commutes, it will suffice to show that
we have picked m sufficiently large that there are no carries in the balanced
m-ary arithmetic, and thus that the coefficients are acting independently.
Thus the map is one-to-one.

Since Γ has k steps, of which at most j are multiplications, the worst
case scenario occurs when the k - j non-multiplication operations occur first,
followed by the j multiplications. Since each addition or subtraction at most
doubles the absolute value of a coefficient, we have a tight upper bound,

    c_0 = c2^{k-j},

for the magnitude of the coefficients after these k - j operations.

Thus the largest possible coefficients would arise from taking the poly-
nomial

    f = c_0(x^n + x^{n-1} + ... + x + 1)

and squaring it j times. Each squaring of the polynomial increases the mag-
nitude of coefficients by at most squaring them and then multiplying by the
degree. Thus an overestimate of the size of the resulting coefficients c_1 is:

    2c_1 < (cn2^{k+1-j})^{2^j} = m.

By the way we have chosen m, the magnitude of any resulting coefficient
is less than m/2. Thus ψ_m^{-1}(Γ_{Z_m}(ψ_m)) commutes with Γ_{Z[x]}. □
Corollary 3.6 For any c ≥ 2, the complexity of computing Γ_{Z[x]} on r degree
n polynomials with coefficients bounded by c in absolute value is no more
than the complexity of computing Γ_Z on r n-digit integers written in base
m, where m = (cn2^{k+1-j})^{2^j}.

Proof Obvious, from the above comments. □
One way to think of Theorem 3.5 is that it is a very simple technique
which enables one to implement polynomial arithmetic on a machine with
large integer arithmetic, bearing in mind that the computations are far from
efficient. Thus if one wanted to compute the product of the polynomials

    x^2 + 2x,    x^3 + 1,    2x + 5,

one could do this by doing arithmetic base m, for a suitably chosen m.
Using the algorithm above, we discover k = 2, j = 2, n = 3, c = 5, and thus
m = 30^4. We find:

    x^2 + 2x → 2(30^4) + (30^4)^2
    x^3 + 1 → 1 + (30^4)^3
    2x + 5 → 5 + 2(30^4)

The product is:

    10(30^4) + 9(30^4)^2 + 2(30^4)^3 + 10(30^4)^4 + 9(30^4)^5 + 2(30^4)^6,

which translates, of course, to:

    10x + 9x^2 + 2x^3 + 10x^4 + 9x^5 + 2x^6,

which is of course the product of the three polynomials.
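The same computation is easy to check mechanically. A sketch (function names ours) that maps each polynomial to an integer in base m = 30^4, multiplies the integers, and reads the coefficients of the product back off the balanced base-m digits:

```python
m = 30**4   # the base from the worked example; large enough that no carries occur

def to_int(coeffs):
    # the map psi: coefficient list (low to high) -> integer base m
    return sum(a * m**i for i, a in enumerate(coeffs))

def to_poly(n):
    # recover the coefficients as balanced base-m digits
    coeffs = []
    while n:
        d = n % m
        if d > m // 2:
            d -= m
        coeffs.append(d)
        n = (n - d) // m
    return coeffs

p = to_int([0, 2, 1]) * to_int([1, 0, 0, 1]) * to_int([5, 2])
print(to_poly(p))   # → [0, 10, 9, 2, 10, 9, 2], i.e. 2x^6 + 9x^5 + 10x^4 + 2x^3 + 9x^2 + 10x
```

Because every coefficient of the product stays far below m/2, the digits of the single big integer are exactly the polynomial's coefficients.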
To decrease the size of the blow-up, we could occasionally reduce by
converting to an integer, and then back to a polynomial.
On the surface both Theorem 3.2 and Theorem 3.5 appear to have ex-
ponential blow-ups. In fact, the transformations behave radically differently
with regard to this issue. In transforming an integer problem to a polynomial
one, the exponential blow-up occurs when writing the bits of the result. In
the transformation of the polynomial problem to an integer one, the exponen-
tial blow-up occurs in the size of the input to the integer problem. Thus
the blow-up happens even if the result turns out not to have an exponential
number of digits.
Looking at it another way, we have shown an exponential blow-up is
sufficient to mimic any polynomial problem as an integer one. If we could
show that mimicking any polynomial problem as an integer problem requires
such a blow-up, we would be showing that polynomial problems are easier
than their integer counterparts. That would be interesting indeed.

It is not true, however. Looking at the table in Figure 1, there are several
examples in which the algorithm for the integer problem is faster than its
polynomial counterpart. The explanation for this behavior is quite simple. In
those problems the carries are performed during the operations, and not held
until the end. The resulting data compression enables a speed-up of the
algorithm. One can make use of the fast integer problem to develop a faster
technique for the polynomial problem. For example, Char, Geddes and
Gonnet [12], and Schonhage [30] computed gcds of degree n polynomials by
substituting n + 1 values into the polynomials, computing the integer gcds
and interpolating to find the polynomial gcd. They did this via a probabilistic
algorithm.
Suppose you wish to compute gcd(g(x), h(x)), where g(x), h(x) are
polynomials with coefficients in Z. Char et al. first compute "primitive"
representatives for g(x) and h(x), polynomials g_1(x) and h_1(x) such that
g(x) = z_1 g_1(x), h(x) = z_2 h_1(x), and the polynomials g_1(x) and h_1(x)
are primitive. Then they randomly pick values a_i for x, and compute b_i =
gcd(g_1(a_i), h_1(a_i)). If their sample space is reasonably sized, then with high
probability the value b_i they compute will in fact equal a multiple of
gcd(g_1(x), h_1(x)) evaluated at a_i. Since g_1(x) and h_1(x) are primitive poly-
nomials, the integer multiplier is easy to find and remove. Using the values
they have computed, they interpolate to find the gcd. With high probability,
their solution is correct.

Note that a deterministic algorithm can be used in the above, using large
integers. However, in practice the algorithm works well with values cho-
sen at random from a range smaller than the one guaranteed to work
deterministically, cf. Theorem 4.3.
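A toy variant of this idea can be sketched in a few lines (our own simplification: a single large evaluation point and balanced-digit recovery, rather than the n + 1 points and interpolation described above):

```python
from math import gcd

def peval(c, x):                      # coefficients low to high
    return sum(a * x**i for i, a in enumerate(c))

def balanced_digits(n, b):
    digits = []
    while n:
        d = n % b
        if d > b // 2:
            d -= b
        digits.append(d)
        n = (n - d) // b
    return digits or [0]

def heuristic_gcd(f, g, xi=10**6):
    # with luck, gcd(f(xi), g(xi)) is exactly gcd(f, g) evaluated at xi;
    # the candidate polynomial is then read off the balanced base-xi digits
    return balanced_digits(gcd(peval(f, xi), peval(g, xi)), xi)

f = [2, 3, 1]   # (x + 1)(x + 2)
g = [3, 4, 1]   # (x + 1)(x + 3)
print(heuristic_gcd(f, g))  # → [1, 1], i.e. x + 1
```

In an unlucky case the integer gcd picks up an extraneous factor; the full algorithm detects and removes such multipliers (and certifies the answer by trial division), which the sketch omits.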
Interpolation provides another general framework for mapping between
polynomials and integers:

Theorem 3.7 Let Γ be a straight-line program as above. Let d(s) be a
bound such that for all polynomials f_1, ..., f_r ∈ Z[x] of degree at most s,
Γ_{Z[x]}[f_1, ..., f_r] has degree at most d(s). Let ī = i_0, ..., i_{d(s)} be any
d(s) + 1 distinct integers. Let σ_ī(p(x)) = (p(i_0), ..., p(i_{d(s)})), and let
(σ_ī)^{-1} : Z^{d(s)+1} → Z[x] be the polynomial interpolation function. Let Z^s[x]
be the polynomials in Z[x] of degree at most s. Then the following diagram
commutes:

                  σ_ī
    Z^s[x] ----------------> (Z)^{d(s)+1}
      |                          |
      | Γ_{Z[x]}                 | (Γ_Z)^{d(s)+1}
      v        (σ_ī)^{-1}        v
    Z[x]  <----------------  (Z)^{d(s)+1}

Proof This follows immediately from the fact that any polynomial of degree
k can be determined by computing its value on k + 1 points and interpolating.
□
Corollary 3.8 The complexity of computing Γ_{Z[x]} on r degree n polynomials
with coefficients bounded by c in absolute value is no more than the complexity
of computing (Γ_Z)^{d(s)+1} on d(s) + 1 r-tuples of n-digit integers written base c,
plus the time to evaluate a polynomial of degree no more than d(s) with
coefficients of absolute value less than 2^j(c + k log n) at c.
Multiplication, division and gcd share the good fortune of having a single
small degree polynomial as the result of the computation. Thus one way to
perform these operations for polynomials is to use Theorem 3.7. Although the
problem of factorization also has small degree polynomials for its results,
the number of factors may be as large as n. The problem of recombination
after substituting integer values (which factors to pair with which) admits
exponentially many possibilities. Thus Theorem 3.7 does not seem to lead
to a fast technique for polynomial factorization.
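For multiplication, the recipe of Theorem 3.7 can be sketched directly (names ours): evaluate the factors at d(s) + 1 distinct integers, perform one integer multiplication per point, and interpolate the product:

```python
from fractions import Fraction

def peval(c, x):
    return sum(a * x**i for i, a in enumerate(c))

def mul_by_linear(p, r):              # p(x) * (x - r), coefficients low to high
    q = [Fraction(0)] + list(p)
    for k, c in enumerate(p):
        q[k] -= r * c
    return q

def lagrange(xs, ys):                 # interpolate through the points (xs[i], ys[i])
    n = len(xs)
    res = [Fraction(0)] * n
    for i in range(n):
        num, den = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                num = mul_by_linear(num, xs[j])
                den *= xs[i] - xs[j]
        for k in range(n):
            res[k] += ys[i] * num[k] / den
    return [int(c) for c in res]      # exact: the product has integer coefficients

def polymul_by_eval(f, g):
    d = len(f) + len(g) - 2           # degree bound d(s) for the product
    xs = list(range(d + 1))           # any d(s) + 1 distinct integers
    ys = [peval(f, x) * peval(g, x) for x in xs]
    return lagrange(xs, ys)

print(polymul_by_eval([2, 1], [3, 1]))  # (x+2)(x+3) → [6, 5, 1]
```

The polynomial multiplications have become pointwise integer multiplications; all the polynomial structure is carried by the evaluation and interpolation steps.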
4 Adding Division

In this section we make some observations which imply that two out of three
of the above transformations are still valid when the straight-line programs
include divisions.
The version of the division algorithm we use will be equally appropriate
for polynomials and integers:

    Division   Integers                          Polynomials
    Input:     a, b ∈ Z, b ≠ 0                   a, b ∈ Z[x], b monic
    Output:    q, r ∈ Z, -|b|/2 < r ≤ |b|/2      q, r ∈ Z[x], deg(r) < deg(b)
               such that a = bq + r
We begin with the case that does not work. Recall the map defined in
Theorem 3.2 taking integers base c to polynomials:

    φ_c : Z → Z[x]
    a_{n-1}c^{n-1} + ... + a_0 ↦ a_{n-1}x^{n-1} + ... + a_0.

Obviously this map does not preserve division, as the following example
with c = 10 shows:
    procedure RECIPROCAL(Σ_{i=0}^{k-1} a_i x^i)
        if k = 1 then return(1/a_0)
        else { q(x) ← RECIPROCAL(Σ_{i=k/2}^{k-1} a_i x^{i-k/2})
               r(x) ← 2q(x)x^{(3/2)k-2} - (q(x))^2 Σ_{i=0}^{k-1} a_i x^i
               return(⌊r(x)/x^{k-2}⌋) }

    Figure 4.1: Algorithm from [3] to compute ⌊x^{2k-2} / Σ_{i=0}^{k-1} a_i x^i⌋
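A direct transcription of Figure 4.1 into running code (a sketch: coefficient lists low to high, k a power of two, monic input; the quadratic-time multiplication used here ignores the efficiency concerns of [3]):

```python
def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

def reciprocal(p):
    # p = a_0, ..., a_{k-1} (low to high), k a power of 2, a_{k-1} = 1;
    # returns floor(x^{2k-2} / p(x)) as in Figure 4.1
    k = len(p)
    if k == 1:
        return [1]                     # 1/a_0 with a_0 = 1 in the monic case
    q = reciprocal(p[k // 2:])         # recurse on the top-half coefficients
    r = [0] * (3 * k // 2 - 2) + [2 * c for c in q]   # 2 q(x) x^{(3/2)k - 2}
    for i, c in enumerate(pmul(pmul(q, q), p)):
        r[i] -= c                      # subtract q(x)^2 p(x)
    return r[k - 2:]                   # floor-divide by x^{k-2}: drop low terms

print(reciprocal([1, 1, 1, 1]))   # floor(x^6 / (x^3+x^2+x+1)) → [0, 0, -1, 1]
```

For p = x^3 + x^2 + x + 1 this returns x^3 − x^2, which indeed satisfies x^6 = (x^3 − x^2)p(x) + x^2 with remainder of degree < 3.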
Theorem 4.2 Theorem 3.5 still holds for straight-line programs including d
divisions by monic polynomials in addition to the j multiplications and the
k - j additions or subtractions, as long as the bound m is modified so that

    m = (cn2^{k+4d+1})^{4^{j+d}n}.

Proof We have to show that for sufficiently large m, if P and D are
polynomials with D monic such that the result of the polynomial division
algorithm is

    P = QD + R,

then it follows that the result of the integer division algorithm for ψ(P)/ψ(D)
is

    ψ(P) = ψ(Q)ψ(D) + ψ(R).

Since the values of the division algorithm are unique and ψ preserves
additions and multiplications, it suffices to show that |ψ(R)| < |ψ(D)|/2.
The idea is that even after divisions the coefficients remain less than half
of the new value of m. Since the degree of R is less than the degree of D it
follows that |ψ(R)| < |ψ(D)|/2, as desired.

The proof is: use the algorithm from [3] (see Figure 4.1) to compute the
reciprocal of D: C = ⌊x^n/D⌋. The polynomial reciprocal is computed using
2 log(degree(P)) additions and the same number of multiplications. Some
truncations are also performed, but these of course do not increase the size of
any coefficients. The quotient Q is then computed by multiplying C times P
and doing one more truncation. Another subtraction gives R as well. Recall
that the degree is bounded by 2^j n. Thus the bound follows from Theorem
3.5 by substituting k + 4d log(2^j n) for k and j + 2d log(2^j n) for j. □
For interpolation, the following lemma implies that as long as the substi-
tuted values are large enough, Theorem 3.7 remains true when the straight-
line program includes divisions.

Theorem 4.3 Theorem 3.7 goes through with straight-line programs includ-
ing division as one of the allowable operations, as long as the substituted
values ξ are sufficiently large, i.e., |ξ| ≥ (cn2^{k+4d+1})^{4^{j+d}n}.

    a(ξ) = Qb(ξ) + R,        -|b(ξ)|/2 < R ≤ |b(ξ)|/2,
    a(x) = q(x)b(x) + r(x),  deg(r) < deg(b).

It thus suffices to show that

    |r(ξ)| < |b(ξ)|/2.

Since deg(r) < deg(b), this follows when ξ is sufficiently large. In particular,
the value m = (cn2^{k+4d+1})^{4^{j+d}n} from Theorem 4.2 suffices. □
5 Polynomials are Simpler than Integers

We know that carries complicate matters. But it is one thing to assert
something that every first grade teacher can agree upon, and quite another
to prove it.

The factorization of an integer and its analogous polynomial do not always
match. Because multiplication in the integer domain involves carries, the
factorization of an integer does not necessarily map to a factorization of the
related polynomial. The flip situation occurs when a composite polynomial
maps to a prime integer. This can happen if all but one of the factors of the
polynomial evaluate to 1 when substituting the base b for the variable x.

On the other hand, sometimes integers and polynomials do behave anal-
ogously: 144 = 12 · 12 and x^2 + 4x + 4 = (x + 2)(x + 2). However, 144
also has the factorization 9 · 16, but x^2 + 4x + 4 ≠ (x - 1)(x + 6). Base 9,
we have 144_10 = 170_9 = 10_9 · 17_9, and x^2 + 7x = x(x + 7). There is no
base b such that all possible factorizations of 144 have analogous polynomial
factorizations.
This is a result of unique factorization. The two distinct factorizations
we have written for 14410 are factorizations into composite integers. The
factorizations for the polynomial are factorizations into irreducible polyno-
mials. Thus there are several factorizations for the integer, but only a single
factorization for the polynomial.
If we try bases 2 or 3, we get some prime factors from the factorization
of the analogous polynomial. In base 2, we have 144_10 = 10010000_2, which
gives rise to x^7 + x^4 = x^4(x^3 + 1), or the integer factorization 2^4 · 9. In base
3 we get 144_10 = 12100_3, which leads to x^4 + 2x^3 + x^2. That factors into
x^2(x + 1)^2, which leads to the integer factorization 3^2 4^2. Bases larger than
3 do not yield prime factors.
The interweaving that results from carries is the underlying reason. Let
C_b^n be the ratio of multiplications which have carries in an n × n multiplication
table base b to all multiplications in an n × n table. Then for all bases b ≥ 2,
lim_{n→∞} C_b^n = 1. More interesting to show would be that most products
involve a carry in all reasonably sized bases (bases b ≤ log n).
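The claim about C_b^n is easy to probe empirically. A sketch (names ours; the carry criterion used, some column of the schoolbook digit convolution summing to b or more, is one reasonable formalization):

```python
def digits(x, b):
    d = []
    while x:
        d.append(x % b)
        x //= b
    return d or [0]

def has_carry(x, y, b):
    # schoolbook multiplication: a carry occurs iff some column of the
    # digit-by-digit product convolution sums to b or more
    xd, yd = digits(x, b), digits(y, b)
    cols = [0] * (len(xd) + len(yd) - 1)
    for i, dx in enumerate(xd):
        for j, dy in enumerate(yd):
            cols[i + j] += dx * dy
    return any(c >= b for c in cols)

def carry_ratio(n, b):
    hits = sum(has_carry(x, y, b) for x in range(1, n + 1)
                                  for y in range(1, n + 1))
    return hits / n**2

print(carry_ratio(100, 10))   # the ratio climbs toward 1 as n grows
```

Already at n = 100 and b = 10 the carry-free products (multiples of powers of 10, tiny digit pairs) are a small minority.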
We could mimic an integer factorization problem as a polynomial one
by splitting the coefficients of the polynomial in such a way as to undo the
effect of the carries. For example, if one wants to factor 1729 by viewing it as
a polynomial, one would consider the factorization of a number of different
polynomials: x^3 + 7x^2 + 2x + 9, x^3 + 6x^2 + 12x + 9, x^3 + 5x^2 + 22x + 9, etc.
Such an approach was actually used in [9] to check primality of certain integers.
The difficulty with this approach is that it leads to an exponential number of
possibilities. This is because one or more can be borrowed from each nonzero
digit, independently of all the other digits.
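The blow-up is visible by brute force. A sketch (the coefficient cutoff max_coeff is an arbitrary bound of ours) that enumerates nonnegative coefficient vectors evaluating to 1729 at x = 10:

```python
from itertools import product

def polynomial_lifts(n, base=10, max_coeff=25):
    # all nonnegative coefficient vectors (low to high, one slot per digit
    # of n) with entries below max_coeff that evaluate to n at x = base
    k = len(str(n))
    return [list(c) for c in product(range(max_coeff), repeat=k)
            if sum(a * base**i for i, a in enumerate(c)) == n]

lifts = polynomial_lifts(1729)
print(len(lifts))                 # → 12 already at this modest coefficient bound
print([9, 12, 6, 1] in lifts)     # x^3 + 6x^2 + 12x + 9 → True
```

Raising the coefficient bound, or the number of digits, multiplies the count, since each nonzero digit can lend to its neighbor independently of the others.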
The rest of this section is devoted to making precise some of the above
ideas, and proving some small results.

The polynomials we are interested in are of a specific type: those whose
coefficients run between 0 and b - 1 for some base b. Let P_b^n be the set of all
polynomials of degree at most n whose coefficients lie between 0 and b - 1.
Let N_b^n be that subset of P_b^n whose leading and constant terms are both
non-zero. Finally, let I_b^n(x) be the subset of N_b^n which consists of irreducible
polynomials. There is the following long-standing but unproved conjecture:

Conjecture 5.1 [24] For all bases b ≥ 2, lim_{n→∞} |I_b^n(x)| / |N_b^n(x)| = 1.
By contrast, "most" integers factor. The Prime Number Theorem states:

Theorem 5.2 Let π(x) be the number of primes less than x. Then
lim_{x→∞} π(x) ln(x) / x = 1.
A surprising fact is that sometimes integer irreducibility (primality) can
carry over to polynomial irreducibility. Brillhart, Filaseta and Odlyzko [8]
have shown:

Theorem 5.3 If a prime p is expressed in the number system with base b ≥ 2
as p = Σ_{k=0}^{n} a_k b^k, 0 ≤ a_k ≤ b - 1, then the polynomial Σ_{k=0}^{n} a_k x^k
is irreducible.
It is well known that there is no polynomial over Z which represents only
primes. Thus the converse of Theorem 5.3 is false. But it is false in an
even stronger manner. It is not just the case that an irreducible polynomial
sometimes maps to a composite. Let Irreducible Polynomials Base b (IPB)
be the hypothesis that most polynomials base b are irreducible, as per Con-
jecture 5.1. Under that hypothesis, we find that an irreducible polynomial
maps to a composite almost all the time. (Note that in the following μ_b is
one-to-one on N_b^n.)

Observation 5.4 Assuming IPB, most composites do not arise from re-
ducible polynomials. More precisely, let μ_b : N_b^n → Z be the usual map from
polynomials to integers base b. Then for all bases b ≥ 2,

    lim_{n→∞} Prob(f ∈ N_b^n is reducible | μ_b(f(x)) is composite) = 0.
We now turn our attention to a potentially simpler problem than fac-
torization, that of computing "square" parts. Long before it was known
how to factor polynomials in polynomial time, there was a fast algorithm
for computing the "square" part of a polynomial: if f(x) = g^2(x)h(x),
with (g(x), h(x)) = 1, then gcd(f(x), f'(x)) = g(x). By iterating such a
procedure, in polynomial time one could determine a factorization f(x) =
g_1(x)g_2^2(x)...g_k^k(x) where (g_i(x), g_j(x)) = 1 if i ≠ j. But no similarly fast
algorithm has been found for integers. We say an integer n is "squarefree"
if all the prime factors of n have exponent 1. The present fastest algorithm
to compute a squarefree decomposition of integers gives a reduction to φ(n)
[18]. This presently takes exponential time to compute.
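For polynomials, the classical trick really is only a few lines. A sketch over the rationals (names ours) extracting g from f = g^2 h via gcd(f, f'):

```python
from fractions import Fraction

def pdiv(a, b):
    # polynomial division, coefficients low to high; returns the remainder
    a = [Fraction(c) for c in a]
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        f = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= f * b[i]
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def pgcd(a, b):
    while b:
        a, b = b, pdiv(a, b)
    return [c / a[-1] for c in a]      # normalize to a monic gcd

def derivative(p):
    return [i * c for i, c in enumerate(p)][1:]

f = [2, 5, 4, 1]                  # x^3 + 4x^2 + 5x + 2 = (x+1)^2 (x+2)
print(pgcd(f, derivative(f)))     # → [1, 1], i.e. g(x) = x + 1
```

The point of the surrounding discussion is precisely that no integer analogue of this five-line computation is known.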
We have seen polynomial algorithms translate to integer algorithms, and
vice versa. At first blush, polynomial factorization appears to shed no light
on integer factorization. So it also seems that the fast polynomial multiple
factor decomposition sheds no light on the integer situation. It has been
known for over a century that a non-trivial portion of integers have a square
factor:

Theorem 5.5 (Gegenbauer) [17] Let Q(x) be the number of squarefree num-
bers not exceeding x. Then Q(x) = 6x/π^2 + O(√x).
By contrast, under IPB, most polynomials do not. Consider the follow-
ing diagram:

               (μ_b)^{-1}
    Z_b^n --------------------> N_b^n(x)
     |                             |
     | g                           | h = gcd(f(x), f'(x))
     v            μ_b              v
    Z_b^n <--------------------  N_b^n(x)

where Z_b^n is the set of integers base b with at most n + 1 digits, and not
ending with a zero.
Theorem 5.6 Assuming IPB, for all bases b ≥ 2, if the above diagram
commutes, then g(x) = 1 almost everywhere.

Proof Since Z_b^n is the set of (n+1)-digit base b integers with non-zero leading
and last digits, (μ_b)^{-1}(Z_b^n) = N_b^n. Now by IPB, we know that
lim_{n→∞} |I_b^n| / |N_b^n| = 1.
In a strong sense, most polynomials will be squarefree, and thus for all b ≥ 2,
h((μ_b)^{-1}(Z_b^n)) = 1 almost everywhere. Then g(Z_b^n) = μ_b(h((μ_b)^{-1}(Z_b^n))) = 1
almost everywhere. □
On the other hand, by Theorem 5.5, a non-trivial proportion of the inte-
gers will have square factors. Thus:

Corollary 5.7 There is no algorithm g : Z → Z that "simulates" the com-
putation for polynomials and computes square factors, even if we computed
this map for polynomially many different bases b.

What can we say without hypotheses? It is possible to construct an
infinite set of examples such that the simulation of integer arithmetic by
polynomials does not lead to computing square factors:

Theorem 5.8 For each base b ≥ 2, there is an infinite set M_b of integers
m_{b,k} such that the integer version of differentiation and computing gcds does
not yield the computation of the largest square factor dividing m_{b,k}.
6 Open Questions

This paper is an introduction to what we hope will become a useful tech-
nique for proving upper and lower bounds for integer and polynomial prob-
lems. Our theorems are sparse; our open questions are almost everywhere
dense. We leave the reader with several of what appear to be the easier of
these, since solving the more difficult ones would be tantamount to proving
a superpolynomial lower bound for the problem of integer factorization, or
showing that P ≠ NP.

1. (a) Improve the bound in Theorem 3.2, or show that in some sufficiently
general framework, this is best possible.
(b) Improve the bound in Theorem 3.5, or show that in some sufficiently
general framework, this is best possible.

2. It would be interesting to compute the bounds necessary for Lemma
4.4 and compare them with the probabilistic bounds of [12] and [30]
for polynomial gcds.

3. The set of questions we are asking here can be naturally extended to the
questions of parallel complexity. Here one of the most striking examples
is the NC algorithm for polynomial gcds using subresultants. No NC
integer algorithm is known. In this context it is natural to examine
the Chinese Remainder Theorem as a transformation for parallelizing
integer computations.

4. Let C(n) be the set of all composite integers less than n. Show that

    Carry(n) = {m ∈ C(n) | m = Π p_i^{a_i} has a carry for all bases c, 2 ≤ c ≤ log m}

has more than p(log n) elements for all polynomials p(x).

5. Without assuming IPB, prove that Observation 5.4 holds for a dense
set of composites.
6. (a) Show that for a dense set of integers M_k = {m : |m| < 2^k}, for
every base b, 2 ≤ b ≤ √m, and for all but finitely many values of k,
the integer version of the polynomial algorithm for computing square
decompositions fails.
(b) Prove that computing the square decomposition of an integer is
computationally equivalent to factoring.
(c) It is surprising that on occasion the trick of differentiation and
computing gcds does give a partial factorization. It would be interesting
to see if one could characterize those integers for which this algorithm
does lead to a partial factorization.
Acknowledgements: Thanks to Mike Paterson and an anonymous referee
for helpful comments and corrections.
References
[1] L. Adleman, C. Pomerance, R. Rumely, "On Distinguishing Prime Num-
bers from Composites," Math. Ann., Vol. 117 (1983), pp. 173-206.

[2] L. Adleman and A. Odlyzko, "Irreducibility Testing and Factorization
of Polynomials," Math. Comp., Vol. 41 (1983), pp. 699-709.

[3] A. Aho, J. Hopcroft and J. Ullman, The Design and Analysis of Com-
puter Algorithms, Addison-Wesley, 1974.

[4] A. Avizienis, "Signed-Digit Number Representations for Fast Parallel
Arithmetic," IRE Transactions on Electronic Computers, Vol. 10 (1961),
pp. 389-400.

[5] A. Borodin, S. Cook, and N. Pippenger, "Parallel Computation for Well-
Endowed Rings and Space-Bounded Probabilistic Machines," Informa-
tion and Control, Vol. 58 (1983), pp. 113-136.

[6] A. Borodin and R. Moenck, "Fast Modular Transforms," J. Comput.
Sys. Sci., Vol. 8 (1973), pp. 366-386.

[7] A. Borodin and I. Munro, The Computational Complexity of Algebraic
and Numeric Problems, American Elsevier (1975).

[8] J. Brillhart, M. Filaseta and A. Odlyzko, "On an Irreducibility Theorem
of A. Cohn," Can. J. Math., Vol. 33 (1981), pp. 1055-1059.

[9] J. Brillhart, D. H. Lehmer and J. L. Selfridge, "New Primality Criteria
and Factorizations of 2^m ± 1," Mathematics of Computation, Vol. 29
(1975), No. 130, pp. 620-647.

[10] W. Brown, "On Euclid's Algorithm and the Computation of Polynomial
Greatest Common Divisors," J. Assoc. Comput. Mach., Vol. 18, pp. 478-
504.

[11] S. Cabay, "Exact Solutions of Linear Equations," Proc. of the Second
Symposium on Symbolic and Algebraic Manipulation, 1971.

[12] B. Char, K. Geddes, and G. Gonnet, "GCDHEU: Heuristic Polynomial
GCD Algorithm Based on Integer GCD Computation," J. Symb. Com-
put., (1989), pp. 31-45.

[13] H. Cohen, A. Lenstra, "Implementation of a New Primality Test," Math.
Comp., Vol. 48 (1987), pp. 103-121.

[14] G. Collins, "The Calculation of Multivariate Polynomial Resultants," J.
Assoc. Comput. Mach., Vol. 18, pp. 515-532.

[15] S. Cook, "On the Minimum Computation Time of Functions," Disser-
tation, Harvard University (1966).

[16] P. Gallagher, "The Large Sieve and Probabilistic Galois Theory," Proc.
Symp. in Pure Math. (1972), pp. 92-101.

[17] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Num-
bers, Oxford University Press, 1971.

[18] S. Landau, "Some Remarks on Computing the Square Parts of Integers,"
Information and Computation, Vol. 78, No. 3 (1988), pp. 246-253.

[19] A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz, "Factoring Polynomials
with Rational Coefficients," Mathematische Annalen, Vol. 261 (1982),
pp. 513-534.

[20] Y. Mansour, B. Schieber, P. Tiwari, "Lower Bounds for Computations
with the Floor Operation," SIAM J. Comput., Vol. 20, No. 2 (1991),
pp. 315-327.

[21] Y. Mansour, B. Schieber, P. Tiwari, "A Lower Bound for Integer Greatest
Common Divisor Computations," J. Assoc. Comput. Mach., Vol. 38,
No. 2 (1991), pp. 453-471.

[22] M. Mignotte, "An Inequality about Factors of Polynomials," Math.
Comp., Vol. 28 (1974), pp. 1153-1157.

[23] V. S. Miller, "Factoring Polynomials via Relation-Finding," Theory of
Computing and Systems, Lecture Notes in Computer Science 601 (1992),
pp. 115-121.

[24] A. Odlyzko, private communication.

[25] V. Pan, "Complexity of Computations with Matrices and Polynomials,"
SIAM Review, Vol. 34, No. 2 (1992), pp. 225-262.

[26] J. Pollard, "Theorems on Factorization and Primality Testing," Proc.
Cambridge Philos. Soc., Vol. 76 (1974), pp. 521-528.

[27] A. Schonhage, "Schnelle Berechnung von Kettenbruchentwicklungen,"
Acta Informatica, Vol. 1 (1971), pp. 139-144.

[28] A. Schonhage, "Asymptotically Fast Algorithms for the Numerical Mul-
tiplication and Division of Polynomials with Complex Coefficients," Lec-
ture Notes in Computer Science, Vol. 144 (1982), pp. 3-15.

[29] A. Schonhage, "Factorization of Univariate Integer Polynomials by Dio-
phantine Approximation and an Improved Basis Reduction Algorithm,"
ICALP 1984, Lecture Notes in Computer Science, Vol. 172 (1984),
pp. 436-447.

[30] A. Schonhage, "Probabilistic Computation of Integer Polynomial
GCDs," Journal of Algorithms, Vol. 9 (1988), pp. 365-371.

[31] A. Schonhage and V. Strassen, "Schnelle Multiplikation grosser Zahlen,"
Computing, Vol. 7 (1971), pp. 281-292.

[32] M. Sieveking, "An Algorithm for Division of Power Series," Computing,
Vol. 10 (1972), pp. 153-156.

[33] V. Strassen, "Einige Resultate uber Berechnungskomplexitat," Jahres-
ber. Deutsch. Math.-Verein., Vol. 78 (1976-77), pp. 1-8.

[34] P. Weinberger, "Finding the Number of Factors of a Polynomial," J.
Algorithms, Vol. 5 (1984), pp. 180-186.