
How to Print Floating-Point Numbers Accurately

Guy L. Steele Jr.                      Jon L White
Thinking Machines Corporation          Lucid, Inc.
245 First Street                       707 Laurel Street
Cambridge, Massachusetts 02142         Menlo Park, California 94025
gls@think.com                          jonl@lucid.com

Abstract

We present algorithms for accurately converting floating-point numbers to decimal representation. The key idea is to carry along with the computation an explicit representation of the required rounding accuracy.

We begin with the simpler problem of converting fixed-point fractions. A modification of the well-known algorithm for radix-conversion of fixed-point fractions by multiplication explicitly determines when to terminate the conversion process; a variable number of digits are produced. The algorithm has these properties:

• No information is lost; the original fraction can be recovered from the output by rounding.
• No "garbage digits" are produced.
• The output is correctly rounded.
• It is never necessary to propagate carries on rounding.

We then derive two algorithms for free-format output of floating-point numbers. The first simply scales the given floating-point number to an appropriate fractional range and then applies the algorithm for fractions. This is quite fast and simple to code but has inaccuracies stemming from round-off errors and oversimplification. The second algorithm guarantees mathematical accuracy by using multiple-precision integer arithmetic and handling special cases. Both algorithms produce no more digits than necessary (intuitively, the "1.3 prints as 1.2999999" problem does not occur).

Finally, we modify the free-format conversion algorithm for use in fixed-format applications. Information may be lost if the fixed format provides too few digit positions, but the output is always correctly rounded. On the other hand, no "garbage digits" are ever produced, even if the fixed format specifies too many digit positions (intuitively, the "4/3 prints as 1.333333328366279602" problem does not occur).

1 Introduction

Isn't it a pain when you ask a computer to divide 1.0 by 10.0 and it prints 0.0999999? Most often the arithmetic is not to blame, but the printing algorithm.

Most people use decimal numerals. Most contemporary computers, however, use a non-decimal internal representation, usually based on powers of two. This raises the problem of conversion between the internal representation and the familiar decimal form.

The external decimal representation is typically used in two different ways. One is for communication with people. This divides into two cases. For fixed-format output, a fixed number of digits is chosen before the conversion process takes place. In this situation the primary goal is controlling the precise number of characters produced so that tabular formats will be correctly aligned. Contrariwise, in free-format output the goal is to print no more digits than are necessary to communicate the value. Examples of such applications are Fortran list-directed output and such interactive language systems as Lisp and APL. Here the number of digits to be produced may be dependent on the value to be printed, and so the conversion process itself must determine how many digits to output.

The second way in which the external decimal representation is used is for inter-machine communication. The decimal representation is a convenient and conventional machine-independent format for transferring numerical information from one computer to another. Using decimal representation avoids incompatibilities of word length, field format, radix (binary versus hexadecimal, for example), sign representation, and so on. Here the main objective is to transport the numerical value as accurately as possible; that the representation is also easy for people to read is a bonus. (The IEEE 754 floating-point standard [IEEE85] has ameliorated this problem by providing standard binary formats, but the VAX, IBM 370, and Cray-1 are still with us and will be for yet a while longer.)

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

©1990 ACM 0-89791-364-7/90/0006/0112 $1.50

Proceedings of the ACM SIGPLAN'90 Conference on Programming Language Design and Implementation. White Plains, New York, June 20-22, 1990.
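The internal identity requirement described above is easy to observe in a modern setting. The following Python sketch (an illustration added here, not part of the paper) shows that the IEEE 754 double nearest to one-tenth has a long exact decimal expansion, yet the short free-format form "0.1" is enough to recover the exact bit pattern on input:

```python
from decimal import Decimal

x = 0.1  # the IEEE 754 double nearest to one-tenth

# The value stored internally is not exactly one-tenth:
print(Decimal(x))
# 0.1000000000000000055511151231257827021181583404541015625

# Free-format (shortest round-tripping) output prints only as many
# digits as needed to distinguish this double from its neighbors:
print(repr(x))  # 0.1

# Internal identity: reading the short external form back in
# reconstructs the exact internal bit pattern.
assert float(repr(x)) == x
```

Modern language runtimes (Python 3, Java, JavaScript, among others) produce such shortest round-tripping output by default, in the spirit of the free-format algorithms developed in this paper.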
We deal here with the output problem: methods of converting from the internal representation to the external representation. The corresponding input problem is also of interest and is addressed by another paper in this conference [Clinger90]. The two problems are not quite identical because we make the asymmetrical assumption that the internal representation has a fixed number of digits but the external representation may have a varying number of digits. (Compare this with Matula's excellent work, in which both representations are assumed to be of fixed precision [Matula68] [Matula70].)

In the following discussion we assume the existence of a program that solves the input problem: it reads a number in external representation and produces the internal representation whose exact value is closest, of all possible internal representation values, to the exact numerical value of the external number. In other words, this ideal input routine is assumed to round correctly in all cases.

We present algorithms for fixed-point-fraction output and floating-point output that decide dynamically what is an appropriate number of output digits to produce for the external representation. We recommend the last algorithm, Dragon4, for general use in compilers and run-time systems; it is completely accurate and accommodates a wide variety of output formats.

We express the algorithms in a general form for converting from any radix b (the internal radix) to any other radix B (the external radix); b and B may be any two integers greater than 1, and either may be larger than the other. The reader may find it helpful to think of B as being 10, and b as being either 2 or 16. Informally we use the terms "decimal" and "external radix" interchangeably, and similarly "binary" and "internal radix". We also speak interchangeably of digits being "output" or "printed".

2 Properties of Radix Conversion

We assume that a number to be converted from one radix to another is perfectly accurate and that the goal is to express that exact value. Thus we ignore issues of errors (such as floating-point round-off or truncation) in the calculation of that value.

The decimal representation used to communicate a numerical value should be accurate. This is especially important when transferring values between computers. In particular, if the source and destination computers use identical floating-point representations, we would like the value to be communicated exactly; the destination computer should reconstruct the exact bit pattern that the source computer had. In this case we require that conversion from internal form to external form and back be an identity function; we call this the internal identity requirement. We would also like conversion from external form to internal form and back to be an identity; we call this the external identity requirement. However, there are some difficulties with these requirements.

The first difficulty is that an exact representation is not always possible. A finite integer in any radix can be converted into a finite integer in any other radix. This is not true, however, of finite-length fractions; a fraction that has a finite representation in one radix may have an indefinitely repeating representation in another radix. This occurs only if there is some prime number p that divides the one radix but not the other; then 1/p (among other numbers) has a finite representation in the first radix but not in the second. (It should be noted, by the way, that this relationship between radices is different from the relationship of incommensurability as Matula defines it [Matula70]. For example, radices 10 and 20 are incommensurable, but there is no prime that divides one and not the other.)

It does happen that any binary, octal, or hexadecimal number has a finite decimal representation (because there is no prime that divides 2^k but not 10). Therefore when B = 10 and b is a power of two we can guarantee the internal identity requirement by printing an exact decimal representation of a given binary number and using an ideal rounding input routine.

This solution is gross overkill, however. The exact decimal representation of an n-bit binary number may require n decimal digits. However, as a rule fewer than n digits are needed by the rounding input routine to reconstruct the exact binary value. For example, to print a binary floating-point number with a 27-bit fraction requires up to 30 decimal digits. (A 27-bit fraction can represent a 30-bit binary fraction whose decimal representation has a non-zero leading digit; this is discussed further below.) However, 10 decimal digits always suffice to communicate the value accurately enough for the input routine to reconstruct the 27-bit binary representation [Matula68], and many particular values can be expressed accurately with fewer than 10 digits. For example, the 27-bit binary floating-point number .110011001100110011001100110_2 × 2^(-3), which is equal to the 29-bit binary fixed-point fraction .00011001100110011001100110011_2, is also equal to the decimal fraction .09999999962747097015380859375, which has 29 digits. However, the decimal fraction 0.1 is closer in value to the binary number than to any other 27-bit floating-point binary number, and so suffices to satisfy the internal identity requirement. Moreover, 0.1 is more concise and more readable.

We would like to produce enough digits to preserve the information content of a binary number, but no more. The difficulty is that the number of digits needed
depends not only on the precision of the binary number but on its value.

Consider for a moment the superficially easier problem of converting a binary floating-point number to octal. Suppose for concreteness that our binary numbers have a six-bit significand. It would seem that two octal digits should be enough to express the value of the binary number, because one octal digit expresses three bits of information. However, the precise octal representation of the binary floating-point number .101101_2 × 2^(-1) is .264_8 × 8^0. The exponent of the binary floating-point number specifies a shifting of the significand so that the binary point is "in the middle" of an octal digit. The first octal digit thus represents only two bits of the original binary number, and the third only one bit. We find that three octal digits are needed because of the difference in "graininess" of the exponent (shifting) specifications of the two radices.

The number N of radix-B digits that may be required in general to represent an n-digit radix-b floating-point number is derived in [Matula68]. The condition is:

    B^(N-1) > b^n

This may be reformulated as:

    N > n/(log_b B) + 1

and if N is to be minimized we therefore have:

    N = 2 + ⌊n/(log_b B)⌋

No more than this number of digits need ever be output, but there may be some numbers that require exactly that many.

As an example, in converting a binary floating-point number to decimal on the PDP-10 [DEC73] we have b = 2, B = 10, and n = 27. One might naively suppose that 27 bits equals 9 octal digits, and if nine octal digits suffice surely 9 decimal digits should suffice also. However, using Matula's formula above, we find that to print such a number may require 2 + ⌊27/(log_2 10)⌋ = 2 + ⌊27/(3.3219...)⌋ = 2 + ⌊8.1278...⌋ = 10 decimal digits. There are indeed some numbers that require 10 digits, such as the one which is greater than the rounded 27-bit floating-point approximation to 0.1 by two units in the last place. On the PDP-10 this is the number represented in octal as 175631463150_8, which represents the value .63146315_8 × 8^(-1). It is intuitively easy to see why ten digits might be needed: splitting the number into an integer part (which is zero) and a fractional part must shift three zero bits into the significand from the left, producing a 30-bit fraction, and 9 decimal digits is not enough to represent all 30-bit fractions.

The condition specified above assumes that the conversions in both directions round correctly. There are other possibilities, such as using truncation (always rounding towards zero) instead of true rounding. If the input conversion rounds but the output conversion truncates, the internal identity requirement can still be met, but more external digits may be required; the condition is B^(N-1) > 2 × b^n - 1. If conversion truncates in both directions, however, it is possible that the internal identity requirement cannot be met, and that repeated conversions may cause a value to "drift" very far from its original value. This implies that if a data base is maintained in decimal, and is updated by converting it to binary, processing it, and then converting back to decimal, values not updated may nevertheless be changed, after many iterations, by a substantial amount [Matula70]. That is why we assume and require proper rounding. Rounding can guarantee the integrity of the data (in the form of the internal identity requirement) and also will minimize the number of output digits. Truncation cannot and will not.

The external identity requirement obviously cannot be strictly satisfied, because by assumption there are a finite number of internally representable values and an infinite number of external values. For every internal value there are many corresponding external representations. From among the several external representations that correspond to a given internal representation, we prefer to get one that is closest in value but also as short as possible.

Satisfaction of the internal identity requirement implies that an external consistency requirement be met: if an internal value is printed out, then reading that exact same external value and then printing it once more will produce the same external representation. This is important to the user of an interactive system. If one types PRINT 3.1415926535 and the system replies 3.1415926, the user might think "Well and good; the computer has truncated it, and that's all I need type from now on." The user would be rightly upset to then type in PRINT 3.1415926 and receive the response 3.1415925! Many language implementations indeed exhibit such undesirable "drifting" behavior.

Let us formulate our desires. First, in the interest of brevity and readability, conversion to external form should produce as few digits as possible. Now suppose, for some internal number, that there are several external representations that can be used: all have the same number of digits, all can be used to reconstruct the internal value precisely, and no shorter external number suffices for the reconstruction. In this case we prefer the one that is closest to the value of the binary number. (As we shall see, all the external numbers must be alike except in the last digit; satisfying this criterion therefore involves only the correct choice for the last digit to be output.)
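Matula's digit-count bound is easy to check numerically. The following Python sketch (added for illustration; the function name is ours, not the paper's) evaluates N = 2 + ⌊n/(log_b B)⌋ for the PDP-10 case worked above and for IEEE 754 double precision:

```python
import math

def max_digits_needed(n, b=2, B=10):
    """Matula's bound from the text: N = 2 + floor(n / log_b B)
    radix-B digits always suffice to identify an n-digit radix-b
    floating-point number."""
    return 2 + math.floor(n / math.log(B, b))

# PDP-10 single precision: a 27-bit significand may need 10 decimal digits.
print(max_digits_needed(27))  # 10

# IEEE 754 double precision: a 53-bit significand may need 17 decimal
# digits -- the familiar "17 significant digits" rule.
print(max_digits_needed(53))  # 17
```

Note that the sketch uses floating-point logarithms, which is adequate for these small arguments; a careful implementation would evaluate the condition B^(N-1) > b^n in exact integer arithmetic.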
On the other hand, suppose that the number of digits to be output has been predetermined (fixed-format output); then we desire that the printed value be as close as possible to the binary value. In short, the external form should always be correctly rounded. (The Fortran 77 standard requires that floating-point output be correctly rounded [ANSI76, 13.9.5.2]; however, many Fortran implementations do not round correctly, especially for numbers of very large or very small magnitude.)

We first present and prove an algorithm for printing fixed-point fractions that has four useful properties:

(a) Information preservation. No information is lost in the conversion. If we know the length n of the original radix-b fraction, we can recover the original fraction from the radix-B output by converting back to radix-b and rounding to n digits.

(b) Minimum-length output. No more digits than necessary are output to achieve (a). (This implies that the last digit printed is non-zero.)

(c) Correct rounding. The output is correctly rounded; that is, the output generated is the closest approximation to the original fraction among all radix-B fractions of the same length, or is one of two closest approximations, and in the latter case it is easy to choose either of the two by any arbitrary criterion.

(d) Left-to-right generation. It is never necessary to propagate carries. Once a digit has been generated, it is the correct one.

These properties are characterized still more rigorously in the description of the algorithm itself.

From this algorithm we then derive a similar method for printing integers. The two methods are then combined to produce two algorithms for printing floating-point numbers, one for free-format applications and one for fixed-format applications.

3 Fixed-Point Fraction Output

Following [Knuth68], we use the notation ⌊x⌋ to mean the greatest integer not greater than x. Thus ⌊3.5⌋ = 3 and ⌊-3.5⌋ = -4. If x is an integer, then ⌊x⌋ = x. Similarly, we use the notation ⌈x⌉ to mean the smallest integer not less than x; ⌈x⌉ ≡ -⌊-x⌋. Also following [Knuth68], we use the notation {x} to mean x mod 1, or x - ⌊x⌋, the fractional part of x. We always indicate numerical multiplication of a and b explicitly as a × b, because we use simple juxtaposition to indicate digits in a place-value notation.

The idea behind the algorithm is very simple. The potential error in a fraction of limited precision may be described as being equal to one-half the minimal non-zero value of the least significant digit of the fraction. The algorithm merely initializes a variable M to this value to represent the precision of the fraction. Whenever the fraction is multiplied by B, the error M is also multiplied by B. Therefore M always has a value against which the residual fraction may be meaningfully compared to decide whether the precision has been exhausted.

Algorithm (FP)³:
Finite-Precision Fixed-Point Fraction Printout¹

Given:

• An n-digit radix-b fraction f, 0 < f < 1:

    f = .f_(-1)f_(-2)f_(-3)...f_(-n) = Σ_(i=1..n) f_(-i) × b^(-i)

• A radix B (an integer greater than 1).

Output: The digits F_(-i) of an N-digit (N ≥ 1) radix-B fraction F:

    F = .F_(-1)F_(-2)F_(-3)...F_(-N) = Σ_(i=1..N) F_(-i) × B^(-i)

such that:

(a) |F - f| < b^(-n)/2

(b) N is the smallest integer ≥ 1 such that (a) can be true.

(c) |F - f| ≤ B^(-N)/2

(d) Each digit is output before the next is generated; it is never necessary to "back up" for corrections.

Procedure: see Table 1. ∎

¹It is difficult to give short, meaningful names to each of a group of algorithms that are similar. Here we give each algorithm a long name that has enough adjectives and other qualifiers to distinguish it from the others, but these long names are unwieldy. For convenience, and as a bit of a joke, two kinds of abbreviated names are used here. Algorithms that are intermediate versions along a path of development have names such as "(FP)³" that are effectively contracted acronyms for the long names. Algorithms that are in "final form" have names of the form "Dragonk" for integers k; these algorithms have long names whose acronyms form sequences of letters "F" and "P" that specify the shape of so-called "dragon curves" [Gardner67].

The algorithmic procedure is expressed in pseudo-ALGOL. We have used the loop-while-repeat ("n + 1/2 loop") construct credited to Dahl [Knuth74]. The loop is exited from the while-point if the while-condition is ever false; thus one may read "while B:" as "if not B then exitloop fi". The cases statement is to be taken as a guarded if construction [Dijkstra76]; the branch labeled by the true condition is executed, and if both conditions are true (here, if R = 1/2) either branch may be chosen. The ambiguity in the cases statement here reflects the point of ambiguity when
Table 1: Procedure for Finite-Precision Fixed-Point Fraction Printout ((FP)³)

    begin
        k ← 0;
        R ← f;
        M ← b^(-n)/2;
        loop
            k ← k + 1;
            U ← ⌊R × B⌋;
            R ← {R × B};
            M ← M × B;
        while R ≥ M and R ≤ 1 - M:
            F_(-k) ← U;
        repeat;
        cases
            R ≤ 1/2: F_(-k) ← U;
            R ≥ 1/2: F_(-k) ← U + 1;
        endcases;
        N ← k;
    end

two radix-B representations of length n are equidistant from f. A given implementation may use any decision method when R = 1/2, such as "always U", which effectively means to round down; "always U + 1", meaning to round up; or "if U ≡ 0 (mod 2) then U else U + 1", meaning to round so that the last digit is even. Note, however, that using "always U" does not always round down if rounding up will allow one fewer digit to be printed.

This method is a generalization of the one presented by Taranto [Taranto59] and mentioned in exercise 4.4.3 of [Knuth69].²

²By the way, the paper that was forward-referenced in the answer to exercise 4.4.3 in [Knuth81] was an early draft of this paper that contained only Algorithm (FP)³, its proof, and a not-yet-correct generalization to floating-point numbers.

4 Proof of Algorithm (FP)³

(The reader who is more interested in practical applications than in details of proof may wish to skip this section.)

In order to demonstrate that Algorithm (FP)³ satisfies the four claimed properties (a)-(d), it is useful first to prove, by induction on k, the following invariants true at the top of the loop body:

    M_k = (b^(-n) × B^k)/2                              (i)

    R_k × B^(-k) + Σ_(i=1..k) F_(-i) × B^(-i) = f      (ii)

By M_k and R_k we mean the values of M and R at the top of the loop as a function of k (the values of M and R change if and only if k is incremented). Invariant (i) is easily verified; we shall take it for granted. Invariant (ii) is a little more complicated.

Basis. After the first two assignments in the procedure, k = 0 and R_0 = f. The summation in (ii) has no summands and is therefore zero, yielding

    R_0 × B^(-0) + 0 = f

which is true, because initially R = f.

Induction. Suppose (ii) is true for k. We note that the only path backward in the loop sets F_(-(k+1)) ← ⌊R_k × B⌋. It follows that:

    f = R_k × B^(-k) + Σ_(i=1..k) F_(-i) × B^(-i)

      = (B × R_k) × B^(-(k+1)) + Σ_(i=1..k) F_(-i) × B^(-i)

      = ({R_k × B} + ⌊R_k × B⌋) × B^(-(k+1)) + Σ_(i=1..k) F_(-i) × B^(-i)

      = (R_(k+1) + F_(-(k+1))) × B^(-(k+1)) + Σ_(i=1..k) F_(-i) × B^(-i)

      = R_(k+1) × B^(-(k+1)) + Σ_(i=1..k+1) F_(-i) × B^(-i)

which establishes the desired invariant.

The procedure definitely terminates, because M initially has a strictly positive value and is multiplied by B (which is greater than 1) each time through the loop. Eventually we have M > 1/2, at which point R ≥ M and R ≤ 1 - M cannot both be true and the loop must terminate. (Of course, the loop may terminate before M > 1/2, depending on R.)

When the procedure terminates, there may be one of two cases, depending on R_N. Because of the rounding step, we have one of the following:

    f = F + R_N × B^(-N)          if R_N ≤ 1/2
    f = F - (1 - R_N) × B^(-N)    if R_N ≥ 1/2

Let us define R* as follows:

    R* = R_N        if R_N ≤ 1/2
    R* = 1 - R_N    if R_N ≥ 1/2

Because 0 ≤ R_N ≤ 1, we know that 0 ≤ R* ≤ 1/2; we can therefore write:
    f - F = R* × B^(-N)    if R_N ≤ 1/2
    F - f = R* × B^(-N)    if R_N ≥ 1/2

From this we have:

    |F - f| = R* × B^(-N) ≤ B^(-N)/2

proving property (c) (correct rounding).

To prove property (a) (information preservation), we consider the two cases M_N > 1/2 and M_N ≤ 1/2. If we have M_N > 1/2, then

    |F - f| ≤ B^(-N)/2 < M_N × B^(-N) = b^(-n)/2

because M_N = (B^N × b^(-n))/2. If we have M_N ≤ 1/2, then with the fact that R_N < M_N or R_N > 1 - M_N we have R* < M_N, and so

    |F - f| = R* × B^(-N) < M_N × B^(-N) = b^(-n)/2

Property (a) therefore holds in all cases.

To prove property (b) (minimum-length output), let us suppose that, to the contrary, there is some P-digit radix-B fraction G such that:

    P < N    and    |G - f| < b^(-n)/2

In fact, we assume P = N - 1 without loss of generality by allowing G to have trailing zero digits to fill out to P places. Now from invariant (ii) we know that:

    R_P × B^(-P) + Σ_(i=1..P) F_(-i) × B^(-i) = f

It follows easily, because B and the F_(-i) are integers and 0 ≤ R_P < 1, that the closest P-digit representations to f are

    G_1 = .F_(-1)F_(-2)F_(-3)...F_(-(P-1))F_(-P)

and

    G_2 = .F_(-1)F_(-2)F_(-3)...F_(-(P-1))(F_(-P) + 1).

But because the iteration in the algorithm did not terminate we know that:

    R_P ≥ M_P = (b^(-n) × B^P)/2

and

    R_P ≤ (1 - M_P).

It follows, whether G is chosen to be G_1 or G_2 (and any other choice is even worse), that:

    |G - f| ≥ M_P × B^(-P) = b^(-n)/2

producing a contradiction and proving property (b).

Property (d) (left-to-right generation) follows immediately, because the only digit that is ever corrected (by adding one) is the last one output. If that correction were to produce a carry (i.e., U = B - 1, and so U + 1 = B) then it would mean that the N-digit radix-B output would end in a zero digit, implying that a shorter radix-B fraction could have satisfied property (a); but this is not possible by property (b).

The algorithms that follow are all variants of the one just proven. No other proofs are presented below; the necessary ideas for proving the remaining algorithms are simple modifications of those in the proof just given. However, the code given for later algorithms contains assertions of important invariants; from these invariants complete proofs may be reconstructed.

5 Some Observations

The same routine can be used for converting fractions of various lengths (handling, for example, both single and double precision) provided that the radix-b arithmetic is sufficiently accurate for the longest. The only part of the algorithm dependent on n is the initialization of M.

All of the arithmetic is easy to perform in radix b; the only operations used are multiplication, integer and fractional part, subtraction from 1, and comparison. There is a difficulty if the radix b is odd, because the initialization of M requires division by 2; if the problem should arise, one can either do the arithmetic in an even radix b' that is a multiple of b, or initialize M to b^(-n) (instead of b^(-n)/2) and change the comparisons R ≥ M ∧ R ≤ 1 - M to 2 × R ≥ M ∧ 2 × R ≤ 2 - M.

The accuracy required for the arithmetic is n + 1 + ⌈log_b(B - 1)⌉ radix-b digits: n is the length of the fraction (and of all remainders), 1 more is needed for the factor of 1/2 in M, and ⌈log_b(B - 1)⌉ more are needed for the result of the multiplication to produce a radix-B digit in radix b.

Algorithm (FP)³ is (almost) suitable for output of floating-point numbers; it can be used if the floating-point number is first scaled properly.

Algorithm Dragon2:
Floating-Point Printout

Given:

• A radix-b floating-point number v = f × b^(e-p), where e, f, and p are integers such that p ≥ 0 and 0 ≤ f < b^p.

• A radix B (an integer greater than 1).

Output: A radix-B floating-point approximation to v, using exponential notation if v is very large or very small.
Procedure: If f = 0 then simply print "0.0". Otherwise proceed as follows. First compute v' = v ⊗ B^(-x), where "⊗" denotes floating-point multiplication and where x is chosen so as to make v' < b^p. (If exponential format is to be used, one normally chooses x so that the result either is between 1/B and 1 or is between 1 and B.) Next, print the integer part of v' by any suitable method, such as the usual division-remainder technique; after that print a decimal point. Then take the fractional part f of v', let n = p - ⌊log_b v'⌋ - 1, and apply Algorithm (FP)³ to f and n. Finally, if x was not zero, the letter "E" and a representation of the scale factor x are printed. ∎

We note that if a floating-point number is the same size (in bits, or radix-b digits, or whatever) as an integer, then arithmetic on integers of that size is often sufficiently accurate, because the bits used for the floating-point exponent are usually more than enough for the extra digits of precision required.

This method for floating-point output is not entirely accurate, but is very simple to code, typically requiring only single-precision integer arithmetic. For applications where producing pleasant output is important, program and data space must be minimized, and the internal identity requirement may be relaxed, this method is excellent. An example of such an application might be an implementation of BASIC for a microcomputer, where pleasant output and memory conservation are more important than absolutely guaranteed accuracy. [The last sentence was written circa 1981. Nowadays all but the tiniest computers have the memory space for the full algorithm.] This algorithm was used for many years in the MacLisp interpreter and found to be satisfactory for most purposes.

There are two problems that prevent Algorithm Dragon2 from being completely accurate for floating-point output.

The first problem is that the spacing between adjacent floating-point numbers of a given precision is not uniform. Let v be a floating-point number, and let v⁻ and v⁺ be respectively the next-lower and next-higher floating-point numbers of the same precision. For most values of v, we have v⁺ - v = v - v⁻. However, if v is an integral power of b, then instead v⁺ - v = b × (v - v⁻); that is, the gap below v is smaller than the gap above v by a factor of b.

The second problem is that the scaling may cause loss of information. There may be two numbers, very close together in value, which when scaled by the same power of B result in the same scaled number, because of rounding. The problem is inherent in the floating-point representation. See [Matula68] for further discussion of this phenomenon.

Table 2: Procedure for Indefinite-Precision Integer Printout ((IP)²)

    begin
        k ← 0;
        R ← d;
        M ← b^n;
        S ← 1;
        loop
            S ← S × B;
            k ← k + 1;
        while (2 × R) + M > 2 × S:
        repeat;
        H ← k - 1;
        assert S = B^(H+1)
        loop
            k ← k - 1;
            S ← S/B;
            U ← ⌊R/S⌋;
            R ← R mod S;
        while 2 × R ≥ M and 2 × R ≤ (2 × S) - M:
            D_k ← U;
        repeat;
        cases
            2 × R ≤ S: D_k ← U;
            2 × R ≥ S: D_k ← U + 1;
        endcases;
        N ← k;
    end

6 Conversion of Integers

To avoid the round-off errors introduced by scaling floating-point numbers, we develop a completely accurate floating-point output routine that performs no floating-point computations. As a first step, we exhibit an algorithm for printing integers that has a termination criterion similar to that of Algorithm (FP)³.

Algorithm (IP)²:
Indefinite-Precision Integer Printout

Given:

• An h-digit radix-b integer d, accurate to position n (n ≥ 0):

    d = d_h d_(h-1) ... d_(n+1) d_n (n zeros)._b = Σ_(i=n..h) d_i × b^i

• A radix B.
Output: Integers H and N (H ≥ N ≥ 0) as an H-digit radix-B integer D:

    D = D_H D_{H-1} ... D_{N+1} D_N (N zeros) = Σ_{i=N}^{H} D_i × B^i

such that:

(a) |D - d| < b^n / 2
(b) N is the largest integer, and H the smallest, such that (a) can be true.
(c) |D - d| ≤ B^N / 2
(d) Each digit is output before the next is generated.

Procedure: see Table 2.

In Algorithm (IP)² all quantities are integers. We avoid the use of fractional quantities by introducing a scaling factor S and logically replacing the quantities R and M by R/S and M/S. Whereas in Algorithm (FP)³ the variables R and M were repeatedly multiplied by B, in Algorithm (IP)² the variable S is repeatedly divided by B.

7 Free-Format Floating-Point Output

From these two algorithms, (FP)³ and (IP)², it is now easy to synthesize an algorithm for "perfect" conversion of a (positive) fixed-precision floating-point number to a free-format floating-point number in another radix. We shall assume that a floating-point number f is represented as a tuple of three integers: the exponent e, the "fraction" or "significand" m, and the precision p. Together these integers represent the mathematical value f = m × b^(e-p). We require m < b^p; a normalized representation additionally requires m ≥ b^(p-1), but we do not depend on this.

We define the function shift_b of two integer arguments x and n (x > 0):

    shift_b(x, n) = ⌊x × b^n⌋

This function is intended to be trivial to implement in radix-b arithmetic: it is the result of shifting x by n radix-b digit positions, to the left for positive n (introducing low-order zeros) or to the right for negative n (discarding digits shifted to the right of the "radix-b point").

Algorithm (FPP)²:
Fixed-Precision Positive Floating-Point Printout
Given:

• Three radix-b integers e, f, and p, representing the number v = f × b^(e-p), with p ≥ 0 and 0 < f < b^p, and
• A radix B.

Output: Integers H and N and digits D_k (H ≥ k ≥ N) such that if one defines the value

    V = Σ_{i=N}^{H} D_i × B^i

then:

(a) (v- + v)/2 < V < (v + v+)/2
(b) H is the smallest integer (that is, furthest from +∞), and N the largest integer (that is, furthest from -∞), such that (a) can be true.
(c) |V - v| ≤ B^N / 2
(d) Each digit is output before the next is generated.

Procedure: see Tables 3 and 4.

Unfortunately, this algorithm requires f ≠ 0; it does not work properly for zero. Notice that relationship (a) is not stated in the symmetric form |V - v| < t. The relationship is not symmetrical because of the phenomenon of unequal gaps.

As in Algorithm (IP)², all arithmetic operations produce integer results and all variables take on integer values. This is done by using the scale factor S = shift_b(1, max(0, -(e - p))). Note, however, that some of these integer values may be very large. If this algorithm is used to print (in decimal) floating-point numbers in IEEE standard single-precision floating-point format [IEEE81, IEEE85], integers as large as 2^154 may be calculated; for double-precision format integers as large as 2^1050 may be encountered. Such a large integer can be represented in fewer than five 32-bit words (for single precision) or forty 32-bit words (for double precision). While multiprecision integer arithmetic is required in the general case, the storage requirements are certainly not impractical. The multiprecision arithmetic does not have to be quite as general as that presented in [Knuth81, §4.3.1]. One must add, subtract, and compare multiprecision integers; multiply and divide multiprecision integers by B; and divide one multiprecision integer by another (but only in situations where it is known that the result will be a radix-B digit, which is much simpler than the general case). The size of the integers involved depends primarily on the exponent value of the floating-point number to be printed. For floating-point numbers of reasonable magnitude, 32-bit or 64-bit integer arithmetic is likely to suffice and so execution speed is typically not a problem.

The successor v+ and predecessor v- to a positive floating-point number v = f × b^(e-p) are taken to be

    v+ = v + b^(e-p)

and

    v- = if f = b^(p-1) then v - b^(e-p-1) else v - b^(e-p) fi
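The shift_b primitive and the unequal-gap rule for v+ and v- are easy to experiment with in any language with unbounded integers. The following Python sketch is ours, not the paper's (the names shift_b and neighbors are our inventions); it uses exact rationals where the paper uses scaled integers:

```python
from fractions import Fraction

def shift_b(x, n, b=2):
    # shift x by n radix-b digit positions: left for n >= 0,
    # right (discarding the shifted-out digits) for n < 0
    return x * b**n if n >= 0 else x // b**(-n)

def neighbors(f, e, p, b=2):
    # v = f * b**(e-p); return (v-, v, v+) exactly
    v = Fraction(f) * Fraction(b)**(e - p)
    gap = Fraction(b)**(e - p)          # b**(e-p), the gap above v
    v_plus = v + gap
    # the gap below shrinks by a factor of b when f = b**(p-1)
    v_minus = v - (gap / b if f == b**(p - 1) else gap)
    return v_minus, v, v_plus
```

For example, neighbors(8, 0, 4) describes v = 1/2, an integral power of b, and returns (15/32, 1/2, 9/16): the gap below is half the gap above, exactly the asymmetry the conditional formula for v- captures.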
Table 3: Fixed-Precision Positive Floating-Point Printout ((FPP)²)

begin
    assert f ≠ 0
    R ← shift_b(f, max(e - p, 0));
    S ← shift_b(1, max(0, -(e - p)));
    assert R/S = f × b^(e-p) = v
    M- ← shift_b(1, max(e - p, 0));
    M+ ← M-;
    Simple-Fixup;
    H ← k - 1;
    loop
        assert (R/S) × B^k + Σ_{i=k}^{H} D_i × B^i = v
        k ← k - 1;
        U ← ⌊(R × B)/S⌋;
        R ← (R × B) mod S;
        M- ← M- × B;
        M+ ← M+ × B;
        low ← 2 × R < M-;
        high ← 2 × R > (2 × S) - M+;
        while (not low) and (not high) :
            D_k ← U;
    repeat;
    comment Let V_k = Σ_{i=k+1}^{H} D_i × B^i.
    assert low ⇒ (v- + v)/2 < (U × B^k + V_k) ≤ v
    assert high ⇒ v ≤ ((U + 1) × B^k + V_k) < (v + v+)/2
    cases
        low and not high : D_k ← U;
        high and not low : D_k ← U + 1;
        low and high :
            cases
                2 × R ≤ S : D_k ← U;
                2 × R ≥ S : D_k ← U + 1;
            endcases;
    endcases;
    N ← k;
end;

Table 4: Procedure Simple-Fixup

procedure Simple-Fixup;
begin
    assert R/S = v
    assert M-/S = M+/S = b^(e-p)
    if f = shift_b(1, p - 1) then
        M+ ← shift_b(M+, 1);
        R ← shift_b(R, 1);
        S ← shift_b(S, 1);
    fi;
    k ← 0;
    loop
        assert (R/S) × B^k = v
        assert (M-/S) × B^k = v - v-
        assert (M+/S) × B^k = v+ - v
        while R < ⌈S/B⌉ :
            k ← k - 1;
            R ← R × B;
            M- ← M- × B;
            M+ ← M+ × B;
    repeat;
    assert k = min(0, 1 + ⌊log_B v⌋)
    loop
        assert (R/S) × B^k = v
        assert (M-/S) × B^k = v - v-
        assert (M+/S) × B^k = v+ - v
        while (2 × R) + M+ > 2 × S :
            S ← S × B;
            k ← k + 1;
    repeat;
    assert k = 1 + ⌊log_B ((v + v+)/2)⌋
end;
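As a concrete companion to Tables 3 and 4, here is a Python transcription (our sketch; the function name and return convention are ours, not the paper's). Python's unbounded integers play the role of the multiprecision arithmetic; the fixup loops and the low/high termination test follow the pseudocode above:

```python
def free_format_digits(f, e, p, b=2, B=10):
    # v = f * b**(e-p); return (digits, H): the shortest digit string
    # D_H D_{H-1} ... that identifies v among its floating-point
    # neighbors, with the last digit correctly rounded
    assert 0 < f < b**p
    R = f * b**max(e - p, 0)
    S = b**max(0, p - e)
    M_minus = M_plus = b**max(e - p, 0)
    if f == b**(p - 1):                 # unequal gaps: scale by b
        M_plus *= b; R *= b; S *= b
    k = 0
    while R < -(-S // B):               # R < ceil(S/B)
        k -= 1; R *= B; M_minus *= B; M_plus *= B
    while 2 * R + M_plus > 2 * S:
        S *= B; k += 1
    H = k - 1
    digits = []
    while True:
        k -= 1
        U, R = divmod(R * B, S)
        M_minus *= B; M_plus *= B
        low = 2 * R < M_minus
        high = 2 * R > 2 * S - M_plus
        if low or high:
            break
        digits.append(U)
    # final digit: round up when only M+ is satisfied, or on a
    # tie when the remainder says U+1 is at least as close
    if high and (not low or 2 * R >= S):
        U += 1
    digits.append(U)
    return digits, H
```

For the 27-bit approximation to π used as the example in Section 8 (f = 105414357, e = 2, p = 27), this yields the nine digits 3 1 4 1 5 9 2 6 5 with H = 0, i.e. "3.14159265".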
The formula for v+ does not have to be conditional, even when the representation for v+ requires a larger exponent e than v does in order to satisfy f < b^p. The formula for v-, however, must take into account the situation where f = b^(p-1), because v- may then use the next-smaller value for e, and so the gap size is smaller by a factor of b. There may be some question as to whether this is the correct criterion for floating-point numbers that are not normalized. Observe, however, that this is the correct criterion for IEEE standard floating-point format [IEEE85], because all such numbers are normalized except those with the smallest possible exponent, so if v is denormalized then v- must also be denormalized and have the same value for the exponent e.

To compensate for the phenomenon of unequal gaps, the variables M- and M+ are given the value that M would have in Algorithm (IP)², and then adjustments are made to M+, R, and S if necessary. The simplest way to account for unequal gaps would be to divide M- by b. However, this might cause M- to have a non-integral value in some situations. To avoid this, we instead scale the entire algorithm by an additional factor of b, by multiplying M+, R, and S by b.

The presence of unequal gaps also induces an asymmetry in the termination test that reveals a previously hidden problem. In Algorithm (IP)², with M- = M+ = M, it was the case that if 2 × R > (2 × S) - M+ and 2 × R < S, then necessarily 2 × R < M- also. Here this is not necessarily so, because M- may be smaller than M+ by a factor of b. The interpretation of this is that in some situations one may have 2 × R < S but nevertheless must round up to the digit U + 1, because the termination test succeeds relative to M+ but fails relative to M-.

This problem is solved by rewriting the termination test into two parts. The boolean variable low is true if the loop may be exited because M- is satisfied (in which case the digit U may be used). The boolean variable high is true if the loop may be exited because M+ is satisfied (in which case the digit U + 1 may be used). If both variables are true, then either digit of U and U + 1 may be used, as far as information-preservation is concerned; in this case, and this case only, the comparison of 2 × R to S should be done to satisfy the correct-rounding property.

Algorithm (FPP)² is conceptually divided into four parts. First, the variables R, S, M-, and M+ are initialized. Second, if the gaps are unequal then the necessary adjustment is made. Third, the radix-B weight H of the first digit is determined by two loops. One loop is needed for positive H, and repeatedly multiplies S by B; the other loop is needed for negative H, and repeatedly multiplies R, M-, and M+ by B. (Divisions by B are of course avoided to prevent roundoff errors.) Fourth, the digits are generated by the last loop; this loop should be compared with the one in Table 1.

8 Fixed-Format Floating-Point Output

Algorithm (FPP)² simply generates digits, stopping only after producing the appropriate number of digits for precise free-format output. It needs more flexible means of cutting off digit generation for fixed-format applications. Moreover, it is desirable to intersperse formatting with digit generation rather than buffering digits and then re-traversing the buffer.

It is not difficult to adapt Algorithm (FPP)² for fixed-format output, where the number of digits to be output is predetermined independently of the floating-point value to be printed. Using this algorithm avoids giving the illusion of more internal precision than is actually present, because it cuts off the conversion after sufficiently many digits have been produced; the remaining character positions are then padded with blanks or zeros.

As an example, suppose that a Fortran program is written to calculate π, and that it indeed calculates a floating-point number that is the best possible approximation to π for that floating-point format, precise to, say, 27 bits. However, it then prints the value with format F20.18, producing the output "3.141592651605606079". This is indeed accurately the value of that floating-point number to that many places, but the last ten digits are in some sense not justified, because the internal representation is not that precise. Moreover, this output certainly does not represent the value of π accurately; in format F20.18, π should be printed as "3.141592653589793238". The free-format printing procedure described above would cut off the conversion after nine digits, producing "3.14159265" and no more. The fixed-format procedure to be developed below would therefore produce "3.14159265          " or "3.141592650000000000". Either of these is much less misleading as to internal precision.

The fixed-format algorithm essentially uses the free-format output method, differing only in when conversion is cut off. If conversion is cut off by exhaustion of precision, then any remaining character positions are padded with blanks or zeros. If conversion is cut off by the format specification, the only problem is producing correctly rounded output. This is easily almost-solved by noting that the remainder R can correctly direct rounding at any digit position, not just the last. In terms of the programs, almost all that is necessary is to terminate a conversion loop prematurely if appropriate, and the following cases statement will properly round the last digit produced.
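The F20.18 illusion in the example above can be reproduced with exact rational arithmetic. In this Python sketch (ours, not the paper's), the 27-bit best approximation to π is expanded to 18 decimal places, faithfully reproducing stored bits that carry no information about π:

```python
from fractions import Fraction

# best 27-bit approximation to pi: v = f * 2**(e-p)
f, e, p = 105414357, 2, 27
v = Fraction(f) * Fraction(2)**(e - p)

# expand v to 18 decimal places, rounding the final place
scaled = v * 10**18
n, d = scaled.numerator, scaled.denominator
digits = (2 * n + d) // (2 * d)          # round to nearest
s = str(digits)
print(s[0] + "." + s[1:])                # 3.141592651605606079
```

Every printed digit is exact for the stored value, yet the last ten digits differ from those of π (3.141592653589793238...), which is precisely the paper's point about unjustified digits.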
This doesn't solve the entire problem, however, because if conversion is cut off by the format specification then the early rounding may require propagation of carries; that is, the proof of property (d) fails to hold. This can be taken care of by adjusting the values of M- and M+ appropriately. In principle this adjustment sometimes requires the use of non-integral values that cannot be represented exactly in radix b; in practice such values can be rounded to get the proper effect.

As noted above, it is indeed not difficult to adapt the simplified algorithm for a particular fixed format. However, straightforwardly adapting it to handle a variety of fixed formats is clumsy. We tried to do this, and found that we had to introduce many switches and flags to control the various aspects of formatting, such as total field width, number of digits before and after the decimal point, width of exponent field, scale factor (as for "P" format specifiers in Fortran), and so on. The result was a tangled mess, very difficult to understand and maintain.

We finally untangled the mess by completely separating the generation of digits from the formatting of the digits. We added a few interface parameters to produce a digit generator (called Dragon4) to which a variety of formatting processes could be easily interfaced. The generator and the formatter execute as coroutines, and it is assumed that the "user" process executes as a third coroutine.

For purposes of exposition we have borrowed certain features of Hoare's CSP (Communicating Sequential Processes) notation [Hoare78]. The statement

    GENERATE ! (x_1, ..., x_n)

means that a message containing values x_1, ..., x_n is to be sent to the GENERATE process; the process that executes such a statement then continues its own execution without waiting for a reply. The statement

    FORMAT ? (x_1, ..., x_n)

means that a message containing values x_1, ..., x_n is to be received from the FORMAT process; the process that executes such a statement waits if necessary for a message to arrive. (The notation is symmetrical in that the sender and receiver of a message must each know the other's name.) We do not borrow any of the control structure notations of CSP, and we do not worry about the matter of processes failing.

To perform floating-point output one must effectively execute three coroutines together. In the CSP notation one writes:

    [ USER :: user process ||
      FORMAT :: formatting process ||
      GENERATE :: Dragon4 ]

The interface convention is that the "user process" must first send the message

    FORMAT ! (b, e, f, p, B)

to initiate floating-point conversion and then may receive characters from the FORMAT process until a "□" character is received. The "□" character serves as a terminator and should be discarded. Any of the formatting processes shown below may be used; to get free-format conversion, for example, one would execute

    [ USER :: user process ||
      FORMAT :: Free-Format ||
      GENERATE :: Dragon4 ]

and to get fixed-format exponential formatting one would execute

    [ USER :: user process ||
      FORMAT :: Fixed-Format Exponential ||
      GENERATE :: Dragon4 ]

We have found it easy to write new formatting routines.

9 Implementations

In 1981 we coded a version of Algorithm Dragon4 in Pascal, including the formatting routines shown here as well as formatters for Fortran E, F, and G formats and for PL/I-style picture formats such as $$$,$$$,$$9.99CR. This suite has been tested thoroughly. This algorithm has also been used in various Lisp implementations for both free-format and fixed-format floating-point output. A portable and performance-tuned implementation in C is in progress.

10 Historical Note and Acknowledgments

The work reported here was done in the late 1970's. This is the first published appearance, but earlier versions of this paper have been circulated "unpublished" through the grapevine for the last decade. The algorithms have been used for years in a variety of language implementations, perhaps most notably in Zetalisp.

Why wasn't it published earlier? It just didn't seem like that big a deal; but we were wrong. Over the years floating-point printers have continued to be inaccurate and we kept getting requests for the unpublished draft. We thank Donald Knuth for giving us that last required push to submit it for publication.

The paper has been almost completely revised for presentation at this conference. An intermediate version of the algorithm, Dragon3 (Free-Format Perfect Positive Floating-Point Printout), has been omitted from this presentation for reasons of space; it is of interest only
in being closest in form to what was actually first implemented in MacLisp. We have allowed a few anachronisms to remain in this presentation, such as the suggestion that a tiny version of the printing algorithm might be required for microcomputer implementations of BASIC. You will find that most of the references date back to the 1960's and 1970's.

Helpful and valuable comments on drafts of this paper were provided by Jon Bentley, Barbara K. Steele, and Daniel Weinreb. We are also grateful to Donald Knuth and Will Clinger for their encouragement.

The first part of this work was done in the mid to late 1970's while both authors were at the Massachusetts Institute of Technology, in the Laboratory for Computer Science and the Artificial Intelligence Laboratory. From 1978 to 1980, Steele was supported by a Fannie and John Hertz Fellowship.

This work was carried forward by Steele at Carnegie-Mellon University, Tartan Laboratories, and Thinking Machines Corporation, and by White at IBM T. J. Watson Research and Lucid, Inc. We thank these institutions for their support.

This work was supported in part by the Defense Advanced Research Projects Agency, Department of Defense, ARPA Order 3597, monitored by the Air Force Avionics Laboratory under contract F33615-78-C-1551. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

11 References

[ANSI76] American National Standards Institute. Draft proposed ANS Fortran (BSR X3.9). Reprinted as ACM SIGPLAN Notices 11, 3 (March 1976).

[Clinger90] Clinger, William D. "How to Read Floating Point Numbers Accurately." Proc. ACM SIGPLAN '90 Conference on Programming Language Design and Implementation (White Plains, New York, June 1990).

[DEC73] Digital Equipment Corporation. DECsystem-10 Assembly Language Handbook. Third edition. (Maynard, Massachusetts, 1973).

[Dijkstra76] Dijkstra, Edsger W. A Discipline of Programming. Prentice-Hall (Englewood Cliffs, New Jersey, 1976).

[Gardner77] Gardner, Martin. "The Dragon Curve and Other Problems." In Mathematical Magic Show. Knopf (New York, 1977), 203-222.

[Hoare78] Hoare, C. A. R. "Communicating Sequential Processes." Communications of the ACM 21, 8 (August 1978), 666-677.

[IEEE81] IEEE Computer Society Standards Committee, Microprocessor Standards Subcommittee, Floating-Point Working Group. "A Proposed Standard for Binary Floating-Point Arithmetic." Computer 14, 3 (March 1981), 51-62.

[IEEE85] IEEE. IEEE Standard for Binary Floating-Point Arithmetic. ANSI/IEEE Std 754-1985 (New York, 1985).

[Jensen74] Jensen, Kathleen, and Wirth, Niklaus. PASCAL User Manual and Report. Second edition. Springer-Verlag (New York, 1974).

[Knuth68] Knuth, Donald E. The Art of Computer Programming, Volume 1: Fundamental Algorithms. Addison-Wesley (Reading, Massachusetts, 1968).

[Knuth69] Knuth, Donald E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. First edition. Addison-Wesley (Reading, Massachusetts, 1969).

[Knuth74] Knuth, Donald E. "Structured Programming with GO TO Statements." Computing Surveys 6, 4 (December 1974).

[Knuth81] Knuth, Donald E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Second edition. Addison-Wesley (Reading, Massachusetts, 1981).

[Matula68] Matula, David W. "In-and-Out Conversions." Communications of the ACM 11, 1 (January 1968), 47-50.

[Matula70] Matula, David W. "A Formalization of Floating-Point Numeric Base Conversion." IEEE Transactions on Computers C-19, 8 (August 1970), 681-692.

[Moon74] Moon, David A. MacLisp Reference Manual, Revision 0. Massachusetts Institute of Technology, Project MAC (Cambridge, Massachusetts, April 1974).

[Taranto59] Taranto, Donald. "Binary Conversion, with Fixed Decimal Precision, of a Decimal Fraction." Communications of the ACM 2, 7 (July 1959), 27.
Table 5: Procedure Dragon4 (Formatter-Feeding Process for Floating-Point Printout, Performing Free-Format Perfect Positive Floating-Point Printout)

process Dragon4;
begin
    FORMAT ? (b, e, f, p, B, CutoffMode, CutoffPlace);
    assert CutoffMode = "relative" ⇒ CutoffPlace ≤ 0
    RoundUpFlag ← false;
    if f = 0 then FORMAT ! (0, k)
    else
        R ← shift_b(f, max(e - p, 0));
        S ← shift_b(1, max(0, -(e - p)));
        M- ← shift_b(1, max(e - p, 0));
        M+ ← M-;
        Fixup;
        loop
            k ← k - 1;
            U ← ⌊(R × B)/S⌋;
            R ← (R × B) mod S;
            M- ← M- × B;
            M+ ← M+ × B;
            low ← 2 × R < M-;
            if RoundUpFlag
                then high ← 2 × R ≥ (2 × S) - M+
                else high ← 2 × R > (2 × S) - M+ fi;
            while not low and not high and k ≠ CutoffPlace :
                FORMAT ! (U, k);
        repeat;
        cases
            low and not high : FORMAT ! (U, k);
            high and not low : FORMAT ! (U + 1, k);
            (low and high) or (not low and not high) :
                cases
                    2 × R ≤ S : FORMAT ! (U, k);
                    2 × R ≥ S : FORMAT ! (U + 1, k);
                endcases;
        endcases;
    fi;
    comment Henceforth this process will generate as many "-1" digits as the caller desires, along with appropriate values of k.
    loop k ← k - 1; FORMAT ! (-1, k) repeat;
end;

Table 6: Procedure Fixup

procedure Fixup;
begin
    if f = shift_b(1, p - 1) then
        comment Account for unequal gaps.
        M+ ← shift_b(M+, 1);
        R ← shift_b(R, 1);
        S ← shift_b(S, 1);
    fi;
    k ← 0;
    loop
        while R < ⌈S/B⌉ :
            k ← k - 1;
            R ← R × B;
            M- ← M- × B;
            M+ ← M+ × B;
    repeat;
    loop
        loop
            while (2 × R) + M+ ≥ 2 × S :
                S ← S × B;
                k ← k + 1;
        repeat;
        comment Perform any necessary adjustment of M- and M+ to take into account the formatting requirements.
        case CutoffMode of
            "normal" : CutoffPlace ← k;
            "absolute" : CutoffAdjust;
            "relative" :
                CutoffPlace ← k + CutoffPlace;
                CutoffAdjust;
        endcase;
        while (2 × R) + M+ > 2 × S :
    repeat;
end;

Table 7: Procedure fill

procedure fill(k, c);
    comment Send k copies of the character c to the USER process. No characters are sent if k = 0.
    for i from 1 to k do USER ! (c) od;
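The generator/formatter split that Tables 5-7 express in CSP maps naturally onto Python generators. The sketch below is our illustration of the interface, not a transcription of Dragon4: it supports only free-format ("normal") cutoff, and its stream of (U, k) pairs, with -1 signalling an insignificant digit, mimics the FORMAT!(U, k) message protocol:

```python
def digit_stream(f, e, p, b=2, B=10):
    # yield (U, k) for each significant digit of v = f * b**(e-p),
    # then (-1, k) forever, like Dragon4's message stream
    R = f * b**max(e - p, 0)
    S = b**max(0, p - e)
    M_minus = M_plus = b**max(e - p, 0)
    if f == b**(p - 1):                  # unequal-gap fixup
        M_plus *= b; R *= b; S *= b
    k = 0
    while R < -(-S // B):                # R < ceil(S/B)
        k -= 1; R *= B; M_minus *= B; M_plus *= B
    while 2 * R + M_plus > 2 * S:
        S *= B; k += 1
    while True:
        k -= 1
        U, R = divmod(R * B, S)
        M_minus *= B; M_plus *= B
        low = 2 * R < M_minus
        high = 2 * R > 2 * S - M_plus
        if low or high:
            break
        yield U, k
    yield (U + 1 if high and (not low or 2 * R >= S) else U), k
    while True:                          # trailing insignificant digits
        k -= 1
        yield -1, k

def free_format(f, e, p, b=2, B=10):
    # a minimal consumer in the spirit of Table 10, building a string
    gen = digit_stream(f, e, p, b, B)
    U, k = next(gen)
    out = "0." + "0" * (-k - 1) if k < 0 else ""
    while U != -1:
        out += "0123456789ABCDEF"[U]
        if k == 0:
            out += "."
        U, k = next(gen)
    return out
```

Here free_format(105414357, 2, 27) returns "3.14159265"; the point of the split is that a fixed-format or exponential consumer can drive the very same digit_stream without touching the conversion arithmetic.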
Table 8: Procedure CutoffAdjust

procedure CutoffAdjust;
begin
    a ← CutoffPlace - k;
    y ← S;
    cases
        a ≥ 0 : for j ← 1 to a do y ← y × B od;
        a < 0 : for j ← 1 to -a do y ← ⌊y/B⌋ od;
    endcases;
    assert y = ⌊S × B^a⌋
    M- ← max(y, M-);
    M+ ← max(y, M+);
    if M+ = y then RoundUpFlag ← true fi;
end;

Table 9: Procedure DigitChar

procedure DigitChar(U);
case U of
    comment A digit that is -1 is treated as a zero (one that is not significant). Here we print a blank for it; fixed Fortran formats might prefer a zero.
    -1 : USER ! (" ");
    0 : USER ! ("0");
    1 : USER ! ("1");
    2 : USER ! ("2");
    3 : USER ! ("3");
    4 : USER ! ("4");
    5 : USER ! ("5");
    6 : USER ! ("6");
    7 : USER ! ("7");
    8 : USER ! ("8");
    9 : USER ! ("9");
    10 : USER ! ("A");
    11 : USER ! ("B");
    12 : USER ! ("C");
    13 : USER ! ("D");
    14 : USER ! ("E");
    15 : USER ! ("F");
endcase;

Table 10: Formatting process for free-format output

process Free-Format;
begin
    USER ? (b, e, f, p, B);
    GENERATE ! (b, e, f, p, B, "normal", 0);
    GENERATE ? (U, k);
    if k < 0 then
        USER ! ("0");
        USER ! (".");
        fill(-k - 1, "0")
    fi;
    loop
        DigitChar(U);
        if k = 0 then USER ! (".") fi;
        GENERATE ? (U, k);
        while U ≠ -1 or k ≥ -1 :
    repeat;
    USER ! ("□");
end;

Table 11: Formatting process for fixed-format output

process Fixed-Format;
begin
    USER ? (b, e, f, p, B, w, d);
    assert d ≥ 0 ∧ w ≥ max(d + 1, 2)
    c ← w - d - 1;
    GENERATE ! (b, e, f, p, B, "absolute", -d);
    GENERATE ? (U, k);
    if k < c then
        if k < 0 then
            if c > 0 then fill(c - 1, " "); USER ! ("0") fi;
            USER ! (".");
            fill(min(-k, d) - 1, "0");
        else fill(c - k - 1, " ") fi;
        loop
            while k ≥ -d :
            DigitChar(U);
            if k = 0 then USER ! (".") fi;
            GENERATE ? (U, k);
        repeat;
    else fill(w, "*") fi;
    USER ! ("□");
end;
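The effect of an "absolute" cutoff can also be sketched directly: stop the digit loop at position -d and let the remainder R direct the rounding, as the text in Section 8 describes. This Python sketch is ours; like the simple adaptation discussed there, it does not propagate a carry out of the leading digit, which is the job of CutoffAdjust and RoundUpFlag in the full algorithm:

```python
def fixed_digits(f, e, p, d, b=2, B=10):
    # digits of v = f * b**(e-p), cut off at position -d
    # (at most d digits after the radix point); returns (digits, H)
    R = f * b**max(e - p, 0)
    S = b**max(0, p - e)
    M_minus = M_plus = b**max(e - p, 0)
    if f == b**(p - 1):
        M_plus *= b; R *= b; S *= b
    k = 0
    while R < -(-S // B):                # R < ceil(S/B)
        k -= 1; R *= B; M_minus *= B; M_plus *= B
    while 2 * R + M_plus > 2 * S:
        S *= B; k += 1
    H = k - 1
    digits = []
    while True:
        k -= 1
        U, R = divmod(R * B, S)
        M_minus *= B; M_plus *= B
        low = 2 * R < M_minus
        high = 2 * R > 2 * S - M_plus
        if low or high or k == -d:
            break
        digits.append(U)
    # the remainder directs rounding at the cutoff position too
    if (high and not low) or (low == high and 2 * R >= S):
        U += 1        # caution: a carry (U + 1 == B) is not handled here
    digits.append(U)
    return digits, H
```

With the 27-bit π value, fixed_digits(105414357, 2, 27, 2) gives ([3, 1, 4], 0), and d = 3 gives ([3, 1, 4, 2], 0): the last digit rounds up because the discarded remainder exceeds half a unit in that place.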
Table 12: Formatting process for free-format exponential output

process Free-Format Exponential;
begin
    USER ? (b, e, f, p, B);
    GENERATE ! (b, e, f, p, B, "normal", 0);
    GENERATE ? (U, expt);
    DigitChar(U);
    USER ! (".");
    loop
        GENERATE ? (U, k);
        while U ≠ -1 :
        DigitChar(U);
    repeat;
    if k = expt - 1 then USER ! ("0") fi;
    USER ! ("E");
    if expt < 0 then USER ! ("-"); expt ← -expt fi;
    j ← 1;
    loop
        j ← j × B;
        while j ≤ expt :
    repeat;
    loop
        j ← ⌊j/B⌋;
        DigitChar(⌊expt/j⌋);
        expt ← expt mod j;
        while j > 1 :
    repeat;
    USER ! ("□");
end;

Table 13: Formatting process for fixed-format exponential output

process Fixed-Format Exponential;
begin
    USER ? (b, e, f, p, B, w, d, x);
    assert d ≥ 0 ∧ x ≥ 1 ∧ w > d + x + 3
    c ← w - d - x - 3;
    GENERATE ! (b, e, f, p, B, "relative", -d);
    GENERATE ? (U, expt);
    j ← 1;
    a ← 0;
    loop
        j ← j × B;
        a ← a + 1;
        while j ≤ |expt| :
    repeat;
    if a ≤ x then
        fill(c - 1, " ");
        DigitChar(U);
        USER ! (".");
        for q ← 1 to d do
            GENERATE ? (U, k);
            DigitChar(U);
        od;
        USER ! ("E");
        if expt < 0 then
            USER ! ("-")
        else
            USER ! ("+")
        fi;
        expt ← |expt|;
        fill(x - a, "0");
        loop
            j ← ⌊j/B⌋;
            DigitChar(⌊expt/j⌋);
            expt ← expt mod j;
            while j > 1 :
        repeat;
    else fill(w, "*") fi;
    USER ! ("□");
end;
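The exponent-digit loops at the end of Tables 12 and 13 (find the smallest power of B exceeding the exponent, then peel digits most-significant first) can be sketched on their own. This Python helper is our illustration; it folds the paper's "j ← 1 then multiply once" idiom into starting j at B:

```python
def exponent_digits(expt, B=10):
    # digits of a nonnegative exponent, most significant first;
    # a zero exponent prints as the single digit 0
    assert expt >= 0
    j = B
    while j <= expt:          # smallest power of B with j > expt
        j *= B
    digits = []
    while j > 1:
        j //= B
        digits.append(expt // j)
        expt %= j
    return digits
```

For example, exponent_digits(35) yields [3, 5], and exponent_digits(0) yields [0], matching the one-digit minimum the pseudocode guarantees.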
