Logarithm - Wikipedia

The logarithm is the inverse function of exponentiation. The logarithm of a number x with base b is the exponent that b must be raised to produce x. There are three common bases: base 10 (common logarithm), base e (natural logarithm), and base 2 (binary logarithm). Logarithms were introduced to simplify calculations and are now used in many areas of science, mathematics, and technology.


Logarithm

In mathematics, the logarithm is the inverse function to exponentiation. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 1000 = 10^3, the logarithm base 10 of 1000 is 3, or log10 (1000) = 3. The logarithm of x to base b is denoted as logb (x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter, such as in big O notation.

Plots of logarithm functions, with three commonly used bases. The special points logb b = 1 are indicated by dotted lines, and all curves intersect at logb 1 = 0.

The logarithm base 10 is called the


decimal or common logarithm and is
commonly used in science and
engineering. The natural logarithm has the
number e ≈ 2.718 as its base; its use is
widespread in mathematics and physics,
because of its very simple derivative. The
binary logarithm uses base 2 and is
frequently used in computer science.

Logarithms were introduced by John


Napier in 1614 as a means of simplifying
calculations.[1] They were rapidly adopted
by navigators, scientists, engineers,
surveyors and others to perform high-
accuracy computations more easily. Using
logarithm tables, tedious multi-digit
multiplication steps can be replaced by
table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors:

logb (xy) = logb (x) + logb (y),

provided that b, x and y are all positive and b ≠ 1.
b ≠ 1. The slide rule, also based on
logarithms, allows quick calculations
without tables, but at lower precision. The
present-day notion of logarithms comes
from Leonhard Euler, who connected them
to the exponential function in the 18th
century, and who also introduced the letter
e as the base of natural logarithms.[2]

Logarithmic scales reduce wide-ranging


quantities to smaller scopes. For example,
the decibel (dB) is a unit used to express
ratio as logarithms, mostly for signal
power and amplitude (of which sound
pressure is a common example). In
chemistry, pH is a logarithmic measure for
the acidity of an aqueous solution.
Logarithms are commonplace in scientific
formulae, and in measurements of the
complexity of algorithms and of geometric
objects called fractals. They help to
describe frequency ratios of musical
intervals, appear in formulas counting
prime numbers or approximating
factorials, inform some models in
psychophysics, and can aid in forensic
accounting.

The concept of logarithm as the inverse of


exponentiation extends to other
mathematical structures as well. However,
in general settings, the logarithm tends to
be a multi-valued function. For example,
the complex logarithm is the multi-valued
inverse of the complex exponential
function. Similarly, the discrete logarithm
is the multi-valued inverse of the
exponential function in finite groups; it has
uses in public-key cryptography.
Motivation

The graph of the logarithm base 2 crosses the x-axis at x = 1 and passes through the points (2, 1), (4, 2), and (8, 3), depicting, e.g., log2 (8) = 3 and 2^3 = 8. The graph gets arbitrarily close to the y-axis, but does not meet it.

Addition, multiplication, and


exponentiation are three of the most
fundamental arithmetic operations. The
inverse of addition is subtraction, and the
inverse of multiplication is division.
Similarly, a logarithm is the inverse
operation of exponentiation.
Exponentiation is when a number b, the base, is raised to a certain power y, the exponent, to give a value x; this is denoted

x = b^y.

For example, raising 2 to the power of 3 gives 8:

2^3 = 8.

The logarithm of base b is the inverse operation, that provides the output y from the input x. That is, y = logb x is equivalent to x = b^y if b is a positive real number. (If b is not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.)
One of the main historical motivations of introducing logarithms is the formula

logb (xy) = logb (x) + logb (y),
by which tables of logarithms allow


multiplication and division to be reduced
to addition and subtraction, a great aid to
calculations before the invention of
computers.

Definition

Given a positive real number b such that b ≠ 1, the logarithm of a positive real number x with respect to base b[nb 1] is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the unique real number y such that b^y = x.[3]

The logarithm is denoted "logb x"


(pronounced as "the logarithm of x to
base b", "the base-b logarithm of x", or
most commonly "the log, base b, of x").

An equivalent and more succinct definition is that the function logb is the inverse function to the function x ↦ b^x.

Examples

log2 16 = 4, since 2^4 = 2 × 2 × 2 × 2 = 16.

Logarithms can also be negative: log2 (1/2) = −1, since 2^−1 = 1/2.

log10 150 is approximately 2.176, which lies between 2 and 3, just as 150 lies between 10^2 = 100 and 10^3 = 1000.

For any base b, logb b = 1 and logb 1 = 0, since b^1 = b and b^0 = 1, respectively.
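As a quick numerical check, the worked examples above can be verified with Python's standard math module (floating-point arithmetic, so comparisons use tolerances):

```python
import math

# Check the worked examples from the text numerically.
assert abs(math.log2(16) - 4) < 1e-12        # 2^4 = 16
assert abs(math.log2(0.5) - (-1)) < 1e-12    # 2^-1 = 1/2
assert abs(math.log10(150) - 2.176) < 1e-3   # lies between 2 and 3
for b in (2, 10, math.e):
    assert abs(math.log(b, b) - 1) < 1e-12   # logb b = 1
    assert abs(math.log(1, b) - 0) < 1e-12   # logb 1 = 0
```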

Logarithmic identities

Several important formulas, sometimes


called logarithmic identities or logarithmic
laws, relate logarithms to one another.[4]
Product, quotient, power, and root

The logarithm of a product is the sum of


the logarithms of the numbers being
multiplied; the logarithm of the ratio of two
numbers is the difference of the
logarithms. The logarithm of the p-th
power of a number is p times the
logarithm of the number itself; the
logarithm of a p-th root is the logarithm of
the number divided by p. The following
table lists these identities with examples.
Each of the identities can be derived after substitution of the logarithm definitions x = b^(logb x) or y = b^(logb y) in the left hand sides.

          Formula                            Example
Product   logb (xy) = logb x + logb y       log3 243 = log3 (9 · 27) = log3 9 + log3 27 = 2 + 3 = 5
Quotient  logb (x/y) = logb x − logb y      log2 16 = log2 (64/4) = log2 64 − log2 4 = 6 − 2 = 4
Power     logb (x^p) = p logb x             log2 64 = log2 (2^6) = 6 log2 2 = 6
Root      logb (p-th root of x) = (logb x)/p    log10 √1000 = (1/2) log10 1000 = 3/2 = 1.5

Change of base

The logarithm logb x can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:[nb 2]

logb x = (logk x) / (logk b).
Typical scientific calculators calculate the logarithms to bases 10 and e.[5] Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:

logb x = (log10 x) / (log10 b) = (ln x) / (ln b).

Given a number x and its logarithm y = logb x to an unknown base b, the base is given by:

b = x^(1/y),

which can be seen from taking the defining equation x = b^y to the power of 1/y.
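The change-of-base formula is also how logarithms to arbitrary bases are computed in practice. The sketch below illustrates both identities in Python (the function name log_base is ours):

```python
import math

# Change of base: logb(x) = ln(x) / ln(b), i.e. the formula with k = e.
def log_base(x, b):
    return math.log(x) / math.log(b)

assert abs(log_base(1000, 10) - 3) < 1e-12
assert abs(log_base(8, 2) - 3) < 1e-12

# Recovering an unknown base from x and y = logb(x): b = x**(1/y).
x, y = 81, 4              # suppose we only know that logb(81) = 4
b = x ** (1 / y)          # then b = 81^(1/4) = 3
assert abs(b - 3) < 1e-12
```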
Particular bases

Plots of logarithm for bases 0.5, 2, and e

Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant ≈ 2.71828), and b = 2 (the binary logarithm). In mathematical analysis, the logarithm base e is widespread because of analytical properties explained below. On the other hand, base-10 logarithms (the common logarithm) are easy to use for manual calculations in the decimal number system:[6]

log10 (10x) = log10 10 + log10 x = 1 + log10 x.

Thus, log10 (x) is related to the number of


decimal digits of a positive integer x: the
number of digits is the smallest integer
strictly bigger than log10 (x).[7] For
example, log10(1430) is approximately
3.15. The next integer is 4, which is the
number of digits of 1430. Both the natural
logarithm and the binary logarithm are
used in information theory, corresponding
to the use of nats or bits as the
fundamental units of information,
respectively.[8] Binary logarithms are also
used in computer science, where the
binary system is ubiquitous; in music
theory, where a pitch ratio of two (the
octave) is ubiquitous and the number of
cents between any two pitches is the
binary logarithm, times 1200, of their ratio
(that is, 100 cents per equal-temperament
semitone); and in photography to measure
exposure values, light levels, exposure
times, apertures, and film speeds in
"stops".[9]

Many disciplines write log x as an


abbreviation for logb x when the intended
base can be inferred based on the context
or discipline (or when the base is
indeterminate or immaterial). In computer
science, log usually refers to log2, and in
mathematics log usually refers to loge.[10]
In other contexts, log often means
log10.[11] The following table lists common
notations for logarithms to these bases
and the fields where they are used. The
"ISO notation" column lists designations
suggested by the International
Organization for Standardization (ISO
80000-2).[12]
Base b | Name for logb x | ISO notation | Other notations | Used in
2 | binary logarithm | lb x[13] | ld x, log x, lg x,[14] log2 x | computer science, information theory, bioinformatics, music theory, photography
e | natural logarithm | ln x[nb 3] | log x (in mathematics[18] and many programming languages[nb 4]), loge x | mathematics, physics, chemistry, statistics, economics, information theory, and engineering
10 | common logarithm | lg x | log x, log10 x (in engineering, biology, astronomy) | various engineering fields (see decibel and see below), logarithm tables, handheld calculators, spectroscopy
b | logarithm to base b | logb x | — | mathematics

History

The history of logarithms in seventeenth-


century Europe saw the discovery of a new
function that extended the realm of
analysis beyond the scope of algebraic
methods. The method of logarithms was
publicly propounded by John Napier in
1614, in a book titled Mirifici
Logarithmorum Canonis Descriptio
(Description of the Wonderful Canon of
Logarithms).[19][20] Prior to Napier's
invention, there had been other techniques
of similar scopes, such as the
prosthaphaeresis or the use of tables of
progressions, extensively developed by
Jost Bürgi around 1600.[21][22] Napier
coined the term for logarithm in Middle
Latin, "logarithmus," derived from the
Greek, literally meaning, "ratio-number,"
from logos "proportion, ratio, word" +
arithmos "number".
The common logarithm of a number is the
index of that power of ten which equals
the number.[23] Speaking of a number as
requiring so many figures is a rough
allusion to common logarithm, and was
referred to by Archimedes as the "order of
a number".[24] The first real logarithms
were heuristic methods to turn
multiplication into addition, thus
facilitating rapid computation. Some of
these methods used tables derived from
trigonometric identities.[25] Such methods
are called prosthaphaeresis.

Invention of the function now known as


the natural logarithm began as an attempt
to perform a quadrature of a rectangular
hyperbola by Grégoire de Saint-Vincent, a
Belgian Jesuit residing in Prague.
Archimedes had written The Quadrature of
the Parabola in the third century BC, but a
quadrature for the hyperbola eluded all
efforts until Saint-Vincent published his
results in 1647. The relation that the
logarithm provides between a geometric
progression in its argument and an
arithmetic progression of values,
prompted A. A. de Sarasa to make the
connection of Saint-Vincent's quadrature
and the tradition of logarithms in
prosthaphaeresis, leading to the term
"hyperbolic logarithm", a synonym for
natural logarithm. Soon the new function
was appreciated by Christiaan Huygens,
and James Gregory. The notation Log y
was adopted by Leibniz in 1675,[26] and the next year he connected it to the integral

∫ dy/y.

Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that[27]

log(cos θ + i sin θ) = iθ.
Logarithm tables, slide rules,
and historical applications

The 1797 Encyclopædia Britannica explanation of logarithms

By simplifying difficult calculations before


calculators and computers became
available, logarithms contributed to the
advance of science, especially astronomy.
They were critical to advances in
surveying, celestial navigation, and other
domains. Pierre-Simon Laplace called
logarithms
"...[a]n admirable artifice which, by
reducing to a few days the labour of
many months, doubles the life of the
astronomer, and spares him the errors
and disgust inseparable from long
calculations."[28]

As the function f(x) = b^x is the inverse function of logb x, it has been called an antilogarithm.[29] Nowadays, this function is more commonly called an exponential function.
Log tables

A key tool that enabled the practical use of


logarithms was the table of logarithms.[30]
The first such table was compiled by
Henry Briggs in 1617, immediately after
Napier's invention but with the innovation
of using 10 as the base. Briggs' first table
contained the common logarithms of all
integers in the range from 1 to 1000, with a
precision of 14 digits. Subsequently, tables
with increasing scope were written. These
tables listed the values of log10 x for any
number x in a certain range, at a certain
precision. Base-10 logarithms were
universally used for computation, hence
the name common logarithm, since
numbers that differ by factors of 10 have
logarithms that differ by integers. The
common logarithm of x can be separated
into an integer part and a fractional part,
known as the characteristic and mantissa.
Tables of logarithms need only include the
mantissa, as the characteristic can be
easily determined by counting digits from
the decimal point.[31] The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by

log10 3542 ≈ log10 3540 = log10 1000 + log10 3.54 ≈ 3 + log10 3.54.

Greater accuracy can be obtained by interpolation:

log10 3542 ≈ 3 + log10 3.54 + 0.2 (log10 3.55 − log10 3.54).

The value of 10^x can be determined by reverse look up in the same table, since the logarithm is a monotonic function.

Computations

The product and quotient of two positive


numbers c and d were routinely calculated
as the sum and difference of their
logarithms. The product cd or quotient c/d
came from looking up the antilogarithm of the sum or difference, via the same table:

cd = b^(logb c) · b^(logb d) = b^(logb c + logb d)

and

c/d = b^(logb c − logb d).

For manual calculations that demand any


appreciable precision, performing the
lookups of the two logarithms, calculating
their sum or difference, and looking up the
antilogarithm is much faster than
performing the multiplication by earlier
methods such as prosthaphaeresis, which
relies on trigonometric identities.

Calculations of powers and roots are reduced to multiplications or divisions and lookups by

c^d = b^(d logb c)

and

d-th root of c = c^(1/d) = b^((logb c)/d).

Trigonometric calculations were facilitated


by tables that contained the common
logarithms of trigonometric functions.
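The table-based procedure above amounts to the following identities, shown here as a Python sketch with math.log10 standing in for a printed table:

```python
import math

# A log "table" reduces x, /, powers, and roots to additions, subtractions,
# and scalings of logarithms, followed by one antilogarithm (10**...) lookup.
c, d = 3.0, 7.0
assert abs(10 ** (math.log10(c) + math.log10(d)) - c * d) < 1e-9       # product
assert abs(10 ** (math.log10(c) - math.log10(d)) - c / d) < 1e-9       # quotient
assert abs(10 ** (d * math.log10(c)) - c ** d) < 1e-6                  # power
assert abs(10 ** (math.log10(c) / d) - c ** (1 / d)) < 1e-9            # root
```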

Slide rules

Another critical application was the slide


rule, a pair of logarithmically divided
scales used for calculation. The non-
sliding logarithmic scale, Gunter's rule,
was invented shortly after Napier's
invention. William Oughtred enhanced it to
create the slide rule—a pair of logarithmic
scales movable with respect to each other.
Numbers are placed on sliding scales at
distances proportional to the differences
between their logarithms. Sliding the
upper scale appropriately amounts to
mechanically adding logarithms, as
illustrated here:

Schematic depiction of a slide rule. Starting from 2 on the lower scale, add the distance to 3 on
the upper scale to reach the product 6. The slide rule works because it is marked such that the
distance from 1 to x is proportional to the logarithm of x.

For example, adding the distance from 1 to


2 on the lower scale to the distance from 1
to 3 on the upper scale yields a product of
6, which is read off at the lower part. The
slide rule was an essential calculating tool
for engineers and scientists until the
1970s, because it allows, at the expense
of precision, much faster computation
than techniques based on tables.[32]

Analytic properties

A deeper study of logarithms requires the


concept of a function. A function is a rule
that, given one number, produces another
number.[33] An example is the function
producing the x-th power of b from any
real number x, where the base b is a fixed
number. This function is written as
f(x) = b^x. When b is positive and unequal to 1, we show below that f is invertible when considered as a function from the reals to the positive reals.

Existence

Let b be a positive real number not equal


to 1 and let f(x) = b^x.

It is a standard result in real analysis that


any continuous strictly monotonic function
is bijective between its domain and range.
This fact follows from the intermediate
value theorem.[34] Now, f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1),[35] is continuous, has domain R, and has range R>0 (the positive reals). Therefore, f is a bijection from R to R>0. In other words, for each positive real number y, there is exactly one real number x such that b^x = y.

We let logb : R>0 → R denote the inverse of f. That is, logb y is the unique real number x such that b^x = y. This function is called the base-b logarithm function or logarithmic function (or just logarithm).
Characterization by the product
formula

The function logb x can also be essentially characterized by the product formula

logb (xy) = logb x + logb y.

More precisely, the logarithm to any base b > 1 is the only increasing function f from the positive reals to the reals satisfying f(b) = 1 and[36]

f(xy) = f(x) + f(y).
Graph of the logarithm function

The graph of the logarithm function


logb (x) (blue) is obtained by
reflecting the graph of the function bx
(red) at the diagonal line (x = y).

As discussed above, the function logb is the inverse to the exponential function x ↦ b^x. Therefore, their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y), as shown at the right: a point (t, u = b^t) on the graph of f yields a point (u, t = logb u) on the graph of the logarithm and vice versa.
As a consequence, logb (x) diverges to
infinity (gets bigger than any given
number) if x grows to infinity, provided that
b is greater than one. In that case, logb(x)
is an increasing function. For b < 1,
logb (x) tends to minus infinity instead.
When x approaches zero, logb x goes to
minus infinity for b > 1 (plus infinity for
b < 1, respectively).
Derivative and antiderivative

The graph of the natural logarithm


(green) and its tangent at x = 1.5
(black)

Analytic properties of functions pass to


their inverses.[34] Thus, as f(x) = b^x is a
continuous and differentiable function, so
is logb y. Roughly, a continuous function is
differentiable if its graph has no sharp
"corners". Moreover, as the derivative of f(x) evaluates to ln(b) b^x by the properties of the exponential function, the chain rule implies that the derivative of logb x is given by[35][37]

d/dx logb x = 1 / (x ln(b)).

That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, logb (x)) equals 1/(x ln(b)).

The derivative of ln(x) is 1/x; this implies


that ln(x) is the unique antiderivative of
1/x that has the value 0 for x = 1. It is this
very simple formula that motivated to
qualify as "natural" the natural logarithm;
this is also one of the main reasons of the
importance of the constant e.

The derivative with a generalized functional argument f(x) is

d/dx ln(f(x)) = f′(x) / f(x).

The quotient at the right hand side is called the logarithmic derivative of f. Computing f′(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation.[38] The antiderivative of the natural logarithm ln(x) is:[39]

∫ ln(x) dx = x ln(x) − x + C.

Related formulas, such as antiderivatives


of logarithms to other bases can be
derived from this equation using the
change of bases.[40]
Integral representation of the natural
logarithm

The natural logarithm of t is the


shaded area underneath the graph of
the function f(x) = 1/x (reciprocal of
x).

The natural logarithm of t can be defined as the definite integral:

ln(t) = ∫ from 1 to t of (1/x) dx.

This definition has the advantage that it


does not rely on the exponential function
or any trigonometric functions; the
definition is in terms of an integral of a
simple reciprocal. As an integral, ln(t)
equals the area between the x-axis and the
graph of the function 1/x, ranging from
x = 1 to x = t. This is a consequence of the
fundamental theorem of calculus and the
fact that the derivative of ln(x) is 1/x.
Product and power logarithm formulas
can be derived from this definition.[41] For
example, the product formula ln(tu) = ln(t) + ln(u) is deduced as:

ln(tu) = ∫ from 1 to tu of dx/x  =(1)=  ∫ from 1 to t of dx/x + ∫ from t to tu of dx/x  =(2)=  ln(t) + ∫ from 1 to u of dw/w  =  ln(t) + ln(u).

The equality (1) splits the integral into two


parts, while the equality (2) is a change of
variable (w = x/t). In the illustration below,
the splitting corresponds to dividing the
area into the yellow and blue parts.
Rescaling the left hand blue area vertically
by the factor t and shrinking it by the same
factor horizontally does not change its
size. Moving it appropriately, the area fits
the graph of the function f(x) = 1/x again.
Therefore, the left hand blue area, which is
the integral of f(x) from t to tu is the same
as the integral from 1 to u. This justifies
the equality (2) with a more geometric
proof.

A visual proof of the product formula of the natural logarithm


The power formula ln(t^r) = r ln(t) may be derived in a similar way:

ln(t^r) = ∫ from 1 to t^r of dx/x = ∫ from 1 to t of (r/w) dw = r ln(t).

The second equality uses a change of variables (integration by substitution), w = x^(1/r).

The sum over the reciprocals of natural numbers,

1 + 1/2 + 1/3 + ⋯ + 1/n = Σ from k = 1 to n of 1/k,

is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference,

(Σ from k = 1 to n of 1/k) − ln(n),
converges (i.e. gets arbitrarily close) to a
number known as the Euler–Mascheroni
constant γ = 0.5772.... This relation aids in
analyzing the performance of algorithms
such as quicksort.[42]
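The convergence of this difference to γ is easy to observe numerically; the snippet below is an illustrative check (the gap shrinks roughly like 1/(2n)):

```python
import math

# The n-th harmonic number minus ln(n) approaches the Euler-Mascheroni
# constant gamma = 0.5772156649...; for n = 10^6 the gap is about 5e-7.
n = 10 ** 6
harmonic = sum(1 / k for k in range(1, n + 1))
assert abs((harmonic - math.log(n)) - 0.5772156649) < 1e-5
```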

Transcendence of the logarithm

Real numbers that are not algebraic are called transcendental;[43] for example, π and e are such numbers, but √2 is not. Almost all real numbers are transcendental. The logarithm is an
example of a transcendental function. The
Gelfond–Schneider theorem asserts that
logarithms usually take transcendental, i.e.
"difficult" values.[44]

Calculation

The logarithm keys (LOG for base 10


and LN for base e) on a TI-83 Plus
graphing calculator

Logarithms are easy to compute in some


cases, such as log10 (1000) = 3. In general,
logarithms can be calculated using power
series or the arithmetic–geometric mean,
or be retrieved from a precalculated
logarithm table that provides a fixed
precision.[45][46] Newton's method, an
iterative method to solve equations
approximately, can also be used to
calculate the logarithm, because its
inverse function, the exponential function,
can be computed efficiently.[47] Using look-
up tables, CORDIC-like methods can be
used to compute logarithms by using only
the operations of addition and bit
shifts.[48][49] Moreover, the binary logarithm algorithm calculates lb(x) recursively, based on repeated squarings of x, taking advantage of the relation

lb(x^2) = 2 lb(x).
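A minimal sketch of this squaring idea in Python, for the fractional part of lb(x) with 1 ≤ x < 2 (the function name and bit count are ours):

```python
import math

def binary_log_fraction(x, bits=32):
    """Fractional bits of log2(x) for 1 <= x < 2, via repeated squaring:
    squaring x doubles its logarithm, so each squaring exposes the next bit."""
    assert 1 <= x < 2
    result = 0.0
    bit = 0.5
    for _ in range(bits):
        x *= x               # doubles log2(x)
        if x >= 2:           # the next binary digit of log2(x) is 1
            x /= 2
            result += bit
        bit /= 2
    return result

assert abs(binary_log_fraction(1.5) - math.log2(1.5)) < 1e-9
```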
Power series

Taylor series

The Taylor series of ln(z) centered at


z = 1. The animation shows the
first 10 approximations along with the
99th and 100th. The approximations
do not converge beyond a distance of
1 from the center.

For any real number z that satisfies 0 < z ≤ 2, the following formula holds:[nb 5][50]

ln(z) = (z−1) − (z−1)^2/2 + (z−1)^3/3 − ⋯ = Σ from k = 1 to ∞ of (−1)^(k+1) (z−1)^k / k.

Equating the function ln(z) to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as partial sums):

(z−1),  (z−1) − (z−1)^2/2,  (z−1) − (z−1)^2/2 + (z−1)^3/3,  …

For example, with z = 1.5 the third


approximation yields 0.4167, which is
about 0.011 greater than
ln(1.5) = 0.405465, and the ninth
approximation yields 0.40553, which is
only about 0.0001 greater. The nth partial
sum can approximate ln(z) with arbitrary
precision, provided the number of
summands n is large enough.

In elementary calculus, the series is said


to converge to the function ln(z), and the
function is the limit of the series. It is the
Taylor series of the natural logarithm at
z = 1. The Taylor series of ln(z) provides a particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then

ln(1 + z) = z − z^2/2 + z^3/3 − ⋯ ≈ z.

For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.
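The partial sums are straightforward to compute; the sketch below reproduces the numbers quoted above (the function name is ours):

```python
import math

# Partial sums of ln(z) = (z-1) - (z-1)^2/2 + (z-1)^3/3 - ...
def ln_taylor(z, terms):
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

# z = 1.5: third partial sum ~ 0.4167, ninth ~ 0.40553, as stated in the text.
assert abs(ln_taylor(1.5, 3) - 0.4167) < 1e-4
assert abs(ln_taylor(1.5, 9) - 0.40553) < 1e-4
assert abs(ln_taylor(1.5, 50) - math.log(1.5)) < 1e-12
```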

Inverse hyperbolic tangent

Another series is based on the inverse hyperbolic tangent function:

ln(z) = 2 artanh((z−1)/(z+1)),

for any real number z > 0.[nb 6][50] Using sigma notation, this is also written as

ln(z) = 2 Σ from k = 0 to ∞ of (1/(2k+1)) ((z−1)/(z+1))^(2k+1).
This series can be derived from the above
Taylor series. It converges quicker than the
Taylor series, especially if z is close to 1.
For example, for z = 1.5, the first three
terms of the second series approximate
ln(1.5) with an error of about 3 × 10−6. The
quick convergence for z close to 1 can be
taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting

A = z / exp(y),

the logarithm of z is:

ln(z) = y + ln(A).
The better the initial approximation y is,
the closer A is to 1, so its logarithm can be
calculated efficiently. A can be calculated
using the exponential series, which
converges quickly provided y is not too
large. Calculating the logarithm of larger z
can be reduced to smaller values of z by
writing z = a · 10b, so that
ln(z) = ln(a) + b · ln(10).
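Both the series and the refinement step can be sketched in Python (function names are ours; three series terms already match the accuracy quoted above):

```python
import math

# ln(z) = 2*artanh(u) with u = (z-1)/(z+1), summed term by term.
def ln_atanh_series(z, terms):
    u = (z - 1) / (z + 1)
    return 2 * sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Three terms approximate ln(1.5) to within a few times 1e-6.
assert abs(ln_atanh_series(1.5, 3) - math.log(1.5)) < 1e-5

# Refining a rough approximation y: A = z/exp(y) is close to 1, and
# ln(z) = y + ln(A), where the series for ln(A) converges very fast.
z, y = 1.5, 0.4                    # crude guess for ln(1.5)
A = z / math.exp(y)
assert abs(y + ln_atanh_series(A, 3) - math.log(z)) < 1e-12
```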

A closely related method can be used to compute the logarithm of integers. Putting z = (n + 1)/n in the above series, it follows that:

ln(n + 1) = ln(n) + 2 Σ from k = 0 to ∞ of (1/(2k+1)) (1/(2n+1))^(2k+1).

If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n + 1), with a rate of convergence of (1/(2n+1))^2.

Arithmetic–geometric mean
approximation

The arithmetic–geometric mean yields


high-precision approximations of the
natural logarithm. Sasaki and Kanada
showed in 1982 that it was particularly
fast for precisions between 400 and 1000
decimal places, while Taylor series
methods were typically faster when less
precision was needed. In their work ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula (due to Carl Friedrich Gauss):[51][52]

ln(x) ≈ π / (2 M(1, 2^(2−m)/x)) − m ln(2).

Here M(x, y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x + y)/2 (arithmetic mean) and √(xy) (geometric mean) of x and y then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit which is the value of M(x, y). m is chosen such that

x 2^m > 2^(p/2)
to ensure the required precision. A larger


m makes the M(x, y) calculation take
more steps (the initial x and y are farther
apart so it takes more steps to converge)
but gives more precision. The constants π
and ln(2) can be calculated with quickly
converging series.
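A small Python sketch of Gauss's formula at double precision (the function names and the choice m = 40 are ours; m = 40 makes x·2^m comfortably large for any x ≥ 1):

```python
import math

def agm(x, y):
    """Arithmetic-geometric mean: iterate both means until they coincide."""
    for _ in range(60):
        x, y = (x + y) / 2, math.sqrt(x * y)
        if abs(x - y) <= 1e-15 * max(abs(x), abs(y)):
            break
    return (x + y) / 2

def ln_agm(x, m=40):
    """Gauss's formula: ln(x) ~ pi / (2 M(1, 2^(2-m)/x)) - m ln(2)."""
    return math.pi / (2 * agm(1, 2 ** (2 - m) / x)) - m * math.log(2)

assert abs(ln_agm(2) - math.log(2)) < 1e-9
assert abs(ln_agm(10) - math.log(10)) < 1e-9
```

At double precision the Taylor or artanh series is usually simpler; the AGM method pays off at the hundreds-of-digits precisions mentioned above, where each iteration doubles the number of correct digits.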

Feynman's algorithm

While at Los Alamos National Laboratory


working on the Manhattan Project, Richard
Feynman developed a bit-processing
algorithm, to compute the logarithm, that
is similar to long division and was later
used in the Connection Machine. The
algorithm uses the fact that every real number 1 < x < 2 is representable as a product of distinct factors of the form 1 + 2^−k. The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^−k) < x, then it changes P to P · (1 + 2^−k). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^−k) corresponding to those k for which the factor 1 + 2^−k was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^−k) for all k. Any base may be used for the logarithm table.[53]
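A direct transcription of this procedure in Python (an illustrative sketch; the cutoff max_k and the function name are ours, and the final error is about 2^−max_k):

```python
import math

def feynman_log(x, max_k=45):
    """Feynman's bit-by-bit algorithm for ln(x), 1 < x < 2: greedily build a
    product P of distinct factors (1 + 2^-k) staying at or below x, and sum
    the corresponding table entries ln(1 + 2^-k)."""
    assert 1 < x < 2
    table = {k: math.log(1 + 2.0 ** -k) for k in range(1, max_k + 1)}
    P, result = 1.0, 0.0
    for k in range(1, max_k + 1):
        factor = 1 + 2.0 ** -k
        if P * factor <= x:       # include this factor in the product
            P *= factor
            result += table[k]    # only addition is needed at this point
    return result

assert abs(feynman_log(1.5) - math.log(1.5)) < 1e-9
assert abs(feynman_log(1.7) - math.log(1.7)) < 1e-9
```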

Applications

A nautilus shell displaying a


logarithmic spiral

Logarithms have many applications inside


and outside mathematics. Some of these
occurrences are related to the notion of
scale invariance. For example, each
chamber of the shell of a nautilus is an
approximate copy of the next one, scaled
by a constant factor. This gives rise to a
logarithmic spiral.[54] Benford's law on the
distribution of leading digits can also be
explained by scale invariance.[55]
Logarithms are also linked to self-
similarity. For example, logarithms appear
in the analysis of algorithms that solve a
problem by dividing it into two similar
smaller problems and patching their
solutions.[56] The dimensions of self-
similar geometric shapes, that is, shapes
whose parts resemble the overall picture
are also based on logarithms. Logarithmic
scales are useful for quantifying the
relative change of a value as opposed to
its absolute difference. Moreover, because
the logarithmic function log(x) grows very
slowly for large x, logarithmic scales are
used to compress large-scale scientific
data. Logarithms also occur in numerous
scientific formulas, such as the
Tsiolkovsky rocket equation, the Fenske
equation, or the Nernst equation.
Logarithmic scale

A logarithmic chart depicting the


value of one Goldmark in Papiermarks
during the German hyperinflation in
the 1920s

Scientific quantities are often expressed


as logarithms of other quantities, using a
logarithmic scale. For example, the decibel
is a unit of measurement associated with
logarithmic-scale quantities. It is based on
the common logarithm of ratios—10 times
the common logarithm of a power ratio or
20 times the common logarithm of a
voltage ratio. It is used to quantify the loss
of voltage levels in transmitting electrical
signals,[57] to describe power levels of
sounds in acoustics,[58] and the
absorbance of light in the fields of
spectrometry and optics. The signal-to-
noise ratio describing the amount of
unwanted noise in relation to a
(meaningful) signal is also measured in
decibels.[59] In a similar vein, the peak
signal-to-noise ratio is commonly used to
assess the quality of sound and image
compression methods using the
logarithm.[60]
The strength of an earthquake is
measured by taking the common
logarithm of the energy emitted at the
quake. This is used in the moment
magnitude scale or the Richter magnitude
scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0.[61] Apparent magnitude measures the
brightness of stars logarithmically.[62] In
chemistry the negative of the decimal
logarithm, the decimal cologarithm, is
indicated by the letter p.[63] For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H⁺ take in water).[64] The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.
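The pH calculation above is a one-line application of the decimal cologarithm (a Python illustration; the function name is ours):

```python
import math

# pH is the negative base-10 logarithm (decimal cologarithm) of activity.
def pH(activity_mol_per_L):
    return -math.log10(activity_mol_per_L)

assert abs(pH(1e-7) - 7) < 1e-9              # neutral water
assert abs(pH(1e-3) - 3) < 1e-9              # vinegar-like acidity
# A difference of 4 pH units corresponds to a 10^4 ratio of activities.
assert abs((pH(1e-7) - pH(1e-3)) - 4) < 1e-9
```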

Semilog (log–linear) graphs use the


logarithmic scale concept for
visualization: one axis, typically the vertical
one, is scaled logarithmically. For example,
the chart at the right compresses the
steep increase from 1 million to 1 trillion to
the same space (on the vertical axis) as
the increase from 1 to 1 million. In such
graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines
with slope equal to the exponent k. This is
applied in visualizing and analyzing power
laws.[65]

Psychology

Logarithms occur in several laws
describing human perception:[66][67] Hick's
law proposes a logarithmic relation
between the time individuals take to
choose an alternative and the number of
choices they have.[68] Fitts's law predicts
that the time required to rapidly move to a
target area is a logarithmic function of the
distance to and the size of the target.[69] In
psychophysics, the Weber–Fechner law
proposes a logarithmic relationship
between stimulus and sensation such as
the actual vs. the perceived weight of an
item a person is carrying.[70] (This "law",
however, is less realistic than more recent
models, such as Stevens's power law.[71])

Psychological studies found that
individuals with little mathematics
education tend to estimate quantities
logarithmically, that is, they position a
number on an unmarked line according to
its logarithm, so that 10 is positioned as
close to 100 as 100 is to 1000. Increasing
education shifts this to a linear estimate
(positioning 1000 ten times as far away) in
some circumstances, while logarithms are
used when the numbers to be plotted are
difficult to plot linearly.[72][73]
Probability theory and statistics

Three probability density functions
(PDF) of random variables with log-
normal distributions. The location
parameter μ, which is zero for all
three of the PDFs shown, is the mean
of the logarithm of the random
variable, not the mean of the variable
itself.

Distribution of first digits (in %, red
bars) in the population of the 237
countries of the world. Black dots
indicate the distribution predicted by
Benford's law.

Logarithms arise in probability theory: the
law of large numbers dictates that, for a
fair coin, as the number of coin-tosses
increases to infinity, the observed
proportion of heads approaches one-half.
The fluctuations of this proportion about
one-half are described by the law of the
iterated logarithm.[74]

Logarithms also occur in log-normal
distributions. When the logarithm of a
random variable has a normal distribution,
the variable is said to have a log-normal
distribution.[75] Log-normal distributions
are encountered in many fields, wherever a
variable is formed as the product of many
independent positive random variables, for
example in the study of turbulence.[76]
Logarithms are used for maximum-
likelihood estimation of parametric
statistical models. For such a model, the
likelihood function depends on at least
one parameter that must be estimated. A
maximum of the likelihood function occurs
at the same parameter-value as a
maximum of the logarithm of the
likelihood (the "log likelihood"), because
the logarithm is an increasing function.
The log-likelihood is easier to maximize,
especially for the multiplied likelihoods for
independent random variables.[77]
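Because the logarithm is increasing, maximizing the likelihood and maximizing the log-likelihood pick out the same parameter value, while the product of probabilities becomes a numerically friendlier sum. A sketch with a Bernoulli (coin-flip) model; the data values are chosen for illustration:

```python
import math

n, heads = 100, 63   # illustrative data: 63 heads in 100 flips

def likelihood(p):
    # product of independent Bernoulli probabilities
    return p**heads * (1 - p)**(n - heads)

def log_likelihood(p):
    # the product of probabilities becomes a sum of logarithms
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=likelihood))      # both maximizers agree: p = 0.63
print(max(grid, key=log_likelihood))
```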

Benford's law describes the occurrence of
digits in many data sets, such as heights
of buildings. According to Benford's law,
the probability that the first decimal-digit
of an item in the data sample is d (from 1
to 9) equals log10 (d + 1) − log10 (d),
regardless of the unit of measurement.[78]
Thus, about 30% of the data can be
expected to have 1 as first digit, 18% start
with 2, etc. Auditors examine deviations
from Benford's law to detect fraudulent
accounting.[79]
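The first-digit probabilities can be computed directly from the formula log10(d + 1) − log10(d):

```python
import math

# Benford's law: P(first digit = d) = log10(d + 1) - log10(d)
benford = {d: math.log10(d + 1) - math.log10(d) for d in range(1, 10)}
for d, p in benford.items():
    print(d, f"{100 * p:.1f}%")   # digit 1: about 30.1%, digit 2: about 17.6%
print(round(sum(benford.values()), 10))   # 1.0 — the probabilities sum to one
```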

The logarithm transformation is a type of
data transformation used to bring the
empirical distribution of the data closer to the
assumed one, for example by making a
right-skewed distribution more nearly symmetric.
Computational complexity

Analysis of algorithms is a branch of
computer science that studies the
performance of algorithms (computer
programs solving a certain problem).[80]
Logarithms are valuable for describing
algorithms that divide a problem into
smaller ones, and join the solutions of the
subproblems.[81]

For example, to find a number in a sorted
list, the binary search algorithm checks the
middle entry and proceeds with the half
before or after the middle entry if the
number is still not found. This algorithm
requires, on average, log2 (N)
comparisons, where N is the list's
length.[82] Similarly, the merge sort
algorithm sorts an unsorted list by dividing
the list into halves and sorting these first
before merging the results. Merge sort
algorithms typically require a time
approximately proportional to
N · log(N).[83] The base of the logarithm is
not specified here, because the result only
changes by a constant factor when
another base is used. A constant factor is
usually disregarded in the analysis of
algorithms under the standard uniform
cost model.[84]
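A direct way to see the log2(N) behaviour is to count how many middle-entry comparisons binary search actually makes; a sketch:

```python
import math

def binary_search(sorted_list, target):
    """Return (index or None, number of comparisons with middle entries)."""
    lo, hi, comparisons = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, comparisons

data = list(range(1024))                  # N = 1024, so log2(N) = 10
index, steps = binary_search(data, 1023)  # a near-worst-case target
print(index, steps, math.log2(len(data)))  # steps stays within log2(N) + 1
```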
A function f(x) is said to grow
logarithmically if f(x) is (exactly or
approximately) proportional to the
logarithm of x. (Biological descriptions of
organism growth, however, use this term
for an exponential function.[85]) For
example, any natural number N can be
represented in binary form in no more than
log2 N + 1 bits. In other words, the amount
of memory needed to store N grows
logarithmically with N.
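This bound is easy to check against Python's built-in bit_length:

```python
import math

# Any natural number N fits in floor(log2(N)) + 1 binary digits.
for N in (1, 2, 255, 256, 1000):
    bits = math.floor(math.log2(N)) + 1
    print(N, bin(N), bits)      # matches N.bit_length() for each N
```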
Entropy and chaos

Billiards on an oval billiard table. Two
particles, starting at the center with
an angle differing by one degree, take
paths that diverge chaotically
because of reflections at the
boundary.

Entropy is broadly a measure of the
disorder of some system. In statistical
thermodynamics, the entropy S of some
physical system is defined as

S = −k Σi pi ln(pi).

The sum is over all possible states i of the
system in question, such as the positions
of gas particles in a container. Moreover,
pi is the probability that the state i is
attained and k is the Boltzmann constant.
Similarly, entropy in information theory
measures the quantity of information. If a
message recipient may expect any one of
N possible messages with equal
likelihood, then the amount of information
conveyed by any one such message is
quantified as log2 N bits.[86]
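For a source of N equally likely messages, the general entropy formula −Σ p·log2(p) collapses to log2(N), which a short computation confirms:

```python
import math

def entropy_bits(probabilities):
    # Shannon entropy: -sum(p * log2(p)) over outcomes with p > 0
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

N = 256
uniform = [1 / N] * N
print(entropy_bits(uniform), math.log2(N))  # both give 8 bits (one byte)
```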

Lyapunov exponents use logarithms to
gauge the degree of chaoticity of a
dynamical system. For example, for a
particle moving on an oval billiard table,
even small changes of the initial
conditions result in very different paths of
the particle. Such systems are chaotic in a
deterministic way, because small
measurement errors of the initial state
predictably lead to largely different final
states.[87] At least one Lyapunov exponent
of a deterministically chaotic system is
positive.

Fractals

The Sierpinski triangle (at the right) is constructed by repeatedly
replacing equilateral triangles by three smaller ones.

Logarithms occur in definitions of the
dimension of fractals.[88] Fractals are
geometric objects that are self-similar in
the sense that small parts reproduce, at
least roughly, the entire global structure.
The Sierpinski triangle (pictured) can be
covered by three copies of itself, each
having sides half the original length. This
makes the Hausdorff dimension of this
structure ln(3)/ln(2) ≈ 1.58. Another
logarithm-based notion of dimension is
obtained by counting the number of boxes
needed to cover the fractal in question.
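The self-similarity dimension described above is a one-line ratio of logarithms; a sketch (the helper name is illustrative):

```python
import math

# A shape covered by `copies` scaled-down replicas, each shrunk by
# `factor`, has self-similarity dimension ln(copies) / ln(factor).
def similarity_dimension(copies, factor):
    return math.log(copies) / math.log(factor)

print(similarity_dimension(3, 2))  # Sierpinski triangle: ln(3)/ln(2) ≈ 1.585
print(similarity_dimension(4, 2))  # a filled square recovers dimension 2.0
```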

Music
Four different octaves shown on a linear scale, then shown on a logarithmic scale (as the ear hears them).

Logarithms are related to musical tones
and intervals. In equal temperament, the
frequency ratio depends only on the
interval between two tones, not on the
specific frequency, or pitch, of the
individual tones. For example, the note A
has a frequency of 440 Hz and B-flat has a
frequency of 466 Hz. The interval between
A and B-flat is a semitone, as is the one
between B-flat and B (frequency 493 Hz).
Accordingly, the frequency ratios agree:

466/440 ≈ 493/466 ≈ 2^(1/12).

Therefore, logarithms can be used to
describe the intervals: an interval is
measured in semitones by taking the
base-21/12 logarithm of the frequency ratio,
while the base-21/1200 logarithm of the
frequency ratio expresses the interval in
cents, hundredths of a semitone. The
latter is used for finer encoding, as it is
needed for non-equal temperaments.[89]
Interval (the two tones are played at the same time):
  1/12 tone | Semitone | Just major third | Major third | Tritone | Octave
Frequency ratio r:
  2^(1/72) ≈ 1.0097 | 2^(1/12) ≈ 1.0595 | 5/4 = 1.25 | 2^(4/12) ≈ 1.2599 | 2^(6/12) ≈ 1.4142 | 2
Corresponding number of semitones:
  1/6 | 1 | ≈ 3.86 | 4 | 6 | 12
Corresponding number of cents:
  ≈ 16.67 | 100 | ≈ 386.31 | 400 | 600 | 1200
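The semitone and cent measures above are logarithms with bases 2^(1/12) and 2^(1/1200); a short sketch:

```python
import math

# An interval's size in semitones is the base-2^(1/12) logarithm of the
# frequency ratio; in cents it is the base-2^(1/1200) logarithm.
def semitones(ratio):
    return math.log(ratio, 2 ** (1 / 12))

def cents(ratio):
    return math.log(ratio, 2 ** (1 / 1200))

print(semitones(466 / 440))  # A (440 Hz) up to B-flat (466 Hz): ≈ 1 semitone
print(cents(2))              # an octave spans 1200 cents
```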
Number theory

Natural logarithms are closely linked to
counting prime numbers (2, 3, 5, 7, 11, ...),
an important topic in number theory. For
any integer x, the quantity of prime
numbers less than or equal to x is denoted
π(x). The prime number theorem asserts
that π(x) is approximately given by

x / ln(x),

in the sense that the ratio of π(x) and that
fraction approaches 1 when x tends to
infinity.[90] As a consequence, the
probability that a randomly chosen number
between 1 and x is prime is inversely
proportional to the number of decimal
digits of x. A far better estimate of π(x) is
given by the offset logarithmic integral
function Li(x), defined by

Li(x) = ∫₂ˣ dt / ln(t).
The Riemann hypothesis, one of the oldest
open mathematical conjectures, can be
stated in terms of comparing π(x) and
Li(x).[91] The Erdős–Kac theorem
describing the number of distinct prime
factors also involves the natural logarithm.
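The quality of the x/ln(x) estimate from the prime number theorem can be checked with a small sieve (the bound 100000 is chosen arbitrarily):

```python
import math

def prime_count(x):
    """pi(x): the number of primes <= x, via a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

x = 100_000
print(prime_count(x))            # 9592 primes up to one hundred thousand
print(round(x / math.log(x)))    # the prime number theorem's rough estimate
```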

The logarithm of n factorial,
n! = 1 · 2 · ... · n, is given by

ln(n!) = ln(1) + ln(2) + ... + ln(n).

This can be used to obtain Stirling's
formula, an approximation of n! for large
n.[92]
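The sum of logarithms and Stirling's formula can be compared directly; a sketch (n = 50 is an arbitrary test value):

```python
import math

# ln(n!) = ln(1) + ln(2) + ... + ln(n); Stirling's approximation replaces
# the sum by n*ln(n) - n + 0.5*ln(2*pi*n).
n = 50
exact = sum(math.log(k) for k in range(1, n + 1))
stirling = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
print(exact, stirling)   # the two agree to within about 1/(12*n)
```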

Generalizations

Complex logarithm

Polar form of z = x + iy. Both φ and φ'
are arguments of z.

All the complex numbers a that solve the
equation

e^a = z

are called complex logarithms of z, when z
is (considered as) a complex number. A
complex number is commonly represented
as z = x + iy, where x and y are real
numbers and i is an imaginary unit, the
square of which is −1. Such a number can
be visualized by a point in the complex
plane, as shown at the right. The polar
form encodes a non-zero complex
number z by its absolute value, that is, the
(positive, real) distance r to the origin, and
an angle between the real (x) axis Re and
the line passing through both the origin
and z. This angle is called the argument of
z.
The absolute value r of z is given by

r = √(x² + y²).
Using the geometrical interpretation of
sine and cosine and their periodicity in 2π,
any complex number z may be denoted as

z = r (cos(φ + 2kπ) + i sin(φ + 2kπ))

for any integer number k. Evidently the
argument of z is not uniquely specified:
both φ and φ' = φ + 2kπ are valid
arguments of z for all integers k, because
adding 2kπ radians or k⋅360°[nb 7] to φ
corresponds to "winding" around the origin
counter-clock-wise by k turns. The
resulting complex number is always z, as
illustrated at the right for k = 1. One may
select exactly one of the possible
arguments of z as the so-called principal
argument, denoted Arg(z), with a capital A,
by requiring φ to belong to one,
conveniently selected turn, e.g.
−π < φ ≤ π[93] or 0 ≤ φ < 2π.[94] These
regions, where the argument of z is
uniquely determined, are called branches of
the argument function.

The principal branch (−π, π] of the
complex logarithm, Log(z). The black
point at z = 1 corresponds to absolute
value zero and brighter colors refer to
bigger absolute values. The hue of the
color encodes the argument of
Log(z).
Euler's formula connects the trigonometric
functions sine and cosine to the complex
exponential:

e^(iφ) = cos(φ) + i sin(φ).
Using this formula, and again the
periodicity, the following identities hold:[95]

z = r e^(iφ) = e^(ln(r)) · e^(i(φ + 2kπ)) = e^(ln(r) + i(φ + 2kπ)),
where ln(r) is the unique real natural
logarithm, a_k denote the complex
logarithms of z, and k is an arbitrary
integer. Therefore, the complex logarithms
of z, which are all those complex values a_k
for which the a_k-th power of e equals z,
are the infinitely many values

a_k = ln(r) + i(φ + 2kπ), for arbitrary integers k.

Taking k such that φ + 2kπ is within the
defined interval for the principal
arguments, then a_k is called the principal
value of the logarithm, denoted Log(z),
again with a capital L. The principal
argument of any positive real number x is
0; hence Log(x) is a real number and
equals the real (natural) logarithm.
However, the above formulas for
logarithms of products and powers do not
generalize to the principal value of the
complex logarithm.[96]
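Python's cmath.log returns exactly this principal value, with the argument confined to (−π, π]; adding multiples of 2πi produces the other complex logarithms, as a quick check shows:

```python
import cmath
import math

z = -1 + 0j
principal = cmath.log(z)   # Log(z) = ln|z| + i*Arg(z), with Arg(z) in (-pi, pi]
print(principal)           # 0 + pi*i, since |z| = 1 and Arg(-1) = pi

# Every complex logarithm of z differs from Log(z) by a multiple of 2*pi*i,
# and exponentiating any of them recovers z (up to rounding):
for k in (-1, 0, 1):
    a_k = principal + 2j * math.pi * k
    print(a_k, cmath.exp(a_k))
```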

The illustration at the right depicts Log(z),
confining the arguments of z to the
interval (−π, π]. This way the
corresponding branch of the complex
logarithm has discontinuities all along the
negative real x axis, which can be seen in
the jump in the hue there. This
discontinuity arises from jumping to the
other boundary in the same branch, when
crossing a boundary, i.e. not changing to
the corresponding k-value of the
continuously neighboring branch. Such a
locus is called a branch cut. Dropping the
range restrictions on the argument makes
the relations "argument of z", and
consequently the "logarithm of z", multi-
valued functions.

Inverses of other exponential
functions

Exponentiation occurs in many areas of
mathematics and its inverse function is
often referred to as the logarithm. For
example, the logarithm of a matrix is the
(multi-valued) inverse function of the
matrix exponential.[97] Another example is
the p-adic logarithm, the inverse function
of the p-adic exponential. Both are defined
via Taylor series analogous to the real
case.[98] In the context of differential
geometry, the exponential map maps the
tangent space at a point of a manifold to a
neighborhood of that point. Its inverse is
also called the logarithmic (or log) map.[99]

In the context of finite groups,
exponentiation is given by repeatedly
multiplying one group element b with
itself. The discrete logarithm is the
integer n solving the equation

b^n = x,

where x is an element of the group.
Carrying out the exponentiation can be
done efficiently, but the discrete logarithm
is believed to be very hard to calculate in
some groups. This asymmetry has
important applications in public key
cryptography, such as for example in the
Diffie–Hellman key exchange, a routine
that allows secure exchanges of
cryptographic keys over unsecured
information channels.[100] Zech's logarithm
is related to the discrete logarithm in the
multiplicative group of non-zero elements
of a finite field.[101]
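The asymmetry between fast exponentiation and slow discrete-logarithm recovery can be illustrated with a toy example; the modulus and generator here are tiny and purely illustrative (real Diffie–Hellman uses groups of cryptographic size):

```python
# Modular exponentiation is fast (square-and-multiply, built into pow),
# but inverting it — the discrete logarithm — is believed hard in
# well-chosen groups.
p, g = 101, 2    # tiny prime modulus and generator, for illustration only

def discrete_log_bruteforce(g, x, p):
    """Find n with g**n % p == x by trying every exponent in turn."""
    for n in range(p - 1):
        if pow(g, n, p) == x:
            return n
    return None

x = pow(g, 47, p)                        # fast modular exponentiation
print(discrete_log_bruteforce(g, x, p))  # exhaustive search recovers n = 47
```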
Further logarithm-like inverse functions
include the double logarithm ln(ln(x)), the
super- or hyper-4-logarithm (a slight
variation of which is called iterated
logarithm in computer science), the
Lambert W function, and the logit. They
are the inverse functions of the double
exponential function, tetration, of
f(w) = w·e^w,[102] and of the logistic
function, respectively.[103]

Related concepts

From the perspective of group theory, the
identity log(cd) = log(c) + log(d) expresses
a group isomorphism between positive
reals under multiplication and reals under
addition. Logarithmic functions are the
only continuous isomorphisms between
these groups.[104] By means of that
isomorphism, the Haar measure
(Lebesgue measure) dx on the reals
corresponds to the Haar measure dx/x on
the positive reals.[105] The non-negative
reals not only have a multiplication, but
also have addition, and form a semiring,
called the probability semiring; this is in
fact a semifield. The logarithm then takes
multiplication to addition (log
multiplication), and takes addition to log
addition (LogSumExp), giving an
isomorphism of semirings between the
probability semiring and the log semiring.

Logarithmic one-forms df/f appear in
complex analysis and algebraic geometry
as differential forms with logarithmic
poles.[106]

The polylogarithm is the function defined
by

Li_s(z) = Σ_{k≥1} z^k / k^s.

It is related to the natural logarithm by
Li_1(z) = −ln(1 − z). Moreover, Li_s(1)
equals the Riemann zeta function ζ(s).[107]
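Both special values can be approximated by truncating the defining series; a sketch (the truncation lengths are chosen arbitrarily):

```python
import math

def polylog(s, z, terms=200):
    """Partial sum of Li_s(z) = sum over k >= 1 of z**k / k**s."""
    return sum(z**k / k**s for k in range(1, terms + 1))

z = 0.5
print(polylog(1, z))         # matches -ln(1 - z) = ln(2)
print(-math.log(1 - z))

# Li_s(1) is the Riemann zeta function; zeta(2) = pi**2 / 6:
print(polylog(2, 1.0, terms=100_000), math.pi**2 / 6)
```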
See also

Decimal exponent (dex)
Exponential function
Index of logarithm articles

Notes

1. The restrictions on x and b are explained in
the section "Analytic properties".

2. Proof: Taking the logarithm to base k of the
defining identity x = b^(log_b(x)), one gets
log_k(x) = log_b(x) · log_k(b). The formula
follows by solving for log_b(x).
3. Some mathematicians disapprove of this


notation. In his 1985 autobiography, Paul
Halmos criticized what he considered the
"childish ln notation," which he said no
mathematician had ever used.[15] The
notation was invented by Irving Stringham,
a mathematician.[16][17]

4. For example C, Java, Haskell, and BASIC.


5. The same series holds for the principal
value of the complex logarithm for complex
numbers z satisfying |z − 1| < 1.
6. The same series holds for the principal
value of the complex logarithm for complex
numbers z with positive real part.

7. See radian for the conversion between 2π


and 360 degree.

References

1. Hobson, Ernest William (1914), John Napier


and the invention of logarithms, 1614; a
lecture (http://archive.org/details/johnnapie
rinvent00hobsiala) , University of California
Libraries, Cambridge : University Press
2. Remmert, Reinhold. (1991), Theory of
complex functions, New York: Springer-
Verlag, ISBN 0387971955, OCLC 21118309
(https://www.worldcat.org/oclc/2111830
9)

3. Kate, S.K.; Bhapkar, H.R. (2009), Basics Of


Mathematics (https://books.google.com/b
ooks?id=v4R0GSJtEQ4C&pg=PA1) , Pune:
Technical Publications, ISBN 978-81-8431-
755-8, chapter 1

4. All statements in this section can be found


in Shailesh Shirali 2002, section 4,
(Douglas Downing 2003, p. 275), or Kate &
Bhapkar 2009, p. 1-1, for example.
5. Bernstein, Stephen; Bernstein, Ruth (1999),
Schaum's outline of theory and problems of
elements of statistics. I, Descriptive
statistics and probability (https://archive.or
g/details/schaumsoutlineof00bern) ,
Schaum's outline series, New York:
McGraw-Hill, ISBN 978-0-07-005023-5, p. 21

6. Downing, Douglas (2003), Algebra the Easy


Way (https://archive.org/details/algebraeas
yway00down_0) , Barron's Educational
Series, Hauppauge, NY: Barron's, ISBN 978-
0-7641-1972-9, chapter 17, p. 275

7. Wegener, Ingo (2005), Complexity theory:


exploring the limits of efficient algorithms,
Berlin, New York: Springer-Verlag, ISBN 978-
3-540-21045-0, p. 20
8. Van der Lubbe, Jan C. A. (1997),
Information Theory (https://books.google.c
om/books?id=tBuI_6MQTcwC&pg=PA3) ,
Cambridge University Press, p. 3, ISBN 978-
0-521-46760-5

9. Allen, Elizabeth; Triantaphillidou, Sophie


(2011), The Manual of Photography (http
s://books.google.com/books?id=IfWivY3mI
gAC&pg=PA228) , Taylor & Francis, p. 228,
ISBN 978-0-240-52037-7
10. Goodrich, Michael T.; Tamassia, Roberto
(2002), Algorithm Design: Foundations,
Analysis, and Internet Examples, John Wiley
& Sons, p. 23, "One of the interesting and
sometimes even surprising aspects of the
analysis of data structures and algorithms
is the ubiquitous presence of logarithms ...
As is the custom in the computing
literature, we omit writing the base b of the
logarithm when b = 2."

11. Parkhurst, David F. (2007), Introduction to


Applied Mathematics for Environmental
Science (https://books.google.com/books?i
d=h6yq_lOr8Z4C&pg=PA288)
(illustrated ed.), Springer Science &
Business Media, p. 288, ISBN 978-0-387-
34228-3
12. Quantities and units – Part 2: Mathematics
(ISO 80000-2:2019); EN ISO 80000-2

13. Gullberg, Jan (1997), Mathematics: from


the birth of numbers. (https://archive.org/d
etails/mathematicsfromb1997gull) , New
York: W. W. Norton & Co, ISBN 978-0-393-
04002-9

14. See footnote 1 in Perl, Yehoshua; Reingold,


Edward M. (December 1977),
"Understanding the complexity of
interpolation search", Information
Processing Letters, 6 (6): 219–22,
doi:10.1016/0020-0190(77)90072-2 (http
s://doi.org/10.1016%2F0020-0190%2877%
2990072-2)
15. Paul Halmos (1985), I Want to Be a
Mathematician: An Automathography,
Berlin, New York: Springer-Verlag, ISBN 978-
0-387-96078-4

16. Irving Stringham (1893), Uniplanar algebra:


being part I of a propædeutic to the higher
mathematical analysis (https://books.googl
e.com/books?id=hPEKAQAAIAAJ&pg=PA1
3) , The Berkeley Press, p. xiii

17. Roy S. Freedman (2006), Introduction to


Financial Technology (https://books.googl
e.com/books?id=APJ7QeR_XPkC&pg=PA
5) , Amsterdam: Academic Press, p. 59,
ISBN 978-0-12-370478-8
18. See Theorem 3.29 in Rudin, Walter (1984),
Principles of mathematical analysis (http
s://archive.org/details/principlesofmath00r
udi) (3rd ed., International student ed.),
Auckland: McGraw-Hill International,
ISBN 978-0-07-085613-4
19. Napier, John (1614), Mirifici Logarithmorum
Canonis Descriptio (http://gdz.sub.uni-goett
ingen.de/dms/load/img/?PPN=PPN527914
568&DMDID=DMDLOG_0001&LOGID=LOG_
0001&PHYSID=PHYS_0001) [The
Description of the Wonderful Canon of
Logarithms] (in Latin), Edinburgh, Scotland:
Andrew Hart

The sequel ... Constructio was published


posthumously:

Napier, John (1619), Mirifici Logarithmorum


Canonis Constructio [The Construction of
the Wonderful Rule of Logarithms] (in
Latin), Edinburgh: Andrew Hart

Ian Bruce has made an annotated


translation of both books (http://17century
maths.com/contents/napiercontents.htm
l) (2012), available from
17centurymaths.com.

20. Hobson, Ernest William (1914), John Napier


and the invention of logarithms, 1614 (http
s://archive.org/details/johnnapierinvent00h
obsiala) , Cambridge: The University Press
21. Folkerts, Menso; Launert, Dieter; Thom,
Andreas (2016), "Jost Bürgi's method for
calculating sines", Historia Mathematica,
43 (2): 133–147, arXiv:1510.03180 (https://
arxiv.org/abs/1510.03180) ,
doi:10.1016/j.hm.2016.03.001 (https://doi.
org/10.1016%2Fj.hm.2016.03.001) ,
MR 3489006 (https://mathscinet.ams.org/
mathscinet-getitem?mr=3489006) ,
S2CID 119326088 (https://api.semanticsch
olar.org/CorpusID:119326088)

22. O'Connor, John J.; Robertson, Edmund F.,


"Jost Bürgi (1552 – 1632)" (https://mathshi
story.st-andrews.ac.uk/Biographies/Burgi.h
tml) , MacTutor History of Mathematics
Archive, University of St Andrews
23. William Gardner (1742) Tables of
Logarithms

24. Pierce, R. C. Jr. (January 1977), "A brief


history of logarithms", The Two-Year
College Mathematics Journal, 8 (1): 22–26,
doi:10.2307/3026878 (https://doi.org/10.2
307%2F3026878) , JSTOR 3026878 (http
s://www.jstor.org/stable/3026878)

25. Enrique Gonzales-Velasco (2011) Journey


through Mathematics – Creative Episodes
in its History, §2.4 Hyperbolic logarithms, p.
117, Springer ISBN 978-0-387-92153-2

26. Florian Cajori (1913) "History of the


exponential and logarithm concepts",
American Mathematical Monthly 20: 5, 35,
75, 107, 148, 173, 205.
27. Stillwell, J. (2010), Mathematics and Its
History (3rd ed.), Springer

28. Bryant, Walter W. (1907), A History of


Astronomy (https://archive.org/stream/ahis
toryastrono01bryagoog#page/n72/mode/2
up) , London: Methuen & Co, p. 44

29. Abramowitz, Milton; Stegun, Irene A., eds.


(1972), Handbook of Mathematical
Functions with Formulas, Graphs, and
Mathematical Tables (10th ed.), New York:
Dover Publications, ISBN 978-0-486-61272-
0, section 4.7., p. 89

30. Campbell-Kelly, Martin (2003), The history


of mathematical tables: from Sumer to
spreadsheets, Oxford scholarship online,
Oxford University Press, ISBN 978-0-19-
850841-0, section 2
31. Spiegel, Murray R.; Moyer, R.E. (2006),
Schaum's outline of college algebra,
Schaum's outline series, New York:
McGraw-Hill, ISBN 978-0-07-145227-4,
p. 264

32. Maor, Eli (2009), E: The Story of a Number,


Princeton University Press, sections 1, 13,
ISBN 978-0-691-14134-3

33. Devlin, Keith (2004), Sets, functions, and


logic: an introduction to abstract
mathematics (https://books.google.com/b
ooks?id=uQHF7bcm4k4C) , Chapman &
Hall/CRC mathematics (3rd ed.), Boca
Raton, Fla: Chapman & Hall/CRC, ISBN 978-
1-58488-449-1, or see the references in
function
34. Lang, Serge (1997), Undergraduate
analysis, Undergraduate Texts in
Mathematics (2nd ed.), Berlin, New York:
Springer-Verlag, doi:10.1007/978-1-4757-
2698-5 (https://doi.org/10.1007%2F978-1-4
757-2698-5) , ISBN 978-0-387-94841-6,
MR 1476913 (https://mathscinet.ams.org/
mathscinet-getitem?mr=1476913) , section
III.3

35. Lang 1997, section IV.2


36. Dieudonné, Jean (1969), Foundations of
Modern Analysis, vol. 1, Academic Press,
p. 84 item (4.3.1)

37. "Calculation of d/dx(Log(b,x))" (http://www.


wolframalpha.com/input/?i=d/dx(Log(b,
x))) , Wolfram Alpha, Wolfram Research,
retrieved 15 March 2011
38. Kline, Morris (1998), Calculus: an intuitive
and physical approach, Dover books on
mathematics, New York: Dover
Publications, ISBN 978-0-486-40453-0,
p. 386

39. "Calculation of Integrate(ln(x))" (http://ww


w.wolframalpha.com/input/?i=Integrate(ln
(x))) , Wolfram Alpha, Wolfram Research,
retrieved 15 March 2011

40. Abramowitz & Stegun, eds. 1972, p. 69


41. Courant, Richard (1988), Differential and
integral calculus. Vol. I, Wiley Classics
Library, New York: John Wiley & Sons,
ISBN 978-0-471-60842-4, MR 1009558 (http
s://mathscinet.ams.org/mathscinet-getite
m?mr=1009558) , section III.6
42. Havil, Julian (2003), Gamma: Exploring
Euler's Constant, Princeton University
Press, ISBN 978-0-691-09983-5, sections
11.5 and 13.8

43. Nomizu, Katsumi (1996), Selected papers


on number theory and algebraic geometry
(https://books.google.com/books?id=uDDx
du0lrWAC&pg=PA21) , vol. 172, Providence,
RI: AMS Bookstore, p. 21, ISBN 978-0-8218-
0445-2

44. Baker, Alan (1975), Transcendental number


theory, Cambridge University Press,
ISBN 978-0-521-20461-3, p. 10

45. Muller, Jean-Michel (2006), Elementary


functions (2nd ed.), Boston, MA: Birkhäuser
Boston, ISBN 978-0-8176-4372-0, sections
4.2.2 (p. 72) and 5.5.2 (p. 95)
46. Hart; Cheney; Lawson; et al. (1968),
Computer Approximations, SIAM Series in
Applied Mathematics, New York: John
Wiley, section 6.3, pp. 105–11

47. Zhang, M.; Delgado-Frias, J.G.; Vassiliadis,


S. (1994), "Table driven Newton scheme for
high precision logarithm generation", IEE
Proceedings - Computers and Digital
Techniques, 141 (5): 281–92,
doi:10.1049/ip-cdt:19941268 (https://doi.or
g/10.1049%2Fip-cdt%3A19941268) ,
ISSN 1350-2387 (https://www.worldcat.or
g/issn/1350-2387) , section 1 for an
overview
48. Meggitt, J.E. (April 1962), "Pseudo Division
and Pseudo Multiplication Processes", IBM
Journal of Research and Development, 6
(2): 210–26, doi:10.1147/rd.62.0210 (http
s://doi.org/10.1147%2Frd.62.0210) ,
S2CID 19387286 (https://api.semanticscho
lar.org/CorpusID:19387286)

49. Kahan, W. (20 May 2001), Pseudo-Division


Algorithms for Floating-Point Logarithms
and Exponentials

50. Abramowitz & Stegun, eds. 1972, p. 68


51. Sasaki, T.; Kanada, Y. (1982), "Practically
fast multiple-precision evaluation of log(x)"
(http://ci.nii.ac.jp/naid/110002673332) ,
Journal of Information Processing, 5 (4):
247–50, retrieved 30 March 2011
52. Ahrendt, Timm (1999), "Fast Computations
of the Exponential Function", Stacs 99,
Lecture notes in computer science,
vol. 1564, Berlin, New York: Springer,
pp. 302–12, doi:10.1007/3-540-49116-3_28
(https://doi.org/10.1007%2F3-540-49116-3
_28) , ISBN 978-3-540-65691-3

53. Hillis, Danny (15 January 1989), "Richard


Feynman and The Connection Machine",
Physics Today, 42 (2): 78,
Bibcode:1989PhT....42b..78H (https://ui.ads
abs.harvard.edu/abs/1989PhT....42b..78
H) , doi:10.1063/1.881196 (https://doi.org/
10.1063%2F1.881196)

54. Maor 2009, p. 135


55. Frey, Bruce (2006), Statistics hacks (https://
books.google.com/books?id=HOPyiNb9Uq
wC&pg=PA275) , Hacks Series, Sebastopol,
CA: O'Reilly, ISBN 978-0-596-10164-0,
chapter 6, section 64

56. Ricciardi, Luigi M. (1990), Lectures in


applied mathematics and informatics (http
s://books.google.com/books?id=Cw4NAQA
AIAAJ) , Manchester: Manchester
University Press, ISBN 978-0-7190-2671-3,
p. 21, section 1.3.2

57. Bakshi, U.A. (2009), Telecommunication


Engineering (https://books.google.com/boo
ks?id=EV4AF0XJO9wC&pg=PAA5) , Pune:
Technical Publications, ISBN 978-81-8431-
725-1, section 5.2
58. Maling, George C. (2007), "Noise", in
Rossing, Thomas D. (ed.), Springer
handbook of acoustics, Berlin, New York:
Springer-Verlag, ISBN 978-0-387-30446-5,
section 23.0.2

59. Tashev, Ivan Jelev (2009), Sound Capture


and Processing: Practical Approaches (http
s://books.google.com/books?id=plll9smnb
OIC&pg=PA48) , New York: John Wiley &
Sons, p. 98, ISBN 978-0-470-31983-3
60. Chui, C.K. (1997), Wavelets: a mathematical
tool for signal processing (https://books.go
ogle.com/books?id=N06Gu433PawC&pg=P
A180) , SIAM monographs on
mathematical modeling and computation,
Philadelphia: Society for Industrial and
Applied Mathematics, ISBN 978-0-89871-
384-8

61. Crauder, Bruce; Evans, Benny; Noell, Alan


(2008), Functions and Change: A Modeling
Approach to College Algebra (4th ed.),
Boston: Cengage Learning, ISBN 978-0-547-
15669-9, section 4.4.
62. Bradt, Hale (2004), Astronomy methods: a
physical approach to astronomical
observations, Cambridge Planetary
Science, Cambridge University Press,
ISBN 978-0-521-53551-9, section 8.3, p. 231

63. Nørby, Jens (2000). "The origin and the


meaning of the little p in pH". Trends in
Biochemical Sciences. 25 (1): 36–37.
doi:10.1016/S0968-0004(99)01517-0 (http
s://doi.org/10.1016%2FS0968-0004%289
9%2901517-0) . PMID 10637613 (https://p
ubmed.ncbi.nlm.nih.gov/10637613) .
64. IUPAC (1997), A. D. McNaught, A. Wilkinson
(ed.), Compendium of Chemical
Terminology ("Gold Book") (http://goldbook.
iupac.org/P04524.html) (2nd ed.), Oxford:
Blackwell Scientific Publications,
doi:10.1351/goldbook (https://doi.org/10.1
351%2Fgoldbook) , ISBN 978-0-9678550-9-
7

65. Bird, J.O. (2001), Newnes engineering


mathematics pocket book (3rd ed.), Oxford:
Newnes, ISBN 978-0-7506-4992-6, section
34

66. Goldstein, E. Bruce (2009), Encyclopedia of


Perception (https://books.google.com/boo
ks?id=Y4TOEN4f5ZMC) , Thousand Oaks,
CA: Sage, ISBN 978-1-4129-4081-8,
pp. 355–56
67. Matthews, Gerald (2000), Human
Performance: Cognition, Stress, and
Individual Differences (https://books.googl
e.com/books?id=0XrpulSM1HUC) , Hove:
Psychology Press, ISBN 978-0-415-04406-6,
p. 48

68. Welford, A.T. (1968), Fundamentals of skill,


London: Methuen, ISBN 978-0-416-03000-6,
OCLC 219156 (https://www.worldcat.org/o
clc/219156) , p. 61
69. Paul M. Fitts (June 1954), "The information
capacity of the human motor system in
controlling the amplitude of movement",
Journal of Experimental Psychology, 47 (6):
381–91, doi:10.1037/h0055392 (https://do
i.org/10.1037%2Fh0055392) ,
PMID 13174710 (https://pubmed.ncbi.nlm.
nih.gov/13174710) , S2CID 501599 (http
s://api.semanticscholar.org/CorpusID:5015
99) , reprinted in Paul M. Fitts (1992), "The
information capacity of the human motor
system in controlling the amplitude of
movement" (http://sing.stanford.edu/cs303
-sp10/papers/1954-Fitts.pdf) (PDF),
Journal of Experimental Psychology:
General, 121 (3): 262–69,
doi:10.1037/0096-3445.121.3.262 (https://
doi.org/10.1037%2F0096-3445.121.3.262) ,
PMID 1402698 (https://pubmed.ncbi.nlm.ni
h.gov/1402698) , retrieved 30 March 2011

70. Banerjee, J.C. (1994), Encyclopaedic dictionary of psychological terms (https://books.google.com/books?id=Pwl5U2q5hfcC&pg=PA306), New Delhi: M.D. Publications, p. 304, ISBN 978-81-85880-28-0, OCLC 33860167 (https://www.worldcat.org/oclc/33860167)

71. Nadel, Lynn (2005), Encyclopedia of cognitive science, New York: John Wiley & Sons, ISBN 978-0-470-01619-0, lemmas Psychophysics and Perception: Overview

72. Siegler, Robert S.; Opfer, John E. (2003), "The Development of Numerical Estimation. Evidence for Multiple Representations of Numerical Quantity" (https://web.archive.org/web/20110517002232/http://www.psy.cmu.edu/~siegler/sieglerbooth-cd04.pdf) (PDF), Psychological Science, 14 (3): 237–43, CiteSeerX 10.1.1.727.3696 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.727.3696), doi:10.1111/1467-9280.02438 (https://doi.org/10.1111%2F1467-9280.02438), PMID 12741747 (https://pubmed.ncbi.nlm.nih.gov/12741747), S2CID 9583202 (https://api.semanticscholar.org/CorpusID:9583202), archived from the original (http://www.psy.cmu.edu/~siegler/sieglerbooth-cd04.pdf) (PDF) on 17 May 2011, retrieved 7 January 2011

73. Dehaene, Stanislas; Izard, Véronique; Spelke, Elizabeth; Pica, Pierre (2008), "Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures", Science, 320 (5880): 1217–20, Bibcode:2008Sci...320.1217D (https://ui.adsabs.harvard.edu/abs/2008Sci...320.1217D), CiteSeerX 10.1.1.362.2390 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.362.2390), doi:10.1126/science.1156540 (https://doi.org/10.1126%2Fscience.1156540), PMC 2610411 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2610411), PMID 18511690 (https://pubmed.ncbi.nlm.nih.gov/18511690)
74. Breiman, Leo (1992), Probability, Classics in applied mathematics, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-296-4, section 12.9

75. Aitchison, J.; Brown, J.A.C. (1969), The lognormal distribution, Cambridge University Press, ISBN 978-0-521-04011-2, OCLC 301100935 (https://www.worldcat.org/oclc/301100935)

76. Jean Mathieu and Julian Scott (2000), An introduction to turbulent flow (https://books.google.com/books?id=nVA53NEAx64C&pg=PA50), Cambridge University Press, p. 50, ISBN 978-0-521-77538-0

77. Rose, Colin; Smith, Murray D. (2002), Mathematical statistics with Mathematica, Springer texts in statistics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95234-5, section 11.3

78. Tabachnikov, Serge (2005), Geometry and Billiards, Providence, RI: American Mathematical Society, pp. 36–40, ISBN 978-0-8218-3919-5, section 2.1

79. Durtschi, Cindy; Hillison, William; Pacini, Carl (2004), "The Effective Use of Benford's Law in Detecting Fraud in Accounting Data" (https://web.archive.org/web/20170829062510/http://faculty.usfsp.edu/gkearns/Articles_Fraud/Benford%20Analysis%20Article.pdf) (PDF), Journal of Forensic Accounting, V: 17–34, archived from the original (http://faculty.usfsp.edu/gkearns/Articles_Fraud/Benford%20Analysis%20Article.pdf) (PDF) on 29 August 2017, retrieved 28 May 2018

80. Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, pp. 1–2

81. Harel, David; Feldman, Yishai A. (2004), Algorithmics: the spirit of computing, New York: Addison-Wesley, ISBN 978-0-321-11784-7, p. 143

82. Knuth, Donald (1998), The Art of Computer Programming, Reading, MA: Addison-Wesley, ISBN 978-0-201-89685-5, section 6.2.1, pp. 409–26
83. Donald Knuth 1998, section 5.2.4, pp. 158–68

84. Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0

85. Mohr, Hans; Schopfer, Peter (1995), Plant physiology (https://archive.org/details/plantphysiology0000mohr), Berlin, New York: Springer-Verlag, ISBN 978-3-540-58016-4, chapter 19, p. 298

86. Eco, Umberto (1989), The open work, Harvard University Press, ISBN 978-0-674-63976-8, section III.I

87. Sprott, Julien Clinton (2010), Elegant Chaos: Algebraically Simple Chaotic Flows (https://books.google.com/books?id=buILBDre9S4C), New Jersey: World Scientific, Bibcode:2010ecas.book.....S (https://ui.adsabs.harvard.edu/abs/2010ecas.book.....S), doi:10.1142/7183 (https://doi.org/10.1142%2F7183), ISBN 978-981-283-881-0, section 1.9

88. Helmberg, Gilbert (2007), Getting acquainted with fractals, De Gruyter Textbook, Berlin, New York: Walter de Gruyter, ISBN 978-3-11-019092-2

89. Wright, David (2009), Mathematics and music, Providence, RI: AMS Bookstore, ISBN 978-0-8218-4873-9, chapter 5

90. Bateman, P.T.; Diamond, Harold G. (2004), Analytic number theory: an introductory course, New Jersey: World Scientific, ISBN 978-981-256-080-3, OCLC 492669517 (https://www.worldcat.org/oclc/492669517), theorem 4.1

91. P. T. Bateman & Diamond 2004, Theorem 8.15
92. Slomson, Alan B. (1991), An introduction to combinatorics, London: CRC Press, ISBN 978-0-412-35370-3, chapter 4

93. Ganguly, S. (2005), Elements of Complex Analysis, Kolkata: Academic Publishers, ISBN 978-81-87504-86-3, Definition 1.6.3

94. Nevanlinna, Rolf Herman; Paatero, Veikko (2007), Introduction to complex analysis, London: Hilger, Providence, RI: AMS Bookstore, Bibcode:1974aitc.book.....W (https://ui.adsabs.harvard.edu/abs/1974aitc.book.....W), ISBN 978-0-8218-4399-4, section 5.9

95. Moore, Theral Orvis; Hadlock, Edwin H. (1991), Complex analysis, Singapore: World Scientific, ISBN 978-981-02-0246-0, section 1.2

96. Wilde, Ivan Francis (2006), Lecture notes on complex analysis (https://books.google.com/books?id=vrWES2W6vG0C&q=complex+logarithm&pg=PA97), London: Imperial College Press, ISBN 978-1-86094-642-4, theorem 6.1

97. Higham, Nicholas (2008), Functions of Matrices. Theory and Computation, Philadelphia, PA: SIAM, ISBN 978-0-89871-646-7, chapter 11

98. Neukirch, Jürgen (1999), Algebraische Zahlentheorie, Grundlehren der mathematischen Wissenschaften, vol. 322, Berlin: Springer-Verlag, ISBN 978-3-540-65399-8, MR 1697859 (https://mathscinet.ams.org/mathscinet-getitem?mr=1697859), Zbl 0956.11021 (https://zbmath.org/?format=complete&q=an:0956.11021), section II.5

99. Hancock, Edwin R.; Martin, Ralph R.; Sabin, Malcolm A. (2009), Mathematics of Surfaces XIII: 13th IMA International Conference, York, UK, September 7–9, 2009, Proceedings (https://books.google.com/books?id=0cqCy9x7V_QC&pg=PA379), Springer, p. 379, ISBN 978-3-642-03595-1
100. Stinson, Douglas Robert (2006), Cryptography: Theory and Practice (3rd ed.), London: CRC Press, ISBN 978-1-58488-508-5

101. Lidl, Rudolf; Niederreiter, Harald (1997), Finite fields (https://archive.org/details/finitefields0000lidl_a8r3), Cambridge University Press, ISBN 978-0-521-39231-0

102. Corless, R.; Gonnet, G.; Hare, D.; Jeffrey, D.; Knuth, Donald (1996), "On the Lambert W function" (https://web.archive.org/web/20101214110615/http://www.apmaths.uwo.ca/~djeffrey/Offprints/W-adv-cm.pdf) (PDF), Advances in Computational Mathematics, 5: 329–59, doi:10.1007/BF02124750 (https://doi.org/10.1007%2FBF02124750), ISSN 1019-7168 (https://www.worldcat.org/issn/1019-7168), S2CID 29028411 (https://api.semanticscholar.org/CorpusID:29028411), archived from the original (http://www.apmaths.uwo.ca/~djeffrey/Offprints/W-adv-cm.pdf) (PDF) on 14 December 2010, retrieved 13 February 2011

103. Cherkassky, Vladimir; Cherkassky, Vladimir S.; Mulier, Filip (2007), Learning from data: concepts, theory, and methods, Wiley series on adaptive and learning systems for signal processing, communications, and control, New York: John Wiley & Sons, ISBN 978-0-471-68182-3, p. 357

104. Bourbaki, Nicolas (1998), General topology. Chapters 5–10, Elements of Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64563-4, MR 1726872 (https://mathscinet.ams.org/mathscinet-getitem?mr=1726872), section V.4.1

105. Ambartzumian, R.V. (1990), Factorization calculus and geometric probability (https://archive.org/details/factorizationcal0000amba), Cambridge University Press, ISBN 978-0-521-34535-4, section 1.4

106. Esnault, Hélène; Viehweg, Eckart (1992), Lectures on vanishing theorems, DMV Seminar, vol. 20, Basel, Boston: Birkhäuser Verlag, CiteSeerX 10.1.1.178.3227 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.178.3227), doi:10.1007/978-3-0348-8600-0 (https://doi.org/10.1007%2F978-3-0348-8600-0), ISBN 978-3-7643-2822-1, MR 1193913 (https://mathscinet.ams.org/mathscinet-getitem?mr=1193913), section 2

107. Apostol, T.M. (2010), "Logarithm" (http://dlmf.nist.gov/25.12), in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248 (https://mathscinet.ams.org/mathscinet-getitem?mr=2723248)
External links

Media related to Logarithm at Wikimedia Commons

The dictionary definition of logarithm at Wiktionary

A lesson on logarithms can be found on Wikiversity

Weisstein, Eric W., "Logarithm" (https://mathworld.wolfram.com/Logarithm.html), MathWorld

Khan Academy: Logarithms, free online micro lectures (https://web.archive.org/web/20121218200616/http://www.khanacademy.org/math/algebra/logarithms-tutorial)

"Logarithmic function" (https://www.encyclopediaofmath.org/index.php?title=Logarithmic_function), Encyclopedia of Mathematics, EMS Press, 2001 [1994]

Colin Byfleet, Educational video on logarithms (http://mediasite.oddl.fsu.edu/mediasite/Viewer/?peid=003298f9a02f468c8351c50488d6c479), retrieved 12 October 2010

Edward Wright, Translation of Napier's work on logarithms (https://web.archive.org/web/20021203005508/http://www.johnnapier.com/table_of_logarithms_001.htm), archived from the original on 3 December 2002, retrieved 12 October 2010

Glaisher, James Whitbread Lee (1911), "Logarithm" (https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Logarithm), in Chisholm, Hugh (ed.), Encyclopædia Britannica, vol. 16 (11th ed.), Cambridge University Press, pp. 868–77
Retrieved from "https://en.wikipedia.org/w/index.php?title=Logarithm&oldid=1194642087"

This page was last edited on 10 January 2024, at 00:56 (UTC). Content is available under CC BY-SA 4.0 unless otherwise noted.
